Sometimes you want to make a design of something.
A whiteboard is then very handy. In most cases you would like to use Microsoft Visio.
But on my home PC I don't have Visio. A free alternative is Excalidraw.
You can draw something cool like this (this is for demo purposes only and has no function):
So I was interested in trying to deploy the latest release of VMware Cloud Foundation (VCF) 5.0 on my Windows 11 Home PC, which has 128GB of memory and a 16-core Intel CPU.
VCF-M01-CB01 (4GB and 4 vCPU) – only needed during the first deployment
Network settings on my PC (see the example below for how the extra addresses can be added):
1 IP in my home network
172.16.12.1 (to fool Cloud Builder)
172.16.13.1 (to fool Cloud Builder)
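Presumably these extra addresses are bound to the PC's network adapter so that Cloud Builder's gateway reachability checks pass. A minimal sketch of how that could be done from an elevated command prompt; the adapter name "Ethernet" and the /24 masks are assumptions:

netsh interface ipv4 add address "Ethernet" 172.16.12.1 255.255.255.0
netsh interface ipv4 add address "Ethernet" 172.16.13.1 255.255.255.0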
Procedure:
Install and Configure ESXi
Step 1 – Boot the ESXi installer from the ISO mount and then perform a standard ESXi installation.
Step 2 – Once ESXi is up and running, you will need to minimally configure networking along with an FQDN (ensure proper DNS resolution), NTP, and specify which SSD should be used for the vSAN capacity drive. You can use the DCUI to set up the initial networking, but I recommend switching to the ESXi Shell afterwards and finishing the required preparation steps as demonstrated in the following ESXCLI commands:
esxcli system ntp set -e true -s pool.ntp.org
esxcli system hostname set --fqdn vcf-m01-esx01.wardvissers.nl
Note: Use the vdq -q command to query the available disks for use with vSAN and ensure there are no partitions residing on the 600GB disks (see the example below).
Don't change the time server pool.ntp.org.
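As a quick check, you can list the disks vSAN can claim and inspect a candidate capacity disk for leftover partitions. A short sketch; the device name below is hypothetical and will differ on your host:

# List disks and their eligibility for vSAN
vdq -q
# Show the partition table of a candidate 600GB disk (hypothetical device name)
partedUtil getptbl /vmfs/devices/disks/t10.ATA_____EXAMPLE_600GB_DISK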
To ensure that the self-signed TLS certificate that ESXi generates matches the FQDN you configured, we will need to regenerate the certificate and restart hostd for the changes to take effect by running the following commands within the ESXi Shell:
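A typical command sequence for this, run from the ESXi Shell, looks like the following:

# Regenerate the self-signed certificate so it matches the configured FQDN
/sbin/generate-certificates
# Restart hostd so the new certificate takes effect
/etc/init.d/hostd restart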
Step 3 – Deploy the VMware Cloud Builder appliance in a separate environment and wait for it to be accessible in a browser. Once Cloud Builder is online, download the setup_vmware_cloud_builder_for_one_node_management_domain.sh setup script and transfer it to the Cloud Builder system using the admin user account (root is disabled by default).
Step 4 – Switch to the root user, give the script executable permission, and run it as shown below:
su -
chmod +x setup_vmware_cloud_builder_for_one_node_management_domain.sh
./setup_vmware_cloud_builder_for_one_node_management_domain.sh
The script will take some time, especially as it converts the NSX OVA->OVF->OVA, and if everything was configured correctly, you should see the same output as in the screenshot above.
Step 5 – Download the example JSON deployment file vcf50-management-domain-example.json and adjust the values based on your environment. In addition to changing the hostnames/IP addresses, you will also need to replace all the FILL_ME_IN_VCF_*_LICENSE_KEY placeholders with valid VCF 5.0 license keys.
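A quick way to locate and swap the placeholders from the Cloud Builder shell; the placeholder name and key value in the sed example are illustrative only, not real values:

# List the placeholders that still need to be filled in
grep -n "FILL_ME_IN" vcf50-management-domain-example.json
# Example replacement for one placeholder (the placeholder name and key shown here are illustrative)
sed -i 's/FILL_ME_IN_VCF_VSPHERE_LICENSE_KEY/XXXXX-XXXXX-XXXXX-XXXXX-XXXXX/' vcf50-management-domain-example.json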
Step 6 – The vmnic in the Cloud Builder VM is seen as a 10Gb NIC, so I started the deployment not through PowerShell but the normal way, via the Cloud Builder GUI.
Your deployment time will vary based on your physical resources, but it should eventually complete with everything showing success, as in the screenshot below. (I needed one retry to finish.)
Here are some screenshots of the VCF 5.0 deployment running on my home PC.
The VMware Cloud Foundation (VCF) Holodeck Toolkit is designed to provide a scalable, repeatable way to deploy nested Cloud Foundation hands-on environments directly on VMware ESXi hosts. These environments are ideal for multi-team hands-on exercises exploring the capabilities of utilizing VCF to deliver a Customer Managed VMware Cloud.
Delivering labs in a nested environment solves several challenges with delivering hands-on training for a product like VCF, including:
Reduced hardware requirements: When operating in a physical environment, VCF requires four vSAN Ready Nodes for the management domain, and additional hosts for adding clusters or workload domains. In a nested environment, the same four to eight hosts are easily virtualized to run on a single ESXi host.
Self-contained services: The Holodeck Toolkit configuration provides common infrastructure services, such as NTP, DNS, AD, Certificate Services and DHCP within the environment, removing the need to rely on datacenter provided services during testing. Each environment needs a single external IP.
Isolated networking: The Holodeck Toolkit configuration removes the need for VLAN and BGP connections in the customer network early in the testing phase.
Isolation between environments: Each Holodeck deployment is completely self-contained. This avoids conflicts with existing network configurations and allows for the deployment of multiple nested environments on the same hardware or datacenter with no concerns for overlap.
Multiple VCF deployments on a single VMware ESXi host with sufficient capacity: A typical VCF Standard Architecture deployment of a four-node management domain and a four-node VI workload domain, plus add-ons such as VMware vRealize Automation, requires approximately 20 CPU cores, 512GB memory and 2.5TB disk.
Automation and repeatability: The deployment of nested VCF environments is almost completely hands-off and easily repeatable using configuration files. A typical deployment takes less than 3 hours, with less than 15 minutes of keyboard time.
Nested Environment Overview
The “VLC Holodeck Standard Main 1.3” configuration is a nested VMware Cloud Foundation configuration used as the baseline for several Private Cloud operation and consumption lab exercises created by the Cloud Foundation Technical Marketing team. The Holodeck standard “VLC-Holo-Site-1” is the primary configuration deployed. The optional VLC-Holo-Site-2 can be deployed at any time later within a Pod. VLC-Holo-Site-1 configuration matches the lab configuration in the VCF Hands-On Lab HOL-2246 and the nested configuration in the VCF Experience program run on the VMware Lab Platform.
Each Pod on a Holodeck deployment runs an identical nested configuration. A pod can be deployed with a standalone VLC-Holo-Site-1 configuration, or with both VLC-Holo-Site-1 and VLC-Holo-Site-2 configurations active. Separation of the pods and between sites within a pod is handled at the VMware vSphere Standard Switch (VSS) level. Each Holodeck pod connects to a unique VSS and Port Group per site. A VMware vSphere Port Group is configured on each VSS and configured as a VLAN trunk.
Components on the port group use VLAN tagging to isolate communications between nested VLANs. This removes the need to have physical VLANs plumbed to the ESXi host to support nested labs.
When the Holo-Site-2 configuration is deployed, it uses a second VSS and Port Group for isolation from Holo-Site-1.
The VLC Holodeck configuration customizes the VCF Cloud Builder Virtual Machine to provide several support services within the pod to remove the requirement for specific customer side services. A Cloud Builder VM is deployed per Site to provide the following within the pod:
DNS (local to Site1 and Site2 within the pod, acts as forwarder)
NTP (local to Site1 and Site2 within the pod)
DHCP (local to Site1 and Site2 within the pod)
L3 TOR for vMotion, vSAN, Management, Host TEP and Edge TEP networks within each site
BGP peer from VLC Tier 0 NSX Application Virtual Network (AVN) Edge (Provides connectivity into NSX overlay networks from the lab console)
The figure below shows a logical view of the VLC-Holo-Site-1 configuration within a Holodeck Pod. The Site-1 configuration uses DNS domain vcf.sddc.lab.
Figure 1: Holodeck Nested Diagram
The Holodeck package also provides a preconfigured Photon OS VM, called “Holo-Router”, that functions as a virtualized router for the base environment. This VM allows for connecting the nested environment to the external world. The Holo-Router is configured to forward any Microsoft Remote Desktop (RDP) traffic to the nested jump host, known as the Holo-Console, which is deployed within the pod.
The user interface to the nested VCF environment is a Windows Server 2019 "Holo-Console" virtual machine. Holo-Console provides a place to manage the internal nested environment, like a system administrator's desktop in a datacenter. Holo-Console is used to run the VLC package to deploy the nested VCF instance inside the pod. Holo-Console VMs are deployed from a custom-built ISO that configures the following:
Microsoft Windows Server 2019 Desktop Experience with:
Active Directory domain "vcf.holo.lab"
DNS Forwarder to Cloud Builder
Certificate Server, Web Enrollment and VMware certificate template
RDP enabled
IP, Subnet, Gateway, DNS and VLAN configured for deployment as Holo-Console
Firewall and IE Enhanced security disabled
SDDC Commander custom desktop deployed
Additional software packages deployed and configured
Google Chrome with Holodeck bookmarks
VMware Tools
VMware PowerCLI
VMware PowerVCF
VMware Power Validated Solutions
PuTTY SSH client
VMware OVFtool
Additional software packages copied to Holo-Console for later use
VMware Cloud Foundation 4.5 Cloud Builder OVA to C:\CloudBuilder
VCF Lab Constructor 4.5.1 with dual site Holodeck configuration
VLC-Holo-Site-1
VLC-Holo-Site-2
VMware vRealize Automation 8.10 Easy Installer
The figure below shows the virtual machines running on the physical ESXi host to deliver a Holodeck Pod called "Holo-A". Notice an instance of Holo-Console, Holo-Router, Cloud Builder and four nested ESXi hosts. They all communicate over the VLC-A-PG Port Group.
Figure 2: Holodeck Nested Hosts
Adding a second site adds an additional instance of Cloud Builder and additional nested ESXi hosts. VLC-Holo-Site-2 connects to the second internal leg of the Holo-Router on VLAN 20. Network access from the Holo-Console to VLC-Holo-Site-2 is via Holo-Router.
The figure below shows a logical view of the VLC-Holo-Site-2 configuration within a Holodeck Pod. The Site-2 configuration uses DNS domain vcf2.sddc.lab.
Figure 3: Holodeck Site-2 Diagram
Accessing the Holodeck Environment
User access to the Holodeck pod is via the Holo-Console. Access to Holo-Console is available via two paths:
Microsoft Remote Desktop Protocol (RDP) connection to the external IP of the Holo-Router. Holo-Router is configured to forward all RDP traffic to the instance of Holo-Console inside the pod.
ESXi Host Sizing:
Good (one pod): Single ESXi host with 16 cores, 384GB memory and 2TB SSD/NVMe
Better (two pods): Single ESXi host with 32 cores, 768GB memory and 4TB SSD/NVMe
Best (four or more pods): Single ESXi host with 64+ cores, 2.0TB memory and 10TB SSD/NVMe
ESXi Host Configuration:
vSphere 7.0U3
Virtual switch and port group configured with uplinks to customer network/internet
Supports a standalone, non-vCenter Server managed host or a single-host cluster managed by a vCenter Server instance
Multi-host clusters are NOT supported
Holo-Build host
Windows 2019 host or VM with local access to the ESXi hosts used for Holodeck, plus internet access to download software. (This package has been tested on Microsoft Windows Server 2019 only.)
I had a frustrating issue with Packer, specifically with VMware Tools installation.
During the Packer build, I load a script that installs VMware Tools 12.1.5. It seems to install successfully, but I noticed that the VMware Tools service was not running. I had to re-run setup64.exe via the GUI and do a repair; after that the service exists and runs, and Packer can discover the IP address of the VM to finish the build.
The Solution
I used an older autounattend.xml in which I had never checked the time zone.
Setting the correct time zone did the trick:
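For reference, the time zone can be checked and set from the Windows command line inside the template; "W. Europe Standard Time" is just an example zone, use whatever matches your environment:

REM Show the current time zone
tzutil /g
REM Set the time zone explicitly (example zone)
tzutil /s "W. Europe Standard Time"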
A virtual machine running Windows Server 2022 with KB5022842 (Feb 2023) installed and configured with Secure Boot enabled will not boot up on vSphere 7 unless the host is updated to 7.0 Update 3k (vSphere 8 is not affected).
In the VM's vmware.log, there is an 'Image DENIED' message like the one below:
2023-02-15T05:34:31.379Z In(05) vcpu-0 - SECUREBOOT: Signature: 0 in db, 0 in dbx, 1 unrecognized, 0 unsupported alg.
2023-02-15T05:34:31.379Z In(05) vcpu-0 - Hash: 0 in db, 0 in dbx.
2023-02-15T05:34:31.379Z In(05) vcpu-0 - SECUREBOOT: Image DENIED.
To identify the location of vmware.log files:
Establish an SSH session to your ESXi host.
Log in to the ESXi host CLI using the root account.
To list the locations of the configuration files for the virtual machines registered on the host, run the command below:
# vim-cmd vmsvc/getallvms | grep -i "VM_Name"
The vmware.log file is located in the virtual machine folder, along with the .vmx file.
Record the location of the .vmx configuration file for the virtual machine you are troubleshooting. For example:
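A hypothetical illustration of what this looks like; the datastore and VM names below are made up and will differ in your environment:

# vim-cmd output points at the .vmx file, for example:
/vmfs/volumes/datastore1/Win2022-VM/Win2022-VM.vmx
# The vmware.log file sits next to it:
/vmfs/volumes/datastore1/Win2022-VM/vmware.log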
If you already face the issue, after patching the host to ESXi 7.0 Update 3k, just power on the affected Windows Server 2022 VMs. After you patch a host to ESXi 7.0 Update 3k, you can migrate a running Windows Server 2022 VM from a host of version earlier than ESXi 7.0 Update 3k, install KB5022842, and the VM boots properly without any additional steps required.
With all the fabric configuration done, we can test our setup.
I'm creating two overlay segments in NSX connected to a Tier-1 gateway, and after that we'll create a Tier-0 gateway and connect the T1 gateway to it to get North/South connectivity to the overlay resources.
Two VMs will be deployed, one in each of the two overlay segments.
Create a Tier-1 Gateway
The Tier-1 Gateway will initially not be connected to a Tier-0 Gateway (I haven't configured a T0 gateway yet) or an Edge Cluster.
Tier-1 Gateway
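The same step can also be done against the NSX Policy REST API instead of the UI. A minimal sketch, assuming an NSX Manager at nsx.lab.local and a gateway ID of t1-lab (both names are assumptions, and the password is a placeholder):

# Create (or update) a Tier-1 gateway, not yet linked to a Tier-0 gateway or an Edge cluster
curl -k -u 'admin:<password>' \
  -X PATCH 'https://nsx.lab.local/policy/api/v1/infra/tier-1s/t1-lab' \
  -H 'Content-Type: application/json' \
  -d '{ "display_name": "t1-lab" }'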
Create Logical segments
We need two logical segments, both using the Overlay Transport Zone. I’m defining different subnets on them, 10.0.1.0/24 and 10.0.2.0/24.
Segments
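Segments can be created through the Policy API as well. A hedged sketch for the first segment; the NSX Manager address, gateway ID, segment name and transport zone UUID are assumptions, and 10.0.1.1/24 is simply the first address of the 10.0.1.0/24 subnet:

# Create an overlay segment attached to the Tier-1 gateway
# (look up your overlay transport zone UUID in the NSX UI or API first)
curl -k -u 'admin:<password>' \
  -X PATCH 'https://nsx.lab.local/policy/api/v1/infra/segments/seg-10-0-1-0' \
  -H 'Content-Type: application/json' \
  -d '{
        "display_name": "seg-10-0-1-0",
        "connectivity_path": "/infra/tier-1s/t1-lab",
        "transport_zone_path": "/infra/sites/default/enforcement-points/default/transport-zones/<overlay-tz-uuid>",
        "subnets": [ { "gateway_address": "10.0.1.1/24" } ]
      }'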
Add VMs to Logical segments
We have two Photon VMs which should be added to the logical segments.
Two Photon VMs
Test connectivity
Now let's verify that the two VMs can ping each other.
Don’t forget to enable the echo rule on the Windows Firewall….
Connectivity test
This shows that the overlay is working, and note again that the Edge VMs are not in use here.
External connectivity
Traffic is flowing between VMs running on logical segments inside the NSX-T environment, but what if we want to reach something outside, or reach a VM inside the NSX-T overlay from the outside?
Then we need to bring a Tier-0 Gateway into the mix.
The T0 gateway can be configured with uplinks that are connected to the physical network. This is done through a segment which can reach the physical network, normally through a VLAN.
To configure the uplink interfaces we need Edge VMs, so finally we get to bring those into play as well.
Create segment for uplinks
First I'll create a segment mapped to VLAN 99 in my lab. Note that I select the VLAN transport zone, and I do not connect the segment to a gateway.
Create Uplink VLAN segment
Create Tier-0 gateway
Now we'll create a Tier-0 gateway. Note that I now also select my Edge cluster.
Create T0 gateway
Static route
To be able to forward traffic out of the NSX-T environment the T0 gateway needs to know where to send queries for IPs it doesn’t control. Normally you would want to configure a routing protocol like BGP or OSPF so that the T0 gateway could exchange routes with the physical router(s) in your network.
I've not set up BGP or any other routing protocol on my physical router, so I've just configured a default static route that forwards to my physical router. The next hop is set to the gateway address for the uplink VLAN 99, 192.168.99.1.
Static route
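The same default route can also be added through the Policy API. A minimal sketch with assumed names (nsx.lab.local, t0-lab); the next hop 192.168.99.1 is the uplink VLAN gateway mentioned above:

# Add a default static route on the Tier-0 gateway pointing at the physical router
curl -k -u 'admin:<password>' \
  -X PATCH 'https://nsx.lab.local/policy/api/v1/infra/tier-0s/t0-lab/static-routes/default-route' \
  -H 'Content-Type: application/json' \
  -d '{
        "network": "0.0.0.0/0",
        "next_hops": [ { "ip_address": "192.168.99.1", "admin_distance": 1 } ]
      }'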
Link T1 gateway to T0 gateway
We've done a lot of configuring now, but we still don't have connectivity in or out for our VMs. The final step is to connect the Tier-1 gateway to the Tier-0 gateway, and we'll also activate route advertisement of connected segments and service ports.
Tier-1 Gateway
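Linking the gateways can be scripted too. A hedged sketch with the same assumed gateway IDs, assuming TIER1_CONNECTED is the advertisement type behind the connected segments and service ports toggle:

# Attach the Tier-1 gateway to the Tier-0 gateway and advertise connected segments
curl -k -u 'admin:<password>' \
  -X PATCH 'https://nsx.lab.local/policy/api/v1/infra/tier-1s/t1-lab' \
  -H 'Content-Type: application/json' \
  -d '{
        "tier0_path": "/infra/tier-0s/t0-lab",
        "route_advertisement_types": ["TIER1_CONNECTED"]
      }'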
Test connectivity
Verify North/South connectivity
Yes!
Test Distributed Firewall
Let’s also do a quick test of the Distributed Firewall feature in NSX-T.
First we'll create a rule blocking ICMP (ping) from any source to my test VM and publish the rule.
ICMP firewall rule
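A distributed firewall rule like this can also be pushed through the Policy API. A rough sketch under several assumptions: the security policy lab-policy is assumed to already exist, the destination group block-icmp-group is assumed to contain the test VM, and the ICMP service path should be verified against the predefined services in your NSX instance:

# Add a DFW rule that drops ICMP traffic to the test VM's group
# (verify the exact service path, e.g. via GET /policy/api/v1/infra/services)
curl -k -u 'admin:<password>' \
  -X PATCH 'https://nsx.lab.local/policy/api/v1/infra/domains/default/security-policies/lab-policy/rules/block-icmp' \
  -H 'Content-Type: application/json' \
  -d '{
        "action": "DROP",
        "source_groups": ["ANY"],
        "destination_groups": ["/infra/domains/default/groups/block-icmp-group"],
        "services": ["/infra/services/ICMP-ALL"],
        "scope": ["ANY"]
      }'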
Now let's test pinging from my PC to the nested Windows 2016 server, first with the rule disabled and then enabled.
Ping blocked
Summary
Hopefully this post can help someone; if not, it has at least helped me.
Now we have a working environment, so we can start testing some things.
I will also look into scripting/automation against an NSX environment!