Running VCF 9 in a nested lab using vSAN ESA on a single host. Isn't that great?

For the deployment I used a server that has 16 cores and 32 threads in total, and a lot of RAM!

Installation of Nested ESXi VMs

For the nested ESXi VMs I am using the generic ESXi image. In my lab I'm using 4 nested ESXi nodes for vSAN ESA (the minimum needed is 3). A rough PowerCLI sketch of the VM creation follows the list below.

  • Ideally give the nested ESXi VM 24 vCPUs, but VCF Automation can be deployed with 16 vCPUs
  • Check "Expose hardware assisted virtualization to the guest OS" under CPU
  • Add an NVMe controller and connect the disks to this controller if you plan to use vSAN ESA (don't forget to remove the old controller)
  • Add at least two VMXNET3 NICs connected to the same network
  • Choose VMware ESXi 8.0 or later as the guest OS version
  • Change the IDE CD-ROM to a SATA CD-ROM (otherwise you hit "Failed to locate kickstart on Nested ESXi VM CD-ROM" in VCF 9.0)
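Here is a minimal PowerCLI sketch of that VM configuration. The VM name, host, sizes, datastore and portgroup are example values, and the NVMe controller is added through the vSphere API:

# Create the nested ESXi shell VM (example names and sizes; adjust to your lab)
$vm = New-VM -Name 'vcf-esx01' -VMHost 'physical-esx.lab.local' -Datastore 'nvme-ds01' `
    -NumCpu 16 -MemoryGB 96 -DiskGB 32 -GuestId 'vmkernel8Guest'

# Two VMXNET3 NICs on the same portgroup
1..2 | ForEach-Object {
    New-NetworkAdapter -VM $vm -NetworkName 'Nested-Trunk' -Type Vmxnet3 -StartConnected:$true | Out-Null
}

# Expose hardware-assisted virtualization to the guest OS
$spec = New-Object VMware.Vim.VirtualMachineConfigSpec
$spec.NestedHVEnabled = $true
$vm.ExtensionData.ReconfigVM($spec)

# Add an NVMe controller for the vSAN ESA disks
$spec = New-Object VMware.Vim.VirtualMachineConfigSpec
$devChange = New-Object VMware.Vim.VirtualDeviceConfigSpec
$devChange.Operation = 'add'
$devChange.Device = New-Object VMware.Vim.VirtualNVMEController
$devChange.Device.BusNumber = 0
$devChange.Device.Key = -101
$spec.DeviceChange = @($devChange)
$vm.ExtensionData.ReconfigVM($spec)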


Configuring the 4 Hosts for vSAN ESA

To configure the 4 hosts for vSAN ESA I used my own PowerCLI script, which I blogged about here: Config vSAN ESA host or VCF ESA vSAN Host the easy way with Config-VSAN-ESA-VCF-Lab-Host Script

You can download the script HERE!

Installing the VIB still works great. No problem with the ESA pre-check: ✅
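For reference, a minimal sketch of installing such a VIB with PowerCLI's Get-EsxCli V2 interface; the host name and VIB URL are placeholders:

# Install the mock VIB on a nested host (ESXCLI V2 interface)
$vibUrl = 'http://webserver.lab.local/nested-vsan-esa-mock.vib'   # placeholder URL
$esxcli = Get-EsxCli -VMHost (Get-VMHost 'vcf-esx01.lab.local') -V2
$esxcli.software.vib.install.Invoke(@{ viburl = $vibUrl; nosigcheck = $true })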

If you have problems, check this great blog from William Lam: vSAN ESA Disk & HCL Workaround for VCF 9.0

VCF Installer

First you have to deploy the VMware Cloud Foundation installer appliance.

The name of the installer OVA is a little off: VCF-SDDC-Manager-Appliance-9.0.0.0.24703748.ova

Download token

If you don't have a download token from Broadcom support, things get a little complicated. I won't go into this in depth now, but here is a nice article if you have a Synology NAS: VCF 9.0 Offline Depot using Synology

(Note for any vExpert reading this: you need to claim your own domain name, because building your profile does not work with @hotmail.com or @gmail.com accounts, as noted by Broadcom.)



10GbE NIC Pre-Check Issue

The next thing to do was to disable the 10GbE NIC pre-check in the VCF 9.0 installer. This should also work: How to change the vmxnet3 link speed of a VM (not tested).


Deployment

I had already deployed the beta at an earlier stage, and I used the same JSON file as a starting point for this deployment.

The only change I made is that VCF Automation is now deployed automatically.



It’s running!


Performance Running Nested


The host running the nested VCF lab deployment is quite CPU intensive (2 x 8-core/16-thread CPUs).

Optimization

After the deployment I did three things:
– On the vSAN ESA cluster, disabled automatic disk claiming (keeps the warning away)
– On the NSX VM, reduced the CPU reservation from 6000 to 2000 (it helps, but not enough)
– Reduced the VCF Automation VM from 24 cores to 16 cores

NSX and VCF Automation are really CPU intensive.

I had also deployed two Edge nodes, but the server did not like that; Edges are CPU intensive as well.

I am thinking about adding an MS-01 or MS-A2 to split off some of the load.

Config vSAN ESA host or VCF ESA vSAN Host the easy way with Config-VSAN-ESA-VCF-Lab-Host Script

William Lam created the vSAN ESA HCL hardware mock VIB for nested ESXi.
It works great for vSAN ESA or VCF vSAN ESA nested nodes.

You can download the needed VIB here.

A while ago I found an easy way to deploy a VMware SDDC based on vSAN ESA: VCF automated lab deployment with vSAN ESA

It works great. In a few hours you have a working VCF management domain on vSAN ESA.

So, what about the script?

For the workload domain I created the nested ESXi VMs manually. I also did some testing for VCF 9, for which I created the nested ESXi hosts manually as well.

Configuring them by hand takes a lot of time, so I did some scripting and reused some code from: VCF automated lab deployment with vSAN ESA

So I created a script that does the following (a PowerCLI sketch of a few of these steps follows the list):

  • Log in on the DHCP address and configure a fixed IP address based on the DNS name
  • Disable IPv6
  • Rename the local datastore to a unique name that includes the hostname
  • Configure NTP
  • Install the vSAN ESA mock VIB and restart the vSAN management service
  • Generate a new certificate for the host with the correct domain name in it
  • Enable the KB 372309 workaround (10GbE networking requirement for vSAN ESA)
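A minimal PowerCLI sketch of a few of these steps, assuming a direct connection to the nested host on its DHCP address; the names, addresses and credentials are example values, and the full script linked below does more:

# Connect to the nested host on its DHCP address (example address/credentials)
$esxName = 'vcf-esx01.lab.local'
Connect-VIServer -Server '192.168.150.50' -User 'root' -Password 'VMware1!'
$vmhost = Get-VMHost

# Disable IPv6 (takes effect after a reboot)
Get-VMHostNetwork -VMHost $vmhost | Set-VMHostNetwork -IPv6Enabled:$false

# Rename the local datastore to a unique name based on the host
Get-Datastore -Name 'datastore1*' | Set-Datastore -Name "local-$($esxName.Split('.')[0])"

# Configure and start NTP
Add-VMHostNtpServer -VMHost $vmhost -NtpServer 'pool.ntp.org'
Get-VMHostService -VMHost $vmhost | Where-Object Key -eq 'ntpd' | Start-VMHostService

# Finally, set the fixed IP from the DNS record on vmk0
# (this drops the session; reconnect on the new address afterwards)
$ip = ([System.Net.Dns]::GetHostAddresses($esxName))[0].IPAddressToString
Get-VMHostNetworkAdapter -VMHost $vmhost -Name vmk0 |
    Set-VMHostNetworkAdapter -IP $ip -SubnetMask '255.255.255.0' -Confirm:$false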

It works great for nested ESXi nodes (ESXi 8.0 U3b and the ESXi 9.x beta).

You can find the script HERE!!



Deploying VCF Workload Domain with One NSX Manager

For your VCF home lab you want to keep resource usage small, with only a little overhead.
In this post I will talk about how I managed to deploy a VCF workload domain with a single NSX Manager, instead of the standard three NSX nodes.

Warning: Use this only in a Homelab!

The trick is to SSH into your SDDC Manager using the vcf user, and the password used during bring-up of the management domain.

When logged in, run su and log in as root using the password used during bring-up.

Run: vi /etc/vmware/vcf/domainmanager/application-prod.properties

Hit i on your keyboard to go into insert mode. Go to the end of the file and append the following:

nsxt.manager.formfactor=medium
nsxt.manager.resources.validation.skip=true
nsxt.manager.cluster.size=1
nsxt.manager.wait.minutes=120

This will make it so that any workload domain you deploy gets one NSX Manager, and that it uses a smaller form factor. Once done, hit ESC on your keyboard, then type :wq and hit Enter to save the file (w = write, q = quit).

Then run systemctl restart domainmanager and you are good to go!
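If you prefer not to edit the file in vi, the same change can be made non-interactively; a minimal sketch, run as root on the SDDC Manager:

cat >> /etc/vmware/vcf/domainmanager/application-prod.properties << 'EOF'
nsxt.manager.formfactor=medium
nsxt.manager.resources.validation.skip=true
nsxt.manager.cluster.size=1
nsxt.manager.wait.minutes=120
EOF
systemctl restart domainmanager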

This worked in my nested Cloud Foundation deployment in my lab running 5.2.1.0.

You will still have to fill in the information for the extra nodes in the UI.

Deploying Windows VMs with Terraform and Packer

A while ago, I started using Packer to create Windows templates for use in my home lab.
I blogged about it here: Windows Server 2025 “Preview” deployment with Packer

So now I want to use Terraform to deploy a VM from a template that was created earlier by Packer.

First, we need to ensure we have Terraform downloaded and the vSphere provider initialized.

  1. Install Terraform: You can download it from the official Terraform website and follow the installation instructions for your operating system.

I made my life a little easier by creating a PowerShell script that downloads the latest version: Download Terraform.ps1

  2. Initialize Your Terraform Configuration: Create a directory for your Terraform configuration files. Inside this directory, create a `main.tf` file (or any `.tf` file) where you will define your provider and other configurations.
    Define the vSphere provider in your main.tf: Add the following block to your Terraform configuration file to specify the use of the vSphere provider:

terraform {
  required_providers {
    vsphere = {
      source  = "hashicorp/vsphere"
      version = ">= 2.11"
    }
  }
}

provider "vsphere" {
  user           = "vcenter_username"
  password       = "vcenter_password"
  vsphere_server = "vcenter_server"

  # If you have a self-signed cert
  allow_unverified_ssl = true
}

Replace "vcenter_username", "vcenter_password" and "vcenter_server" with your actual vSphere credentials and server address.
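Alternatively, you can reference Terraform variables instead of hard-coding the values; this matches the variables.tf shown later in this post:

provider "vsphere" {
  user           = var.vsphere_user
  password       = var.vsphere_password
  vsphere_server = var.vsphere_server

  # If you have a self-signed cert
  allow_unverified_ssl = true
}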

  3. Basic Terraform Commands

Terraform.ps1

# Go to the Terraform download folder
$terraformfolder = 'd:\automation\terraform\'
Set-Location $terraformfolder

# Download Terraform plugins
.\terraform.exe init

# Test run
.\terraform.exe plan

# Run Terraform
.\terraform.exe apply

# Clean up what you created
.\terraform.exe destroy

Initialize the Terraform Working Directory: Run the following command in your terminal from the directory where your Terraform configuration file is located:

terraform init


This command will download the vSphere provider and initialize your Terraform working directory. It sets up and downloads the necessary provider plugins for Terraform to interact with vSphere.

Verify the Installation: After running `terraform init`, you should see output indicating that the vSphere provider has been successfully installed. You can now proceed to create Terraform configurations to manage your vSphere resources.

Deploy a Template VM within vSphere

In this example, we are deploying a VM running Windows Server 2022.

Terraform Module Structure

  1. variables.tf: Define the variables for the module.
  2. main.tf: Contains the main configuration for the VM.
  3. outputs.tf: Define the outputs for the module.

variables.tf

variable "vsphere_user" {
  default     = "<your_vcenter_username_here>"
  description = "vSphere username to use to connect to the environment - Default: administrator@vsphere.local"
}

variable "vsphere_password" {
  default     = "<your_vcenter_password_here>"
  description = "vSphere vCenter password"
}

variable "vsphere_server" {
  default     = "<your_vcenter_here>"
  description = "vSphere vCenter server"
}

variable "datacenter" {
  default     = "<your_datacenter_here>"
  description = "vSphere datacenter"
}

variable "cluster" {
  default     = "<your_cluster_here>"
  description = "vSphere cluster"
}

variable "network" {
  default     = "<vcenter_network>"
  description = "vSphere network"
}

variable "datastore" {
  default     = "<your_destination_datastore_>"
  description = "vSphere datastore"
}

variable "template" {
  default     = "<your_template_name_here>"
  description = "Template VM"
}

variable "customization_specifications" {
  default     = "<vcenter_customization_specifications>"
  description = "Customization Spec"
}
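The module structure above also lists main.tf and outputs.tf, which are not shown in this post. A minimal sketch of what they could look like, assuming the variable names above; the VM name, CPU and memory values are examples, and the customization_spec block assumes a provider version that supports applying an existing vCenter customization spec:

# main.tf - look up the infrastructure objects
data "vsphere_datacenter" "dc" {
  name = var.datacenter
}

data "vsphere_compute_cluster" "cluster" {
  name          = var.cluster
  datacenter_id = data.vsphere_datacenter.dc.id
}

data "vsphere_datastore" "datastore" {
  name          = var.datastore
  datacenter_id = data.vsphere_datacenter.dc.id
}

data "vsphere_network" "network" {
  name          = var.network
  datacenter_id = data.vsphere_datacenter.dc.id
}

data "vsphere_virtual_machine" "template" {
  name          = var.template
  datacenter_id = data.vsphere_datacenter.dc.id
}

# Clone the Packer-built template into a new VM
resource "vsphere_virtual_machine" "vm" {
  name             = "win2022-demo"
  resource_pool_id = data.vsphere_compute_cluster.cluster.resource_pool_id
  datastore_id     = data.vsphere_datastore.datastore.id

  num_cpus  = 2
  memory    = 4096
  guest_id  = data.vsphere_virtual_machine.template.guest_id
  scsi_type = data.vsphere_virtual_machine.template.scsi_type
  firmware  = data.vsphere_virtual_machine.template.firmware

  network_interface {
    network_id   = data.vsphere_network.network.id
    adapter_type = data.vsphere_virtual_machine.template.network_interface_types[0]
  }

  disk {
    label            = "disk0"
    size             = data.vsphere_virtual_machine.template.disks[0].size
    thin_provisioned = data.vsphere_virtual_machine.template.disks[0].thin_provisioned
  }

  clone {
    template_uuid = data.vsphere_virtual_machine.template.id

    # Apply the existing vCenter customization spec by name
    customization_spec {
      id = var.customization_specifications
    }
  }
}

# outputs.tf - report the IP the VM received
output "vm_ip" {
  value = vsphere_virtual_machine.vm.default_ip_address
}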

When you run ".\terraform.exe plan" you do a test run to check your code.



When you run ".\terraform.exe apply" Terraform builds your virtual machine.



Based on: https://vminfrastructure.com/2025/03/11/deploying-a-vm-using-terraform/

It works, but it needs some tweaking and fixing:

Adding a TPM 2.0 security device does not work right now.
Windows Server 2025 customization does not work.

ESXi Unattended Install on a Dell BOSS Controller

I had the opportunity to test a Dell vSAN node, and I had an older unattended-install ESXi ISO.
This installed ESXi on the wrong disk.

I hate typing a very complex password twice, so automation is key.
I love the ks.cfg install option.

Following this guide did not do the trick:
https://www.dell.com/support/kbdoc/en-us/000177584/automating-operating-system-deployment-to-dell-boss-techniques-for-different-operating-systems

VMware ESXi Automated Install

This did not work:
install --overwritevmfs --firstdisk="DELLBOSS VD"

After doing a manual install, the installer showed the exact device name of the BOSS controller.

What works:

# For the Dell BOSS controller "Dell BOSS-N1"
install --overwritevmfs --firstdisk="Dell BOSS-N1"
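For context, here is a minimal ks.cfg sketch around that install line; the password and network values are placeholders to adapt:

vmaccepteula
rootpw VMware1!
install --overwritevmfs --firstdisk="Dell BOSS-N1"
network --bootproto=static --ip=192.168.1.50 --netmask=255.255.255.0 --gateway=192.168.1.1 --nameserver=192.168.1.10 --hostname=esx01.lab.local
reboot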

Windows Server 2025 “Preview” deployment with Packer

As the Windows Server 2025 Preview is officially released, I wanted to test an automated build of it, so that I can deploy it in my home lab and test the new features if I can find the time…

About HashiCorp Packer

HashiCorp Packer is a self-contained executable producing quick and easy operating system builds across multiple platforms. Using Packer and a couple of HCL2 files, you can quickly create fully automated templates with the latest Windows updates and VMware Tools. If you schedule a fresh build after Patch Tuesday, you always have an up-to-date and fully patched template.

When you combine this with VMware customization specs, you can spin up VMs in minutes.

Automated Windows Server 2025 “Preview” Build

Files you need
The files and versions I am using at the time of this writing are as follows.

Outside of downloading both Packer and the Windows Server 2025 Preview build, you will need the following files:

  • windowsserver2025.auto.pkrvars.hcl – houses the variable values you want to define
  • windows2025.json.pkr.hcl – the Packer build file
  • Answer file – generated with Windows System Image Manager (SIM); you can download the file below
  • Custom script file(s) – optional

Other considerations and tasks you will need to complete:

  • Copy the Windows Server 2025 ISO file to a vSphere datastore

Windows Server 2025 unattend Answer file for the automated Packer Build

Like other automated approaches to installing Windows Server, the automated Windows Server 2025 Packer build requires an answer file to automatically provide answers to the GUI and other installation prompts that you normally see in a manual installation of Windows Server.

You will find the scripts here: https://github.com/WardVissers/Packer-Win2025

The only problem I had was switching the NIC from Public to Private:

# Set network connection profiles to Private mode.
Write-Output 'Setting the network connection profiles to Private...'

do {
    $connectionProfile = Get-NetConnectionProfile
    Start-Sleep -Seconds 10
} while ($connectionProfile.Name -eq 'Identifying...')

Set-NetConnectionProfile -Name $connectionProfile.Name -NetworkCategory Private
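In the Packer build file this script can be wired in as a PowerShell provisioner. A hypothetical excerpt (the source label and script path are assumptions, not the exact names from my repo):

build {
  sources = ["source.vsphere-iso.windows2025"]

  # Run the network-profile fix inside the guest after the OS install
  provisioner "powershell" {
    scripts = ["scripts/set-network-private.ps1"]
  }
}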

VCF 5.0 running inside Nested ESXi server with only 64GB Memory

So I was interested in trying to deploy the latest release of VMware Cloud Foundation (VCF) 5.0 on my Windows 11 home PC, which has 128GB of memory and a 16-core Intel CPU.

William Lam wrote a nice article about VMware Cloud Foundation 5.0 running on an Intel NUC.

Disclaimer: This is not officially supported by VMware, please use at your own risk.

Requirements:

  • VMware Cloud Builder 5.0 OVA (Build 21822418)
  • VCF 5.0 licenses through VMUG Advantage
  • Home PC (no special hardware)
    – 128GB memory
    – Intel 12600 CPU
    – 4TB of NVMe storage
  • Windows 11 with VMware Workstation 17

Setup

Virtual Machines

  • DC02 (domain controller, DNS server) (4GB, 2 vCPU)
  • VCF-M01-ESX01 (ESXi 8.0 Update 1a) (64GB RAM, 1x 140GB disk, 2x 600GB NVMe, 2x NIC, everything thin provisioned)
  • VCF-M01-CB01 (4GB and 4 vCPU), only needed during the first deployment

Network settings on my PC

  • 1 IP in my home network
  • 172.16.12.1 (to fool Cloud Builder)
  • 172.16.13.1 (to fool Cloud Builder)

Procedure:

Install and Configure ESXi

Step 1 – Boot the ESXi installer from the mounted ISO and then perform a standard ESXi installation.

Step 2 – Once ESXi is up and running, you will need to minimally configure networking along with an FQDN (ensure proper DNS resolution) and NTP, and specify which SSD should be used for the vSAN capacity drive. You can use the DCUI to set up the initial networking, but I recommend switching to the ESXi Shell afterwards and finishing the required preparation steps with the following ESXCLI commands:

esxcli system ntp set -e true -s pool.ntp.org
esxcli system hostname set --fqdn vcf-m01-esx01.wardvissers.nl

Note: Use the vdq -q command to query the available disks for use with vSAN and ensure there are no partitions residing on the 600GB disks.
Don't change the time server pool.ntp.org.

To ensure that the self-signed TLS certificate that ESXi generates matches that of the FQDN that you had configured, we will need to regenerate the certificate and restart hostd for the changes to go into effect by running the following commands within ESXi Shell:

/bin/generate-certificates
/etc/init.d/hostd restart

Cloud Builder Config

Step 3 – Deploy VMware Cloud Builder in a separate environment and wait for it to be accessible over the browser. Once CB is online, download the setup_vmware_cloud_builder_for_one_node_management_domain.sh setup script and transfer it to the CB system using the admin user account (root is disabled by default).

Step 4 – Switch to the root user, make the script executable, and run it as shown below:

su -
chmod +x setup_vmware_cloud_builder_for_one_node_management_domain.sh
./setup_vmware_cloud_builder_for_one_node_management_domain.sh

The script will take some time, especially as it converts the NSX OVA to OVF and back to OVA. If everything was configured successfully, the script finishes with output indicating success.


Step 5 – Download the example JSON deployment file vcf50-management-domain-example.json and adjust the values based on your environment. In addition to changing the hostnames/IP addresses, you will also need to replace all the FILL_ME_IN_VCF_*_LICENSE_KEY placeholders with valid VCF 5.0 license keys.

Step 6 – The vmnic in the Cloud Builder VM is recognized as a 10Gb NIC, so I started the deployment not through PowerShell but the normal way, in the Cloud Builder GUI.

Your deployment time will vary based on your physical resources, but it should eventually complete with everything showing success (I needed one retry to finish).

Problems

Check this if you have problems logging in to NSX:
https://www.wardvissers.nl/2023/07/26/nsx-endless-spinning-blue-cirle-after-login/

Next Steps

1. Redeploy with use of the Holo-Router: https://core.vmware.com/resource/holo-toolkit-20-deploy-router#deploy-holo-router

2. Test whether I can deploy a single-host VCF workload domain the same way, by following this blog post HERE! 😁

That is, if I can start another 64GB ESXi server.

Holodeck Toolkit Overview

Holodeck Toolkit 1.3 Overview

The VMware Cloud Foundation (VCF) Holodeck Toolkit is designed to provide a scalable, repeatable way to deploy nested Cloud Foundation hands-on environments directly on VMware ESXi hosts. These environments are ideal for multi-team hands-on exercises exploring the capabilities of utilizing VCF to deliver a customer-managed VMware Cloud.


Delivering labs in a nested environment solves several challenges with delivering hands-on labs for a product like VCF, including:

  • Reduced hardware requirements: When operating in a physical environment, VCF requires four vSAN Ready Nodes for the management domain, and additional hosts for adding clusters or workload domains. In a nested environment, the same four to eight hosts are easily virtualized to run on a single ESXi host.
  • Self-contained services: The Holodeck Toolkit configuration provides common infrastructure services, such as NTP, DNS, AD, Certificate Services and DHCP, within the environment, removing the need to rely on datacenter-provided services during testing. Each environment needs a single external IP.
  • Isolated networking: The Holodeck Toolkit configuration removes the need for VLAN and BGP connections in the customer network early in the testing phase.
  • Isolation between environments: Each Holodeck deployment is completely self-contained. This avoids conflicts with existing network configurations and allows for the deployment of multiple nested environments on the same hardware or datacenter with no concerns for overlap.
  • Multiple VCF deployments on a single VMware ESXi host with sufficient capacity: A typical VCF Standard Architecture deployment of a four-node management domain and a four-node VI workload domain, plus add-ons such as VMware vRealize Automation, requires approximately 20 CPU cores, 512GB memory and 2.5TB disk.
  • Automation and repeatability: The deployment of nested VCF environments is almost completely hands-off and easily repeatable using configuration files. A typical deployment takes less than 3 hours, with less than 15 minutes of keyboard time.

Nested Environment Overview 

The "VLC Holodeck Standard Main 1.3" configuration is a nested VMware Cloud Foundation configuration used as the baseline for several private cloud operation and consumption lab exercises created by the Cloud Foundation Technical Marketing team. The Holodeck standard "VLC-Holo-Site-1" is the primary configuration deployed. The optional VLC-Holo-Site-2 can be deployed at any later time within a pod. The VLC-Holo-Site-1 configuration matches the lab configuration in the VCF Hands-On Lab HOL-2246 and the nested configuration in the VCF Experience program run on the VMware Lab Platform.

Each pod in a Holodeck deployment runs an identical nested configuration. A pod can be deployed with a standalone VLC-Holo-Site-1 configuration, or with both VLC-Holo-Site-1 and VLC-Holo-Site-2 configurations active. Separation of the pods, and between sites within a pod, is handled at the VMware vSphere Standard Switch (VSS) level. Each Holodeck pod connects to a unique VSS and port group per site. A VMware vSphere port group is configured on each VSS and configured as a VLAN trunk.

  • Components on the port group use VLAN tagging to isolate communications between nested VLANs. This removes the need to have physical VLANs plumbed to the ESXi host to support nested labs.
  • When the Holo-Site-2 configuration is deployed, it uses a second VSS and port group for isolation from Holo-Site-1.

The VLC Holodeck configuration customizes the VCF Cloud Builder virtual machine to provide several support services within the pod, removing the requirement for specific customer-side services. A Cloud Builder VM is deployed per site to provide the following within the pod:

  • DNS (local to Site1 and Site2 within the pod, acts as forwarder) 
  • NTP (local to Site1 and Site2 within the pod) 
  • DHCP (local to Site1 and Site2 within the pod) 
  • L3 TOR for vMotion, vSAN, Management, Host TEP and Edge TEP networks within each site 
  • BGP peer from VLC Tier 0 NSX Application Virtual Network (AVN) Edge (Provides connectivity into NSX overlay networks from the lab console)

The figure below shows a logical view of the VLC-Holo-Site-1 configuration within a Holodeck Pod. The Site-1 configuration uses DNS domain vcf.sddc.lab.

 Figure 1: Holodeck Nested Diagram

The Holodeck package also provides a preconfigured Photon OS VM, called “Holo-Router”, that functions as a virtualized router for the base environment. This VM allows for connecting the nested environment to the external world. The Holo-Router is configured to forward any Microsoft Remote Desktop (RDP) traffic to the nested jump host, known as the Holo-Console, which is deployed within the pod.

The user interface to the nested VCF environment is a Windows Server 2019 "Holo-Console" virtual machine. Holo-Console provides a place to manage the internal nested environment, like a system administrator's desktop in a datacenter. Holo-Console is used to run the VLC package that deploys the nested VCF instance inside the pod. Holo-Console VMs are deployed from a custom-built ISO that configures the following:

  • Microsoft Windows Server 2019 Desktop Experience with: 
  • Active Directory domain "vcf.holo.lab" 
  • DNS Forwarder to Cloud Builder  
  • Certificate Server, Web Enrollment and VMware certificate template 
  • RDP enabled 
  • IP, Subnet, Gateway, DNS and VLAN configured for deployment as Holo-Console  
  • Firewall and IE Enhanced security disabled  
  • SDDC Commander custom desktop deployed 
  • Additional software packages deployed and configured 
  • Google Chrome with Holodeck bookmarks 
  • VMware Tools 
  • VMware PowerCLI 
  • VMware PowerVCF 
  • VMware Power Validated Solutions 
  • PuTTY SSH client 
  • VMware OVFtool 
  • Additional software packages copied to Holo-Console for later use 
  • VMware Cloud Foundation 4.5 Cloud Builder OVA to C:\CloudBuilder 
  • VCF Lab Constructor 4.5.1 with dual site Holodeck configuration
    • VLC-Holo-Site-1 
    • VLC-Holo-Site-2 
  • VMware vRealize Automation 8.10 Easy Installer

The figure below shows the virtual machines running on the physical ESXi host to deliver a Holodeck pod called "Holo-A". Notice an instance of Holo-Console, Holo-Router, Cloud Builder and four nested ESXi hosts. They all communicate over the VLC-A-PG port group.

Figure 2: Holodeck Nested Hosts

Adding a second site adds an additional instance of Cloud Builder and additional nested ESXi hosts. VLC-Holo-Site-2 connects to the second internal leg of the Holo-Router on VLAN 20. Network access from the Holo-Console to VLC-Holo-Site-2 is via Holo-Router.

The figure below shows a logical view of the VLC-Holo-Site-2 configuration within a Holodeck pod. The Site-2 configuration uses DNS domain vcf2.sddc.lab.

 Figure 3: Holodeck Site-2 Diagram

Accessing the Holodeck Environment

User access to the Holodeck pod is via the Holo-Console.  Access to Holo-Console is available via two paths:

VLC Holodeck Deployment Prerequisites 

  • ESXi Host Sizing
    • Good (one pod): Single ESXi host with 16 cores, 384GB memory and 2TB SSD/NVMe
    • Better (two pods): Single ESXi host with 32 cores, 768GB memory and 4TB SSD/NVMe
    • Best (four or more pods): Single ESXi host with 64+ cores, 2.0TB memory and 10TB SSD/NVMe
  • ESXi Host Configuration
    • vSphere 7.0U3
    • Virtual switch and port group configured with uplinks to the customer network/internet
    • Supports a standalone, non-vCenter-managed host or a single-host cluster managed by a vCenter Server instance
    • Multi-host clusters are NOT supported
  • Holo-Build host
    • Windows 2019 host or VM with local access to the ESXi hosts used for Holodeck, plus internet access to download software (this package has been tested on Microsoft Windows Server 2019 only)
    • 200GB free disk space
    • Valid login to https://customerconnect.vmware.com
    • Entitlement to VCF 4.5 Enterprise for 8 hosts minimum (16 hosts if planning to test Cloud Foundation multi-region with NSX Federation)
  • License keys for the following VCF 4.5 components:
    • VMware Cloud Foundation
    • VMware NSX-T Data Center Enterprise
    • VMware vSAN Enterprise
    • VMware vSphere Enterprise Plus
    • VMware vCenter Server (one license)
    • VMware vRealize Suite Advanced or Enterprise (note: this product has been renamed VMware Aria Suite)
  • External/customer networks required:
    • ESXi host management IP (one per host)
    • Holo-Router address per pod

Building NSX-T 3.1 Home Lab Step 1

I'm doing a mini-series on my NSX-T home lab setup. It's only for testing and learning about NSX-T.

With NSX-T 3.1 and later, a couple of enhancements have been made that make the setup a lot easier, like the ability to run NSX on a Virtual Distributed Switch (VDS) in vCenter with VDS version 7.0, instead of a dedicated N-VDS.

In NSX-T 3.1.1 we got the ability to have the Edge TEP on the same subnet as the hypervisor TEP. A nice write-up of this feature can be found here: https://www.virten.net/2020/11/nsx-t-3-1-enhancement-shared-esxi-and-edge-transport-vlan-with-a-single-uplink/

Lab environment

First let's have a quick look at the lab environment.

Compute

I have one Dell R730 ESXi server. I use only one NIC for management and virtual machine traffic.

Network

My home network consists of a single VLAN:

VLAN | Subnet           | Role                               | Virtual Switch
0    | 192.168.150.0/24 | Management/Virtual Machine Traffic | vSwitch0

Also ensure you enable the required security settings (promiscuous mode and forged transmits) to support nested virtualization.

Virtual Machines

I run a virtualized vSphere 7 cluster on my host.

The Distributed Virtual Switches are running version 7.0.0, which lets us deploy NSX-T on the VDS directly.
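A quick way to verify this with PowerCLI (assuming an active Connect-VIServer session):

# List distributed switches and their versions
Get-VDSwitch | Select-Object Name, Version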

Preparations

Check out the NSX-T Data Center Workflow for vSphere for details and documentation on the process.

IP Addresses and DNS records

Before deploying NSX-T in the environment I've prepared a few IP addresses and DNS records:

Role                                     | IP
NSX Manager                              | 192.168.150.229
NSX-T Edge node 1                        | 192.168.150.227
NSX-T Edge node 2 (currently not in use) | 192.168.150.228
NSX-T T0 GW Interface 1                  | 192.168.99.2

Note that I’ve reserved addresses for a second Edge which I’m not going to use at the moment.

Deploy NSX manager appliance

VMware documentation reference

I downloaded the NSX Manager appliance and imported the OVF to the cluster. I won't go into details about this; I just followed the deployment wizard.

In my lab I've selected to deploy a small appliance, which requires 4 vCPUs, 16GB RAM and 300GB disk space. For more details about the NSX Manager requirements, look at the official documentation.

Note that I'll not be deploying an NSX Manager cluster in my setup. In a production environment you should naturally follow best practices and configure a cluster of NSX Managers.

NSX-T deployment

Now let’s get rocking with our NSX-T setup!

We'll start the NSX Manager and prepare it for configuring NSX in the environment.

Initial Manager config

After first login I'll accept the EULA and optionally enable the CEIP.

License

Next I’ll add the license.

Add license

Import certificate

Imported certificates

IP Pools

Our endpoints will need IP addresses, and I've set aside a subnet for this as mentioned. In NSX Manager we'll add an IP pool with addresses from this subnet. (The IP pool I'm using is probably way larger than needed in a lab setup like this.)


TEP pool

Compute Manager

With all that sorted we'll connect the NSX Manager to our vCenter Server so we can configure our ESXi hosts and deploy our Edge nodes.

It's best to use a specific service account for the connection.


Compute Manager added

Fabric configuration

Now we're ready to build out our network fabric, which will consist of the following:

  • Transport Zones
    • Overlay
    • VLAN
  • Transport Nodes
    • ESXi Hosts
    • Edge VMs
  • Edge clusters

Take a look at this summary of the Key concepts in NSX-T to learn more about them.

Transport Zone

The first things we'll create are the Transport Zones. These will be used on multiple occasions later on. A Transport Zone is used as a collection of hypervisor hosts that makes up the span of logical switches.

The defaults could be used, but I like to create my own.


Transport Zones

Uplink Profiles

Uplink profiles will be used when we configure our Transport Nodes, both Hosts and Edge VMs. The profile defines how a Host Transport node (hypervisor) or an Edge Transport node (VM) will connect to the physical network.

Again I'm creating my own profile and leaving the default profiles as they are.


Uplink profile

In my environment I have only one Uplink to use. Note that I’ve set the Transport VLAN to 0 which also corresponds with the TEP VLAN mentioned previously.

Transport Node Profile

Although not strictly needed, I'm creating a Transport Node profile, which will let me configure an entire cluster of hosts with the same settings instead of having to configure each and every host.

In the Transport Node profile we first select the type of Host switch. In my case I’m selecting the VDS option, which will let me select a specific switch in vCenter.

We'll also add in our newly created Transport Zones.


Creating Transport Node profile

We'll select our Uplink profile and the IP pool we created earlier; finally we can set the mapping of the uplinks.

vCenter View


Configure NSX on hosts

With our Transport Node profile in place, we can go ahead and configure our ESXi hosts for NSX.

Configure cluster for NSX


Select profile

After selecting the profile NSX Manager will go ahead and configure our ESXi hosts.

Hosts configuring

After a few minutes our hosts should be configured and ready for NSX.


Hosts configured

Trunk segment

Next up is creating our Edge VMs, which we will need for our gateways and services (NAT, DHCP, load balancer).

But before we deploy those, we'll have to create a segment for the uplink of the Edge VMs. This will be a trunk segment which we create in NSX. Initially I created a trunk portgroup on the VDS in vSphere, but that doesn't work: the trunk needs to be configured as a logical segment in NSX-T when using the same VLAN for both the hypervisor TEPs and the Edge VM TEPs.


Trunk segment

Edge VM

Now we can deploy our Edge VM(s). I'm using medium-sized VMs in my environment. Note that the Edge VMs are not strictly necessary for the test we'll perform later on with connecting two VMs, but if we want to use services later on, like DHCP and load balancing, we'll need them.

Deploy edge VM

Note the NSX config, where we set the switch name, the Transport Zones we created, the Uplink profile, the IP pool, and finally the newly created trunk segment for the Edge uplink.

NSX Edge config

Edge cluster

We'll also create an Edge cluster and add the Edge VM to it.

Edge cluster

Summary

Wow, this was a lot of configuring, but that was also the whole point of doing this blog post. Stuff like this is learnt best by getting your hands dirty and doing some actual work. And I learn even better when I'm writing and documenting it as well.

In the next blog post we'll test the fabric to see if what we've done is working. We'll also try to get some external connectivity to our environment.

Hopefully this post can help someone; if not, it has at least helped me.

Thanks for reading!

Special thanks to https://rudimartinsen.com/2021/06/29/nsx-t-31-homelab/ for his blog post.

Important information before upgrading to vSphere 6.7 (KB53704)

This article provides important documentation and upgrade information that must be reviewed before upgrading to vSphere 6.7.


Resolution


Compatibility considerations


These products are not compatible with vSphere 6.7 at this time:

  • VMware NSX
  • VMware Integrated OpenStack (VIO)
  • VMware vSphere Integrated Containers (VIC)
  • VMware Horizon

Environments with these products should not be upgraded to vSphere 6.7 at this time. This article and the VMware Product Interoperability Matrixes will be updated when a compatible release is available.

Upgrade Considerations

Before upgrading your environment to vSphere 6.7, review these critical articles to ensure a successful upgrade.
For vSphere

Upgrades to vSphere 6.7 are only possible from vSphere 6.0 or vSphere 6.5. If you are currently running vSphere 5.5, you must first upgrade to either vSphere 6.0 or vSphere 6.5 before upgrading to vSphere 6.7.

For vCenter Server

For Distributed Virtual Switches
