First you have to deploy the VMware Cloud Foundation installer.
The name of the installer is a little off: VCF-SDDC-Manager-Appliance-9.0.0.0.24703748.ova
Download token
If you don’t have a download token from Broadcom support, it’s a little complicated. I won’t go into this in depth now, but here is a nice article if you have a Synology NAS: VCF 9.0 Offline Depot using Synology
(Note for any vExpert reading this: you need to claim your own domain name, because building your profile does not work with @hotmail.com or @gmail.com accounts, as noted by Broadcom.)
I had deployed the beta at an earlier stage. For this deployment I started with the same JSON file. The only change I made is that VCF Automation is now deployed automatically.
It’s running!
Performance running nested!!
Above is a screenshot from the host with the nested VCF lab deployment; it’s quite CPU intensive (2 x CPU, 8 cores / 16 threads each).
Optimization
After the deployment I did three things:
– On the vSAN ESA cluster, disable Auto Disk Claim (keeps the warning away)
– On the NSX VM, reduce the CPU reservation from 6000 🡪 2000 MHz (it helps, but not enough)
– On the VCF Automation VM, reduce 24 cores to 16 cores.
NSX and VCF Automation are really CPU intensive.
I had also deployed two Edge servers, but the server did not like that. Edges are also CPU intensive.
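The reservation and vCPU changes above can also be scripted. Here is a minimal PowerCLI sketch; the vCenter and VM names are placeholders for the NSX Manager and VCF Automation appliances in this lab, so adjust them to your deployment:

```powershell
# Minimal sketch; VM and vCenter names below are placeholders.
Connect-VIServer -Server vcf-m01-vc01.wardvissers.nl

# Reduce the NSX Manager CPU reservation from 6000 MHz to 2000 MHz
Get-VM -Name 'nsx-mgmt-01' |
    Get-VMResourceConfiguration |
    Set-VMResourceConfiguration -CpuReservationMhz 2000

# Resize the VCF Automation appliance from 24 to 16 vCPUs
# (shut the guest down first; changing NumCpu requires a powered-off VM)
Get-VM -Name 'vcf-automation-01' | Shutdown-VMGuest -Confirm:$false
while ((Get-VM -Name 'vcf-automation-01').PowerState -ne 'PoweredOff') {
    Start-Sleep -Seconds 5
}
Set-VM -VM 'vcf-automation-01' -NumCpu 16 -Confirm:$false
Get-VM -Name 'vcf-automation-01' | Start-VM
```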
I am thinking about adding an MS-01 or MS-A2 to split off some of the load.
By passing either of the new VCP-VCF level certification exam(s), anyone maintaining an active VMUG Advantage membership can receive 3 years worth of extensive VMware Cloud Foundation licensing for home lab use!
The VMUG Advantage program has offered affordable home-lab VMware licensing packages for years, and used to cover most of the product portfolio.
Last year Broadcom changed this.
Option 1: Get vSphere Standard Edition (32 cores) for 1 year: pass one of the following VCP certification exams:
VCP-VVF (admin/architect)
VCP-VCF (admin/architect)
Option 2: Get VMware Cloud Foundation (VCF) 128 cores for 3 years: purchase & maintain VMUG Advantage and pass the following VCP certification exam:
VCP-VCF (admin/architect)
A VMUG Advantage membership was complimentary for vExperts in 2025.
The membership is otherwise $210 and includes a voucher for a 50%-discounted VCP-VCF exam.
Add-ons: PerformanceServiceEnabled, PerformanceStatsStoragePolicy, faultdomaincount, StretchedClusterEnabled, vSanFailureToTolerate (works only in a second run; work in progress)
You can schedule the script and have the report sent to your e-mail.
I had the opportunity to test a Dell vSAN node, and I used an older unattended-install ESXi ISO.
This installed the ESXi OS on the wrong disk. After a correct install, vSAN did not see this disk as ready for use. Combining the following articles, Dell VxRail vSAN Drives ineligible and identify-and-solve-ineligible-disk-problems-in-virtual-san,
I solved this problem with the following steps:
Step 1: Identify the Disk with vdq -qH
Step 2: Use partedUtil get "/dev/disks/<DISK>" to list all partitions:
partedUtil get "/dev/disks/t10.NVMe____Dell_Ent_NVMe_CM6_MU_3.2TB______________017D7D23E28EE38C"
Step 3: This disk has 2 partitions. Use the partedUtil delete "/dev/disks/<DISK>" <PARTITION> command to delete all partitions:
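Put together, the three steps look like this in the ESXi Shell. The disk ID is the one from this Dell node; your disk ID and partition numbers will differ:

```shell
# Step 1: identify the ineligible disk (look for the NVMe device in the output)
vdq -qH

# Step 2: list the partitions on the disk
DISK="t10.NVMe____Dell_Ent_NVMe_CM6_MU_3.2TB______________017D7D23E28EE38C"
partedUtil get "/dev/disks/${DISK}"

# Step 3: delete each partition by number (this disk had partitions 1 and 2)
partedUtil delete "/dev/disks/${DISK}" 1
partedUtil delete "/dev/disks/${DISK}" 2

# Verify the disk is now eligible for vSAN
vdq -qH
```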
Installing Features on Demand through PowerShell contains a bug. You may see “failure to download files”, “cannot download”, or errors like 0x800F0954 or “file not found”.
To solve that, I created a PowerShell script that runs the install twice: featuresondemand.ps1
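A minimal sketch of such a retry script; the capability name below is only an example, not the one from featuresondemand.ps1 (list valid names with Get-WindowsCapability -Online):

```powershell
# Retry the Features on Demand install once, because the first attempt
# may fail with 0x800F0954. The capability name is an example.
$capability = 'Rsat.ActiveDirectory.DS-LDS.Tools~~~~0.0.1.0'

foreach ($attempt in 1..2) {
    $state = (Get-WindowsCapability -Online -Name $capability).State
    if ($state -eq 'Installed') { break }
    try {
        Add-WindowsCapability -Online -Name $capability -ErrorAction Stop
    } catch {
        Write-Warning "Attempt ${attempt}: $($_.Exception.Message)"
    }
}
```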
So I was interested in trying to deploy the latest release of VMware Cloud Foundation (VCF) 5.0 on my Windows 11 home PC, which has 128GB of memory and a 16-core Intel CPU.
VCF-M01-CB01 (4GB and 4 vCPU) – only needed during the first deployment
Network settings on my PC
1 IP in my home network
172.16.12.1 (To Fool Cloudbuilder)
172.16.13.1 (To Fool Cloudbuilder)
Procedure:
Install and Configure ESXi
Step 1 – Boot the ESXi installer from the mounted ISO and perform a standard ESXi installation.
Step 2 – Once ESXi is up and running, you will need to minimally configure networking along with an FQDN (ensure proper DNS resolution) and NTP, and specify which SSD should be used as the vSAN capacity drive. You can use the DCUI to set up the initial networking, but I recommend switching to the ESXi Shell afterwards and finishing the required preparation steps, as demonstrated in the following ESXCLI commands:
esxcli system ntp set -e true -s pool.ntp.org
esxcli system hostname set --fqdn vcf-m01-esx01.wardvissers.nl
Note: Use the vdq -q command to query the disks available for use with vSAN, and ensure there are no partitions residing on the 600GB disks.
Don’t change time server pool.ntp.org.
To ensure that the self-signed TLS certificate that ESXi generates matches that of the FQDN that you had configured, we will need to regenerate the certificate and restart hostd for the changes to go into effect by running the following commands within ESXi Shell:
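Those commands typically look like this, using standard ESXi Shell tooling (a sketch; verify the paths on your ESXi build):

```shell
# Regenerate the self-signed certificate so it matches the configured FQDN,
# then restart hostd for the change to take effect.
/sbin/generate-certificates
/etc/init.d/hostd restart && /etc/init.d/vpxa restart
```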
Step 3 – Deploy the VMware Cloud Builder appliance in a separate environment and wait for it to be accessible in the browser. Once CB is online, download the setup_vmware_cloud_builder_for_one_node_management_domain.sh setup script and transfer it to the CB system using the admin user account (root is disabled by default).
Step 4 – Switch to the root user, give the script executable permission, and run it as shown below:
su -
chmod +x setup_vmware_cloud_builder_for_one_node_management_domain.sh
./setup_vmware_cloud_builder_for_one_node_management_domain.sh
The script will take some time, especially as it converts the NSX OVA → OVF → OVA; if everything was configured successfully, you should see the same output as in the screenshot above.
Step 5 – Download the example JSON deployment file vcf50-management-domain-example.json and adjust the values based on your environment. In addition to changing the hostnames/IP addresses, you will also need to replace all the FILL_ME_IN_VCF_*_LICENSE_KEY values with valid VCF 5.0 license keys.
Step 6 – The vmnic in the Cloud Builder VM is detected as a 10GbE NIC, so I started the deployment not through PowerShell but the normal way, in the Cloud Builder GUI.
Your deployment time will vary based on your physical resources, but it should eventually complete with everything showing success, as in the screenshot below. (I needed one retry to finish.)
Here are some screenshots VCF 5.0 deployment running on my home PC.
The VMware Cloud Foundation (VCF) Holodeck Toolkit is designed to provide a scalable, repeatable way to deploy nested Cloud Foundation hands-on environments directly on VMware ESXi hosts. These environments are ideal for multi-team hands-on exercises exploring the capabilities of utilizing VCF to deliver a Customer Managed VMware Cloud.
Delivering labs in a nested environment solves several challenges with delivering hands-on for a product like VCF, including:
Reduced hardware requirements: When operating in a physical environment, VCF requires four vSAN Ready Nodes for the management domain, and additional hosts for adding clusters or workload domains. In a nested environment, the same four to eight hosts are easily virtualized to run on a single ESXi host.
Self-contained services: The Holodeck Toolkit configuration provides common infrastructure services, such as NTP, DNS, AD, Certificate Services and DHCP within the environment, removing the need to rely on datacenter provided services during testing. Each environment needs a single external IP.
Isolated networking. The Holodeck Toolkit configuration removes the need for VLAN and BGP connections in the customer network early in the testing phase.
Isolation between environments. Each Holodeck deployment is completely self-contained. This avoids conflicts with existing network configurations and allows for the deployment of multiple nested environments on same hardware or datacenter with no concerns for overlap.
Multiple VCF deployments on a single VMware ESXi host with sufficient capacity. A typical VCF Standard Architecture deployment of a four-node management domain and a four-node VI workload domain, plus add-ons such as VMware vRealize Automation, requires approximately 20 CPU cores, 512GB memory and 2.5TB disk.
Automation and repeatability. The deployment of nested VCF environments is almost completely hands-off and easily repeatable using configuration files. A typical deployment takes less than 3 hours, with less than 15 minutes of keyboard time.
Nested Environment Overview
The “VLC Holodeck Standard Main 1.3” configuration is a nested VMware Cloud Foundation configuration used as the baseline for several Private Cloud operation and consumption lab exercises created by the Cloud Foundation Technical Marketing team. The Holodeck standard “VLC-Holo-Site-1” is the primary configuration deployed. The optional VLC-Holo-Site-2 can be deployed at any time later within a Pod. VLC-Holo-Site-1 configuration matches the lab configuration in the VCF Hands-On Lab HOL-2246 and the nested configuration in the VCF Experience program run on the VMware Lab Platform.
Each Pod on a Holodeck deployment runs an identical nested configuration. A pod can be deployed with a standalone VLC-Holo-Site-1 configuration, or with both VLC-Holo-Site-1 and VLC-Holo-Site-2 configurations active. Separation of the pods and between sites within a pod is handled at the VMware vSphere Standard Switch (VSS) level. Each Holodeck pod connects to a unique VSS and Port Group per site. A VMware vSphere Port Group is configured on each VSS and configured as a VLAN trunk.
Components on the port group use VLAN tagging to isolate communications between nested VLANs. This removes the need to have physical VLANs plumbed to the ESXi host to support nested labs.
When the Holo-Site-2 configuration is deployed, it uses a second VSS and Port Group for isolation from Holo-Site-1.
The VLC Holodeck configuration customizes the VCF Cloud Builder Virtual Machine to provide several support services within the pod to remove the requirement for specific customer side services. A Cloud Builder VM is deployed per Site to provide the following within the pod:
DNS (local to Site1 and Site2 within the pod, acts as forwarder)
NTP (local to Site1 and Site2 within the pod)
DHCP (local to Site1 and Site2 within the pod)
L3 TOR for vMotion, vSAN, Management, Host TEP and Edge TEP networks within each site
BGP peer from VLC Tier 0 NSX Application Virtual Network (AVN) Edge (Provides connectivity into NSX overlay networks from the lab console)
The figure below shows a logical view of the VLC-Holo-Site-1 configuration within a Holodeck Pod. The Site-1 configuration uses DNS domain vcf.sddc.lab.
Figure 1: Holodeck Nested Diagram
The Holodeck package also provides a preconfigured Photon OS VM, called “Holo-Router”, that functions as a virtualized router for the base environment. This VM allows for connecting the nested environment to the external world. The Holo-Router is configured to forward any Microsoft Remote Desktop (RDP) traffic to the nested jump host, known as the Holo-Console, which is deployed within the pod.
The user interface to the nested VCF environment is a Windows Server 2019 “Holo-Console” virtual machine. Holo-Console provides a place to manage the internal nested environment, like a system administrator’s desktop in a datacenter. Holo-Console is used to run the VLC package to deploy the nested VCF instance inside the pod. Holo-Console VMs are deployed from a custom-built ISO that configures the following:
Microsoft Windows Server 2019 Desktop Experience with:
Active directory domain “vcf.holo.lab”
DNS Forwarder to Cloud Builder
Certificate Server, Web Enrollment and VMware certificate template
RDP enabled
IP, Subnet, Gateway, DNS and VLAN configured for deployment as Holo-Console
Firewall and IE Enhanced security disabled
SDDC Commander custom desktop deployed
Additional software packages deployed and configured
Google Chrome with Holodeck bookmarks
VMware Tools
VMware PowerCLI
VMware PowerVCF
VMware Power Validated Solutions
PuTTY SSH client
VMware OVFtool
Additional software packages copied to Holo-Console for later use
VMware Cloud Foundation 4.5 Cloud Builder OVA to C:\CloudBuilder
VCF Lab Constructor 4.5.1 with dual site Holodeck configuration
VLC-Holo-Site-1
VLC-Holo-Site-2
VMware vRealize Automation 8.10 Easy Installer
The figure below shows the virtual machines running on the physical ESXi host to deliver a Holodeck Pod called “Holo-A”. Notice an instance of Holo-Console, Holo-Router, Cloud Builder and four nested ESXi hosts. They all communicate over the VLC-A-PG Port Group.
Figure 2: Holodeck Nested Hosts
Adding a second site adds an additional instance of Cloud Builder and additional nested ESXi hosts. VLC-Holo-Site-2 connects to the second internal leg of the Holo-Router on VLAN 20. Network access from the Holo-Console to VLC-Holo-Site-2 is via Holo-Router.
The figure below shows a logical view of the VLC-Holo-Site-2 configuration within a Holodeck Pod. The Site-2 configuration uses DNS domain vcf2.sddc.lab.
Figure 3: Holodeck Site-2 Diagram
Accessing the Holodeck Environment
User access to the Holodeck pod is via the Holo-Console. Access to Holo-Console is available via two paths:
Microsoft Remote Desktop Protocol (RDP) connection to the external IP of the Holo-Router. Holo-Router is configured to forward all RDP traffic to the instance of Holo-Console inside the pod.
Good (one pod): Single ESXi host with 16 cores, 384GB memory and 2TB SSD/NVMe
Better (two pods): Single ESXi host with 32 cores, 768GB memory and 4TB SSD/NVMe
Best (four or more pods): Single ESXi host with 64+ cores, 2.0TB memory and 10TB SSD/NVMe
ESXi Host Configuration:
vSphere 7.0U3
Virtual switch and port group configured with uplinks to customer network/internet
Supports a standalone, non-vCenter-Server-managed host, or a single-host cluster managed by a vCenter Server instance
Multi-host clusters are NOT supported
Holo-Build host
Windows 2019 host or VM with local access to the ESXi hosts used for Holodeck, plus internet access to download software. (This package has been tested on Microsoft Windows Server 2019 only.)
In March 2020, Microsoft is going to release an update that will essentially disable the use of unsigned LDAP, which will become the default. This means that you can no longer use bindings or services that bind to domain controllers over unsigned LDAP on port 389. You can either use LDAPS over port 636 or StartTLS on port 389, but both still require that you add a certificate to your domain controllers. This hardening can be done manually until the release of the security update that will enable these settings by default.
How to add signed LDAPS to your domain controllers
After the change the following features will be supported against Active Directory.
How will this affect my environment?
Clients that rely on unsigned SASL (Negotiate, Kerberos, NTLM, or Digest) LDAP binds, or on LDAP simple binds over a non-SSL/TLS connection, stop working after you make this configuration change. This also applies to third-party solutions that rely on LDAP, such as Citrix NetScaler/ADC or other network appliances, Vault, and other authentication mechanisms. If you haven’t fixed this, it will stop working. This update applies to all of the following versions:
Windows Server 2008 SP2, Windows 7 SP1, Windows Server 2008 R2 SP1, Windows Server 2012, Windows 8.1, Windows Server 2012 R2, Windows 10 1507, Windows Server 2016, Windows 10 1607, Windows 10 1703, Windows 10 1709, Windows 10 1803, Windows 10 1809, Windows Server 2019, Windows 10 1903, Windows 10 1909
How to check if something is using unsigned LDAP?
If the directory server is configured to reject unsigned SASL LDAP binds or LDAP simple binds over a non-SSL/TLS connection, the directory server will log a summary under eventid 2888 one time every 24 hours when such bind attempts occur. Microsoft advises administrators to enable LDAP channel binding and LDAP signing as soon as possible before March 2020 to find and fix any operating systems, applications or intermediate device compatibility issues in their environment.
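A quick way to look for these summary events is to query the Directory Service event log on a domain controller. A hedged PowerShell sketch (the DC name is an example; events 2886/2887 report the current policy and unsigned-bind counts, 2889 lists the offending clients):

```powershell
# Query a domain controller's Directory Service log for the events that
# indicate unsigned/simple LDAP binds. Run on or against a DC.
Get-WinEvent -ComputerName dc01 -FilterHashtable @{
    LogName = 'Directory Service'
    Id      = 2886, 2887, 2888, 2889
} -MaxEvents 50 |
    Select-Object TimeCreated, Id, Message |
    Format-List
```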
New in VMware vSphere PowerCLI 11.0 is support for the latest updates and releases of VMware products. The following features have been added:
New Security module
vSphere 6.7 Update 1
NSX-T 2.3
Horizon View 7.6
vCloud Director 9.5
Host Profiles – new cmdlets for interacting with host profiles
New Storage Module updates
NSX-T in VMware Cloud on AWS
Cloud module multiplatform support
Get-ErrorReport cmdlet has been updated
PCloud module has been removed
HA module has been removed
Now we will go through the above-mentioned new features to see what functionality they bring to PowerCLI 11.0.
What is the PowerCLI 11.0 New Security Module?
The new security module brings more powerful automation features to PowerCLI 11.0. The newly available cmdlets include the following:
Get-SecurityInfo
Get-VTpm
Get-VTpmCertificate
Get-VTpmCSR
New-VTpm
Remove-VTpm
Set-VTpm
Unlock-VM
The New-VM cmdlet has also been enhanced with security-module functionality: it includes parameters like KmsCluster, StoragePolicy and SkipHardDisks, which can be used while creating new virtual machines with PowerCLI. In addition, the Set-VM, Set-VMHost, Set-HardDisk, and New-HardDisk cmdlets have been extended as well.
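A minimal sketch of the new parameters in action; the vCenter, KMS cluster, storage policy and inventory names are placeholders for your environment:

```powershell
# Create an encrypted VM using the PowerCLI 11.0 security parameters.
# All names below are examples.
Connect-VIServer -Server vcsa.lab.local

$kms    = Get-KmsCluster -Name 'kms-cluster-01'
$policy = Get-SpbmStoragePolicy -Name 'VM Encryption Policy'

New-VM -Name 'secure-vm01' `
       -ResourcePool (Get-Cluster -Name 'Compute-01') `
       -Datastore (Get-Datastore -Name 'vsanDatastore') `
       -KmsCluster $kms `
       -StoragePolicy $policy `
       -SkipHardDisks
```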
Host Profile Additions
There are a few additions to the VMware.VimAutomation.Core module that will make managing host profiles from PowerCLI easier:
Get-VMHostProfileUserConfiguration
Set-VMHostProfileUserConfiguration
Get-VMHostProfileStorageDeviceConfiguration
Set-VMHostProfileStorageDeviceConfiguration
Get-VMHostProfileImageCacheConfiguration
Set-VMHostProfileImageCacheConfiguration
Get-VMHostProfileVmPortGroupConfiguration
Set-VMHostProfileVmPortGroupConfiguration
Storage Module Updates
These new Storage Module updates are specifically for VMware vSAN. The updates add predefined time ranges when using Get-VsanStat. In addition, Get-VsanDisk returns additional new properties, including capacity, used percentage, and reserved percentage. The following cmdlets have been added to automate vSAN:
Get-VsanObject
Get-VsanComponent
Get-VsanEvacuationPlan – provides information about bringing a host into maintenance mode and the impact of the operation on the data (movement, etc.)
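A hedged sketch of these cmdlets in use; the cluster and host names are examples, and the exact parameter sets may differ per PowerCLI build:

```powershell
# Explore the new vSAN cmdlets in PowerCLI 11.0. Names are placeholders.
Connect-VIServer -Server vcsa.lab.local
$cluster = Get-Cluster -Name 'vSAN-Cluster'

# List vSAN disks; the returned objects now also carry capacity,
# used percentage and reserved percentage.
Get-VsanDiskGroup -Cluster $cluster | Get-VsanDisk

# Check the impact on vSAN data of bringing a host into maintenance mode
Get-VsanEvacuationPlan -VMHost (Get-VMHost -Name 'esx01.lab.local')
```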