How to update standalone ESX(i) server

Sometimes I write stuff for myself and publish it as an archive piece, like this one.

For 8.0U3

 esxcli software sources profile list -d https://dl.broadcom.com/<Your_Broadcom_Download_Token>/PROD/COMP/ESX_HOST/main/vmw-depot-index.xml | grep -i ESXi-8.0U3


For ESX(i) 9

esxcli software sources profile list -d https://dl.broadcom.com/<Your_Broadcom_Download_Token>/PROD/COMP/ESX_HOST/main/vmw-depot-index.xml | grep -i ESXi-9

If you get a MemoryError, or you do not get a list of versions at all, run the following commands:

esxcli system settings advanced set -o /VisorFS/VisorFSPristineTardisk -i 0
cp /usr/lib/vmware/esxcli-software /usr/lib/vmware/esxcli-software.bak
sed -i 's/mem=300/mem=500/g' /usr/lib/vmware/esxcli-software.bak
mv -f /usr/lib/vmware/esxcli-software.bak /usr/lib/vmware/esxcli-software
esxcli system settings advanced set -o /VisorFS/VisorFSPristineTardisk -i 1

Source: https://williamlam.com/2024/03/quick-tip-using-esxcli-to-upgrade-esxi-8-x-throws-memoryerror-or-got-no-data-from-process.html

Install the latest update (ESXi-8.0U3g-24859861-standard)

esxcli software profile update -p ESXi-8.0U3g-24859861-standard -d https://dl.broadcom.com/<Your_Broadcom_Download_Token>/PROD/COMP/ESX_HOST/main/vmw-depot-index.xml
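
After the update finishes, reboot the host and check the result. A minimal verification sketch (same profile as installed above):

# Reboot to apply the new image profile
reboot

# After the host is back up, verify the running image profile and version
esxcli software profile get
esxcli system version get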

Running VCF 9 in a nested lab using vSAN ESA on a single host: isn't that great?

For the deployment I used a server that has 16 cores / 32 threads in total and a lot of RAM!

Installation of the Nested ESXi VMs

For the nested ESXi VMs I am using the generic ESXi image. In my lab I'm using 4 nested ESXi nodes for vSAN ESA (the minimum needed is 3).

  • Ideally give each nested ESXi VM 24 vCPUs, but VCF Automation can also be deployed with 16 vCPUs
  • Check "Expose hardware assisted virtualization to the guest OS" under CPU
  • Add an NVMe controller and connect the disks to this controller if you plan to use vSAN ESA (don't forget to remove the old controller)
  • Add at least two VMXNET3 NICs connected to the same network
  • Choose VMware ESXi 8.0 or later as the Guest OS version.
  • Change the IDE CD-ROM to a SATA CD-ROM (see: Failed to locate kickstart on Nested ESXi VM CD-ROM in VCF 9.0)


Configure the 4 hosts for vSAN ESA

To configure the 4 hosts for vSAN ESA I used my own PowerCLI script, which I blogged about here: Config vSAN ESA host or VCF ESA vSAN Host the easy way with the Config-VSAN-ESA-VCF-Lab-Host script

You can download the script HERE!

Installing the VIB still works great. No problem with the ESA pre-check: ✅

If you have problems check this great blog from William Lam: vSAN ESA Disk & HCL Workaround for VCF 9.0

VCF installer.

First you have to deploy the VMware Cloud Foundation installer appliance.

The name of the installer is a little off: VCF-SDDC-Manager-Appliance-9.0.0.0.24703748.ova

Download token

If you don't have a download token from Broadcom support, it's a little more complicated. I won't go into it in depth now, but here is a nice article if you have a Synology NAS: VCF 9.0 Offline Depot using Synology

(Note for any vExpert reading this: you need to claim your own domain name, because building your profile does not work with @hotmail.com or @gmail.com accounts, as noted by Broadcom.)


It looks something like this!

10GBE Nic Pre-Check Issue

The next thing to do was: Disable 10GbE NIC Pre-Check in the VCF 9.0 Installer. This should also work: How to change the vmxnet3 link speed of a VM (not tested).


Deployment

I had deployed the beta at an earlier stage, so for this deployment I started with the same JSON file.

VCF Automation is now deployed automatically; that was the only change I made.



It’s running!


Performance running Nested!!


Above is a screenshot from the host running the nested VCF lab deployment; it is quite CPU intensive (2 x CPU, 8 cores / 16 threads each).

Optimization

After the deployment I did three things:
– On the vSAN ESA cluster: disable Auto Disk Claim (keeps the warning away)
– On the NSX VM: reduce the CPU reservation from 6000 to 2000 (it helps, but not enough)
– On the VCF Automation VM: go from 24 vCPUs to 16 vCPUs

NSX and VCF Automation are really CPU intensive.

I had also deployed two Edge nodes, but the server did not like that. Edges are CPU intensive as well.

I am thinking about adding an MS-01 or MS-A2 to split some of the load.

Config vSAN ESA host or VCF ESA vSAN Host the easy way with Config-VSAN-ESA-VCF-Lab-Host Script.

William Lam created the vSAN ESA HCL hardware mock VIB for Nested ESXi.
It works great for vSAN ESA or for VCF vSAN ESA Nested nodes.

You can download the needed VIB here.

A while ago I found an easy way to deploy a VMware SDDC based on vSAN ESA: VCF automated lab deployment with vSAN ESA

It works great. In a few hours you have a working VCF management domain on vSAN ESA.

So, what about the script?

For the workload domain I created the nested ESXi VMs manually.
I also did some testing for VCF 9, for which I created the nested ESXi hosts manually as well.

Configuring them by hand takes a lot of time, so I did some scripting and reused some code
from: VCF automated lab deployment with vSAN ESA

So I created a script that does the following (a rough ESXi shell sketch of these steps follows the list):

  • Log in on the DHCP address and configure a fixed IP address based on the DNS name
  • Disable IPv6
  • Rename the local datastore to a unique name that includes the host name
  • Configure NTP
  • Install the vSAN ESA mock VIB and restart the vSAN management service
  • Generate a new certificate for the host with the correct domain name in it
  • Enable KB372309 (10GbE ethernet for vSAN ESA)
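
The script itself is PowerCLI, but the individual steps map to ESXi shell commands roughly like this (the IP addresses, names and VIB path are placeholders, not the script's actual values; the KB372309 setting is applied separately, see the KB for the exact advanced option):

# Set a fixed IP address on vmk0 and the default gateway (placeholder addresses)
esxcli network ip interface ipv4 set -i vmk0 -t static -I 192.168.1.101 -N 255.255.255.0
esxcli network ip route ipv4 add --network default --gateway 192.168.1.1

# Disable IPv6 (takes effect after a reboot)
esxcli network ip set --ipv6-enabled=false

# Rename the local datastore to a unique name
vim-cmd hostsvc/datastore/rename datastore1 local-vcf-esx01

# Configure NTP
esxcli system ntp set -e true -s pool.ntp.org

# Install the vSAN ESA mock VIB and restart the vSAN management service
esxcli software vib install -v /tmp/nested-vsan-esa-mock-hw.vib --no-sig-check
/etc/init.d/vsanmgmtd restart

# Regenerate the host certificate so it contains the correct FQDN
/bin/generate-certificates
/etc/init.d/hostd restart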

It works great for nested ESXi nodes (ESXi 8.0U3b and ESXi 9.x beta).

You can find the script HERE!!



How to get Aria Operations (Skyline) Diagnostics working

On the 4th of October, VMware Skyline reached end of life.


VMware Skyline was great:
• Proactive Issue Identification
• Automated Insights
• Health Scans and Remediation
• Integration with support

VMware by Broadcom is building critical findings and self-help recommendations directly into the products, starting with VCF (from 5.2) and Aria Operations (from v8.18, July 2024).

Many of the other Skyline features are planned for inclusion in future releases of Cloud Foundation and Aria Operations. We will see what the future brings.
But for now, how do you get this working?

First Step:

Update Aria Operations to 8.18.2 (latest)

Next Steps:

1. Configure the vCenter (don't forget to enable vSAN), NSX, VCF and Aria Automation (vRA) integrations in Aria Operations

2. Configure log collection in Aria Operations for Logs for the following components:

• Configure the vCenter Server integration in Aria Operations for Logs

• Configure log forwarding on vCenter Server, ESXi hosts (automatically via Aria Operations for Logs), and SDDC Manager

3. Integrate VMware Aria Operations for Logs with VMware Aria Operations

4. Connect Skyline Health Diagnostics (SHD)

5. In Aria Operations for Logs (Log Insight), check the vROps integration checkboxes

By default, 'Enable launch in context' can be disabled if the integration was configured earlier, so verify that it is checked.

After upgrading and checking the settings, it is finally working 😊 (it can take some time).

How to Remove Inaccessible vSAN Objects in vSphere: Step-by-Step Guide

This post is about how to remove such an inaccessible object within vSAN.


Open an SSH session to the vCenter and enter the command rvc localhost in the command line.

Navigate to the vSAN cluster where you want to remove the inaccessible objects using cd, and use ls to list the objects at each step, like this:

Verify the state of vSAN objects using the command vsan.check_state -r . This check involves three steps:

  • Checking for inaccessible vSAN objects
  • Verifying invalid or inaccessible VMs, and
  • Checking for VMs for which VC/hostd/vmx are out of sync

During this check, as you can see in the next screenshot, there are four inaccessible objects with the same UUID as those listed in Virtual Objects within the vSphere Client.


To remove them, open an SSH session to any ESXi host in the cluster and use the following command, replacing <UUID> with the UUID you want to remove: /usr/lib/vmware/osfs/bin/objtool delete -u <UUID> -f

After you remove all inaccessible objects and run vsan.check_state -r . once again, you should no longer see any inaccessible objects.
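
Put together, the whole cleanup session looks roughly like this (the datacenter/cluster path and the UUID are placeholders for your environment):

# On the vCenter Server appliance
rvc localhost
cd /localhost/<Datacenter>/computers/<vSAN-Cluster>
vsan.check_state -r .

# On any ESXi host in the cluster, for every inaccessible object UUID
/usr/lib/vmware/osfs/bin/objtool delete -u <UUID> -f

# Back in RVC, verify that no inaccessible objects remain
vsan.check_state -r .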

‘Ineligible for use by VSAN’ can’t be added to VSAN disk groups

I had the opportunity to test a Dell vSAN node. I had an older unattended-install ESXi ISO, which installed the ESXi OS on the wrong disk. After a correct install, vSAN did not see this disk as ready for use. Combining the following articles, Dell VxRail vSAN Drives ineligible and identify-and-solve-ineligible-disk-problems-in-virtual-san,
I solved this problem with the following steps:

Step 1: Identify the Disk with vdq -qH

Step 2: Use partedUtil get "/dev/disks/<DISK>" to list all partitions:

partedUtil get "/dev/disks/t10.NVMe____Dell_Ent_NVMe_CM6_MU_3.2TB______________017D7D23E28EE38C"

Step 3: This disk has 2 partitions. Use the partedUtil delete "/dev/disks/<DISK>" <PARTITION> command to delete all partitions:
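
For this disk that looks like the following (assuming the two partitions are numbered 1 and 2, as reported by partedUtil get):

partedUtil delete "/dev/disks/t10.NVMe____Dell_Ent_NVMe_CM6_MU_3.2TB______________017D7D23E28EE38C" 1
partedUtil delete "/dev/disks/t10.NVMe____Dell_Ent_NVMe_CM6_MU_3.2TB______________017D7D23E28EE38C" 2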


Step 4: When all partitions are removed, do a rescan:

esxcli storage core adapter rescan --all


Step 5: Claim Unused Disks

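The screenshot shows the disks being claimed in the vSphere Client. As a sketch, assuming an OSA disk group and placeholder device names, the same can be done from the ESXi shell:

# Verify that the disks are now eligible for use by vSAN
vdq -qH

# Claim a cache device and a capacity device into a disk group (placeholder device names)
esxcli vsan storage add --ssd t10.NVMe____CACHE_DEVICE --disks t10.NVMe____CAPACITY_DEVICE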

ESXi Unattended Install on Dell BOSS controller

I had the opportunity to test a Dell vSAN node. I had an older unattended-install ESXi ISO,
which installed the ESXi OS on the wrong disk.

I hate typing a very complex password twice, so automation is the key.
I love the ks.cfg install option.

However, following this guide did not do the trick:
https://www.dell.com/support/kbdoc/en-us/000177584/automating-operating-system-deployment-to-dell-boss-techniques-for-different-operating-systems

VMware ESXi Automated Install

This did not work:
install --overwritevmfs --firstdisk="DELLBOSS VD"

After doing a manual install:

What works:

# For Dell BOSS controller "Dell BOSS-N1"

install --overwritevmfs --firstdisk="Dell BOSS-N1"
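
For context, a minimal ks.cfg around that install line could look like this (the root password and DHCP network line are just placeholder choices, not the values I actually used):

vmaccepteula
rootpw VMware1!VMware1!
# For Dell BOSS controller "Dell BOSS-N1"
install --overwritevmfs --firstdisk="Dell BOSS-N1"
network --bootproto=dhcp
reboot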

My HomeLab anno 2024

My home lab is mainly used for testing new stuff released by VMware (70%), Microsoft (20%), and other stuff (10%).

As the base I use my home PC:

Intel i5 12600k
128 GB Memory
2 x 2TB Samsung 980 and 990 Pro SSD.
Windows 11 Pro
VMware Workstation Pro

Running on my home PC:
Server 2022 (Eval for DC)
ESXi801 (16 GB) (NSX Demo Cluster)
ESXi802 (16 GB) (NSX Demo Cluster)
ESXi803 (64 GB) (General Cluster)
ESXi804 (64 GB) (General Cluster)
ESXi805 (24 GB) (Single Node vSAN Cluster)
ESXi806 (16 GB) (4 Node vSAN Cluster)
ESXi807 (16 GB) (4 Node vSAN Cluster)
ESXi808 (16 GB) (4 Node vSAN Cluster)
ESXi809 (16 GB) (4 Node vSAN Cluster)

ESXi701 (24GB) (General Cluster)
ESXi702 (24GB) (General Cluster)

Most of the VMs run in the general cluster. This is also where I test Packer and Terraform.


For a while I used a 2TB Samsung SSD as storage for the ESXi servers through TrueNAS,
but I wanted larger storage for all my VMs.

Then I read on William Lam's blog about the Synology DS723+ in a homelab and Synology NFS VAAI Plug-in support for vSphere 8.0.

So I did a nice upgrade.

I did not use the original Synology parts. The following parts work fine:
Kingston 16 GB DDR4-3200 notebook memory
WD Red SN700, 500 GB SSD
WD Red Pro, 8 TB

* For Read-Write caching you need 2 SSD devices.

For mounting the NFS shares I created a little PowerCLI script; a rough esxcli equivalent is shown after the link.

https://github.com/WardVissers/VMware-Powercli-Public/blob/main/Add%20NFS%20DataStore%20Github.ps1
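
The script itself uses PowerCLI, but per host the mount boils down to a single esxcli command (the NAS host name, export path and datastore name below are placeholders for my lab values):

# Mount an NFS export from the Synology as a datastore
esxcli storage nfs add --host synology.wardvissers.nl --share /volume1/VMware --volume-name NFS-Synology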

VCF 5.0 running inside Nested ESXi server with only 64GB Memory

So I was interested in trying to deploy the latest release of VMware Cloud Foundation (VCF) 5.0 on my Windows 11 home PC, which has 128GB of memory and a 16-core Intel CPU.

William Lam wrote a nice article about VMware Cloud Foundation 5.0 running on an Intel NUC.

Disclaimer: This is not officially supported by VMware, please use at your own risk.

Requirements:

  • VMware Cloud Builder 5.0 OVA (Build 21822418)
  • VCF 5.0 Licenses Through VMUG ADVANTAGE
  • Home PC (Not Special Hardware)
    – 128GB Memory
    – Intel 12600 CPU
    – 4TB of NVME Storage
  • Windows 11 with VMware Workstation 17

Setup

Virtual Machines

  • DC02 (Domain Controller, DNS Server) (4GB 2vcpu)
  • VCF-M01-ESX01 (ESXi 8.0 Update 1a) (64GB, 1x 140GB, 2x 600GB NVMe, 2x NIC) (everything thin provisioned)
  • VCF-M01-CB01 (4GB and 4 vCPU), only needed during the first deployment

Network settings on my PC

  • 1 IP In my home network
  • 172.16.12.1 (To Fool Cloudbuilder)
  • 172.16.13.1 (To Fool Cloudbuilder)

Procedure:

Install and Configure ESXi

Step 1 – Boot the ESXi installer from the mounted ISO and then perform a standard ESXi installation.

Step 2 – Once ESXi is up and running, you will need to minimally configure networking along with an FQDN (ensure proper DNS resolution), NTP, and specify which SSD should be used for the vSAN capacity drive. You can use the DCUI to set up the initial networking, but I recommend switching to the ESXi Shell afterwards and finishing the required preparation steps, as demonstrated in the following ESXCLI commands:

esxcli system ntp set -e true -s pool.ntp.org
esxcli system hostname set --fqdn vcf-m01-esx01.wardvissers.nl

Note: Use the vdq -q command to query the available disks for use with vSAN and ensure there are no partitions residing on the 600GB disks.
Don't change the time server pool.ntp.org.
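
For the "ensure proper DNS resolution" part, the DNS client can also be configured from the ESXi Shell (the DNS server address and search domain below are placeholders for my lab values):

# Point the host at the lab DNS server and search domain
esxcli network ip dns server add --server=192.168.1.10
esxcli network ip dns search add --domain=wardvissers.nl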

To ensure that the self-signed TLS certificate that ESXi generates matches that of the FQDN that you had configured, we will need to regenerate the certificate and restart hostd for the changes to go into effect by running the following commands within ESXi Shell:

/bin/generate-certificates
/etc/init.d/hostd restart

Cloudbuilder Config

Step 3 – Deploy the VMware Cloud builder in a separate environment and wait for it to be accessible over the browser. Once CB is online, download the setup_vmware_cloud_builder_for_one_node_management_domain.sh setup script and transfer that to the CB system using the admin user account (root is disabled by default).

Step 4 – Switch to the root user and set the script to have the executable permission and run the script as shown below

su -
chmod +x setup_vmware_cloud_builder_for_one_node_management_domain.sh
./setup_vmware_cloud_builder_for_one_node_management_domain.sh

The script will take some time, especially as it converts the NSX OVA->OVF->OVA and if everything was configured successfully, you should see the same output as the screenshot above.


Step 5 – Download the example JSON deployment file vcf50-management-domain-example.json and adjust the values based on your environment. In addition to changing the hostnames/IP addresses, you will also need to replace all the FILL_ME_IN_VCF_*_LICENSE_KEY entries with valid VCF 5.0 license keys.

Step 6 – The vmnic in the Cloud Builder VM is acknowledged as a 10GB NIC, so I started the deployment not through PowerShell but the normal way, via the Cloud Builder GUI.

Your deployment time will vary based on your physical resources, but it should eventually complete with everything showing success, as shown in the screenshot below. (I needed one retry to finish.)
Here are some screenshots of the VCF 5.0 deployment running on my home PC.



Problems

Check this if you have problems logging in NSX:
https://www.wardvissers.nl/2023/07/26/nsx-endless-spinning-blue-cirle-after-login/

Next Steps.

1. Redeploy using the Holo-Router: https://core.vmware.com/resource/holo-toolkit-20-deploy-router#deploy-holo-router

2. Test whether I can deploy a single-host VCF workload domain in the same way, by following this blog post HERE! 😁

That is, if I can start another 64GB ESXi server.

Holodeck Toolkit Overview

Holodeck Toolkit 1.3 Overview

The VMware Cloud Foundation (VCF) Holodeck Toolkit is designed to provide a scalable, repeatable way to deploy nested Cloud Foundation hands-on environments directly on VMware ESXi hosts. These environments are ideal for multi-team hands-on exercises exploring the capabilities of utilizing VCF to deliver a Customer Managed VMware Cloud.


Delivering labs in a nested environment solves several challenges with delivering hands-on labs for a product like VCF, including:

  • Reduced hardware requirements: When operating in a physical environment, VCF requires four vSAN Ready Nodes for the management domain, and additional hosts for adding clusters or workload domains. In a nested environment, the same four to eight hosts are easily virtualized to run on a single ESXi host.   
  • Self-contained services: The Holodeck Toolkit configuration provides common infrastructure services, such as NTP, DNS, AD, Certificate Services and DHCP within the environment, removing the need to rely on datacenter provided services during testing.  Each environment needs a single external IP.
  • Isolated networking. The Holodeck Toolkit configuration removes the need for VLAN and BGP connections in the customer network early in the testing phase.  
  • Isolation between environments. Each Holodeck deployment is completely self-contained. This avoids conflicts with existing network configurations and allows for the deployment of multiple nested environments on same hardware or datacenter with no concerns for overlap. 
  • Multiple VCF deployments on a single VMware ESXi host with sufficient capacity. A typical VCF Standard Architecture deployment of a four-node management domain and a four-node VI workload domain, plus add-ons such as VMware vRealize Automation, requires approximately 20 CPU cores, 512GB memory and 2.5TB disk.
  • Automation and repeatability. The deployment of nested VCF environments is almost completely hands-off, and easily repeatable using configuration files.  A typical deployment takes less than 3 hours, with less than 15 min keyboard time.

Nested Environment Overview 

The “VLC Holodeck Standard Main 1.3” configuration is a nested VMware Cloud Foundation configuration used as the baseline for several Private Cloud operation and consumption lab exercises created by the Cloud Foundation Technical Marketing team. The Holodeck standard “VLC-Holo-Site-1” is the primary configuration deployed. The optional VLC-Holo-Site-2 can be deployed at any time later within a Pod.  VLC-Holo-Site-1 configuration matches the lab configuration in the VCF Hands-On Lab HOL-2246 and the nested configuration in the VCF Experience program run on the VMware Lab Platform. 

Each Pod on a Holodeck deployment runs an identical nested configuration. A pod can be deployed with a standalone VLC-Holo-Site-1 configuration, or with both VLC-Holo-Site-1 and VLC-Holo-Site-2 configurations active. Separation of the pods and between sites within a pod is handled at the VMware vSphere Standard Switch (VSS) level.  Each Holodeck pod connects to a unique VSS and Port Group per site.    A VMware vSphere Port Group is configured on each VSS and configured as a VLAN trunk.  

  • Components on the port group use VLAN tagging to isolate communications between nested VLANs. This removes the need to have physical VLANs plumbed to the ESXi host to support nested labs.
  • When the Holo-Site-2 configuration is deployed it uses a second VSS and Port Group for isolation from Holo-Site-1  

The VLC Holodeck configuration customizes the VCF Cloud Builder Virtual Machine to provide several support services within the pod to remove the requirement for specific customer side services. A Cloud Builder VM is deployed per Site to provide the following within the pod: 

  • DNS (local to Site1 and Site2 within the pod, acts as forwarder) 
  • NTP (local to Site1 and Site2 within the pod) 
  • DHCP (local to Site1 and Site2 within the pod) 
  • L3 TOR for vMotion, vSAN, Management, Host TEP and Edge TEP networks within each site 
  • BGP peer from VLC Tier 0 NSX Application Virtual Network (AVN) Edge (Provides connectivity into NSX overlay networks from the lab console)

The figure below shows a logical view of the VLC-Holo-Site-1 configuration within a Holodeck Pod. The Site-1 configuration uses DNS domain vcf.sddc.lab.

 Figure 1: Holodeck Nested Diagram

The Holodeck package also provides a preconfigured Photon OS VM, called “Holo-Router”, that functions as a virtualized router for the base environment. This VM allows for connecting the nested environment to the external world. The Holo-Router is configured to forward any Microsoft Remote Desktop (RDP) traffic to the nested jump host, known as the Holo-Console, which is deployed within the pod.

The user interface to the nested VCF environment is via a Windows Server 2019 "Holo-Console" virtual machine. Holo-Console provides a place to manage the internal nested environment, like a system administrator's desktop in a datacenter. Holo-Console is used to run the VLC package to deploy the nested VCF instance inside the pod. Holo-Console VMs are deployed from a custom-built ISO that configures the following:

  • Microsoft Windows Server 2019 Desktop Experience with: 
  • Active directory domain “vcf.holo.lab” 
  • DNS Forwarder to Cloud Builder  
  • Certificate Server, Web Enrollment and VMware certificate template 
  • RDP enabled 
  • IP, Subnet, Gateway, DNS and VLAN configured for deployment as Holo-Console  
  • Firewall and IE Enhanced security disabled  
  • SDDC Commander custom desktop deployed 
  • Additional software packages deployed and configured 
  • Google Chrome with Holodeck bookmarks 
  • VMware Tools 
  • VMware PowerCLI 
  • VMware PowerVCF 
  • VMware Power Validated Solutions 
  • PuTTY SSH client 
  • VMware OVFtool 
  • Additional software packages copied to Holo-Console for later use 
  • VMware Cloud Foundation 4.5 Cloud Builder OVA to C:\CloudBuilder 
  • VCF Lab Constructor 4.5.1 with dual site Holodeck configuration
    • VLC-Holo-Site-1 
    • VLC-Holo-Site-2 
  • VMware vRealize Automation 8.10 Easy Installer

The figure below shows the virtual machines running on the physical ESXi host to deliver a Holodeck Pod called “Holo-A”. Notice an instance of Holo-Console, Holo-Router, Cloud Builder and four nested ESXi hosts.  They all communicate over the VLC-A-PG Port Group   

Figure 2: Holodeck Nested Hosts

Adding a second site adds an additional instance of Cloud Builder and additional nested ESXi hosts. VLC-Holo-Site-2 connects to the second internal leg of the Holo-Router on VLAN 20. Network access from the Holo-Console to VLC-Holo-Site-2 is via Holo-Router.

The figure below shows a logical view of the VLC-Holo-Site-2 configuration within a Holodeck Pod. The Site-2 configuration uses DNS domain vcf2.sddc.lab

 Figure 3: Holodeck Site-2 Diagram

Accessing the Holodeck Environment

User access to the Holodeck pod is via the Holo-Console.  Access to Holo-Console is available via two paths:

VLC Holodeck Deployment Prerequisites 

  • ESXi Host Sizing   
  • Good (One pod): Single ESXi host with 16 cores, 384GB memory and 2TB SSD/NVME
  • Better (Two pods): Single ESXi host with 32 cores, 768GB memory and 4TB SSD/NVME
  • Best (Four or more pods):  Single ESXi host with 64+ cores, 2.0TB memory and 10TB SSD/NVME 
  • ESXi Host Configuration: 
  • vSphere 7.0U3 
  • Virtual switch and port group configured with uplinks to customer network/internet  
  • Supports stand alone, non vCenter Server managed host and single host cluster managed by a vCenter server instance 
  • Multi host clusters are NOT supported
  • Holo-Build host 
  • Windows 2019 host or VM with local access to ESXI hosts used for Holodeck + internet access to download software. (This package has been tested on Microsoft Windows Server 2019 only) 
  • 200GB free disk space 
  • Valid login to https://customerconnect.vmware.com  
  • Entitlement to VCF 4.5 Enterprise for 8 hosts minimum (16 hosts if planning to test Cloud Foundation Multi region with NSX Federation) 
  • License keys for the following VCF 4.5 components
    • VMware Cloud Foundation
    • VMware NSX-T Data Center Enterprise
    • VMware vSAN Enterprise 
    • VMware vSphere Enterprise Plus 
    • VMware vCenter Server (one license)
    • VMware vRealize Suite Advanced or Enterprise
    • Note: This product has been renamed VMware Aria Suite
  • External/Customer networks required
    • ESXi host management IP (one per host) 
    • Holo-Router address per pod