Setting Up KubeDoom on Kubernetes: A Beginner’s Guide

I followed William Lam’s article about MS-A2 VCF 9.0 Lab: Configuring vSphere Kubernetes Service (VKS)

I don’t have much experience with Kubernetes but wanted to try some new things.

The only container that I have running is Home Assistant on Docker.

I wanted to get KubeDoom working, and I did, with the following steps.
Maybe in the near future I’ll try to add more games: Retro DOS Games on Kubernetes

I finally have a Kubernetes cluster on version 1.32, which was required for running KubeDoom.


Download kubectl


mkdir d:\kubectl

Extract the downloaded ZIP file and place both executables (kubectl.exe and kubectl-vsphere.exe) in a folder such as: d:\kubectl

.\kubectl version --client
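To avoid typing the full path every time, you can add the folder to the PATH of the current PowerShell session (just a convenience, not required):

# Make kubectl.exe and kubectl-vsphere.exe resolvable in this session
$env:Path += ';d:\kubectl'
kubectl version --client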

mkdir .kube

cd .kube

New-Item config -type file

.\kubectl vsphere login --server=https://31.31.0.7 --insecure-skip-tls-verify
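The login merges one or more contexts into your kubeconfig. You can check which context is active and switch to the workload cluster (the context name below is an example; yours follows your cluster name):

# List the contexts created by the vSphere plugin and select the workload cluster
.\kubectl config get-contexts
.\kubectl config use-context kubernetes-cluster-jzvx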


kubectl --kubeconfig=kubernetes-cluster-jzvx-kubeconfig.yaml get pods

kubectl --kubeconfig=kubernetes-cluster-jzvx-kubeconfig.yaml apply -f kubedoom.yaml


kubectl --kubeconfig=kubernetes-cluster-jzvx-kubeconfig.yaml -n kubedoom get svc


The VNC password for KubeDoom is idbehold.

Download VNC Viewer: https://www.realvnc.com/en/connect/download/viewer
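If the service address is not directly reachable from your workstation, a port-forward to the KubeDoom deployment works as well (a sketch; the upstream KubeDoom image exposes VNC on port 5900):

# Forward local port 5900, then point VNC Viewer at localhost:5900
kubectl --kubeconfig=kubernetes-cluster-jzvx-kubeconfig.yaml -n kubedoom port-forward deploy/kubedoom 5900:5900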


This is funny and cool!

Addressing Edge Server Issues in VCF 9.0 after a Shutdown

After shutting down the VCF 9.0 environment completely, I ran into two strange issues.

I followed the shutdown order for the management domain.

  • The edge servers lost their network configuration on NIC 2 and NIC 3
  • The edge servers did not start correctly


I deployed a third edge server, and two new distributed port groups were created for it.

These port groups were missing for the edge01a and edge01b servers; they should not be deleted or go missing after a shutdown.


Create a distributed port group with a VLAN trunk


Set NIC 2 and 3 to this network.
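The same fix can be scripted with PowerCLI (a sketch; the switch and port group names are assumptions for my lab, adjust them to yours):

# Create a VLAN-trunked distributed port group and move NIC 2/3 of both edges to it
$vds = Get-VDSwitch -Name 'mgmt-vds'
$pg  = New-VDPortgroup -VDSwitch $vds -Name 'edge-trunk' -VlanTrunkRange '0-4094'
foreach ($edge in 'edge01a','edge01b') {
    Get-VM -Name $edge | Get-NetworkAdapter |
        Where-Object { $_.Name -in 'Network adapter 2','Network adapter 3' } |
        Set-NetworkAdapter -Portgroup $pg -Confirm:$false
}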

Edge Nodes’ Edge Cluster is running fine again 😊


I found this on the Daniel Kruger Blog: https://sdn-warrior.org/posts/vcf9-ms-a2-special/

This is a known issue with VCF 9.0.0: When an edge is created by the Setup Network Connectivity UI, the system-created dvpg consumed by the edge gets deleted when the edge is powered on after 24 hrs. The port group assigned to the NSX Edge uplink has disappeared, making it impossible to use the network through NSX Edge. You can find the release notes here.

I built my lab nested, but I didn't have enough resources to do a full upgrade to 9.0.1.
So I built the lab again, but physically. More about that later!

I love testing things; you learn so much!

Out-of-band (OOB) updates released for March 2024 for Windows Server Domain Controllers

Microsoft has identified an issue that affects Windows Server domain controllers (DCs), and has expedited a resolution that can be applied to affected devices. Out-of-band (OOB) updates have been released for some versions of Windows today, March 22, 2024, to address this issue related to a memory leak in the Local Security Authority Subsystem Service (LSASS). This occurs when on-premises and cloud-based Active Directory domain controllers service Kerberos authentication requests.

This issue is not expected to impact Home users, as it is only observed in some versions of Windows Server. Domain controllers are not commonly used in personal and home devices.

Updates are available on the Microsoft Update Catalog only. These are cumulative updates, so you do not need to apply any previous update before installing them, and they supersede all previous updates for affected versions. If your organization uses the affected server platforms as DCs and you haven't deployed the March 2024 security update yet, we recommend you apply this OOB update instead. For more information and instructions on how to install this update on your device, consult the below resources for your version of Windows:

  • Windows Server 2022: KB5037422
  • Windows Server 2019: available soon
  • Windows Server 2016: KB5037423
  • Windows Server 2012 R2: KB5037426

Note: The OOB release for Windows Server 2019 will be released in the near term.

Windows Server 2025 “Preview” deployment with Packer

As the Windows Server 2025 Preview is officially released, I wanted to test an automated build of the Windows Server 2025 Preview release, so that I can deploy it in my home lab and test the new features if I can find the time…

About HashiCorp Packer

HashiCorp Packer is a self-contained executable that produces quick and easy operating system builds across multiple platforms. Using Packer and a couple of HCL2 files, you can quickly create fully automated templates with the latest Windows updates and VMware Tools. When you schedule a fresh build after Patch Tuesday, you always have an up-to-date and fully patched template.

When you use a VMware customization specification on top of it, you can spin up VMs in minutes.
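For example, deploying a VM from the finished template with PowerCLI (a sketch; the spec, template, host and datastore names are placeholders for whatever exists in your environment):

# Clone the template and apply a guest customization specification
$spec = Get-OSCustomizationSpec -Name 'Win-Lab'
New-VM -Name 'srv01' -Template 'WS2025-Preview' -OSCustomizationSpec $spec -VMHost 'esxi01.lab.local' -Datastore 'datastore1'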

Automated Windows Server 2025 “Preview” Build

Which files do you need?
The files and versions I am using at the time of this writing are as follows.

Besides downloading both Packer and the Windows Server 2025 Preview build, you will need the following files:

  • windowsserver2025.auto.pkrvars.hcl – houses the variable values you want to define.
  • windows2025.json.pkr.hcl – the Packer build file
  • Answer file – Generated with Windows System Image Manager (SIM) you can download the file below
  • Custom script file(s) – optional

Other considerations and tasks you will need to complete:

  • Copy the Windows Server 2025 ISO file to a vSphere datastore
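With the files in place, the build itself is only a few commands (a sketch; because the variables file ends in .auto.pkrvars.hcl, Packer picks it up automatically):

# Run from the folder containing the HCL2 files
packer init .
packer validate .
packer build .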

Windows Server 2025 unattend Answer file for the automated Packer Build

Like other automated approaches to installing Windows Server, the automated Windows Server 2025 Packer build requires an answer file to automatically provide answers to the GUI and other installation prompts that you would normally see in a manual installation of Windows Server.

You will find the scripts here: https://github.com/WardVissers/Packer-Win2025

The only problem I had was switching the NIC from Public to Private:

# Set the network connection profile to Private mode.
Write-Output 'Setting the network connection profiles to Private...'
# Wait until Windows has finished identifying the network.
do {
    $connectionProfile = Get-NetConnectionProfile
    Start-Sleep -Seconds 10
} while ($connectionProfile.Name -eq 'Identifying...')
Set-NetConnectionProfile -Name $connectionProfile.Name -NetworkCategory Private

Windows Server 2025 Preview (Build: Canary 26052)

I had some time to check out the new version of Server 2025.

For the full list of upcoming features, check: https://ignite.microsoft.com/en-US/sessions/f3901190-1154-45e3-9726-d2498c26c2c9?source=sessions

Download Server 2025 Preview: https://www.microsoft.com/en-us/software-download/windowsinsiderpreviewserver

Server 2025 will come with a lot of features (My Top 20+):

  • General – in-place upgrade from Server 2022 to vNext (controlled by GPO)
  • Hot Patching (Arc-enabled, monthly subscription)
  • Active Directory – 32k database page size
  • Active Directory – NUMA support
  • Active Directory – LDAP TLS 1.3
  • Active Directory – improved security for confidential attributes
  • Active Directory – LDAP prefers encryption by default
  • Active Directory – Kerberos support for AES SHA-256/384
  • Active Directory – changes to the default behavior of legacy SAM RPC password change methods
  • Active Directory – Kerberos and PKINIT support cryptographic agility
  • Active Directory – new AD forest and domain functional level (minimum Server 2016 requirement)
  • Storage – NVMe 70%/90% performance increase
  • File Server – SMB over the internet (QUIC protocol)
  • File Server – more control over NTLM
  • File Server – SMB authentication rate limiter (enabled by default)
  • File Server – SMB signing by default
  • File Server – minimum SMB version control
  • File Server – more secure by default (NetBIOS disabled by default)
  • RDS – M365 Apps still supported; every Windows Server release gets 2-3 years of support
  • Finance – general support and pay-as-you-go support

I need to find some time to dig in.

Handy link: https://techcommunity.microsoft.com/t5/windows-server-insiders/announcing-windows-server-preview-build-26040/m-p/4040858

Deploy Windows Core Server 2022 with Server Core App Compatibility Feature on Demand with Packer

A while ago I started with Packer to create simple templates for use in my home lab.

It took some time to find the right scripts and to learn and understand the HCL2 coding.

For security reasons I want to use Windows Server Core because of its smaller footprint.

What is Server Core App Compatibility Feature on Demand: https://learn.microsoft.com/en-us/windows-server/get-started/server-core-app-compatibility-feature-on-demand

Installing Features on Demand through PowerShell contains a bug: you may see "failure to download files", "cannot download", or errors like "0x800F0954" or "file not found".

To solve that, I created a PowerShell script that runs the install twice: featuresondemand.ps1
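The idea is simply to retry the capability install once more when it fails. A minimal sketch of that retry (the capability name comes from the Microsoft docs; the real script is in the repository below):

# Try the App Compatibility FOD install up to two times
$name = 'ServerCore.AppCompatibility~~~~0.0.1.0'
foreach ($attempt in 1..2) {
    try {
        Add-WindowsCapability -Online -Name $name -ErrorAction Stop
        break
    }
    catch {
        Write-Warning "Attempt $attempt failed: $($_.Exception.Message)"
    }
}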

You can find all the needed files in my public GitHub Packer repository: https://github.com/WardVissers/Packer-Public

When it is running, it looks like this:


It works for now, but there is one thing that would make the whole setup a bit nicer:
keeping the passwords encrypted in a separate file.

Holodeck Toolkit Overview

Holodeck Toolkit 1.3 Overview

The VMware Cloud Foundation (VCF) Holodeck Toolkit is designed to provide a scalable, repeatable way to deploy nested Cloud Foundation hands-on environments directly on VMware ESXi hosts. These environments are ideal for multi-team hands-on exercises exploring the capabilities of VCF to deliver a Customer Managed VMware Cloud.


Delivering labs in a nested environment solves several challenges with delivering hands-on labs for a product like VCF, including:

  • Reduced hardware requirements: when operating in a physical environment, VCF requires four vSAN Ready Nodes for the management domain, and additional hosts for adding clusters or workload domains. In a nested environment, the same four to eight hosts are easily virtualized to run on a single ESXi host.
  • Self-contained services: the Holodeck Toolkit configuration provides common infrastructure services, such as NTP, DNS, AD, Certificate Services and DHCP, within the environment, removing the need to rely on datacenter-provided services during testing. Each environment needs a single external IP.
  • Isolated networking: the Holodeck Toolkit configuration removes the need for VLAN and BGP connections in the customer network early in the testing phase.
  • Isolation between environments: each Holodeck deployment is completely self-contained. This avoids conflicts with existing network configurations and allows for the deployment of multiple nested environments on the same hardware or in the same datacenter with no concerns for overlap.
  • Multiple VCF deployments on a single VMware ESXi host with sufficient capacity: a typical VCF Standard Architecture deployment of a four-node management domain and a four-node VI workload domain, plus add-ons such as VMware vRealize Automation, requires approximately 20 CPU cores, 512GB memory and 2.5TB disk.
  • Automation and repeatability: the deployment of nested VCF environments is almost completely hands-off and easily repeatable using configuration files. A typical deployment takes less than 3 hours, with less than 15 minutes of keyboard time.

Nested Environment Overview 

The “VLC Holodeck Standard Main 1.3” configuration is a nested VMware Cloud Foundation configuration used as the baseline for several Private Cloud operation and consumption lab exercises created by the Cloud Foundation Technical Marketing team. The Holodeck standard “VLC-Holo-Site-1” is the primary configuration deployed. The optional VLC-Holo-Site-2 can be deployed at any time later within a Pod.  VLC-Holo-Site-1 configuration matches the lab configuration in the VCF Hands-On Lab HOL-2246 and the nested configuration in the VCF Experience program run on the VMware Lab Platform. 

Each Pod on a Holodeck deployment runs an identical nested configuration. A pod can be deployed with a standalone VLC-Holo-Site-1 configuration, or with both VLC-Holo-Site-1 and VLC-Holo-Site-2 configurations active. Separation of the pods, and between sites within a pod, is handled at the VMware vSphere Standard Switch (VSS) level. Each Holodeck pod connects to a unique VSS and port group per site. A VMware vSphere port group is configured on each VSS as a VLAN trunk.

  • Components on the port group use VLAN tagging to isolate communications between nested VLANs. This removes the need to have physical VLANs plumbed to the ESXi host to support nested labs.
  • When the Holo-Site-2 configuration is deployed, it uses a second VSS and port group for isolation from Holo-Site-1.

The VLC Holodeck configuration customizes the VCF Cloud Builder Virtual Machine to provide several support services within the pod to remove the requirement for specific customer side services. A Cloud Builder VM is deployed per Site to provide the following within the pod: 

  • DNS (local to Site1 and Site2 within the pod, acts as forwarder) 
  • NTP (local to Site1 and Site2 within the pod) 
  • DHCP (local to Site1 and Site2 within the pod) 
  • L3 TOR for vMotion, vSAN, Management, Host TEP and Edge TEP networks within each site 
  • BGP peer from VLC Tier 0 NSX Application Virtual Network (AVN) Edge (Provides connectivity into NSX overlay networks from the lab console)

The figure below shows a logical view of the VLC-Holo-Site-1 configuration within a Holodeck Pod. The Site-1 configuration uses DNS domain vcf.sddc.lab.

 Figure 1: Holodeck Nested Diagram

The Holodeck package also provides a preconfigured Photon OS VM, called “Holo-Router”, that functions as a virtualized router for the base environment. This VM allows for connecting the nested environment to the external world. The Holo-Router is configured to forward any Microsoft Remote Desktop (RDP) traffic to the nested jump host, known as the Holo-Console, which is deployed within the pod.

The user interface to the nested VCF environment is a Windows Server 2019 "Holo-Console" virtual machine. Holo-Console provides a place to manage the internal nested environment, like a system administrator's desktop in a datacenter. Holo-Console is used to run the VLC package to deploy the nested VCF instance inside the pod. Holo-Console VMs are deployed from a custom-built ISO that configures the following:

  • Microsoft Windows Server 2019 Desktop Experience with:
    • Active Directory domain "vcf.holo.lab"
    • DNS forwarder to Cloud Builder
    • Certificate Server, Web Enrollment and VMware certificate template
    • RDP enabled
    • IP, subnet, gateway, DNS and VLAN configured for deployment as Holo-Console
    • Firewall and IE Enhanced Security disabled
    • SDDC Commander custom desktop deployed
  • Additional software packages deployed and configured:
    • Google Chrome with Holodeck bookmarks
    • VMware Tools
    • VMware PowerCLI
    • VMware PowerVCF
    • VMware Power Validated Solutions
    • PuTTY SSH client
    • VMware OVFtool
  • Additional software packages copied to Holo-Console for later use:
    • VMware Cloud Foundation 4.5 Cloud Builder OVA to C:\CloudBuilder
    • VCF Lab Constructor 4.5.1 with dual-site Holodeck configuration
      • VLC-Holo-Site-1
      • VLC-Holo-Site-2
    • VMware vRealize Automation 8.10 Easy Installer

The figure below shows the virtual machines running on the physical ESXi host to deliver a Holodeck Pod called "Holo-A". Notice an instance of Holo-Console, Holo-Router, Cloud Builder and four nested ESXi hosts. They all communicate over the VLC-A-PG port group.

Figure 2: Holodeck Nested Hosts

Adding a second site adds an additional instance of Cloud Builder and additional nested ESXi hosts. VLC-Holo-Site-2 connects to the second internal leg of the Holo-Router on VLAN 20. Network access from the Holo-Console to VLC-Holo-Site-2 is via Holo-Router.

The figure below shows a logical view of the VLC-Holo-Site-2 configuration within a Holodeck Pod. The Site-2 configuration uses DNS domain vcf2.sddc.lab

 Figure 3: Holodeck Site-2 Diagram

Accessing the Holodeck Environment

User access to the Holodeck pod is via the Holo-Console. Access to Holo-Console is available via two paths: the VM console on the ESXi host, or RDP forwarded through the external IP of the Holo-Router (as described above).

VLC Holodeck Deployment Prerequisites 

  • ESXi host sizing:
    • Good (one pod): single ESXi host with 16 cores, 384GB memory and 2TB SSD/NVMe
    • Better (two pods): single ESXi host with 32 cores, 768GB memory and 4TB SSD/NVMe
    • Best (four or more pods): single ESXi host with 64+ cores, 2.0TB memory and 10TB SSD/NVMe
  • ESXi host configuration:
    • vSphere 7.0U3
    • Virtual switch and port group configured with uplinks to the customer network/internet
    • Supports a standalone, non-vCenter Server managed host or a single-host cluster managed by a vCenter Server instance
    • Multi-host clusters are NOT supported
  • Holo-Build host:
    • Windows 2019 host or VM with local access to the ESXi hosts used for Holodeck, plus internet access to download software (this package has been tested on Microsoft Windows Server 2019 only)
    • 200GB free disk space
    • Valid login to https://customerconnect.vmware.com
    • Entitlement to VCF 4.5 Enterprise for 8 hosts minimum (16 hosts if planning to test Cloud Foundation multi-region with NSX Federation)
  • License keys for the following VCF 4.5 components:
    • VMware Cloud Foundation
    • VMware NSX-T Data Center Enterprise
    • VMware vSAN Enterprise
    • VMware vSphere Enterprise Plus
    • VMware vCenter Server (one license)
    • VMware vRealize Suite Advanced or Enterprise (note: this product has been renamed VMware Aria Suite)
  • External/customer networks required:
    • ESXi host management IP (one per host)
    • Holo-Router address per pod

Enable Virtualization-based Security on a Virtual Machine on Nested ESXi Server in VMware Workstation

First step: shut down the ESXi server and enable encryption.

Second step: add a vTPM.


Boot ESXi Server(s)

Configure Key Providers (Add Native Key Provider)


Now you can add a vTPM to your VM.
Don't forget to enable VBS.
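Inside the guest you can verify that VBS is actually running with a quick PowerShell check (a standard Windows WMI class, nothing specific to this lab):

# Query the Device Guard / VBS status inside the guest
$dg = Get-CimInstance -Namespace root\Microsoft\Windows\DeviceGuard -ClassName Win32_DeviceGuard
# VirtualizationBasedSecurityStatus: 2 means VBS is running
$dg.VirtualizationBasedSecurityStatus
# SecurityServicesRunning lists running services (1 = Credential Guard, 2 = HVCI)
$dg.SecurityServicesRunning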


Create a GPO "SRV 2022 – Virtualization Based Security"; I applied it only to my Server 2022 lab environment.

System Information on my Server 2022 Lab Server

NSX-T 3.1 Home lab – Part 2

With all the Fabric configuration done we can test our setup.

I'm creating two overlay segments in NSX connected to a Tier-1 gateway, and after that we'll create a Tier-0 gateway and connect the T1 gateway to it to get North/South connectivity to the overlay resources.

Two VMs will be deployed, one VM in each of the two overlay segments.

Create a Tier-1 Gateway

The Tier-1 Gateway will initially not be connected to a Tier-0 Gateway (I haven’t configured a T0 gw yet) or an Edge Cluster.

Tier-1 Gateway

Create Logical segments

We need two logical segments, both using the Overlay Transport Zone. I’m defining different subnets on them, 10.0.1.0/24 and 10.0.2.0/24.


Segments

Add VMs to Logical segments

We have two Photon VMs which should be added to the logical segments.

Two Photon VMs

Test connectivity


Now let’s verify that the two VMs can ping each other


Don’t forget to enable the echo rule on the Windows Firewall….
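On the Windows VMs this can also be done from PowerShell (the display name below is the standard built-in rule; it may differ per Windows version or locale):

# Allow inbound ICMPv4 echo requests (ping)
Enable-NetFirewallRule -DisplayName 'File and Printer Sharing (Echo Request - ICMPv4-In)'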

Connectivity test

This shows that the overlay is working, and note again that the Edge VMs are not in use here.

External connectivity

Traffic is flowing between VMs running on logical segments inside the NSX-T environment, but what if we want to reach something outside, or reach a VM inside the NSX-T overlay from the outside?

Then we need to bring a Tier-0 gateway into the mix.

The T0 gateway can be configured with uplinks that are connected to the physical network. This is done through a segment which can reach the physical network, normally through a VLAN.

To configure the uplink interfaces we need to have Edge VMs so finally we get to bring those into play as well.

Create segment for uplinks

First I’ll create a segment mapped to VLAN 99 in my lab. Note that I select the VLAN transport zone, and I do not connect the segment to a gateway

Create Uplink VLAN segment

Create Tier-0 gateway

Now we’ll create a Tier-0 gateway, note that I now also select my Edge cluster.


Create T0 gateway

Static route

To be able to forward traffic out of the NSX-T environment the T0 gateway needs to know where to send queries for IPs it doesn’t control. Normally you would want to configure a routing protocol like BGP or OSPF so that the T0 gateway could exchange routes with the physical router(s) in your network.

I’ve not set up BGP or any other routing protocol on my physical router, so I’ve just configured a default static route that forwards to my physical router. The next hop is set to the gateway address for the Uplink VLAN 99, 192.168.99.1

Static route

Link T1 gateway to T0 gateway

We’ve done a lot of configuring now, but still we’ve not got connectivity in or out for our VMs. The final step is to connect the Tier-1 gateway to the Tier-0 gateway, and we’ll also activate Route Advertisement of Connected Segments and Service Ports


Tier-1 Gateway

Test connectivity


Verify North/South connectivity

Yes!

Test Distributed Firewall

Let’s also do a quick test of the Distributed Firewall feature in NSX-T.

First we'll create a rule blocking ICMP (ping) from any source to my test VM, and publish the rule.


ICMP firewall rule

Now let's test pinging from my PC to the nested Windows 2016 server, first with the rule disabled and then enabled.


Ping blocked

Summary

Hopefully this post can help someone; if not, it has at least helped me.

Now we have a working environment, so we can start testing some things.
I will also look into scripting/automation against an NSX environment!
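As a first step in that direction, PowerCLI ships NSX-T cmdlets that wrap the NSX API (a sketch; the server name is a placeholder and the service path is the one I believe the transport zones API binding uses):

# Connect to NSX-T Manager and list transport zones through the API binding
Connect-NsxtServer -Server nsx.lab.local -User admin
$tz = Get-NsxtService -Name 'com.vmware.nsx.transport_zones'
$tz.list().results | Select-Object display_name, transport_type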
