First post of 2025, and it's nice to start with a good one 🙂.
Very, very happy with it!!

Microsoft has identified an issue that affects Windows Server domain controllers (DCs) and has expedited a resolution that can be applied to affected devices. Out-of-band (OOB) updates have been released for some versions of Windows today, March 22, 2024, to address this issue, which is related to a memory leak in the Local Security Authority Subsystem Service (LSASS). It occurs when on-premises and cloud-based Active Directory domain controllers service Kerberos authentication requests.
This issue is not expected to impact Home users, as it is only observed in some versions of Windows Server. Domain controllers are not commonly used in personal and home devices.
Updates are available on the Microsoft Update Catalog only. These are cumulative updates, so you do not need to apply any previous update before installing them, and they supersede all previous updates for affected versions. If your organization uses the affected server platforms as DCs and you haven't deployed the March 2024 security update yet, we recommend you apply this OOB update instead. For more information and instructions on how to install this update on your device, consult the resources below for your version of Windows:
Note: The OOB release for Windows Server 2019 will be released in the near term.
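If you want to keep an eye on LSASS memory on your DCs while you plan the update, a quick check like this rough sketch could help (the DC names are placeholders for your own environment):
# Rough sketch: report the LSASS working set on a couple of domain controllers.
# 'DC01' and 'DC02' are placeholders; replace them with your own DCs (needs WinRM).
$domainControllers = 'DC01', 'DC02'
Invoke-Command -ComputerName $domainControllers -ScriptBlock {
    Get-Process -Name lsass |
        Select-Object @{ Name = 'Computer'; Expression = { $env:COMPUTERNAME } },
                      @{ Name = 'WorkingSetMB'; Expression = { [math]::Round($_.WorkingSet64 / 1MB) } }
} | Sort-Object WorkingSetMB -Descending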
Now that Windows Server 2025 Preview is officially released, I wanted to test an automated build of the Windows Server 2025 Preview release, so I can deploy it in my home lab and test the new features if I can find the time…
HashiCorp Packer is a self-contained executable producing quick and easy operating system builds across multiple platforms. Using Packer and a couple of HCL2 files, you can quickly create fully automated templates with the latest Windows updates and VMware Tools. When you schedule a fresh build after Patch Tuesday, you always have an up-to-date and fully patched template.
When using the VMware customization tools, you can spin up VMs in minutes.
Files you need?
The files and versions I am using at the time of this writing are as follows:
Besides downloading both Packer and the Windows Server 2025 Preview build, you will need the following files:
Other considerations and tasks you will need to complete:
Like other automated approaches to installing Windows Server, the automated Windows Server 2025 Packer build requires an answer file to automatically provide the answers to the GUI and other installation prompts that you would normally see in a manual installation of Windows Server.
You will find the scripts here: https://github.com/WardVissers/Packer-Win2025
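To give an idea of how a build is kicked off, this is roughly what a run from the repository folder looks like. The variable file name is just an example; check the repository for the actual file names:
# Rough outline of a build run from the repository folder.
# The .pkrvars.hcl file name is only an example; use the variable file from the repository.
packer init .
packer validate -var-file="variables.pkrvars.hcl" .
packer build -force -var-file="variables.pkrvars.hcl" .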
The only problem I had was switching the NIC from Public to Private:
# Set network connection profile to Private mode.
Write-Output 'Setting the network connection profiles to Private...'
do {
    # Wait until Windows has finished identifying the network.
    $connectionProfile = Get-NetConnectionProfile
    Start-Sleep -Seconds 10
} while ($connectionProfile.Name -eq 'Identifying...')
Set-NetConnectionProfile -Name $connectionProfile.Name -NetworkCategory Private
I had some time to check out the new version of Server 2025.
For the full list of upcoming features, check: https://ignite.microsoft.com/en-US/sessions/f3901190-1154-45e3-9726-d2498c26c2c9?source=sessions
Download Server 2025 Preview: https://www.microsoft.com/en-us/software-download/windowsinsiderpreviewserver
Server 2025 will come with a lot of features (My Top 20+):
Need to find some time to dig in
Handy link: https://techcommunity.microsoft.com/t5/windows-server-insiders/announcing-windows-server-preview-build-26040/m-p/4040858
A while ago I started with Packer to create simple templates for use in my home lab.
It took some time to find the right scripts and to learn and understand the HCL2 code.
But for security reasons I want to use Windows Server Core because of its smaller footprint.
What is Server Core App Compatibility Feature on Demand: https://learn.microsoft.com/en-us/windows-server/get-started/server-core-app-compatibility-feature-on-demand
Installing Features on Demand through PowerShell contains a bug. You may see "failure to download files", "cannot download", or errors like "0x800F0954" or "file not found".
To solve that, I created a PowerShell script that runs the install twice: featuresondemand.ps1
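The idea behind featuresondemand.ps1 boils down to running the capability install a second time if the first attempt fails. A minimal sketch of that idea (the capability name here is the Server Core App Compatibility FOD):
# Minimal sketch: install the App Compatibility FOD and retry once if it is not installed yet.
$capability = 'ServerCore.AppCompatibility~~~~0.0.1.0'
foreach ($attempt in 1..2) {
    Add-WindowsCapability -Online -Name $capability -ErrorAction SilentlyContinue
    $state = (Get-WindowsCapability -Online -Name $capability).State
    if ($state -eq 'Installed') { break }
}
"App Compatibility FOD state after attempt ${attempt}: $state"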
You can find all the needed files in my public GitHub Packer repository: https://github.com/WardVissers/Packer-Public
When running, it looks like this:
It works for now, but there is one thing that would make the whole setup a bit nicer: keeping the passwords encrypted in a separate file.
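I haven't built that yet, but the idea would be something like this: store the credential encrypted with DPAPI once, and hand it to Packer through an environment variable at build time. The variable name winrm_password and the paths are assumptions about how the template is wired up:
# Sketch only: store the template password encrypted (DPAPI, tied to this user and machine)...
Get-Credential -UserName 'Administrator' -Message 'Template build password' |
    Export-Clixml -Path 'C:\Packer\secrets\build-cred.xml'

# ...and load it at build time, passing it to Packer as PKR_VAR_<variable name>.
$cred = Import-Clixml -Path 'C:\Packer\secrets\build-cred.xml'
$env:PKR_VAR_winrm_password = $cred.GetNetworkCredential().Password
packer build .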
The VMware Cloud Foundation (VCF) Holodeck Toolkit is designed to provide a scalable, repeatable way to deploy nested Cloud Foundation hands-on environments directly on VMware ESXi hosts. These environments are ideal for multi-team, hands-on exercises exploring the capabilities of utilizing VCF to deliver a customer-managed VMware Cloud.
Delivering labs in a nested environment solves several challenges with delivering hands-on training for a product like VCF, including:
The “VLC Holodeck Standard Main 1.3” configuration is a nested VMware Cloud Foundation configuration used as the baseline for several Private Cloud operation and consumption lab exercises created by the Cloud Foundation Technical Marketing team. The Holodeck standard “VLC-Holo-Site-1” is the primary configuration deployed. The optional VLC-Holo-Site-2 can be deployed at any time later within a Pod. VLC-Holo-Site-1 configuration matches the lab configuration in the VCF Hands-On Lab HOL-2246 and the nested configuration in the VCF Experience program run on the VMware Lab Platform.
Each Pod on a Holodeck deployment runs an identical nested configuration. A pod can be deployed with a standalone VLC-Holo-Site-1 configuration, or with both VLC-Holo-Site-1 and VLC-Holo-Site-2 configurations active. Separation of the pods and between sites within a pod is handled at the VMware vSphere Standard Switch (VSS) level. Each Holodeck pod connects to a unique VSS and Port Group per site. A VMware vSphere Port Group is configured on each VSS and configured as a VLAN trunk.
The VLC Holodeck configuration customizes the VCF Cloud Builder Virtual Machine to provide several support services within the pod to remove the requirement for specific customer side services. A Cloud Builder VM is deployed per Site to provide the following within the pod:
The figure below shows a logical view of the VLC-Holo-Site-1 configuration within a Holodeck Pod. The Site-1 configuration uses DNS domain vcf.sddc.lab.
Figure 1: Holodeck Nested Diagram
The Holodeck package also provides a preconfigured Photon OS VM, called “Holo-Router”, that functions as a virtualized router for the base environment. This VM allows for connecting the nested environment to the external world. The Holo-Router is configured to forward any Microsoft Remote Desktop (RDP) traffic to the nested jump host, known as the Holo-Console, which is deployed within the pod.
The user interface to the nested VCF environment is a Windows Server 2019 "Holo-Console" virtual machine. Holo-Console provides a place to manage the internal nested environment, like a system administrator's desktop in a datacenter. Holo-Console is used to run the VLC package to deploy the nested VCF instance inside the pod. Holo-Console VMs are deployed from a custom-built ISO that configures the following:
The figure below shows the virtual machines running on the physical ESXi host to deliver a Holodeck Pod called “Holo-A”. Notice an instance of Holo-Console, Holo-Router, Cloud Builder and four nested ESXi hosts. They all communicate over the VLC-A-PG Port Group
Figure 2: Holodeck Nested Hosts
Adding a second site adds an additional instance of Cloud Builder and additional nested ESXi hosts. VLC-Holo-Site-2 connects to the second internal leg of the Holo-Router on VLAN 20. Network access from the Holo-Console to VLC-Holo-Site-2 is via Holo-Router.
The figure below shows a logical view of the VLC-Holo-Site-2 configuration within a Holodeck Pod. The Site-2 configuration uses DNS domain vcf2.sddc.lab
Figure 3: Holodeck Site-2 Diagram
User access to the Holodeck pod is via the Holo-Console. Access to Holo-Console is available via two paths:
First step: shut down the ESXi server and enable encryption.
Second: add a vTPM.
Boot the ESXi server(s).
Configure Key Providers (add a Native Key Provider).
Now you can add a vTPM to your VM.
Don't forget to enable VBS.
I created a GPO "SRV 2022 – Virtualization Based Security" and applied it only to my Server 2022 lab environment.
System Information on my Server 2022 lab server:
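Instead of opening System Information every time, the VBS status can also be checked from PowerShell. A quick sketch, run on the lab server itself:
# Quick check of Virtualization Based Security.
# VirtualizationBasedSecurityStatus: 0 = not enabled, 1 = enabled but not running, 2 = running.
Get-CimInstance -Namespace 'root\Microsoft\Windows\DeviceGuard' -ClassName 'Win32_DeviceGuard' |
    Select-Object VirtualizationBasedSecurityStatus, SecurityServicesConfigured, SecurityServicesRunning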
With all the Fabric configuration done we can test our setup.
I’m creating two overlay segments in NSX connected to a Tier-1 gateway, and after that we’ll create a Tier-0 gateway and connect the T1 gateway to it to get North/South connectivity to the overlay resources
Two VMs will be deployed, one VM in each of the two overlay segments
Create a Tier-1 Gateway
The Tier-1 Gateway will initially not be connected to a Tier-0 Gateway (I haven’t configured a T0 gw yet) or an Edge Cluster.
Tier-1 Gateway
Create Logical segments
We need two logical segments, both using the Overlay Transport Zone. I’m defining different subnets on them, 10.0.1.0/24 and 10.0.2.0/24.
Segments
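Everything here is done in the NSX Manager UI, but the same segments can also be created through the Policy API if you want to script it. A rough sketch with Invoke-RestMethod (PowerShell 7); the segment name, Tier-1 ID and transport zone ID are assumptions for my lab:
# Rough sketch: create an overlay segment through the NSX-T Policy API (PowerShell 7).
# Manager address, credentials, Tier-1 ID and transport zone ID are lab-specific assumptions.
$nsxManager = 'https://192.168.150.229'
$cred = Get-Credential -Message 'NSX Manager admin'
$body = @{
    display_name        = 'Segment-10.0.1.0'
    connectivity_path   = '/infra/tier-1s/T1-GW-01'
    transport_zone_path = '/infra/sites/default/enforcement-points/default/transport-zones/<overlay-tz-id>'
    subnets             = @(@{ gateway_address = '10.0.1.1/24' })
} | ConvertTo-Json -Depth 5

Invoke-RestMethod -Method Patch -Uri "$nsxManager/policy/api/v1/infra/segments/Segment-10.0.1.0" `
    -Credential $cred -Authentication Basic -ContentType 'application/json' `
    -Body $body -SkipCertificateCheck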
Add VMs to Logical segments
We have two Photon VMs which should be added to the logical segments.
Two Photon VMs
Test connectivity
Now let’s verify that the two VMs can ping each other
Don’t forget to enable the echo rule on the Windows Firewall….
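If the test VM is a Windows machine, enabling the built-in echo rule from PowerShell is the quickest way:
# Allow inbound ICMPv4 echo requests (ping) on a Windows test VM.
Enable-NetFirewallRule -DisplayName 'File and Printer Sharing (Echo Request - ICMPv4-In)'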
Connectivity test
This shows that the overlay is working, and note again that the Edge VMs are not in use here.
External connectivity
Traffic is flowing between VMs running on logical segments inside the NSX-T environment, but what if we want to reach something outside, or reach a VM inside the NSX-T overlay from the outside?
Then we need to bring a Tier-0 Gateway into the mix.
The T-0 gateway can be configured with Uplinks that are connected to the physical network. This is done through a segment which can reach the physical network, normally through a VLAN.
To configure the uplink interfaces we need to have Edge VMs so finally we get to bring those into play as well.
Create segment for uplinks
First I’ll create a segment mapped to VLAN 99 in my lab. Note that I select the VLAN transport zone, and I do not connect the segment to a gateway
Create Uplink VLAN segment
Create Tier-0 gateway
Now we’ll create a Tier-0 gateway, note that I now also select my Edge cluster.
Create T0 gateway
Static route
To be able to forward traffic out of the NSX-T environment the T0 gateway needs to know where to send queries for IPs it doesn’t control. Normally you would want to configure a routing protocol like BGP or OSPF so that the T0 gateway could exchange routes with the physical router(s) in your network.
I’ve not set up BGP or any other routing protocol on my physical router, so I’ve just configured a default static route that forwards to my physical router. The next hop is set to the gateway address for the Uplink VLAN 99, 192.168.99.1
Static route
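For reference, the same default route could also be pushed through the Policy API. A sketch, assuming the Tier-0 is called T0-GW-01 (PowerShell 7):
# Sketch: add a 0.0.0.0/0 static route to the Tier-0 through the NSX-T Policy API.
$body = @{
    display_name = 'default-route'
    network      = '0.0.0.0/0'
    next_hops    = @(@{ ip_address = '192.168.99.1'; admin_distance = 1 })
} | ConvertTo-Json -Depth 5

Invoke-RestMethod -Method Patch `
    -Uri 'https://192.168.150.229/policy/api/v1/infra/tier-0s/T0-GW-01/static-routes/default-route' `
    -Credential (Get-Credential -Message 'NSX Manager admin') -Authentication Basic `
    -ContentType 'application/json' -Body $body -SkipCertificateCheck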
Link T1 gateway to T0 gateway
We’ve done a lot of configuring now, but still we’ve not got connectivity in or out for our VMs. The final step is to connect the Tier-1 gateway to the Tier-0 gateway, and we’ll also activate Route Advertisement of Connected Segments and Service Ports
Tier-1 Gateway
Test connectivity
Verify North/South connectivity
Yes!
Test Distributed Firewall
Let’s also do a quick test of the Distributed Firewall feature in NSX-T.
First we’ll create a rule blocking ICMP (ping) from any to my test vm and publish the rule
ICMP firewall rule
Now let's test pinging from my PC to the nested Windows 2016 server, with the rule disabled and then enabled.
Ping blocked
Summary
Hopefully this post can help someone, if not it has at least helped me.
Now we have a working environment, so we can start testing some things.
I will also look into scripting/automation against an NSX environment!
I'm doing a mini-series on my NSX-T home lab setup. It's only for testing and building knowledge about NSX-T.
With newer versions of NSX-T 3.1 and later a couple of enhancements have been made that makes the setup a lot easier, like the move to a single N-VDS with the ability to run NSX on a Virtual Distributed Switch (VDS) in vCenter with VDS version 7.0.
In NSX-T 3.1 we got the ability to have the Edge TEP on the same subnet as the hypervisor TEP. A nice write-up of this feature can be found here: https://www.virten.net/2020/11/nsx-t-3-1-enhancement-shared-esxi-and-edge-transport-vlan-with-a-single-uplink/
Lab environment
First let’s have a quick look at the lab environment:
Compute
I have one ESXi server, a Dell R730. I use only one NIC for management and virtual machine traffic.
Network
My home network consists of a single VLAN:
VLAN | Subnet | Role | Virtual Switch
---|---|---|---
0 | 192.168.150.0/24 | Management/Virtual Machine Traffic | vSwitch0
Also ensure you enable the required security settings to support nested virtualization:
Virtual Machines
I run a virtualized vSphere 7 Cluster on my host
The Distributed Virtual Switches are running version 7.0.0, which lets us deploy NSX-T on the VDS directly.
Preparations
Check out the NSX-T Data Center Workflow for vSphere for details and documentation on the process
IP Addresses and DNS records
Before deploying NSX-T in the environment I’ve prepared a few IP addresses and DNS records
Role | IP
---|---
NSX Manager | 192.168.150.229
NSX-T Edge node 1 | 192.168.150.227
NSX-T Edge node 2 (currently not in use) | 192.168.150.228
NSX-T T0 GW Interface 1 | 192.168.99.2
Note that I’ve reserved addresses for a second Edge which I’m not going to use at the moment.
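If your DNS runs on a Windows server, the records can be created with something along these lines; the zone name and host names are assumptions for my lab:
# Sketch: create the A records on a Windows DNS server. Zone and host names are assumptions.
$zone = 'homelab.local'
$records = @{
    'nsxmanager' = '192.168.150.229'
    'nsxedge01'  = '192.168.150.227'
    'nsxedge02'  = '192.168.150.228'
}
foreach ($name in $records.Keys) {
    Add-DnsServerResourceRecordA -ZoneName $zone -Name $name -IPv4Address $records[$name] -CreatePtr
}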
Deploy NSX manager appliance
VMware documentation reference
I downloaded the NSX Manager appliance and imported the OVF into the cluster. I won't go into details about this, I just followed the deployment wizard.
In my lab I’ve selected to deploy a small appliance which requires 4 vCPUs, 16 GB RAM and 300 GB disk space. For more details about the NSX Manager requirements look at the official documentation
Note that I’ll not be deploying a NSX Manager cluster in my setup. In a production environment you should naturally follow best practices and configure a cluster of NSX Managers
NSX-T deployment
Now let’s get rocking with our NSX-T setup!
We’ll start the NSX manager and prepare it for configuring NSX in the environment
Initial Manager config
After first login I’ll accept the EULA and optionally enable the CEIP
License
Next I’ll add the license.
Add license
Import certificate
IP Pools
Our tunnel endpoints (TEPs) will need IP addresses, and I've set aside a subnet for this as mentioned. In NSX Manager we'll add an IP pool with addresses from this subnet. (The IP pool I'm using is probably way larger than needed in a lab setup like this.)
TEP pool
Compute Manager
With all that sorted we’ll connect the NSX manager to our vCenter server so we can configure our ESXi hosts and deploy our edge nodes.
It's best to use a specific service account for the connection.
Compute Manager added
Fabric configuration
Now we’re ready for building out our network fabric which will consist of the following:
Transport Zones
Overlay
VLAN
Transport Nodes
ESXi Hosts
Edge VMs
Edge clusters
Take a look at this summary of the Key concepts in NSX-T to learn more about them.
Transport Zone
The first thing we'll create are the Transport Zones. These will be used on multiple occasions later on. A Transport Zone is used as a collection of hypervisor hosts that makes up the span of logical switches.
The defaults could be used, but I like to create my own.
Transport Zones
Uplink Profiles
Uplink profiles will be used when we configure our Transport Nodes, both Hosts and Edge VMs. The profile defines how a Host Transport node (hypervisor) or an Edge Transport node (VM) will connect to the physical network.
Again, I'm creating my own profile and leaving the default profiles as they are.
Uplink profile
In my environment I have only one Uplink to use. Note that I’ve set the Transport VLAN to 0 which also corresponds with the TEP VLAN mentioned previously.
Transport Node Profile
Although not strictly needed, I’m creating a Transport Node profile which will let me configure an entire cluster of hosts with the same settings instead of having to configure each and every host
In the Transport Node profile we first select the type of Host switch. In my case I’m selecting the VDS option, which will let me select a specific switch in vCenter.
We’ll also add in our newly created Transport Zones
Creating Transport Node profile
We'll select our Uplink profile and the IP pool we created earlier; finally, we can set the mapping between the uplinks.
vCenter View
Creating Transport Node profile
Configure NSX on hosts
With our Transport Node profile we can go ahead and configure our ESXi hosts for NSX
Configure cluster for NSX
Select profile
After selecting the profile NSX Manager will go ahead and configure our ESXi hosts.
Hosts configuring
After a few minutes our hosts should be configured and ready for NSX
Hosts configured
Trunk segment
Next up is to create our Edge VMs which we will need for our Gateways and Services (NAT, DHCP, Load Balancer).
But before we deploy those we’ll have to create a segment for the uplink of the Edge VMs. This will be a Trunk segment which we create in NSX. Initially I created a Trunk portgroup on the VDS in vSphere, but that doesn’t work. The Trunk needs to be configured as a logical segment in NSX-T when using the same VLAN for both the Hypervisor TEPs and the Edge VM TEPs
Trunk segment
Edge VM
Now we can deploy our Edge VM(s). I'm using Medium-sized VMs in my environment. Note that the Edge VMs are not strictly necessary for the test we'll perform later on with connecting two VMs, but if we want to use some services later on, like DHCP, load balancing and so on, we'll need them.
Deploy edge VM
Note the NSX config, where we set the switch name, the Transport Zones we created, the Uplink profile, the IP pool and finally we use the newly created Trunk segment for the Edge uplink
NSX Edge config
Edge cluster
We’ll also create an Edge cluster and add the Edge VM to it
Edge cluster
Summary
Wow, this was a lot of configuring, but that was also the whole point of doing this blog post. Stuff like this is learned best by getting your hands dirty and doing some actual work. And I learn even better when I'm writing and documenting it as well.
In the next blog post we’ll test the fabric to see if what we’ve done is working. We’ll also try to get some external connectivity to our environment.
Hopefully this post can help someone, if not it has at least helped me.
Thanks for reading!
Special thanks to https://rudimartinsen.com/2021/06/29/nsx-t-31-homelab/ for his blog post.
You can read more about the specific change here: https://support.microsoft.com/en-us/help/4520412/2020-ldap-channel-binding-and-ldap-signing-requirement-for-windows and there is additional background here: https://techcommunity.microsoft.com/t5/core-infrastructure-and-security/ldap-channel-binding-and-ldap-signing-requirements-update-now/ba-p/921536
After the change the following features will be supported against Active Directory.
Clients that rely on unsigned SASL (Negotiate, Kerberos, NTLM, or Digest) LDAP binds or on LDAP simple binds over a non-SSL/TLS connection will stop working after you make this configuration change. This also applies to third-party solutions that rely on LDAP, such as Citrix NetScaler/ADC, other network appliances, Vault, and other authentication mechanisms that rely on LDAP. If you haven't fixed this, they will stop working. This update applies to all versions:
Windows Server 2008 SP2,
Windows 7 SP1,
Windows Server 2008 R2 SP1,
Windows Server 2012,
Windows 8.1,
Windows Server 2012 R2,
Windows 10 1507,
Windows Server 2016,
Windows 10 1607,
Windows 10 1703,
Windows 10 1709,
Windows 10 1803,
Windows 10 1809,
Windows Server 2019,
Windows 10 1903,
Windows 10 1909
If the directory server is configured to reject unsigned SASL LDAP binds or LDAP simple binds over a non-SSL/TLS connection, the directory server will log a summary under event ID 2888 once every 24 hours when such bind attempts occur. Microsoft advises administrators to enable LDAP channel binding and LDAP signing as soon as possible before March 2020 to find and fix any operating system, application, or intermediate device compatibility issues in their environment.
You can also use this article to troubleshoot https://docs.microsoft.com/en-us/archive/blogs/russellt/identifying-clear-text-ldap-binds-to-your-dcs
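To get a feel for which clients would be affected, you can also query the Directory Service event log on your domain controllers. A simple sketch; 'DC01' is a placeholder:
# Sketch: look for LDAP signing / channel binding events on a domain controller.
# 2887 = daily summary of unsigned/simple binds received, 2888 = daily summary once binds are rejected,
# 2889 = individual client IPs (only logged when the 'LDAP Interface Events' diagnostic level is raised).
Get-WinEvent -ComputerName 'DC01' -FilterHashtable @{
    LogName = 'Directory Service'
    Id      = 2887, 2888, 2889
} -MaxEvents 50 -ErrorAction SilentlyContinue |
    Select-Object TimeCreated, Id, Message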
Credits: https://msandbu.org/upcoming-change-microsoft-to-disable-use-of-unsigned-ldap-port-389/