Windows Server 2025 “Preview” deployment with Packer

Now that the Windows Server 2025 Preview has officially been released, I wanted to test an automated build of the Windows Server 2025 Preview release, so that I can deploy it in my home lab and test the new features if I can find the time.

About Hashicorp Packer

HashiCorp Packer is a self-contained executable that produces quick and easy operating system builds across multiple platforms. Using Packer and a couple of HCL2 files, you can quickly create fully automated templates with the latest Windows updates and VMware Tools. If you schedule a fresh build after Patch Tuesday, you always have an up-to-date and fully patched template.

Combined with VMware customization specifications, you can spin up VMs in minutes.

Automated Windows Server 2025 “Preview” Build

Which files do you need?

Besides downloading Packer and the Windows Server 2025 Preview build itself, the files and versions I am using at the time of this writing are as follows:

  • windowsserver2025.auto.pkrvars.hcl – houses the variable values you want to define.
  • windows2025.json.pkr.hcl – the Packer build file
  • Answer file – generated with Windows System Image Manager (SIM); you can download the file below
  • Custom script file(s) – optional

Other considerations and tasks you will need to complete:

  • Copy the Windows Server 2025 ISO file to a vSphere datastore
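
With those files in place, the build itself is only a couple of commands. A minimal sketch, assuming you run Packer from the directory that holds the two HCL files listed above:

# Validate the template and variable definitions first, then start the build
packer init .
packer validate -var-file="windowsserver2025.auto.pkrvars.hcl" windows2025.json.pkr.hcl
packer build -var-file="windowsserver2025.auto.pkrvars.hcl" windows2025.json.pkr.hcl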

Windows Server 2025 unattend Answer file for the automated Packer Build

Like other automated approaches to installing Windows Server, the automated Windows Server 2025 Packer build requires an answer file to automatically respond to the GUI prompts and other installation questions that you would normally answer during a manual installation of Windows Server.
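
As a quick sanity check, you can verify that the answer file parses as valid XML before baking it into the build. A small sketch (the file name autounattend.xml is an assumption):

# Throws a parse error if the answer file is not well-formed XML
[xml](Get-Content -Raw .\autounattend.xml) | Out-Null
Write-Output 'Answer file parsed OK'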

You will find the scripts here: https://github.com/WardVissers/Packer-Win2025

The only problem that I had was switching the NIC from Public to Private.

# Set network connection profiles to Private mode.
Write-Output 'Setting the network connection profiles to Private...'
do {
    $connectionProfile = Get-NetConnectionProfile
    Start-Sleep -Seconds 10
} while ($connectionProfile.Name -eq 'Identifying...')
Set-NetConnectionProfile -Name $connectionProfile.Name -NetworkCategory Private

Windows Server 2025 Preview (Build: Canary 26052)

I had some time to check out the new version of Server 2025.

For the full list of upcoming features, check: https://ignite.microsoft.com/en-US/sessions/f3901190-1154-45e3-9726-d2498c26c2c9?source=sessions

Download Server 2025 Preview: https://www.microsoft.com/en-us/software-download/windowsinsiderpreviewserver

Server 2025 will come with a lot of features (my top 20+):

  • General – Server 2022 upgrade to vNext (controlled by GPO)
  • Hot Patching (Arc-enabled, monthly subscription)
  • Active Directory – 32k database page size
  • Active Directory – NUMA support
  • Active Directory – LDAP over TLS 1.3
  • Active Directory – Improved security for confidential attributes
  • Active Directory – LDAP prefers encryption by default
  • Active Directory – Kerberos support for AES/SHA-256/384
  • Active Directory – Changes to the default behavior of legacy SAM RPC password change methods
  • Active Directory – Kerberos and PKINIT cryptographic agility support
  • Active Directory – New AD forest and domain functional level (minimum Server 2016 requirement)
  • Storage – NVMe 70%/90% performance increase
  • File Server – SMB over the internet (QUIC protocol)
  • File Server – More control over NTLM
  • File Server – SMB rate limiter (enabled by default)
  • File Server – SMB signing by default
  • File Server – Configurable minimum SMB version
  • File Server – More secure by default (NetBIOS disabled by default)
  • RDS – M365 Apps still supported for every Windows Server release for 2-3 years
  • Finance – General support and pay-as-you-go support

I need to find some time to dig in.

Handy link: https://techcommunity.microsoft.com/t5/windows-server-insiders/announcing-windows-server-preview-build-26040/m-p/4040858

Holodeck Toolkit Overview

Holodeck Toolkit 1.3 Overview

The VMware Cloud Foundation (VCF) Holodeck Toolkit is designed to provide a scalable, repeatable way to deploy nested Cloud Foundation hands-on environments directly on VMware ESXi hosts. These environments are ideal for multi-team hands-on exercises exploring the capabilities of utilizing VCF to deliver a Customer Managed VMware Cloud.


Delivering labs in a nested environment solves several challenges with delivering hands-on training for a product like VCF, including:  

  • Reduced hardware requirements: When operating in a physical environment, VCF requires four vSAN Ready Nodes for the management domain, and additional hosts for adding clusters or workload domains. In a nested environment, the same four to eight hosts are easily virtualized to run on a single ESXi host.   
  • Self-contained services: The Holodeck Toolkit configuration provides common infrastructure services, such as NTP, DNS, AD, Certificate Services and DHCP within the environment, removing the need to rely on datacenter provided services during testing.  Each environment needs a single external IP.
  • Isolated networking. The Holodeck Toolkit configuration removes the need for VLAN and BGP connections in the customer network early in the testing phase.  
  • Isolation between environments. Each Holodeck deployment is completely self-contained. This avoids conflicts with existing network configurations and allows for the deployment of multiple nested environments on the same hardware or in the same datacenter with no concerns about overlap. 
  • Multiple VCF deployments on a single VMware ESXi host with sufficient capacity. A typical VCF Standard Architecture deployment of four node management domain and four node VI workload domain, plus add on such as VMware vRealize Automation requires approximately 20 CPU cores, 512GB memory and 2.5TB disk.  
  • Automation and repeatability. The deployment of nested VCF environments is almost completely hands-off, and easily repeatable using configuration files.  A typical deployment takes less than 3 hours, with less than 15 min keyboard time.

Nested Environment Overview 

The “VLC Holodeck Standard Main 1.3” configuration is a nested VMware Cloud Foundation configuration used as the baseline for several Private Cloud operation and consumption lab exercises created by the Cloud Foundation Technical Marketing team. The Holodeck standard “VLC-Holo-Site-1” is the primary configuration deployed. The optional VLC-Holo-Site-2 can be deployed at any time later within a Pod.  VLC-Holo-Site-1 configuration matches the lab configuration in the VCF Hands-On Lab HOL-2246 and the nested configuration in the VCF Experience program run on the VMware Lab Platform. 

Each Pod on a Holodeck deployment runs an identical nested configuration. A pod can be deployed with a standalone VLC-Holo-Site-1 configuration, or with both VLC-Holo-Site-1 and VLC-Holo-Site-2 configurations active. Separation of the pods and between sites within a pod is handled at the VMware vSphere Standard Switch (VSS) level.  Each Holodeck pod connects to a unique VSS and Port Group per site.    A VMware vSphere Port Group is configured on each VSS and configured as a VLAN trunk.  

  • Components on the port group use VLAN tagging to isolate communications between nested VLANs. This removes the need to have physical VLANs plumbed to the ESXi host to support nested labs.  
  • When the Holo-Site-2 configuration is deployed, it uses a second VSS and Port Group for isolation from Holo-Site-1.  

The VLC Holodeck configuration customizes the VCF Cloud Builder Virtual Machine to provide several support services within the pod to remove the requirement for specific customer side services. A Cloud Builder VM is deployed per Site to provide the following within the pod: 

  • DNS (local to Site1 and Site2 within the pod, acts as forwarder) 
  • NTP (local to Site1 and Site2 within the pod) 
  • DHCP (local to Site1 and Site2 within the pod) 
  • L3 TOR for vMotion, vSAN, Management, Host TEP and Edge TEP networks within each site 
  • BGP peer from VLC Tier 0 NSX Application Virtual Network (AVN) Edge (Provides connectivity into NSX overlay networks from the lab console)

The figure below shows a logical view of the VLC-Holo-Site-1 configuration within a Holodeck Pod. The Site-1 configuration uses DNS domain vcf.sddc.lab.

 Figure 1: Holodeck Nested Diagram

The Holodeck package also provides a preconfigured Photon OS VM, called “Holo-Router”, that functions as a virtualized router for the base environment. This VM allows for connecting the nested environment to the external world. The Holo-Router is configured to forward any Microsoft Remote Desktop (RDP) traffic to the nested jump host, known as the Holo-Console, which is deployed within the pod.

The user interface to the nested VCF environment is a Windows Server 2019 “Holo-Console” virtual machine. Holo-Console provides a place to manage the internal nested environment, like a system administrator’s desktop in a datacenter. Holo-Console is used to run the VLC package to deploy the nested VCF instance inside the pod. Holo-Console VMs are deployed from a custom-built ISO that configures the following:

  • Microsoft Windows Server 2019 Desktop Experience with: 
    • Active Directory domain “vcf.holo.lab” 
    • DNS forwarder to Cloud Builder  
    • Certificate Server, Web Enrollment and VMware certificate template 
    • RDP enabled 
    • IP, subnet, gateway, DNS and VLAN configured for deployment as Holo-Console  
    • Firewall and IE Enhanced Security disabled  
    • SDDC Commander custom desktop deployed 
  • Additional software packages deployed and configured: 
    • Google Chrome with Holodeck bookmarks 
    • VMware Tools 
    • VMware PowerCLI 
    • VMware PowerVCF 
    • VMware Power Validated Solutions 
    • PuTTY SSH client 
    • VMware OVFtool 
  • Additional software packages copied to Holo-Console for later use: 
    • VMware Cloud Foundation 4.5 Cloud Builder OVA to C:\CloudBuilder 
    • VCF Lab Constructor 4.5.1 with dual site Holodeck configuration
      • VLC-Holo-Site-1 
      • VLC-Holo-Site-2 
    • VMware vRealize Automation 8.10 Easy Installer

The figure below shows the virtual machines running on the physical ESXi host to deliver a Holodeck Pod called “Holo-A”. Notice an instance of Holo-Console, Holo-Router, Cloud Builder and four nested ESXi hosts.  They all communicate over the VLC-A-PG Port Group   

Figure 2: Holodeck Nested Hosts

Adding a second site adds an additional instance of Cloud Builder and additional nested ESXi hosts. VLC-Holo-Site-2 connects to the second internal leg of the Holo-Router on VLAN 20. Network access from the Holo-Console to VLC-Holo-Site-2 is via Holo-Router.

The figure below shows a logical view of the VLC-Holo-Site-2 configuration within a Holodeck Pod. The Site-2 configuration uses DNS domain vcf2.sddc.lab

 Figure 3: Holodeck Site-2 Diagram

Accessing the Holodeck Environment

User access to the Holodeck pod is via the Holo-Console.  Access to Holo-Console is available via two paths:

VLC Holodeck Deployment Prerequisites 

  • ESXi Host Sizing:   
    • Good (one pod): single ESXi host with 16 cores, 384GB memory and 2TB SSD/NVMe 
    • Better (two pods): single ESXi host with 32 cores, 768GB memory and 4TB SSD/NVMe 
    • Best (four or more pods): single ESXi host with 64+ cores, 2TB memory and 10TB SSD/NVMe 
  • ESXi Host Configuration: 
    • vSphere 7.0U3 
    • Virtual switch and port group configured with uplinks to the customer network/internet  
    • Supports a standalone, non-vCenter-managed host or a single-host cluster managed by a vCenter Server instance 
    • Multi-host clusters are NOT supported
  • Holo-Build host: 
    • Windows 2019 host or VM with local access to the ESXi hosts used for Holodeck, plus internet access to download software (this package has been tested on Microsoft Windows Server 2019 only) 
    • 200GB free disk space 
    • Valid login to https://customerconnect.vmware.com  
    • Entitlement to VCF 4.5 Enterprise for 8 hosts minimum (16 hosts if planning to test Cloud Foundation multi-region with NSX Federation) 
    • License keys for the following VCF 4.5 components:
      • VMware Cloud Foundation
      • VMware NSX-T Data Center Enterprise
      • VMware vSAN Enterprise 
      • VMware vSphere Enterprise Plus 
      • VMware vCenter Server (one license)
      • VMware vRealize Suite Advanced or Enterprise
        Note: This product has been renamed VMware Aria Suite
  • External/Customer networks required:
    • ESXi host management IP (one per host) 
    • Holo-Router address per pod

Building NSX-T 3.1 Home lab Step 1

I’m doing a mini-series on my NSX-T home lab setup. It’s only for testing and building knowledge about NSX-T.

With newer versions of NSX-T, 3.1 and later, a couple of enhancements have been made that make the setup a lot easier, like the move to a single N-VDS with the ability to run NSX on a vSphere Distributed Switch (VDS) in vCenter with VDS version 7.0.

In NSX-T 3.1.1 we got the ability to have the Edge TEP on the same subnet as the hypervisor TEP. A nice write-up of this feature can be found here: https://www.virten.net/2020/11/nsx-t-3-1-enhancement-shared-esxi-and-edge-transport-vlan-with-a-single-uplink/

Lab environment

First let’s have a quick look at the lab environment:

Compute

I have one Dell R730 ESXi server. I use only one NIC for management and virtual machine traffic.

Network

My home network consists of a single VLAN:

VLAN | Subnet           | Role                               | Virtual Switch
0    | 192.168.150.0/24 | Management/Virtual Machine Traffic | vSwitch0

Also ensure you enable the required security settings on the virtual switch to support nested virtualization.
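
If you prefer to set those from PowerCLI instead of the vSphere Client, a minimal sketch (the host and vSwitch names are assumptions for my lab):

# Allow promiscuous mode, MAC changes and forged transmits for nested ESXi (lab use only)
Get-VirtualSwitch -VMHost "esxi01.wardvissers.nl" -Name "vSwitch0" |
    Get-SecurityPolicy |
    Set-SecurityPolicy -AllowPromiscuous $true -MacChanges $true -ForgedTransmits $true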

Virtual Machines

I run a virtualized vSphere 7 Cluster on my host


The Distributed Virtual Switches are running version 7.0.0, which lets us deploy NSX-T on the VDS directly.
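
You can quickly confirm this with PowerCLI:

Get-VDSwitch | Select-Object Name, Version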


Preparations

Check out the NSX-T Data Center Workflow for vSphere for details and documentation on the process

IP Addresses and DNS records

Before deploying NSX-T in the environment I’ve prepared a few IP addresses and DNS records:

Role                                     | IP
NSX Manager                              | 192.168.150.229
NSX-T Edge node 1                        | 192.168.150.227
NSX-T Edge node 2 (currently not in use) | 192.168.150.228
NSX-T T0 GW Interface 1                  | 192.168.99.2

Note that I’ve reserved addresses for a second Edge which I’m not going to use at the moment.
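
If your lab DNS runs on Windows, the records can be pre-created with a quick sketch like this (the zone and record names are assumptions for my lab; requires the DnsServer module):

Add-DnsServerResourceRecordA -ZoneName "wardvissers.nl" -Name "nsxmanager" -IPv4Address "192.168.150.229"
Add-DnsServerResourceRecordA -ZoneName "wardvissers.nl" -Name "nsxedge1" -IPv4Address "192.168.150.227"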

Deploy NSX manager appliance

VMware documentation reference

I downloaded the NSX Manager appliance and imported the OVF to the cluster. I won’t go into details about this; I just followed the deployment wizard.

In my lab I’ve selected to deploy a small appliance, which requires 4 vCPUs, 16 GB RAM and 300 GB disk space. For more details about the NSX Manager requirements, look at the official documentation.

Note that I’ll not be deploying an NSX Manager cluster in my setup. In a production environment you should naturally follow best practices and configure a cluster of NSX Managers.
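
For reference, the wizard steps can also be scripted with PowerCLI. A hedged sketch (the OVA path, names and available deployment options are assumptions; inspect the output of Get-OvfConfiguration for your download):

$ova = "C:\ISO\nsx-unified-appliance-3.1.ova"
$ovfConfig = Get-OvfConfiguration -Ovf $ova
$ovfConfig.DeploymentOption.Value = "small"   # small appliance: 4 vCPUs / 16 GB RAM
Import-VApp -Source $ova -OvfConfiguration $ovfConfig -Name "nsxmanager" `
    -VMHost (Get-VMHost | Select-Object -First 1) -Datastore (Get-Datastore | Select-Object -First 1)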

NSX-T deployment

Now let’s get rocking with our NSX-T setup!

We’ll start the NSX manager and prepare it for configuring NSX in the environment

Initial Manager config

After first login I’ll accept the EULA and optionally enable the CEIP

License

Next I’ll add the license.

Add license

Import certificate

Imported certificates

IP Pools

Our Endpoints will need IP addresses and I’ve set aside a subnet for this as mentioned. In NSX Manager we’ll add an IP pool with addresses from this subnet. (The IP pool I’m using is probably way larger than needed in a lab setup like this)
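
If you'd rather script this step, the same pool can be created through the NSX-T Policy API. A hedged sketch (the pool name, CIDR and allocation range are my lab values; API paths follow the NSX-T 3.1 Policy API, and you may need to handle the self-signed lab certificate, e.g. -SkipCertificateCheck on PowerShell 7):

$nsx = "https://192.168.150.229"
$pair = "admin:YourNsxPasswordHere"   # lab credentials (assumption)
$hdr = @{ Authorization = "Basic " + [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes($pair)) }
# Create (or update) the pool object itself
Invoke-RestMethod -Method Patch -Headers $hdr -ContentType "application/json" `
    -Uri "$nsx/policy/api/v1/infra/ip-pools/TEP-Pool" `
    -Body (@{ display_name = "TEP-Pool" } | ConvertTo-Json)
# Add a static subnet with the allocation range the TEPs will use
$subnet = @{
    resource_type     = "IpAddressPoolStaticSubnet"
    cidr              = "192.168.150.0/24"
    allocation_ranges = @(@{ start = "192.168.150.40"; end = "192.168.150.60" })
} | ConvertTo-Json -Depth 5
Invoke-RestMethod -Method Patch -Headers $hdr -ContentType "application/json" `
    -Uri "$nsx/policy/api/v1/infra/ip-pools/TEP-Pool/ip-subnets/TEP-Subnet" -Body $subnet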


TEP pool

Compute Manager

With all that sorted we’ll connect the NSX manager to our vCenter server so we can configure our ESXi hosts and deploy our edge nodes.

Best practice is to use a specific service account for the connection.


Compute Manager added

Fabric configuration

Now we’re ready for building out our network fabric, which will consist of the following:

  • Transport Zones
    • Overlay
    • VLAN
  • Transport Nodes
    • ESXi Hosts
    • Edge VMs
  • Edge clusters

Take a look at this summary of the key concepts in NSX-T to learn more about them.

Transport Zone

The first thing we’ll create are the Transport Zones. These will be used on multiple occasions later on. A Transport Zone is a collection of hypervisor hosts that makes up the span of logical switches.

The defaults could be used, but I like to create my own.


Transport Zones

Uplink Profiles

Uplink profiles will be used when we configure our Transport Nodes, both Hosts and Edge VMs. The profile defines how a Host Transport node (hypervisor) or an Edge Transport node (VM) will connect to the physical network.

Again I’m creating my own profile and leaving the default profiles as they are.


Uplink profile

In my environment I have only one Uplink to use. Note that I’ve set the Transport VLAN to 0 which also corresponds with the TEP VLAN mentioned previously.

Transport Node Profile

Although not strictly needed, I’m creating a Transport Node profile which will let me configure an entire cluster of hosts with the same settings instead of having to configure each and every host

In the Transport Node profile we first select the type of Host switch. In my case I’m selecting the VDS option, which will let me select a specific switch in vCenter.

We’ll also add in our newly created Transport Zones


Creating Transport Node profile

We’ll select our Uplink profile and the IP Pool we created earlier; finally we can set the mapping between the uplinks.

vCenter View

Creating Transport Node profile

Configure NSX on hosts

With our Transport Node profile we can go ahead and configure our ESXi hosts for NSX

Configure cluster for NSX


Select profile

After selecting the profile NSX Manager will go ahead and configure our ESXi hosts.

Hosts configuring

After a few minutes our hosts should be configured and ready for NSX


Hosts configured

Trunk segment

Next up is to create our Edge VMs which we will need for our Gateways and Services (NAT, DHCP, Load Balancer).

But before we deploy those we’ll have to create a segment for the uplink of the Edge VMs. This will be a Trunk segment, which we create in NSX. Initially I created a trunk port group on the VDS in vSphere, but that doesn’t work: the trunk needs to be configured as a logical segment in NSX-T when using the same VLAN for both the hypervisor TEPs and the Edge VM TEPs.
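
For completeness: a trunk segment like this can also be created through the Policy API. A hedged sketch, reusing $nsx and $hdr from the IP pool example earlier (the segment name and transport zone ID are assumptions):

$body = @{
    display_name        = "Trunk-Segment"
    vlan_ids            = @("0-4094")
    transport_zone_path = "/infra/sites/default/enforcement-points/default/transport-zones/TZ-VLAN"
} | ConvertTo-Json
Invoke-RestMethod -Method Patch -Headers $hdr -ContentType "application/json" `
    -Uri "$nsx/policy/api/v1/infra/segments/Trunk-Segment" -Body $body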


Trunk segment

Edge VM

Now we can deploy our Edge VM(s). I’m using Medium sized VMs in my environment. Note that the Edge VMs are not strictly necessary for the test we’ll perform later on with connecting two VMs, but if we want to use some services later on, like DHCP, load balancing and so on, we’ll need them.

Deploy edge VM

Deploy edge VM

Note the NSX config, where we set the switch name, the Transport Zones we created, the Uplink profile, the IP pool, and finally the newly created Trunk segment for the Edge uplink.

NSX Edge config

Edge cluster

We’ll also create an Edge cluster and add the Edge VM to it

Edge cluster

Summary

Wow, this was a lot of configuring, but that was also the whole point of doing this blog post. Stuff like this is learned best by getting your hands dirty and doing some actual work. And I learn even better when I’m writing and documenting it as well.

In the next blog post we’ll test the fabric to see if what we’ve done is working. We’ll also try to get some external connectivity to our environment.

Hopefully this post can help someone, if not it has at least helped me.

Thanks for reading!

Special thanks to https://rudimartinsen.com/2021/06/29/nsx-t-31-homelab/ for his blog post.

Ultimate Cross vCenter Script

Last year I attended the Dutch VMUG (NLVMUG), where I followed a session from Michael Wilmsen: Migrate your datacenter without downtime.

I also had to move a lot of VMs from different datacenters to other datacenters. I used Michael Wilmsen's script to move the VMs, but along the way I ran into some problems with it. So I began tweaking and tweaking and tweaking this script, to create for me the ultimate cross vCenter PowerCLI script.

Cool features:
– Notifications through WhatsApp (not enabled by default)
– Dry run (test run)
– Logging
– Selection through GUI
– Multiple NIC support, up to a maximum of 4
– Datastore and host selection based on free space and free memory
– Check whether the destination host or datastore is in maintenance
– Check whether the destination datastore exists in the destination cluster

MoveVM.ps1:
#Filename: MoveVM.ps1
#Author: M. Wilmsen / W. Vissers
#Source: http://virtual-hike.com/nlvmug-2018/
#Version: 2.0
#Date: 21-10-2018
#ChangeLog:
# V0.9 – M. Wilmsen First Version
# V1.0 – Fixed multiple NICs to a maximum of 4 NICs
#      – Logfile named after the VM
# V1.1 – Destination cluster, not the first host
# V1.2 – Select destination host based on memory used
# V1.3 – Fixed folder location and VirtualPortGroup
# V1.4 – Fixed datastore in maintenance
# V1.5 – Using Get-VICredentialStoreItem + logpath fixed
# V1.6 – Fixed logging hours to 24-hour format
# V1.7 – Fixed using DatastoreCluster name based on cluster name!
# V1.8 – Check if destination has the same datastore
#      – Ask for input up front
#      – VM selection with VMhost
#      – Fixed ping check
# v1.9 – Added check: destination datastore exists in destination cluster
# v2.0 – Fixed check: destination datastore exists in destination cluster
<#
.SYNOPSIS
Script to migrate a virtual machine
.DESCRIPTION
Script to migrate compute and storage from cluster to cluster. Log will be in the current dir: [VM]-[timestamp].log

.EXAMPLE
MoveVM.ps1
#>
################################## INIT #################################################
#Set WebOperation timeout
# Set-PowerCLIConfiguration -WebOperationTimeoutSeconds 3600
#Define global variables
$location = "D:\xmovewhattsapp"
$LogPath = ".\"
$DataStoreClusterPrefix = "SAN-"
$SourceVC = Read-Host "Give Source vCenter"
$DestinationVC = Read-Host "Give Destination vCenter"
$DRSRecommendation = $true
$Dryrun = $false
$SendWhatsApp = $false
$WhatsAppNumbers = "0123456789"
$WhatsAppGroup = "NameYourWhatsAppGroupHere"
$instanceId = "23" #change this line
$clientId = "demo@demo.nl" #change this line
$clientSecret = "PutYourSecretIdHere" #change this line
################################## PASSWORD STORE ##############################################
#Username
# Check if credentials exist in credential store if not ask for credentials and put them in credential store

If ((Get-VICredentialStoreItem).host -notcontains $SourceVC) {New-VICredentialStoreItem -Host $SourceVC -User $env:USERNAME -Password ((get-credential).GetNetworkCredential().Password)}
If ((Get-VICredentialStoreItem).host -notcontains $DestinationVC) {New-VICredentialStoreItem -Host $DestinationVC -User $env:USERNAME -Password ((get-credential).GetNetworkCredential().Password)}

# Remove-VICredentialStoreItem * -Confirm:$false

################################## END INIT #################################################
################################## FUNCTIONS #################################################
#Define log function
Function LogWrite
{
    Param ([string]$logstring)
    #Add logtime to entry
    $LogTime = Get-Date -Format "MM-dd-yyyy_HH-mm-ss"
    $logstring = $LogTime + " : " + $logstring
    #Write logstring
    Add-Content $LogFile -Value $logstring
    Write-Host $logstring
}
#Define SendWhatsApp function
Function SendWhatsApp
{
   Param ([string] $message)

   if ( $SendWhatsApp ) {
     $LogTime = Get-Date -Format "MM-dd-yyyy_HH-mm-ss"
     $message = $LogTime + " : " + $message

     foreach ( $number in $WhatsAppNumbers )
     {
        $jsonObj = @{'group_admin'=$number;
                     'group_name'=$WhatsAppGroup;
                     'message'=$message;}
       Try {
         $res = Invoke-WebRequest -Uri "http://api.whatsmate.net/v2/whatsapp/group/message/$instanceId" `
                           -Method Post `
                           -Headers @{"X-WM-CLIENT-ID"=$clientId; "X-WM-CLIENT-SECRET"=$clientSecret;} `
                           -Body (ConvertTo-Json $jsonObj)
         LogWrite ("WhatsMate Status Code: " + $res.StatusCode)
         LogWrite $res.Content
       }
       Catch {
         #Read the error response body from the failed web request
         $result = $_.Exception.Response.GetResponseStream()
         $reader = New-Object System.IO.StreamReader($result)
         $reader.BaseStream.Position = 0
         $reader.DiscardBufferedData()
         $responseBody = $reader.ReadToEnd();

         Write-Host "Status Code: " $_.Exception.Response.StatusCode
         Write-Host $responseBody
         }
     }
   }
}

function Get-VmSize($vm)
{
     #Initialize variables
     $VmDirs = @()
     $VmSize = 0
     $searchSpec = New-Object VMware.Vim.HostDatastoreBrowserSearchSpec
     $searchSpec.details = New-Object VMware.Vim.FileQueryFlags
     $searchSpec.details.fileSize = $TRUE
     Get-View -VIObject $vm | % {
         #Create an array with the VM's directories
         $VmDirs += $_.Config.Files.VmPathName.split("/")[0]
         $VmDirs += $_.Config.Files.SnapshotDirectory.split("/")[0]
         $VmDirs += $_.Config.Files.SuspendDirectory.split("/")[0]
         $VmDirs += $_.Config.Files.LogDirectory.split("/")[0]
         #Add directories of the VM's virtual disk files
         foreach ($disk in $_.Layout.Disk) {
             foreach ($diskfile in $disk.diskfile){
                 $VmDirs += $diskfile.split("/")[0]
             }
         }
         #Only take unique array items
         $VmDirs = $VmDirs | Sort | Get-Unique
         foreach ($dir in $VmDirs){
             $ds = Get-Datastore ($dir.split("[")[1]).split("]")[0]
             $dsb = Get-View (($ds | Get-View).Browser)
             $taskMoRef = $dsb.SearchDatastoreSubFolders_Task($dir,$searchSpec)
             $task = Get-View $taskMoRef
             while($task.Info.State -eq "running" -or $task.Info.State -eq "queued"){ $task = Get-View $taskMoRef }
             foreach ($result in $task.Info.Result){
                 foreach ($file in $result.File){
                     $VmSize += $file.FileSize
                 }
             }
         }
     }
     return $VmSize
}
################################## END FUNCTIONS #################################################
#Login to vCenter servers
if (($global:DefaultVIServers).Name -notcontains $SourceVC -or ($global:DefaultVIServers).Name -notcontains $DestinationVC) {

#SourceVC
$ConnectVC = Connect-VIServer $SourceVC
$Message = "Connecting to " + $ConnectVC + " as " + $env:USERNAME
#Logwrite $Message
#DestinationVC
$ConnectVC = Connect-VIServer $DestinationVC
$Message = "Connecting " + $ConnectVC + " as " + $env:USERNAME
#Logwrite $Message

# Disconnect-VIServer * -Confirm:$false

}
Set-Location $location

$cluster = Get-Cluster -Server $SourceVC | Out-GridView -OutputMode Single -Title "Select Source Cluster"
$vmtomigrate = Get-Cluster $cluster -Server $SourceVC | Get-VM | Out-GridView -OutputMode Single -Title "Select VM"
$DestinationCluster = Get-Cluster -Server $DestinationVC | Out-GridView -OutputMode Single -Title "Select Destination Cluster"
$vmfolder = Get-Folder -Server $DestinationVC | Out-GridView -OutputMode Single -Title "Select Folder"

#Main Script

     #Set $MigError to false before migration
     $MigError = $false
     #Get VM variables
     $vm = Get-VM $vmtomigrate

     #Define LogFile with time stamp
     $LogTime = Get-Date -Format "MM-dd-yyyy_HH-mm-ss"

     if([IO.Directory]::Exists($LogPath))
     {
     #Do Nothing!!
     }
     else
     {
     New-Item -ItemType directory -Path $LogPath
     }
     $LogFile = $LogPath+$vm+"-"+$LogTime+".log"

     #Log the user running the script
     Logwrite $env:USERNAME

     #Get VM info
     $VMHDDSize = Get-VmSize($vm)
     $VMHDDSize = [Math]::Round(($VMHDDSize / 1GB),2)

     Logwrite "Start Virtual Machine Move"
     #If WhatsApp is enabled, make a notice
     if ( $SendWhatsApp ) { LogWrite "Notifications will be sent using WhatsApp to WhatsApp Group: $WhatsAppGroup" }
     #If DryRun, make a notice
     if ( $Dryrun ) {
     Logwrite "Start move virtual machine $vm Disksize $VMHDDSize GB (DryRun)"
     SendWhatsApp "Start move virtual machine $vm Disksize $VMHDDSize GB (DryRun)"
     }
     else {
     Logwrite "Start move virtual machine $vm Disksize $VMHDDSize GB"
     SendWhatsApp "Start move virtual machine $vm Disksize $VMHDDSize GB"
     }
     $SourceCluster = Get-VM $vm | Get-Cluster | Select Name
     $vmip = $vm | Select @{N="IP Address";E={@($_.guest.IPAddress[0])}}
     $vmip = $vmip."IP Address"
     $NetworkAdapter = Get-NetworkAdapter -VM $vm -Server $SourceVC
     $SourceVMPortGroup = Get-NetworkAdapter -VM $vm | Select NetworkName
     $switchname = $DestinationCluster

     $Datastore = Get-VM $vm | Get-Datastore -Server $SourceVC | Select @{N="Name";E={@($_.Name)}}
     $Datastore = $Datastore.Name
     $DatastoreExistinOthervCenter = Get-Cluster $DestinationCluster | Get-Datastore -Server $DestinationVC | ? {$_.Name -like "*$Datastore*"}

     if ($DatastoreExistinOthervCenter)
      {
      LogWrite "Datastore exists in $DestinationCluster in destination vCenter $DestinationVC"
      $destinationDatastore = $DatastoreExistinOthervCenter }
      Else
      {
      LogWrite "Datastore does not exist in $DestinationCluster in destination vCenter $DestinationVC"
      # Select the datastore with the most free space that is not in maintenance
      $DatastoreCluster = "$DataStoreClusterPrefix"+"$DestinationCluster"
      $destinationDatastore = Get-DatastoreCluster $DatastoreCluster | Get-Datastore | Where {$_.State -ne "Maintenance"} | Sort-Object -Property FreeSpaceGB -Descending | Select-Object -First 1
      }

     $destinationDatastoreFreeSpace = $destinationDatastore | Select Name,@{N="FreeSpace";E={$_.ExtensionData.Summary.FreeSpace}}
     $destinationDatastoreFreeSpace = [Math]::Round(($destinationDatastoreFreeSpace."FreeSpace" / 1GB),2)

     # Select the host with the least used memory
     $DestinationHost = Get-Cluster -Name $DestinationCluster -Server $DestinationVC | Get-VMHost -State Connected | Sort-Object -Property MemoryUsageGB | Select-Object -First 1
            
     # Select a destination port group for each network adapter (supports up to 4 NICs)
     $DestinationVMPortgroup = @()
     $NicCount = [Math]::Min($NetworkAdapter.Count, 4)
     for ($i = 1; $i -le $NicCount; $i++) {
         $DestinationVMPortgroup += Get-VirtualPortGroup -Server $DestinationVC -VMHost $DestinationHost | Out-GridView -OutputMode Single -Title "Select Nic$i"
     }

     LogWrite "Start move: $vm"
     Logwrite "VM IP: $vmip"
     Logwrite "VM Disk Used (GB): $VMHDDSize"
     Logwrite "VM Folder: $vmfolder"
     Logwrite "Source vCenter: $SourceVC"
     Logwrite "VM Source Cluster: $SourceCluster"
     Logwrite "Destination vCenter: $DestinationVC"
     Logwrite "VM Destination Cluster: $DestinationCluster"
     Logwrite "Destination host: $DestinationHost"
     LogWrite "VM Source PortGroup: $SourceVMPortGroup"
     LogWrite "VM Destination Portgroup: $DestinationVMPortgroup"
     Logwrite "VM Destination Datastore: $destinationDatastore"
     LogWrite "Destination Datastore FreeSpace GB: $destinationDatastoreFreeSpace"
     if ( $Dryrun ) {
       $FreespaceAfterMigration = $destinationDatastoreFreeSpace - $VMHDDSize
       if ( $FreespaceAfterMigration -lt 0 ) { Logwrite "ERROR: Datastore $destinationDatastore does not have sufficient free space! Virtual machine needs $VMHDDSize GB. Only $destinationDatastoreFreeSpace GB available." }
       else { Logwrite "Virtual machine will fit on datastore $destinationDatastore. Free space after migration is: $FreespaceAfterMigration GB" }
     }
     }
    #Test if the VM responds to ping
    if ($vmip -eq $null) {
     LogWrite "Virtual machine IP address not known"
     Logwrite "No ping check will be performed after moving the virtual machine"
     }
    else {
         if (Test-Connection -ComputerName $vmip -Quiet) {
             LogWrite "Virtual machine $vm responds to ping before being moved. The virtual machine will be checked again after being moved"
             $PingVM = $true
         }
     }
      
     #if ( $VMHDDSize -eq
     if ( -NOT $Dryrun) {
       #Migrate VM to cluster
       LogWrite "Move $vm to vCenter $DestinationVC and datastore $DestinationDatastore"
       Try {
         $Result = Move-VM -VM $vm `
                            -Destination $DestinationHost `
                            -Datastore $DestinationDatastore `
                            -NetworkAdapter $NetworkAdapter `
                            -PortGroup $DestinationVMPortgroup `
                            -ErrorAction Stop
           }
       Catch {
         $ErrorMessage = $_.Exception.Message
         LogWrite "ERROR: Move of $vm to cluster $DestinationHost failed!!!"
         Logwrite "ERROR: Move Status Code: $ErrorMessage"
         SendWhatsApp "ERROR: Move of $vm failed!!! $ErrorMessage"
         $MigError = $true
       }
       #Migrate VM to folder
       LogWrite "Move $vm to folder $vmfolder"
       Try {
         $VMtemp = Get-VM $vm
         $Result = Move-VM -VM $VMtemp -InventoryLocation $vmfolder -ErrorAction Stop
           }
       Catch {
         $ErrorMessage = $_.Exception.Message
         LogWrite "ERROR: Move of $vm to folder $vmfolder failed!!!"
         Logwrite "ERROR: Move Status Code: $ErrorMessage"
         SendWhatsApp "ERROR: Move of $vm failed!!! $ErrorMessage"
         $MigError = $true
         }
       }
    
     #Test if the VM is running on the destination cluster
     if ( -NOT $MigError -AND -NOT $Dryrun ) {
       LogWrite "Check $vm is registered in $DestinationVC"
       try {
         $CheckVM = Get-VM -Name $vm -Server $DestinationVC -ErrorAction Stop

         if ( $CheckVM ) {
           Logwrite "$vm registered in $DestinationVC"
         }
         else {
           Logwrite "ERROR: $vm not found in $DestinationVC"
         }
       }
       catch {
         $ErrorMessage = $_.Exception.Message
         Logwrite "ERROR: $vm not found in $DestinationVC"
         Logwrite "ERROR: $ErrorMessage"
         SendWhatsApp "ERROR move: $vm not found in $DestinationVC"
       }
     }
     #Test if the VM responds to ping, if $PingVM = $True
     if ($PingVM) {
       if (Test-Connection -ComputerName $vmip -Quiet) {
         LogWrite "Virtual machine $vm responds to ping after move"
         SendWhatsApp "Virtual machine $vm responds to ping after move"
       }
     }
     sleep 1
     SendWhatsApp "Finished move action: $vm from $SourceVC to $DestinationVC"
     Logwrite "Finished move action: $vm from $SourceVC to $DestinationVC"

if ($DRSRecommendation)
  {
   Get-DrsRecommendation -Cluster $DestinationCluster -Server $DestinationVC | Apply-DrsRecommendation
   Logwrite "DRS recommendation applied"
  }
Else
  {
  Logwrite "No DRS recommendation applied"
  Write-Host "No DRS recommendation applied"
  }

#Disconnect from vCenter servers
Logwrite "Disconnect from vCenter servers $SourceVC $DestinationVC"
Disconnect-VIServer $SourceVC -Confirm:$false
Disconnect-VIServer $DestinationVC -Confirm:$false
Logwrite "Finished moving virtual machine, exiting....."
SendWhatsApp "Finished moving virtual machine, exiting....."

VMware vSphere PowerCLI 11.0

VMware vSphere PowerCLI 11.0 New Features

New features available in VMware vSphere PowerCLI 11.0 add support for the newest updates and releases of VMware products. The following features are included:

  • New Security module
  • vSphere 6.7 Update 1
  • NSX-T 2.3
  • Horizon View 7.6
  • vCloud Director 9.5
  • Host Profiles – new cmdlets for interacting with host profiles
  • New Storage Module updates
  • NSX-T in VMware Cloud on AWS
  • Cloud module multiplatform support
  • Get-ErrorReport cmdlet has been updated
  • PCloud module has been removed
  • HA module has been removed

Now we will go through the above mentioned new features to find out what functionality they bring to PowerCLI 11.0.

What is in the PowerCLI 11.0 New Security Module?

The new security module brings more powerful automation features to PowerCLI 11.0. The newly available cmdlets include the following:

  • Get-SecurityInfo
  • Get-VTpm
  • Get-VTpmCertificate
  • Get-VTpmCSR
  • New-VTpm
  • Remove-VTpm
  • Set-VTpm
  • Unlock-VM

Also, the New-VM cmdlet gains enhanced functionality with the security module and includes parameters like KmsCluster, StoragePolicy, SkipHardDisks etc., which can be used when creating new virtual machines with PowerCLI. In addition, the Set-VM, Set-VMHost, Set-HardDisk, and New-HardDisk cmdlets have been updated to work with the security module.
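
A short sketch of what that looks like in practice (the cluster, host and policy names are assumptions, and a key provider must already be configured in vCenter):

# Create an encrypted VM using the new security-related parameters
$kms = Get-KmsCluster -Name "KMS-Cluster-01"
$policy = Get-SpbmStoragePolicy -Name "VM Encryption Policy"
New-VM -Name "secure-vm01" -VMHost "esxi01.wardvissers.nl" -KmsCluster $kms -StoragePolicy $policy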

Host Profile Additions

There are a few additions to the VMware.VimAutomation.Core module that will make managing host profiles from PowerCLI easier:

  • Get-VMHostProfileUserConfiguration
  • Set-VMHostProfileUserConfiguration
  • Get-VMHostProfileStorageDeviceConfiguration
  • Set-VMHostProfileStorageDeviceConfiguration
  • Get-VMHostProfileImageCacheConfiguration
  • Set-VMHostProfileImageCacheConfiguration
  • Get-VMHostProfileVmPortGroupConfiguration
  • Set-VMHostProfileVmPortGroupConfiguration

Storage Module Updates

These Storage Module updates are specifically for VMware vSAN. The updates add predefined time ranges when using Get-VsanStat. In addition, Get-VsanDisk returns additional new properties, including capacity, used percentage, and reserved percentage. The following cmdlets have been added to automate vSAN (a short sketch follows the list):

  • Get-VsanObject
  • Get-VsanComponent
  • Get-VsanEvacuationPlan – provides information about bringing a host into maintenance mode and the impact of the operation on the data, movement, etc.
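
A small sketch to inspect those new disk properties (the cluster name is an assumption; check Get-Help Get-VsanDisk for the exact parameter set in your version):

$cluster = Get-Cluster -Name "vSAN-Cluster"
$diskGroups = Get-VsanDiskGroup -Cluster $cluster
# The returned disk objects now include capacity, used percentage and reserved percentage
Get-VsanDisk -VsanDiskGroup $diskGroups | Format-List *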

Additionally, the following modules have been removed:

  • PCloud module
  • HA module

Download now and start using:

Update-Module VMware.PowerCLI


HPE Oneview Powershell Clear Alerts

You need: https://github.com/HewlettPackard/POSH-HPOneView/wiki

################################################################
#
#Name:              OneView Clear Alarms
#Version:           0.1
#Author:            Ward Vissers
################################################################

# Import the HPOneView module if it is not already loaded
if (!(Get-Module -Name HPOneView.400 -ErrorAction SilentlyContinue)) {
    Import-Module HPOneView.400 | Out-Null
}

$oneviewserver = "oneview.wardvissers.nl"
Connect-HPOVMgmt -Hostname $oneviewserver
# Clear all alerts, then explicitly clear anything still in the Active state
Get-HPOVAlert | Set-HPOVAlert -Cleared
Get-HPOVAlert -State Active | Set-HPOVAlert -Cleared

Microsoft Exchange Memory Corruption Vulnerability

A remote code execution vulnerability exists in Microsoft Exchange software when the software fails to properly handle objects in memory. An attacker who successfully exploited the vulnerability could run arbitrary code in the context of the System user. An attacker could then install programs; view, change, or delete data; or create new accounts.

Exploitation of the vulnerability requires that a specially crafted email be sent to a vulnerable Exchange server.

The security update addresses the vulnerability by correcting how Microsoft Exchange handles objects in memory.

Download:

Product                                                        | KB article
Microsoft Exchange Server 2010 Service Pack 3 Update Rollup 21 | 4091243
Microsoft Exchange Server 2013 Cumulative Update 19            | 4092041
Microsoft Exchange Server 2013 Cumulative Update 20            | 4092041
Microsoft Exchange Server 2013 Service Pack 1                  | 4092041
Microsoft Exchange Server 2016 Cumulative Update 8             | 4092041
Microsoft Exchange Server 2016 Cumulative Update 9             | 4092041

Important information before upgrading to vSphere 6.7 (KB53704)

This article provides important documentation and upgrade information that must be reviewed before upgrading to vSphere 6.7.


Resolution


Compatibility considerations

TLS protocols

These products are not compatible with vSphere 6.7 at this time:

  • VMware NSX
  • VMware Integrated OpenStack (VIO)
  • VMware vSphere Integrated Containers (VIC)
  • VMware Horizon

Environments with these products should not be upgraded to vSphere 6.7 at this time. This article and the VMware Product Interoperability Matrixes will be updated when a compatible release is available.

Upgrade Considerations

Before upgrading your environment to vSphere 6.7, review these critical articles to ensure a successful upgrade.
For vSphere

Upgrades to vSphere 6.7 are only possible from vSphere 6.0 or vSphere 6.5. If you are currently running vSphere 5.5, you must first upgrade to either vSphere 6.0 or vSphere 6.5 before upgrading to vSphere 6.7.

For vCenter Server

For Distributed Virtual Switches

VMware OS Optimization Tool Version b1097 Released

On 2018-03-30, VMware announced a new version of the VMware OS Optimization Tool, meaning the latest and greatest version is now b1097.

Fixes and enhancements to this version includes:

  • [Template] Issue fix – DELETEVALUE actions do not do anything
  • [Template] Issue fix – DISM commands missing /NoRestart switch
  • [Tool] Issue fix – Switching to another tab loses all unsaved changes
  • [Tool] Enhancement – Simplified user interaction in the Template Editor. Editing a template no longer requires repeated clicks of the Update button; Mac-style editing is applied (changes are saved automatically as you edit)

For those of you not aware of this tool, it is used to optimize Windows 7/8/2008/2012/10 for Horizon View deployments, and it performs the following actions:

  • Local Analyze/Optimize
  • Remote Analyze
  • Optimization History and Rollback
  • Managing Templates

Read more and download VMware OS Optimization Tool Version b1097 here.
