The VMware Cloud Foundation (VCF) Holodeck Toolkit is designed to provide a scalable, repeatable way to deploy nested Cloud Foundation hands-on environments directly on VMware ESXi hosts. These environments are ideal for multi-team hands-on exercises exploring the capabilities of VCF for delivering a Customer Managed VMware Cloud.
Delivering labs in a nested environment solves several challenges of delivering hands-on training for a product like VCF, including:
Reduced hardware requirements: When operating in a physical environment, VCF requires four vSAN Ready Nodes for the management domain, and additional hosts for adding clusters or workload domains. In a nested environment, the same four to eight hosts are easily virtualized to run on a single ESXi host.
Self-contained services: The Holodeck Toolkit configuration provides common infrastructure services, such as NTP, DNS, AD, Certificate Services and DHCP, within the environment, removing the need to rely on datacenter-provided services during testing. Each environment needs only a single external IP.
Isolated networking: The Holodeck Toolkit configuration removes the need for VLAN and BGP connections in the customer network early in the testing phase.
Isolation between environments: Each Holodeck deployment is completely self-contained. This avoids conflicts with existing network configurations and allows multiple nested environments to be deployed on the same hardware or in the same datacenter with no concerns about overlap.
Multiple VCF deployments on a single VMware ESXi host with sufficient capacity: A typical VCF Standard Architecture deployment of a four-node management domain and a four-node VI workload domain, plus add-ons such as VMware vRealize Automation, requires approximately 20 CPU cores, 512 GB memory and 2.5 TB disk.
Automation and repeatability: The deployment of nested VCF environments is almost completely hands-off and easily repeatable using configuration files. A typical deployment takes less than three hours, with less than 15 minutes of keyboard time.
Nested Environment Overview
The “VLC Holodeck Standard Main 1.3” configuration is a nested VMware Cloud Foundation configuration used as the baseline for several Private Cloud operation and consumption lab exercises created by the Cloud Foundation Technical Marketing team. The Holodeck standard “VLC-Holo-Site-1” is the primary configuration deployed. The optional VLC-Holo-Site-2 can be deployed within a Pod at any later time. The VLC-Holo-Site-1 configuration matches the lab configuration in the VCF Hands-On Lab HOL-2246 and the nested configuration in the VCF Experience program run on the VMware Lab Platform.
Each Pod on a Holodeck deployment runs an identical nested configuration. A pod can be deployed with a standalone VLC-Holo-Site-1 configuration, or with both VLC-Holo-Site-1 and VLC-Holo-Site-2 configurations active. Separation of the pods, and between sites within a pod, is handled at the VMware vSphere Standard Switch (VSS) level. Each Holodeck pod connects to a unique VSS and Port Group per site, and each Port Group is configured as a VLAN trunk.
Components on the port group use VLAN tagging to isolate communications between nested VLANs. This removes the need to have physical VLANs plumbed to the ESXi host to support nested labs.
When the Holo-Site-2 configuration is deployed, it uses a second VSS and Port Group for isolation from Holo-Site-1.
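For reference, the per-pod switch and trunk port group can be created with standard PowerCLI. The following is a minimal sketch using example names (VLC-A, VLC-A-PG) rather than the toolkit's own automation; the MTU value and the relaxed security policy are assumptions commonly needed for nested ESXi traffic, so confirm the exact settings against the Holodeck deployment documentation.

Connect-VIServer -Server 'physical-esxi.lab.local' -User root
# Per-pod standard switch; jumbo frames are typically required for nested vSAN/NSX traffic
$vss = New-VirtualSwitch -VMHost (Get-VMHost 'physical-esxi.lab.local') -Name 'VLC-A' -Mtu 9000
# VLAN ID 4095 turns the port group into a trunk that passes all VLAN tags to the nested hosts
$pg = New-VirtualPortGroup -VirtualSwitch $vss -Name 'VLC-A-PG' -VLanId 4095
# Nested ESXi networking generally needs promiscuous mode, MAC changes and forged transmits
$pg | Get-SecurityPolicy | Set-SecurityPolicy -AllowPromiscuous $true -MacChanges $true -ForgedTransmits $true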
The VLC Holodeck configuration customizes the VCF Cloud Builder Virtual Machine to provide several support services within the pod, removing the requirement for specific customer-side services. A Cloud Builder VM is deployed per Site to provide the following within the pod (a quick verification sketch follows the list):
DNS (local to Site1 and Site2 within the pod, acts as forwarder)
NTP (local to Site1 and Site2 within the pod)
DHCP (local to Site1 and Site2 within the pod)
L3 TOR for vMotion, vSAN, Management, Host TEP and Edge TEP networks within each site
BGP peer from VLC Tier 0 NSX Application Virtual Network (AVN) Edge (Provides connectivity into NSX overlay networks from the lab console)
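Once a pod is up, these services can be sanity-checked from the Holo-Console with built-in Windows tooling. A minimal sketch follows; the Cloud Builder address and the record being resolved are placeholders for whatever your site configuration uses.

$cb = '10.0.0.221'                                                 # assumption: replace with your Site-1 Cloud Builder IP
Resolve-DnsName -Name 'sddc-manager.vcf.sddc.lab' -Server $cb      # DNS lookup (record name is an example)
Test-NetConnection -ComputerName $cb -Port 53                      # DNS port reachability
w32tm /stripchart /computer:$cb /samples:3 /dataonly               # NTP offset against Cloud Builder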
The figure below shows a logical view of the VLC-Holo-Site-1 configuration within a Holodeck Pod. The Site-1 configuration uses DNS domain vcf.sddc.lab.
Figure 1: Holodeck Nested Diagram
The Holodeck package also provides a preconfigured Photon OS VM, called “Holo-Router”, that functions as a virtualized router for the base environment. This VM allows for connecting the nested environment to the external world. The Holo-Router is configured to forward any Microsoft Remote Desktop (RDP) traffic to the nested jump host, known as the Holo-Console, which is deployed within the pod.
The user interface to the nested VCF environment is a Windows Server 2019 “Holo-Console” virtual machine. Holo-Console provides a place to manage the internal nested environment, much like a system administrator’s desktop in a datacenter. Holo-Console is used to run the VLC package that deploys the nested VCF instance inside the pod. Holo-Console VMs are deployed from a custom-built ISO that configures the following:
Microsoft Windows Server 2019 Desktop Experience with:
Active directory domain “vcf.holo.lab”
DNS Forwarder to Cloud Builder
Certificate Server, Web Enrollment and VMware certificate template
RDP enabled
IP, Subnet, Gateway, DNS and VLAN configured for deployment as Holo-Console
Firewall and IE Enhanced security disabled
SDDC Commander custom desktop deployed
Additional software packages deployed and configured
Google Chrome with Holodeck bookmarks
VMware Tools
VMware PowerCLI
VMware PowerVCF
VMware Power Validated Solutions
PuTTY SSH client
VMware OVFtool
Additional software packages copied to Holo-Console for later use
VMware Cloud Foundation 4.5 Cloud Builder OVA to C:\CloudBuilder
VCF Lab Constructor 4.5.1 with dual site Holodeck configuration
VLC-Holo-Site-1
VLC-Holo-Site-2
VMware vRealize Automation 8.10 Easy Installer
The figure below shows the virtual machines running on the physical ESXi host to deliver a Holodeck Pod called “Holo-A”. Notice an instance of Holo-Console, Holo-Router, Cloud Builder and four nested ESXi hosts. They all communicate over the VLC-A-PG Port Group.
Figure 2: Holodeck Nested Hosts
Adding a second site adds an additional instance of Cloud Builder and additional nested ESXi hosts. VLC-Holo-Site-2 connects to the second internal leg of the Holo-Router on VLAN 20. Network access from the Holo-Console to VLC-Holo-Site-2 is via Holo-Router.
The figure below shows a logical view of the VLC-Holo-Site-2 configuration within a Holodeck Pod. The Site-2 configuration uses DNS domain vcf2.sddc.lab.
Figure 3: Holodeck Site-2 Diagram
Accessing the Holodeck Environment
User access to the Holodeck pod is via the Holo-Console. Access to Holo-Console is available via two paths:
Microsoft Remote Desktop Protocol (RDP) connection to the external IP of the Holo-Router. Holo-Router is configured to forward all RDP traffic to the instance of Holo-Console inside the pod.
Physical host sizing guidance:
Good (One pod): Single ESXi host with 16 cores, 384 GB memory and 2 TB SSD/NVMe
Better (Two pods): Single ESXi host with 32 cores, 768 GB memory and 4 TB SSD/NVMe
Best (Four or more pods): Single ESXi host with 64+ cores, 2.0 TB memory and 10 TB SSD/NVMe
ESXi Host Configuration:
vSphere 7.0U3
Virtual switch and port group configured with uplinks to customer network/internet
Supports a standalone host not managed by vCenter Server, or a single-host cluster managed by a vCenter Server instance
Multi-host clusters are NOT supported
Holo-Build host
Windows Server 2019 host or VM with local access to the ESXi hosts used for Holodeck, plus internet access to download software. (This package has been tested on Microsoft Windows Server 2019 only.)
I’m doing a mini-series on my NSX-T home lab setup. It’s only for testing and building knowledge about NSX-T.
With NSX-T 3.1 and later, a couple of enhancements have been made that make the setup a lot easier, such as the move to a single N-VDS and the ability to run NSX on a vSphere Distributed Switch (VDS) in vCenter with VDS version 7.0.
I have downloaded the NSX Manager appliance and imported the OVA to the cluster. I won’t go into details about this; I just followed the deployment wizard.
In my lab I’ve selected to deploy a small appliance, which requires 4 vCPUs, 16 GB RAM and 300 GB disk space. For more details about the NSX Manager requirements, see the official documentation.
Note that I’ll not be deploying an NSX Manager cluster in my setup. In a production environment you should naturally follow best practices and configure a cluster of NSX Managers.
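For anyone who prefers PowerCLI over the deployment wizard, a minimal import sketch is shown below. The file name, target host, datastore and the 'small' deployment option value are assumptions; list the OVF properties first and adjust to whatever your NSX-T build exposes.

$ovaPath = 'C:\iso\nsx-unified-appliance-3.1.0.ova'           # placeholder path and version
$ovf = Get-OvfConfiguration -Ovf $ovaPath
$ovf.ToHashTable()                                            # inspect the available OVF properties
$ovf.DeploymentOption.Value = 'small'                         # assumption: matches the small form factor
Import-VApp -Source $ovaPath -OvfConfiguration $ovf -Name 'nsx-manager-01' `
    -VMHost (Get-VMHost 'esx01.lab.local') -Datastore (Get-Datastore 'datastore01') `
    -DiskStorageFormat Thin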
NSX-T deployment
Now let’s get rocking with our NSX-T setup!
We’ll start the NSX Manager and prepare it for configuring NSX in the environment.
Initial Manager config
After the first login I’ll accept the EULA and optionally enable the CEIP.
License
Next I’ll add the license.
Add license
Import certificate
IP Pools
Our tunnel endpoints (TEPs) will need IP addresses, and I’ve set aside a subnet for this as mentioned. In NSX Manager we’ll add an IP pool with addresses from this subnet. (The IP pool I’m using is probably way larger than needed in a lab setup like this.)
TEP pool
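The same pool can also be created through the NSX-T Policy API. The sketch below uses PowerShell 7's Invoke-RestMethod; the API paths, payload shape and addresses are my assumptions for NSX-T 3.1, so treat it as illustrative rather than as the exact calls the UI makes.

$nsx  = 'https://nsx.lab.local'
$cred = Get-Credential admin
# Create (or update) the pool object itself
$pool = @{ display_name = 'TEP-Pool' } | ConvertTo-Json
Invoke-RestMethod -Method Patch -Uri "$nsx/policy/api/v1/infra/ip-pools/TEP-Pool" `
    -Authentication Basic -Credential $cred -SkipCertificateCheck `
    -ContentType 'application/json' -Body $pool
# Add a static subnet with an allocation range (example addressing)
$subnet = @{
    resource_type     = 'IpAddressPoolStaticSubnet'
    cidr              = '192.168.99.0/24'
    allocation_ranges = @(@{ start = '192.168.99.10'; end = '192.168.99.100' })
} | ConvertTo-Json -Depth 5
Invoke-RestMethod -Method Patch -Uri "$nsx/policy/api/v1/infra/ip-pools/TEP-Pool/ip-subnets/tep-range-1" `
    -Authentication Basic -Credential $cred -SkipCertificateCheck `
    -ContentType 'application/json' -Body $subnet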
Compute Manager
With all that sorted we’ll connect the NSX manager to our vCenter server so we can configure our ESXi hosts and deploy our edge nodes.
Best practice is to use a dedicated service account for the connection.
Compute Manager added
Fabric configuration
Now we’re ready for building out our network fabric which will consist of the following:
Transport Zones
Overlay
VLAN
Transport Nodes
ESXi Hosts
Edge VMs
Edge clusters
Take a look at this summary of the Key concepts in NSX-T to learn more about them.
Transport Zone
The first things we’ll create are the Transport Zones. These will be used on multiple occasions later on. A Transport Zone is a collection of hypervisor hosts that makes up the span of logical switches.
The defaults could be used, but I like to create my own.
Transport Zones
Uplink Profiles
Uplink profiles will be used when we configure our Transport Nodes, both Hosts and Edge VMs. The profile defines how a Host Transport node (hypervisor) or an Edge Transport node (VM) will connect to the physical network.
Again I’m creating my own profile and leaving the default profiles as they are.
Uplink profile
In my environment I have only one uplink to use. Note that I’ve set the Transport VLAN to 0, which corresponds to the TEP VLAN mentioned previously.
Transport Node Profile
Although not strictly needed, I’m creating a Transport Node profile, which will let me configure an entire cluster of hosts with the same settings instead of having to configure each and every host.
In the Transport Node profile we first select the type of Host switch. In my case I’m selecting the VDS option, which will let me select a specific switch in vCenter.
We’ll also add in our newly created Transport Zones.
Creating Transport Node profile
We’ll select our Uplink profile and the IP Pool we created earlier; finally, we can set the mapping between the uplinks.
vCenter View
Creating Transport Node profile
Configure NSX on hosts
With our Transport Node profile in place, we can go ahead and configure our ESXi hosts for NSX.
Configure cluster for NSX
Select profile
After selecting the profile NSX Manager will go ahead and configure our ESXi hosts.
Hosts configuring
After a few minutes our hosts should be configured and ready for NSX.
Hosts configured
Trunk segment
Next up is to create our Edge VMs which we will need for our Gateways and Services (NAT, DHCP, Load Balancer).
But before we deploy those we’ll have to create a segment for the uplink of the Edge VMs. This will be a Trunk segment which we create in NSX. Initially I created a Trunk port group on the VDS in vSphere, but that doesn’t work: the Trunk needs to be configured as a logical segment in NSX-T when using the same VLAN for both the hypervisor TEPs and the Edge VM TEPs.
Trunk segment
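The same trunk segment can be defined through the NSX-T Policy API. This is a hypothetical sketch (PowerShell 7); the segment name, VLAN range and especially the transport-zone path are assumptions you would replace with the IDs from your own VLAN transport zone.

$nsx  = 'https://nsx.lab.local'
$cred = Get-Credential admin
$body = @{
    display_name        = 'edge-uplink-trunk'
    vlan_ids            = @('0-4094')     # trunk all VLANs; narrow this to what you actually need
    transport_zone_path = '/infra/sites/default/enforcement-points/default/transport-zones/<vlan-tz-id>'
} | ConvertTo-Json
Invoke-RestMethod -Method Patch -Uri "$nsx/policy/api/v1/infra/segments/edge-uplink-trunk" `
    -Authentication Basic -Credential $cred -SkipCertificateCheck `
    -ContentType 'application/json' -Body $body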
Edge VM
Now we can deploy our Edge VM(s). I’m using Medium sized VMs in my environment. Note that the Edge VMs are not strictly necessary for the test we’ll perform later on (connecting two VMs), but if we want to use some services later on, like DHCP, load balancing and so on, we’ll need them.
Deploy edge VM
Note the NSX config, where we set the switch name, the Transport Zones we created, the Uplink profile, the IP pool and, finally, the newly created Trunk segment for the Edge uplink.
NSX Edge config
Edge cluster
We’ll also create an Edge cluster and add the Edge VM to it.
Edge cluster
Summary
Wow, this was a lot of configuring, but that was also the whole point of doing this blog post. Stuff like this is learned best by getting your hands dirty and doing some actual work. And I learn even better when I’m writing and documenting it as well.
In the next blog post we’ll test the fabric to see if what we’ve done is working. We’ll also try to get some external connectivity to our environment.
Hopefully this post can help someone, if not it has at least helped me.
I also had to move a lot of VMs from various datacenters to other datacenters. I used the script from Michael Wilmsen to move the VMs, but along the way I ran into some problems with it. So I kept tweaking and tweaking this script to create, for me, the ultimate Cross vCenter PowerCLI script.
Cool features:
– Notifications through WhatsApp (not enabled by default)
– Dry run (test run)
– Logging
– Selection through a GUI
– Multiple-NIC support, to a maximum of 4
– Datastore and host selection based on free space and free memory
– Check whether the destination host or datastore is in maintenance mode
– Check whether the destination datastore exists in the destination cluster
MoveVM.ps1:
#Filename: MoveVM.ps1
#Author: M. Wilmsen / W. Vissers
#Source: http://virtual-hike.com/nlvmug-2018/
#Version: 2.0
#Date: 21-10-2018
#ChangeLog:
# V0.9 – M. Wilmsen First Version
# V1.0 – Fixed Multiple Nics to maximum of 4 nics
# – Logfile name VM name
# V1.1 – Destination Cluster not the first Host
# V1.2 – Selected Destination host based on memory used
# V1.3 – Fixed folder location and VirtualPortGroup
# V1.4 – Fixed Datastore in Maintenance
# V1.5 – Using Get-VICredentialStoreItem + Logpath fixed
# V1.6 – Fixed Log in 24-hour time format
# V1.7 – Fixed Using DatastoreCluster name based on Cluster name!
# V1.8 – Check if Destination has the same datastore
# – Ask now for input
# – VM selection with VMhost
# – Fixed Ping Check
# v1.9 – Added Destination Store exist in Destination Cluster
# v2.0 – Fixed Destination Store exist in Destination Cluster
<#
.SYNOPSIS
Script to migrate a virtual machine
.DESCRIPTION
Script to migrate compute and storage from cluster to cluster. Log will be in current dir [VM]-[-timestamp].log
.EXAMPLE
MoveVM.ps1
#>
################################## INIT #################################################
#Set WebOperation timeout
# set-PowerCLIConfiguration -WebOperationTimeoutSeconds 3600
#Define Global variables
$location = "D:\xmovewhattsapp"
$LogPath = ".\"
$DataStoreClusterPrefix = "SAN-"
$SourceVC = Read-Host "Give Source vCenter"
$DestinationVC = Read-Host "Give Destination vCenter"
$DRSRecommendation = $true
$Dryrun = $false
$SendWhatsApp = $false
$WhatsAppNumbers = "0123456789"
$WhatsAppGroup = "NameOfYourWhatsAppGroup"
$instanceId = "23" #change this line
$clientId = "demo@demo.nl" #change this line
$clientSecret = "PutYourSecretIdHere" #change this line
################################## PASSWORD STORE ##############################################
#Username
# Check if credentials exist in credential store if not ask for credentials and put them in credential store
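# NOTE: hypothetical reconstruction of this step (the original lines were lost when the
# script was pasted into the blog). It looks up stored credentials for each vCenter with
# Get-VICredentialStoreItem and, if none exist, prompts once and saves them.
foreach ($vc in @($SourceVC, $DestinationVC)) {
    $item = Get-VICredentialStoreItem -Host $vc -ErrorAction SilentlyContinue
    if (-not $item) {
        $cred = Get-Credential -Message "Credentials for $vc"
        New-VICredentialStoreItem -Host $vc -User $cred.UserName `
            -Password $cred.GetNetworkCredential().Password | Out-Null
    }
}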
if ($DatastoreExistinOthervCenter) {
    LogWrite "Datastore exists $DestinationCluster in destination vCenter $DestinationVC"
    $destinationDatastore = $DatastoreExistinOthervCenter
}
Else {
    LogWrite "Datastore does not exist in $DestinationCluster destination vCenter $DestinationVC"
    # Select the datastore with the most free space that is not in maintenance mode
    $DatastoreCluster = "$DataStoreClusterPrefix" + "$DestinationCluster"
    $destinationDatastore = Get-DatastoreCluster $DatastoreCluster | Get-Datastore |
        Where {$_.State -ne "Maintenance"} |
        Sort-Object -Property FreeSpaceGB -Descending |
        Select-Object -First 1
}
LogWrite "Start move: $vm"
Logwrite "VM IP: $vmip"
Logwrite "VM Disk Used (GB): $VMHDDSize"
Logwrite "VM Folder: $vmfolder"
Logwrite "Source vCenter: $SourceVC"
Logwrite "VM Source Cluster: $SourceCluster"
Logwrite "Destination vCenter: $DestinationVC"
Logwrite "VM Destination Cluster: $DestinationCluster"
Logwrite "Destination host: $DestinationHost"
LogWrite "VM Source PortGroup: $SourceVMPortGroup"
LogWrite "VM Destination Portgroup: $DestinationVMPortgroup"
Logwrite "VM Destination Datastore: $destinationDatastore"
LogWrite "Destination Datastore FreeSpace GB: $destinationDatastoreFreeSpace"

if ($Dryrun) {
    $FreespaceAfterMigration = $destinationDatastoreFreeSpace - $VMHDDSize
    if ($FreespaceAfterMigration -lt 0) {
        Logwrite "ERROR: Datastore $destinationDatastore does not have sufficient freespace! Virtual Machine needs $VMHDDSize. Only $destinationDatastoreFreeSpace available."
    }
    else {
        Logwrite "Virtual Machine will fit on datastore $destinationDatastore. Freespace after migration is: $FreespaceAfterMigration GB"
    }
}

# Test if the VM responds to ping
if ($vmip -eq $null) {
    LogWrite "Virtual Machine ip address not known"
    Logwrite "No ping check will be performed after moving the Virtual Machine"
}
else {
    Test-Connection -comp $vmip -quiet
    LogWrite "Virtual Machine $vm responds to ping before being moved. Virtual machine will be checked after being moved"
    $PingVM = $true
}
$MigError = $false

#if ( $VMHDDSize -eq
if (-NOT $Dryrun) {
    # Migrate VM to the destination cluster
    LogWrite "Move $vm to vCenter $DestinationVC and datastore $DestinationDatastore"
    Try {
        $Result = Move-VM -VM $vm `
            -Destination $DestinationHost `
            -Datastore $DestinationDatastore `
            -NetworkAdapter $NetworkAdapter `
            -PortGroup $DestinationVMPortgroup `
            -ErrorAction Stop
    }
    Catch {
        $ErrorMessage = $_.Exception.Message
        LogWrite "ERROR: Move of $vm to cluster $DestinationHost failed!!!"
        Logwrite "ERROR: Move Status Code: $ErrorMessage"
        SendWhatsApp "ERROR: Move of $vm failed!!! $ErrorMessage"
        $MigError = $true
    }

    # Migrate VM to the destination folder
    LogWrite "Move $vm to folder $vmfolder"
    Try {
        $VMtemp = get-vm $vm
        $Result = Move-VM -VM $VMtemp -InventoryLocation $vmfolder -ErrorAction Stop
    }
    Catch {
        $ErrorMessage = $_.Exception.Message
        LogWrite "ERROR: Move of $vm to folder $vmfolder failed!!!"
        Logwrite "ERROR: Move Status Code: $ErrorMessage"
        SendWhatsApp "ERROR: Move of $vm failed!!! $ErrorMessage"
        $MigError = $true
    }
}
# Test if the VM is registered on the destination vCenter
if (-NOT $MigError -AND -NOT $Dryrun) {
    LogWrite "Check $vm is registered in $DestinationVC"
    try {
        $CheckVM = get-vm -name $vm -server $DestinationVC -ErrorAction Stop
        if ($CheckVM) {
            Logwrite "$vm registered in $DestinationVC"
        }
        else {
            Logwrite "ERROR: $vm not found in $DestinationVC"
        }
    }
    catch {
        $ErrorMessage = $_.Exception.Message
        Logwrite "ERROR: $vm not found in $DestinationVC"
        Logwrite "ERROR: $ErrorMessage"
        SendWhatsApp "ERROR move: $vm not found in $DestinationVC"
    }
}

# Test if the VM responds to ping, if $PingVM = $True
if ($PingVM) {
    if (Test-Connection -comp $vmip -quiet) {
        LogWrite "Virtual Machine $vm responds to ping after move"
        SendWhatsApp "Virtual Machine $vm responds to ping after move"
    }
}

sleep 1
SendWhatsApp "Finished move action: $vm from $SourceVC to $DestinationVC"
Logwrite "Finished move action: $vm from $SourceVC to $DestinationVC"
VMware vSphere PowerCLI 11.0 adds support for the latest VMware product updates and releases. The following new features have been added:
New Security module
vSphere 6.7 Update 1
NSX-T 2.3
Horizon View 7.6
vCloud Director 9.5
Host Profiles – new cmdlets for interacting with host profiles
New Storage Module updates
NSX-T in VMware Cloud on AWS
Cloud module multiplatform support
Get-ErrorReport cmdlet has been updated
PCloud module has been removed
HA module has been removed
Now we will go through the above-mentioned new features to see what functionality they bring to PowerCLI 11.0.
What is the PowerCLI 11.0 New Security Module?
The new Security module brings more powerful automation features to PowerCLI 11.0. The newly available cmdlets include the following:
Get-SecurityInfo
Get-VTpm
Get-VTpmCertificate
Get-VTpmCSR
New-VTpm
Remove-VTpm
Set-VTpm
Unlock-VM
The New-VM cmdlet has also been enhanced to work with the Security module functionality; it includes parameters such as KmsCluster, StoragePolicy and SkipHardDisks, which can be used when creating new virtual machines with PowerCLI. In addition, the Set-VM, Set-VMHost, Set-HardDisk, and New-HardDisk cmdlets have been enhanced as well.
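As a hedged example of those New-VM parameters (the KMS cluster, storage policy, template and host names below are placeholders, not objects from any real environment):

$kms    = Get-KmsCluster -Name 'lab-kms'
$policy = Get-SpbmStoragePolicy -Name 'VM Encryption Policy'
New-VM -Name 'secure-vm01' -Template 'win2016-template' -VMHost 'esx01.lab.local' `
    -KmsCluster $kms -StoragePolicy $policy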
Host Profile Additions
There are a few additions to the VMware.VimAutomation.Core module that make managing host profiles from PowerCLI easier (a short usage sketch follows the list):
Get-VMHostProfileUserConfiguration
Set-VMHostProfileUserConfiguration
Get-VMHostProfileStorageDeviceConfiguration
Set-VMHostProfileStorageDeviceConfiguration
Get-VMHostProfileImageCacheConfiguration
Set-VMHostProfileImageCacheConfiguration
Get-VMHostProfileVmPortGroupConfiguration
Set-VMHostProfileVmPortGroupConfiguration
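A hedged usage sketch for these cmdlets is shown below; the profile name is a placeholder and the -HostProfile parameter name is my assumption, so check Get-Help for the exact syntax in your PowerCLI build.

$hp = Get-VMHostProfile -Name 'gold-host-profile'              # placeholder profile name
Get-VMHostProfileUserConfiguration -HostProfile $hp
Get-VMHostProfileStorageDeviceConfiguration -HostProfile $hp | Format-Table -AutoSize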
Storage Module Updates
These new Storage Module updates are specifically for VMware vSAN. Get-VsanStat now supports predefined time ranges, and Get-VsanDisk returns additional new properties, including capacity, used percentage, and reserved percentage. The following cmdlets have been added to automate vSAN (a short sketch follows the list):
Get-VsanObject
Get-VsanComponent
Get-VsanEvacuationPlan – provides information about bringing a host into maintenance mode and the impact of the operation on the data, data movement, etc.
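A quick way to explore the new vSAN output is sketched below; the cluster and host names are placeholders, and the exact property names returned by Get-VsanDisk vary by build, so inspect them with Get-Member rather than relying on this sketch.

# List vSAN disks and inspect the newly returned capacity / used % / reserved % values
Get-VsanDiskGroup -VMHost (Get-VMHost 'esx01.lab.local') | Get-VsanDisk | Select-Object *
# Sample a few vSAN objects in the cluster
Get-VsanObject -Cluster (Get-Cluster 'vsan-cluster') | Select-Object -First 5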
A remote code execution vulnerability exists in Microsoft Exchange software when the software fails to properly handle objects in memory. An attacker who successfully exploited the vulnerability could run arbitrary code in the context of the System user. An attacker could then install programs; view, change, or delete data; or create new accounts.
Exploitation of the vulnerability requires that a specially crafted email be sent to a vulnerable Exchange server.
The security update addresses the vulnerability by correcting how Microsoft Exchange handles objects in memory.
Download: Microsoft Exchange Server 2010 Service Pack 3 Update Rollup 21
These products are not compatible with vSphere 6.7 at this time:
VMware NSX
VMware Integrated OpenStack (VIO)
VMware vSphere Integrated Containers (VIC)
VMware Horizon
Environments with these products should not be upgraded to vSphere 6.7 at this time. This article and the VMware Product Interoperability Matrixes will be updated when a compatible release is available.
Upgrade Considerations
Before upgrading your environment to vSphere 6.7, review these critical articles to ensure a successful upgrade.
It is not possible to upgrade directly from vSphere 5.5 to vSphere 6.7.
Upgrades to vSphere 6.7 are only possible from vSphere 6.0 or vSphere 6.5. If you are currently running vSphere 5.5, you must first upgrade to either vSphere 6.0 or vSphere 6.5 before upgrading to vSphere 6.7.
[Tool] Issue fix – Switching to another tab loses all unsaved changes
[Tool] Enhancement – Simplify user interaction in the Template Editor. Editing a template no longer requires repeated clicks of the Update button; Mac-style editing is applied (changes are saved automatically as you edit).
For those of you not aware of this tool, it is used to optimise Windows 7/8/2008/2012/10 for Horizon View deployments by applying a set of optimisation actions.
Many new items have been introduced, such as HTML5 video redirection support for the Chrome browser and the ability to configure Windows Start menu shortcuts for desktop and application pools using the Horizon Administrator console. As always, you can count on increased operating system support for virtual desktops and clients.
Here is an overview of the new features:
VMware Horizon 7.3 Server Enhancements
Horizon Help Desk Tool
Displays application process resources with reset control
Role-based access control for help desk staff
Activity logging for help desk staff
Displays Horizon Client information
Granular logon time metrics
Blast Extreme display protocol metrics
Instant Clone Technology
Instant-clone desktops can now use dedicated assignment to preserve the hostname, IP address and MAC address of a user’s desktop
Windows Server OS is now supported for desktop use
Instant clones are now compatible with Storage DRS (sDRS)
If there are no internal VMs in all four internal folders created in vSphere Web Client, these folders are unprotected, and you can delete them
IcUnprotect.cmd utility can now unprotect or delete template, replica or parent VMs or folders from vSphere hosts
Windows Start Menu Shortcuts Created Using the Admin Console
Create shortcuts to Horizon 7 resources:
Published applications
Desktops
Global entitlements
Cloud Pod Architecture Scale
Total session limit is increased to 140,000
The site limit is now seven
VMware Horizon Apps
This update makes Horizon Apps easier to use and allows the administrator to restrict entitlements
Restrict access to desktop and application pools from specific client machines
Resiliency for Monitoring
If the event database shuts down, Horizon administrator maintains an audit trail of the events that occur before and after the event database shutdown
Database Support
Always-On Availability Groups feature for Microsoft SQL Server 2014
ADMX Templates
Additional GPO settings for ThinPrint printer filtering, HTML5 redirection and enforcement of desktop wallpaper settings
Remote Experience
Horizon Virtualization Pack for Skype for Business
Multiparty audio and video conferencing
Horizon 7 RDSH support
Windows Server 2008 R2
Windows Server 2012 R2
Forward Error Correction (FEC)
Quality of Experience (QOE) metrics
Customized ringtones
Call park and pickup
E911 (Enhanced 911) support, to allow the location of the mobile caller to be known to the call receiver
USB desktop-tethering support
Horizon Client for Linux support for the following Linux distributions:
Ubuntu 12.04 (32-bit)
Ubuntu 14.04 (32 & 64-bit)
Ubuntu 16.04 (64-bit)
RHEL 6.9/CentOS 6.x (64-bit)
RHEL 7.3 (64-bit)
SLED12 SP2 (64-bit)
Additional NVIDIA GRID vGPU Support
Support for the Tesla P40 graphics card from NVIDIA
HTML5 Video Redirection
View HTML 5 video from a Chrome browser and have video redirected to the client endpoint for smoother and more efficient video playback
Performance Counter Improvements
Windows agent PerfMon counters improvements for Blast Extreme sessions: imaging, audio, client-drive redirection (CDR), USB and virtual printing
Linux Virtual Desktops
KDE support: Besides RHEL/CentOS 6.x, the KDE GUI is now supported on RHEL/CentOS 7.x, Ubuntu 14.04/16.04 and SUSE Linux Enterprise Desktop 11 SP4
MATE interface is now supported on Ubuntu 14.04 and Ubuntu 16.04
Blast Extreme Adaptive Transport is now supported for Linux desktops
vGPU hardware H.264 encoder support has been added
USB Redirection
USB redirection is supported in nested mode
ThinPrint Filtering
Administrators can filter out printers that should not be redirected
Horizon Client 4.6 Updates
Security Update
All clients have been updated to use SHA-2 to prevent SHA-1 collision attacks
Session Pre-launch
Session pre-launch is now extended to both Horizon Client for macOS and Horizon Client for Windows
Apteligent
Integration of Apteligent crash log
Blast Extreme
Improvements in Blast Extreme Adaptive Transport mode for iOS and macOS
User can change Blast Extreme settings without having to disconnect
Horizon Client 4.6 for Windows
Support for UNC path with CDR
Horizon Client 4.6 for macOS
Support for macOS Sierra and macOS High Sierra
Selective monitor support
Norwegian keyboard support
Horizon Client 4.6 for iOS
CDR support with drag and drop of files in split view
iOS split keyboard enhancement
iOS UI updates
Horizon Client 4.6 for Android
Android Oreo support
Manage the Horizon server list with VMware AirWatch
Simple shortcuts
External mouse enhancements
Real-Time Audio-Video (RTAV) support for Android and Chrome OS
Horizon Client 4.6 for Linux
Blast Extreme Adaptive Transport support
Horizon Client 4.6 for Windows 10 UWP
Network recovery improvements
Horizon HTML Access 4.6
HTML Access for Android with a revised UI
Customization of HTML Access page
Horizon Help Desk Tool
The Horizon Help Desk Tool provides a troubleshooting interface for the help desk that is installed by default on Connection Servers. To access the Horizon Help Desk Tool, navigate to https://<CS_FQDN>/helpdesk, where <CS_FQDN> is the fully qualified domain name of the Connection Server, or click the Help Desk button in the Horizon Administrator console.
The Help Desk Tool was introduced in Horizon 7.2 and has been greatly expanded upon in the Horizon 7.3 release.
Help Desk Tool features with Horizon 7.2:
Virtual machine metrics
Remote assistance
Session control (restart, logoff, reset, and disconnect)
Sending messages
Additional features with Horizon 7.3:
Display application process resources with reset control
Role-based access control for help desk staff
Activity logging for help desk staff
Granular login time metrics
Display Horizon Client information
User Session Details
The user session details appear on the Details tab when you click a user name in the Computer Name option on the Sessions tab. You can view details for Horizon Client, the VDI desktop or RDSH-published desktop, CPU and memory stats, and many other details.
Client version
Unified Access Gateway name and IP address
Logon breakdown (client to broker):
Brokering
GPO load
Profile load
Interactive
Authentication
Blast Extreme Metrics
Blast Extreme metrics that have been added include estimated bandwidth (uplink), packet loss, and transmitted and received traffic counters for imaging, audio, and CDR.
Note the following behavior:
The text-based counters do not auto-update in the dashboard. Close and reopen the session details to refresh the information.
The counters for transmitted and received traffic counters are accumulative from the point the session is queried/polled.
Blast Extreme Metrics for a Windows 10 Virtual Desktop Session
Display and Reset Application Processes and Resources
This new feature provides help desk staff with a granular option to resolve problematic processes without affecting the entire user session, similar to Windows Task Manager. The session processes appear on the Processes tab when you click a user name in the Computer Name option on the Sessions tab. For each user session, you can view additional details about CPU- and memory-related processes to diagnose issues.
Role-based Access Control and Custom Roles
You can assign the following predefined administrator roles to Horizon Help Desk Tool administrators to delegate the troubleshooting tasks between administrator users:
Help Desk Administrator
Help Desk Administrator (Read Only)
You can also create custom roles by assigning the Manage Help Desk (Read Only) privilege along with any other privileges based on the Help Desk Administrator role or Help Desk Administrator (Read Only) role.
Members of the Help Desk Administrator (Read Only) role do not have access to the session controls: functions such as Log Off and Reset are not presented in the user interface.
Watch this brief demonstration video of the Horizon Help Desk Tool to see it in action:
Horizon Virtualization Pack for Skype for Business
You can now make optimized audio and video calls with Skype for Business inside a virtual desktop without negatively affecting the virtual infrastructure and overloading the network.
All media processing takes place on the client machine instead of in the virtual desktop during a Skype audio and video call.
Horizon Virtualization Pack for Skype for Business offers the following supported features:
System Requirements
The following table outlines the system requirements for the new release:
Supported Clients
The following table provides the list of support Horizon clients:
Start Menu Shortcuts Configured Through the Admin Console
This feature improves the user experience by adding desktop and application shortcuts to the Start menu of Windows client devices.
You can use Horizon Administrator to create shortcuts for the following types of Horizon 7 resources:
Published applications
Desktops
Global entitlements
Shortcuts appear in the Windows Start menu and are configured by IT. Shortcuts can be categorized into folders.
Users can choose at login whether to have shortcuts added to the Start menu on their Windows endpoint device.
Watch this brief demonstration video of the new Desktop and Apps Shortcuts feature to see it in action:
Dedicated Desktop Support for Instant Clones
Upon the initial release of instant clones in Horizon 7, we supported floating desktop pools and assignments only. Further investments have been made in Instant Clone Technology that add support for dedicated desktop pools. Fixed assignments and entitlements of users to instant-clone machines are now provided as part of Horizon 7.3.
Dedicated instant-clone desktop assignment means that there is a 1:1 relationship between users and desktops. Once an end user is assigned to a desktop, they will consistently receive access to the same desktop and corresponding virtual machine. This feature is important for apps that require a consistent hostname, IP address, or MAC address to function properly.
Note: Persistent disks are not supported. Fixed assignment to desktops does not mean persistence of changes. Any changes that the user makes to the desktop while in-session will not be preserved after logoff, which is similar to how a floating desktop pool works. With dedicated assignment, when the user logs out, a resync operation on the master image retains the VM name, IP address, and MAC address.
Support for the Tesla P40 Graphics Card from NVIDIA
VMware has expanded NVIDIA GRID support with Tesla P40 GPU cards in Horizon 7.3.
HTML5 Video Redirection
This feature provides the ability to take the HTML5 video from a Chrome (version 58 or higher) browser inside a Windows VDI or RDSH system and redirect it to Windows clients. This feature uses Blast Extreme or PCoIP side channels along with a Chrome extension.
The redirected video is overlaid on the client and is enabled as well as managed using GPO settings.
Benefits include:
Supports generic sites such as YouTube, without requiring a server-side plugin.
Provides smooth video playback comparable to the native experience of playing video inside a browser on the local client system.
Reduces data center network traffic and CPU utilization on the vSphere infrastructure hosts.
Improved USB Redirection with User Environment Manager
The default User Environment Manager timeout value has been increased. This change ensures that the USB redirection Smart Policy takes effect even when the login process takes longer than expected.
With Horizon Client 4.6, the User Environment Manager timeout value is configured only on the agent and is sent from the agent to the client.
You can now bypass User Environment Manager control of USB redirection by setting a registry key on the agent machine (VDI desktop or RDSH server). This change ensures that smart card SSO works on Teradici zero clients. Note: Requires a restart.
The Windows Agent PerfMon counters for the Blast Extreme protocol have been improved to update at a constant rate and to be even more accurate.
Counters include:
Imaging
Audio
CDR
USB
Virtual printing
Linux Virtual Desktops
Features and functions for Horizon 7 for Linux virtual desktops have been expanded:
KDE support – Besides RHEL/CentOS 6.x, the KDE GUI is now supported on RHEL/CentOS 7.x, Ubuntu 14.04/16.04, SUSE Linux Enterprise Desktop 11 SP4.
Support for the MATE interface on Ubuntu 14.04, Ubuntu 16.04.
Blast Extreme Adaptive Transport support.
vGPU hardware H.264 encoder support.
USB Redirection Support in Nested Mode
The USB redirection feature is now supported when you use Horizon Client in nested mode. When using nesting (for example, when opening RDSH applications from a VDI desktop), you can now redirect USB devices from the client device to the first virtualization layer and then redirect the same USB device to the second virtualization layer (that is, the nested session).
Filtering Redirected Printers
You can now create a filter to specify the printers that should not be redirected with ThinPrint. A new GPO ADMX template (vmd_printing_agent.admx) has been added to enable this functionality.
By default, the rule permits all client printers to be redirected.
Supported attributes:
PrinterName
DriverName
VendorName
Supported operators:
AND
OR
NOT
The supported search pattern is a regular expression; a hypothetical example follows.
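For illustration only, a filter combining the attributes and operators above might look something like the line below; the exact expression syntax expected by the GPO may differ, so treat this purely as a sketch of the idea.

PrinterName = ".*Fax.*" OR DriverName = ".*XPS.*"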
Blast Extreme Improvements in CPU Usage
Now even lower CPU usage is achieved with adaptive Forward Error Correction algorithms. This clever mechanism decides how to handle error correction, lowering CPU usage within virtual desktop machines as well as on client endpoint devices.
Blast Extreme Adaptive Transport Side Channel
New support has been added for Blast Extreme Adaptive Transport side channels for USB and CDR communications. Once enabled, TCP port 32111 for USB traffic does not need to be opened, and USB traffic uses a side channel. This feature is supported for both virtual desktops and RDS hosts.
The feature is turned off by default.
Enable the feature through a registry value on the agent machine: HKEY_LOCAL_MACHINE\SOFTWARE\VMware, Inc.\VMware Blast\Config, value UdpAuxiliaryFlowsEnabled = 1
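One way to set this from an elevated prompt on the agent machine is shown below; the text above gives only the value name and data, so the REG_SZ data type is an assumption to verify.

reg add "HKLM\SOFTWARE\VMware, Inc.\VMware Blast\Config" /v UdpAuxiliaryFlowsEnabled /t REG_SZ /d 1 /f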
Entitlement Restrictions Based on Machine Name
This feature allows IT administrators to restrict access to published applications and desktops based on both client computer and user. With client restrictions for RDSH, it is now possible to check AD security groups for specific computer names. Users only have access to desktops and apps when both the user and the client machine are entitled. For this release, the feature is supported only for Windows clients and works with global entitlements.
Pre-Launch Improvements
Pre-launch provides the ability to launch an empty (application-less) session when connecting to the Connection Server. The feature is now also available to Windows clients, in addition to macOS.
Also, it is no longer necessary to manually make changes to the client settings. You can configure automatic reconnection.
Blast Extreme Adaptive Transport Mode for iOS and macOS
With prior client releases, users were required to configure their Blast Extreme settings before they connected to the Connection Server. After a connection was established, the options to change the Blast Extreme setting—which included H.264, Poor, Typical, and Excellent—were unavailable.
With this release, users can change the network condition setting from Excellent to Typical, or the reverse, while a session is in flight. Doing so also changes the protocol connection type between TCP (for Excellent) and UDP (for Typical).
Note: End users will not be able to change the network condition setting if Poor is selected before establishing a session connection.
Horizon Client for Windows
Horizon Client 4.6 updates include:
Additional command-line options for the new client installer – When silently installing the Windows client using the /s flag, you can now also set the following (an example command line appears at the end of this section):
REMOVE=SerialPort,Scanner – Removes the serial port, scanner, or both.
DESKTOP_SHORTCUT=0 – Installs without a desktop shortcut.
STARTMENU_SHORTCUT=0 – Installs without a Start menu shortcut.
Support for UNC paths with client drive redirection (CDR):
Allows remote applications to access files from a network location on the client machine. Each location gets its own drive letter inside the remote application or VDI desktop.
Folders residing on UNC paths can now be redirected with CDR, and get their own drive letter inside the session, just as any other shared folder.
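Putting the new installer options together, a silent install might look like the line below; the installer file name is a placeholder, and the /v"..." wrapper that passes properties to the embedded MSI is an assumption to check against the Horizon Client installation guide.

VMware-Horizon-Client-4.6.0-xxxxxxx.exe /s /v"/qn REMOVE=SerialPort,Scanner DESKTOP_SHORTCUT=0 STARTMENU_SHORTCUT=0"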
Horizon Client for macOS
Horizon Client 4.6 updates include:
Apple macOS High Sierra day 0 support.
Users can select which monitors to use for VDI sessions and which to use for the local system.
Norwegian keyboard support and mappings are now available
Horizon Client for iOS
Horizon Client 4.6 updates include:
iOS 11 support
iOS split keyboard update – Removes the middle area in the split keyboard for a better view of the desktop
New dialog box for easy connection to a Swiftpoint Mouse
Horizon Client for Android
Horizon Client 4.6 updates include:
Android 8.0 Oreo support.
Server URL configuration – Allows administrators to configure a list of Connection Servers and a default Connection Server on Android devices managed by VMware AirWatch.
Android and Chrome OS Client Updates
Horizon Client 4.6 for Android and Horizon Client 4.6 for Chrome OS updates include:
Simple shortcuts – Users can right-click any application or desktop to add a shortcut to the home screen.
Webcam redirection – Integrated webcams on an Android device or a Chromebook are now available for redirection using the Real-Time Audio-Video (RTAV) feature.
HTML Access
HTML Access 4.6 updates include:
HTML Access on Android devices – Though HTML Access has fewer features than the native Horizon Client, it allows you to use remote desktops and published applications without installing software.
HTML Access page customization – Administrators can customize graphics and text and have those customizations persist through future upgrades.
Horizon Client for Linux
Horizon Client 4.6 updates include:
Support for Raspberry Pi 3 Model B devices:
ThinLinx operating system (TLXOS) or Stratodesk NoTouch operating system
Supported Horizon Client features include:
Blast Extreme
USB redirection
H.264 decoding
8000Hz and 16000Hz audio-in sample rate
RHEL/CentOS 7.4 support
Horizon Client for Windows 10 UWP
Horizon Client 4.6 updates include:
Network recovery improvements – Clients can recover from temporary network loss (up to 2 minutes). This feature was already available for Windows, macOS, Linux, iOS, and Android, and is now available for Windows 10 UWP.
Automatically reconnects Blast Extreme sessions
Reduces re-authentication prompts
We are excited about these new features in Horizon 7.3.1 and the Horizon Client 4.6. We hope that you will give them a try.