With all the Fabric configuration done, we can test our setup.
I’m creating two overlay segments in NSX connected to a Tier-1 gateway. After that we’ll create a Tier-0 gateway and connect the T1 gateway to it to get North/South connectivity to the overlay resources.
Two VMs will be deployed, one in each of the two overlay segments.
Create a Tier-1 Gateway
The Tier-1 Gateway will initially not be connected to a Tier-0 Gateway (I haven’t configured a T0 gw yet) or an Edge Cluster.
Tier-1 Gateway
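For reference, roughly the same Tier-1 gateway can be created from PowerShell against the NSX-T Policy REST API. This is only a minimal sketch, assuming PowerShell 7 (for -SkipCertificateCheck and -Authentication Basic), an NSX Manager reachable at nsx.lab.local and a made-up gateway ID of t1-gw01:

$cred = Get-Credential admin    # NSX admin credentials
# No tier0_path and no Edge cluster yet, matching the initial config above
$body = @{ display_name = "t1-gw01" } | ConvertTo-Json
Invoke-RestMethod -Method Patch -Uri "https://nsx.lab.local/policy/api/v1/infra/tier-1s/t1-gw01" `
    -Credential $cred -Authentication Basic -SkipCertificateCheck `
    -ContentType "application/json" -Body $body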
Create Logical segments
We need two logical segments, both using the Overlay Transport Zone. I’m defining different subnets on them, 10.0.1.0/24 and 10.0.2.0/24.
Segments
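The two overlay segments can be created the same way through the Policy API. A sketch with assumed IDs (ls-overlay-01 and ls-overlay-02), the .1 address as gateway in each subnet, and a placeholder for your overlay Transport Zone ID:

$cred   = Get-Credential admin
$tzPath = "/infra/sites/default/enforcement-points/default/transport-zones/<overlay-tz-id>"  # look up your overlay TZ
$segments = @{ "ls-overlay-01" = "10.0.1.1/24"; "ls-overlay-02" = "10.0.2.1/24" }
foreach ($name in $segments.Keys) {
    $body = @{
        display_name        = $name
        transport_zone_path = $tzPath
        connectivity_path   = "/infra/tier-1s/t1-gw01"          # attach to the Tier-1 gateway
        subnets             = @(@{ gateway_address = $segments[$name] })
    } | ConvertTo-Json -Depth 5
    Invoke-RestMethod -Method Patch -Uri "https://nsx.lab.local/policy/api/v1/infra/segments/$name" `
        -Credential $cred -Authentication Basic -SkipCertificateCheck `
        -ContentType "application/json" -Body $body
}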
Add VMs to Logical segments
We have two Photon VMs which should be added to the logical segments.
Two Photon VMs
Test connectivity
Now let’s verify that the two VMs can ping each other.
Don’t forget to enable the echo rule in the Windows Firewall.
Connectivity test
This shows that the overlay is working, and note again that the Edge VMs are not in use here.
External connectivity
Traffic is flowing between VMs running on Logical segments inside the NSX-T environment, but what if we want to reach something outside, or reach a VM inside the NSX-T overlay from the outside?
Then we need to bring a Tier-0 Gateway into the mix.
The Tier-0 gateway can be configured with Uplinks that are connected to the physical network. This is done through a segment which can reach the physical network, normally through a VLAN.
To configure the uplink interfaces we need Edge VMs, so we finally get to bring those into play as well.
Create segment for uplinks
First I’ll create a segment mapped to VLAN 99 in my lab. Note that I select the VLAN transport zone, and I do not connect the segment to a gateway.
Create Uplink VLAN segment
Create Tier-0 gateway
Now we’ll create a Tier-0 gateway. Note that I now also select my Edge cluster.
Create T0 gateway
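The same can be sketched against the Policy API: create the Tier-0, bind it to the Edge cluster through its locale-services, and add an external interface on the VLAN 99 uplink segment. All IDs and paths below are assumptions (t0-gw01, a segment ID of uplink-vlan99, 192.168.99.2/24 as uplink address, and placeholders for the Edge cluster and Edge node):

$cred = Get-Credential admin
# Tier-0 gateway in active/standby mode
$body = @{ display_name = "t0-gw01"; ha_mode = "ACTIVE_STANDBY" } | ConvertTo-Json
Invoke-RestMethod -Method Patch -Uri "https://nsx.lab.local/policy/api/v1/infra/tier-0s/t0-gw01" `
    -Credential $cred -Authentication Basic -SkipCertificateCheck -ContentType "application/json" -Body $body

# Bind the Edge cluster (copy the path from your environment)
$ls = @{ edge_cluster_path = "/infra/sites/default/enforcement-points/default/edge-clusters/<edge-cluster-id>" } | ConvertTo-Json
Invoke-RestMethod -Method Patch -Uri "https://nsx.lab.local/policy/api/v1/infra/tier-0s/t0-gw01/locale-services/default" `
    -Credential $cred -Authentication Basic -SkipCertificateCheck -ContentType "application/json" -Body $ls

# External (uplink) interface on the VLAN 99 segment, pinned to one Edge node
$if = @{
    type         = "EXTERNAL"
    segment_path = "/infra/segments/uplink-vlan99"
    subnets      = @(@{ ip_addresses = @("192.168.99.2"); prefix_len = 24 })
    edge_path    = "/infra/sites/default/enforcement-points/default/edge-clusters/<edge-cluster-id>/edge-nodes/<edge-node-id>"
} | ConvertTo-Json -Depth 5
Invoke-RestMethod -Method Patch -Uri "https://nsx.lab.local/policy/api/v1/infra/tier-0s/t0-gw01/locale-services/default/interfaces/uplink-1" `
    -Credential $cred -Authentication Basic -SkipCertificateCheck -ContentType "application/json" -Body $if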
Static route
To be able to forward traffic out of the NSX-T environment, the T0 gateway needs to know where to send traffic for IPs it doesn’t control. Normally you would configure a routing protocol like BGP or OSPF so that the T0 gateway can exchange routes with the physical router(s) in your network.
I’ve not set up BGP or any other routing protocol on my physical router, so I’ve just configured a default static route that forwards to my physical router. The next hop is set to the gateway address for the Uplink VLAN 99, 192.168.99.1.
Static route
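The equivalent default static route through the Policy API looks roughly like this (a sketch, reusing the assumed t0-gw01 ID from earlier):

$cred  = Get-Credential admin
$route = @{
    display_name = "default-route"
    network      = "0.0.0.0/0"
    next_hops    = @(@{ ip_address = "192.168.99.1"; admin_distance = 1 })
} | ConvertTo-Json -Depth 5
Invoke-RestMethod -Method Patch -Uri "https://nsx.lab.local/policy/api/v1/infra/tier-0s/t0-gw01/static-routes/default-route" `
    -Credential $cred -Authentication Basic -SkipCertificateCheck -ContentType "application/json" -Body $route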
Link T1 gateway to T0 gateway
We’ve done a lot of configuring now, but we still don’t have connectivity in or out for our VMs. The final step is to connect the Tier-1 gateway to the Tier-0 gateway, and we’ll also activate Route Advertisement of Connected Segments and Service Ports.
Tier-1 Gateway
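Via the Policy API this is a single PATCH on the Tier-1 gateway: set tier0_path and the route advertisement types (which, as far as I can tell, the UI option “Connected Segments and Service Ports” maps to as TIER1_CONNECTED). A sketch with the IDs assumed earlier:

$cred = Get-Credential admin
$body = @{
    tier0_path                = "/infra/tier-0s/t0-gw01"
    route_advertisement_types = @("TIER1_CONNECTED")   # advertise connected segments and service ports
} | ConvertTo-Json
Invoke-RestMethod -Method Patch -Uri "https://nsx.lab.local/policy/api/v1/infra/tier-1s/t1-gw01" `
    -Credential $cred -Authentication Basic -SkipCertificateCheck -ContentType "application/json" -Body $body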
Test connectivity
Verify North/South connectivity
Yes!
Test Distributed Firewall
Let’s also do a quick test of the Distributed Firewall feature in NSX-T.
First we’ll create a rule blocking ICMP (ping) from any source to my test VM and publish the rule.
ICMP firewall rule
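The same distributed firewall rule can be sketched against the Policy API. Everything here is an assumption: a hypothetical security policy ID lab-tests, a pre-created group test-vm containing the test VM, and the built-in “ICMP ALL” service whose path may differ in your install:

$cred = Get-Credential admin
# Security policy (section) in the Application category
$pol = @{ display_name = "lab-tests"; category = "Application" } | ConvertTo-Json
Invoke-RestMethod -Method Patch -Uri "https://nsx.lab.local/policy/api/v1/infra/domains/default/security-policies/lab-tests" `
    -Credential $cred -Authentication Basic -SkipCertificateCheck -ContentType "application/json" -Body $pol

# Rule: drop ICMP from any source to the test VM group
$rule = @{
    display_name       = "block-icmp-to-testvm"
    action             = "DROP"
    source_groups      = @("ANY")
    destination_groups = @("/infra/domains/default/groups/test-vm")
    services           = @("/infra/services/ICMP-ALL")   # assumed ID of the default ICMP ALL service
    scope              = @("ANY")
    sequence_number    = 10
} | ConvertTo-Json -Depth 5
Invoke-RestMethod -Method Patch -Uri "https://nsx.lab.local/policy/api/v1/infra/domains/default/security-policies/lab-tests/rules/block-icmp-to-testvm" `
    -Credential $cred -Authentication Basic -SkipCertificateCheck -ContentType "application/json" -Body $rule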
Now let’s test pinging from my PC to the nested Windows 2016 server, first with the rule disabled and then with it enabled.
Ping blocked
Summary
Hopefully this post can help someone; if not, it has at least helped me.
Now we have a working environment, so we can start testing some things.
I’ll also look into scripting and automation against an NSX environment!
I’m doing a mini-series on my NSX-T home lab setup. It’s only for testing and building knowledge about NSX-T.
With newer versions of NSX-T (3.1 and later) a couple of enhancements have been made that make the setup a lot easier, like the move to a single N-VDS and the ability to run NSX on a Virtual Distributed Switch (VDS) in vCenter with VDS version 7.0.
I’ve downloaded the NSX Manager appliance and imported the OVF to the cluster. I won’t go into details about this; I just followed the deployment wizard.
In my lab I’ve selected to deploy a small appliance, which requires 4 vCPUs, 16 GB RAM and 300 GB disk space. For more details about the NSX Manager requirements, look at the official documentation.
Note that I’ll not be deploying an NSX Manager cluster in my setup. In a production environment you should naturally follow best practices and configure a cluster of NSX Managers.
NSX-T deployment
Now let’s get rocking with our NSX-T setup!
We’ll start the NSX manager and prepare it for configuring NSX in the environment.
Initial Manager config
After first login I’ll accept the EULA and optionally enable the CEIP.
License
Next I’ll add the license.
Add license
Import certificate
IP Pools
Our Tunnel Endpoints (TEPs) will need IP addresses, and I’ve set aside a subnet for this as mentioned. In NSX Manager we’ll add an IP pool with addresses from this subnet. (The IP pool I’m using is probably way larger than needed in a lab setup like this.)
TEP pool
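The TEP pool can also be created through the Policy API. A sketch, again assuming PowerShell 7 and a manager at nsx.lab.local, and with made-up addressing (replace the CIDR, range and gateway with the subnet you set aside for TEPs):

$cred = Get-Credential admin
# IP pool container
$pool = @{ display_name = "tep-pool" } | ConvertTo-Json
Invoke-RestMethod -Method Patch -Uri "https://nsx.lab.local/policy/api/v1/infra/ip-pools/tep-pool" `
    -Credential $cred -Authentication Basic -SkipCertificateCheck -ContentType "application/json" -Body $pool

# Static subnet holding the allocation range for the TEPs (placeholder values)
$subnet = @{
    resource_type     = "IpAddressPoolStaticSubnet"
    display_name      = "tep-subnet"
    cidr              = "172.16.10.0/24"
    gateway_ip        = "172.16.10.1"
    allocation_ranges = @(@{ start = "172.16.10.11"; end = "172.16.10.50" })
} | ConvertTo-Json -Depth 5
Invoke-RestMethod -Method Patch -Uri "https://nsx.lab.local/policy/api/v1/infra/ip-pools/tep-pool/ip-subnets/tep-subnet" `
    -Credential $cred -Authentication Basic -SkipCertificateCheck -ContentType "application/json" -Body $subnet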
Compute Manager
With all that sorted, we’ll connect the NSX manager to our vCenter server so we can configure our ESXi hosts and deploy our edge nodes.
It’s best to use a dedicated service account for the connection.
Compute Manager added
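Registering the compute manager can also be done through the (Manager) REST API. A sketch, with a hypothetical service account name and a placeholder for the vCenter certificate thumbprint:

$cred = Get-Credential admin
$cm = @{
    display_name = "vcenter.lab.local"
    server       = "vcenter.lab.local"
    origin_type  = "vCenter"
    credential   = @{
        credential_type = "UsernamePasswordLoginCredential"
        username        = "svc-nsx@vsphere.local"                        # hypothetical service account
        password        = (Read-Host "vCenter service account password")
        thumbprint      = "<vCenter SHA-256 thumbprint>"                 # placeholder
    }
} | ConvertTo-Json -Depth 5
Invoke-RestMethod -Method Post -Uri "https://nsx.lab.local/api/v1/fabric/compute-managers" `
    -Credential $cred -Authentication Basic -SkipCertificateCheck -ContentType "application/json" -Body $cm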
Fabric configuration
Now we’re ready for building out our network fabric which will consist of the following:
Transport Zones
Overlay
VLAN
Transport Nodes
ESXi Hosts
Edge VMs
Edge clusters
Take a look at this summary of the Key concepts in NSX-T to learn more about them.
Transport Zone
The first thing we’ll create are the Transport Zones. These will be used on multiple occasions later on. A Transport Zone is a collection of hypervisor hosts that makes up the span of logical switches.
The defaults could be used, but I like to create my own.
Transport Zones
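If you prefer the REST API over the UI, the two transport zones can be created with the Manager API as well. A sketch with my own naming (on newer releases the host switch name is optional, so I’ve left it out):

$cred = Get-Credential admin
foreach ($tz in @(@{ name = "tz-overlay"; type = "OVERLAY" }, @{ name = "tz-vlan"; type = "VLAN" })) {
    $body = @{ display_name = $tz.name; transport_type = $tz.type } | ConvertTo-Json
    Invoke-RestMethod -Method Post -Uri "https://nsx.lab.local/api/v1/transport-zones" `
        -Credential $cred -Authentication Basic -SkipCertificateCheck -ContentType "application/json" -Body $body
}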
Uplink Profiles
Uplink profiles will be used when we configure our Transport Nodes, both Hosts and Edge VMs. The profile defines how a Host Transport node (hypervisor) or an Edge Transport node (VM) will connect to the physical network.
Again I’m creating my own profile and leaving the default profiles as they are.
Uplink profile
In my environment I have only one Uplink to use. Note that I’ve set the Transport VLAN to 0 which also corresponds with the TEP VLAN mentioned previously.
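For reference, an equivalent uplink profile can be created through the Manager API. A sketch only: the profile name is my own, with a single active uplink and transport VLAN 0 as described above:

$cred = Get-Credential admin
$profile = @{
    resource_type  = "UplinkHostSwitchProfile"
    display_name   = "uplink-profile-lab"
    transport_vlan = 0                                   # TEP VLAN, as above
    teaming        = @{
        policy      = "FAILOVER_ORDER"
        active_list = @(@{ uplink_name = "uplink-1"; uplink_type = "PNIC" })
    }
} | ConvertTo-Json -Depth 5
Invoke-RestMethod -Method Post -Uri "https://nsx.lab.local/api/v1/host-switch-profiles" `
    -Credential $cred -Authentication Basic -SkipCertificateCheck -ContentType "application/json" -Body $profile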
Transport Node Profile
Although not strictly needed, I’m creating a Transport Node profile, which will let me configure an entire cluster of hosts with the same settings instead of having to configure each and every host.
In the Transport Node profile we first select the type of Host switch. In my case I’m selecting the VDS option, which will let me select a specific switch in vCenter.
We’ll also add in our newly created Transport Zones.
Creating Transport Node profile
We’ll select our Uplink profile and the IP Pool we created earlier, and finally we can set the mapping between the Uplinks.
vCenter View
Creating Transport Node profile
Configure NSX on hosts
With our Transport Node profile we can go ahead and configure our ESXi hosts for NSX.
Configure cluster for NSX
Select profile
After selecting the profile NSX Manager will go ahead and configure our ESXi hosts.
Hosts configuring
After a few minutes our hosts should be configured and ready for NSX.
Hosts configured
Trunk segment
Next up is to create our Edge VMs which we will need for our Gateways and Services (NAT, DHCP, Load Balancer).
But before we deploy those we’ll have to create a segment for the uplink of the Edge VMs. This will be a Trunk segment which we create in NSX. Initially I created a Trunk portgroup on the VDS in vSphere, but that doesn’t work: the Trunk needs to be configured as a logical segment in NSX-T when using the same VLAN for both the Hypervisor TEPs and the Edge VM TEPs.
Trunk segment
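The trunk segment is just a VLAN-backed segment carrying all VLAN IDs. A sketch via the Policy API, with an assumed segment ID and a placeholder for the VLAN Transport Zone:

$cred  = Get-Credential admin
$trunk = @{
    display_name        = "trunk-edge-uplink"
    transport_zone_path = "/infra/sites/default/enforcement-points/default/transport-zones/<vlan-tz-id>"  # VLAN TZ
    vlan_ids            = @("0-4094")     # trunk all VLANs
} | ConvertTo-Json -Depth 5
Invoke-RestMethod -Method Patch -Uri "https://nsx.lab.local/policy/api/v1/infra/segments/trunk-edge-uplink" `
    -Credential $cred -Authentication Basic -SkipCertificateCheck -ContentType "application/json" -Body $trunk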
Edge VM
Now we can deploy our Edge VM(s). I’m using Medium-sized VMs in my environment. Note that the Edge VMs are not strictly necessary for the test we’ll perform later on with connecting two VMs, but if we want to use some services later on, like DHCP, Load balancing and so on, we’ll need them.
Deploy edge VM
Note the NSX config, where we set the switch name, the Transport Zones we created, the Uplink profile, the IP pool and finally the newly created Trunk segment for the Edge uplink.
NSX Edge config
Edge cluster
We’ll also create an Edge cluster and add the Edge VM to it.
Edge cluster
Summary
Wow, this was a lot of configuring, but that was also the whole point of doing this blog post. Stuff like this is learnt best by getting your hands dirty and doing some actual work. And I learn even better when I’m writing and documenting it as well.
In the next blog post we’ll test the fabric to see if what we’ve done is working. We’ll also try to get some external connectivity to our environment.
Hopefully this post can help someone; if not, it has at least helped me.
In March 2020, Microsoft is going to release an update which will essentially disable the use of unsigned LDAP by default. This means that you can no longer use bindings or services which bind to domain controllers over unsigned LDAP on port 389. You can either use LDAPS over port 636 or use StartTLS on port 389, but both still require that you add a certificate to your domain controllers. This hardening can be done manually until the release of the security update that will enable these settings by default.
How to add signed LDAPS to your domain controllers
After the change the following features will be supported against Active Directory.
How will this affect my environment?
Clients that rely on unsigned SASL (Negotiate, Kerberos, NTLM, or Digest) LDAP binds or on LDAP simple binds over a non-SSL/TLS connection will stop working after you make this configuration change. This also applies to third-party solutions which rely on LDAP, such as Citrix NetScaler/ADC and other network appliances, Vault, and other authentication mechanisms that bind over LDAP. If you haven’t fixed this, it will stop working. This update applies to all versions:
Windows Server 2008 SP2, Windows 7 SP1, Windows Server 2008 R2 SP1, Windows Server 2012, Windows 8.1, Windows Server 2012 R2, Windows 10 1507, Windows Server 2016, Windows 10 1607, Windows 10 1703, Windows 10 1709, Windows 10 1803, Windows 10 1809, Windows Server 2019, Windows 10 1903, Windows 10 1909
How to check if something is using unsigned LDAP?
If the directory server is configured to reject unsigned SASL LDAP binds or LDAP simple binds over a non-SSL/TLS connection, the directory server logs a summary event with Event ID 2888 once every 24 hours when such bind attempts occur. Microsoft advises administrators to enable LDAP channel binding and LDAP signing as soon as possible, before March 2020, to find and fix any operating system, application or intermediate device compatibility issues in their environment.
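A quick way to check this from PowerShell is to look for those events on each domain controller. You can also raise the LDAP interface diagnostics so the individual clients get logged (Event ID 2889) and, once you’re ready, enforce signing and channel binding through the registry. This is a sketch against a hypothetical DC named dc01 (the registry values can also be set via Group Policy, which is usually the cleaner option):

# Summary event logged once per 24 hours when unsigned/simple binds occurred
Get-WinEvent -ComputerName dc01 -FilterHashtable @{ LogName = 'Directory Service'; Id = 2888 } -MaxEvents 10

# Log each offending client (Event ID 2889) by raising LDAP interface diagnostics
Invoke-Command -ComputerName dc01 {
    Set-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Services\NTDS\Diagnostics' -Name '16 LDAP Interface Events' -Value 2
}

# Only after the offending clients are fixed: require signing (2) and enable channel binding (1)
Invoke-Command -ComputerName dc01 {
    Set-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Services\NTDS\Parameters' -Name 'LDAPServerIntegrity' -Value 2
    Set-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Services\NTDS\Parameters' -Name 'LdapEnforceChannelBinding' -Value 1
}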
I also have to move a lot of VMs from different datacenters to other datacenters. I use the script from Michael Wilmsen to move the VMs, but along the way I encountered some problems with this script. So I began tweaking and tweaking and tweaking this script to create, for me, the ultimate Cross vCenter PowerCLI script.
Cool features:
– Info through WhatsApp (not enabled by default)
– Dry run (test run)
– Logging
– Selection through GUI
– Multiple NIC support, maximum of 4
– Datastore and Host selection based on free space and free memory
– Check if the destination Host or Datastore is in maintenance
– Check if the destination datastore exists in the destination cluster
MoveVM.ps1: #Filename: MoveVM.ps1
#Author: M. Wilmsen / W. Vissers
#Source: http://virtual-hike.com/nlvmug-2018/
#Version: 2.0
#Date: 21-10-2018
#ChangeLog:
# V0.9 – M. Wilmsen First Version
# V1.0 – Fixed Multiple Nics to maximum of 4 nics
# – Logfile name VM name
# V1.1 – Destination Cluster not the first Host
# V1.2 – Selected Destination host based on memory used
# V1.3 – Fixed folder location and VirtualPortGroup
# V1.4 – Fixed Datastore in Maintenance
# V1.5 – Using Get-VICredentialStoreItem + Logpath fixed
# V1.6 – Fixed log time to 24-hour format
# V1.7 – Fixed Using DatastoreCluster name based on Cluster name!
# V1.8 – Check if Destination has the same datastore
# – Ask know for input
# – VM selection with VMhost
# – Fixed Ping Check
# v1.9 – Added Destination Store exist in Destination Cluster
# v2.0 – Fixed Destination Store exist in Destination Cluster
<#
.SYNOPSIS
Script to migrate a virtual machine
.DESCRIPTION
Script to migrate compute and storage from cluster to cluster. Log will be in current dir [VM]-[-timestamp].log
.EXAMPLE
MoveVM.ps1
#>
################################## INIT #################################################
#Set WebOperation timeout
# set-PowerCLIConfiguration -WebOperationTimeoutSeconds 3600
#Define Global variables
$location = "D:\xmovewhattsapp"
$LogPath = ".\"
$DataStoreClusterPrefix = "SAN-"
$SourceVC = Read-Host "Give Source vCenter"
$DestinationVC = Read-Host "Give Destination vCenter"
$DRSRecommendation = $true
$Dryrun = $false
$SendWhatsApp = $false
$WhatsAppNumbers = "0123456789"
$WhatsAppGroup = "NameOfYourWhatsAppGroup"
$instanceId = "23" #change this line
$clientId = "demo@demo.nl" #change this line
$clientSecret = "PutYourSecretIdHere" #change this line
################################## PASSWORD STORE ##############################################
#Username
# Check if credentials exist in credential store if not ask for credentials and put them in credential store
if ($DatastoreExistinOthervCenter) {
    LogWrite "Datastore exists in $DestinationCluster in destination vCenter $DestinationVC"
    $destinationDatastore = $DatastoreExistinOthervCenter
}
Else {
    LogWrite "Datastore does not exist in $DestinationCluster destination vCenter $DestinationVC"
    # Select the datastore with the most free space that is not in maintenance
    $DatastoreCluster = "$DataStoreClusterPrefix" + "$DestinationCluster"
    $destinationDatastore = Get-DatastoreCluster $DatastoreCluster | Get-Datastore |
        Where {$_.State -ne "Maintenance"} |
        Sort-Object -Property FreeSpaceGB -Descending |
        Select-Object -First 1
}
LogWrite "Start move: $vm"
Logwrite "VM IP: $vmip"
Logwrite "VM Disk Used (GB): $VMHDDSize"
Logwrite "VM Folder: $vmfolder"
Logwrite "Source vCenter: $SourceVC"
Logwrite "VM Source Cluster: $SourceCluster"
Logwrite "Destination vCenter: $DestinationVC"
Logwrite "VM Destination Cluster: $DestinationCluster"
Logwrite "Destination host: $DestinationHost"
LogWrite "VM Source PortGroup: $SourceVMPortGroup"
LogWrite "VM Destination Portgroup: $DestinationVMPortgroup"
Logwrite "VM Destination Datastore: $destinationDatastore"
LogWrite "Destination Datastore FreeSpace GB: $destinationDatastoreFreeSpace"

if ( $Dryrun ) {
    $FreespaceAfterMigration = $destinationDatastoreFreeSpace - $VMHDDSize
    if ( $FreespaceAfterMigration -lt 0 ) {
        Logwrite "ERROR: Datastore $destinationDatastore does not have sufficient freespace! Virtual Machine needs $VMHDDSize. Only $destinationDatastoreFreeSpace available."
    }
    else {
        Logwrite "Virtual Machine will fit on datastore $destinationDatastore. Freespace after migration is: $FreespaceAfterMigration GB"
    }
}

#Test if VM responds to ping
if ($vmip -eq $null) {
    LogWrite "Virtual Machine ip address not known"
    Logwrite "No ping check will be performed after moving the Virtual Machine"
}
else {
    Test-Connection -comp $vmip -quiet
    LogWrite "Virtual Machine $vm responds to ping before being moved. Virtual machine will be checked after being moved"
    $PingVM = $true
}
#if ( $VMHDDSize -eq
if ( -NOT $Dryrun) {
    #Migrate VM to cluster
    LogWrite "Move $vm to vCenter $DestinationVC and datastore $DestinationDatastore"
    Try {
        $Result = Move-VM -VM $vm `
            -Destination $DestinationHost `
            -Datastore $DestinationDatastore `
            -NetworkAdapter $NetworkAdapter `
            -PortGroup $DestinationVMPortgroup `
            -ErrorAction Stop
    }
    Catch {
        $ErrorMessage = $_.Exception.Message
        LogWrite "ERROR: Move of $vm to cluster $DestinationHost failed!!!"
        Logwrite "ERROR: Move Status Code: $ErrorMessage"
        SendWhatsApp "ERROR: Move of $vm failed!!! $ErrorMessage"
        $MigError = $true
    }

    #Migrate VM to folder
    LogWrite "Move $vm to folder $vmfolder"
    Try {
        $VMtemp = get-vm $vm
        $Result = Move-VM -VM $VMtemp -InventoryLocation $vmfolder -ErrorAction Stop
    }
    Catch {
        $ErrorMessage = $_.Exception.Message
        LogWrite "ERROR: Move of $vm to folder $vmfolder failed!!!"
        Logwrite "ERROR: Move Status Code: $ErrorMessage"
        SendWhatsApp "ERROR: Move of $vm failed!!! $ErrorMessage"
        $MigError = $true
    }
}
$MigError = $false
#Test if VM is running on destination cluster
if ( -NOT $MigError -AND -NOT $Dryrun ) {
    LogWrite "Check $vm is registered in $DestinationVC"
    try {
        $CheckVM = get-vm -name $vm -server $DestinationVC -ErrorAction Stop
        if ( $CheckVM ) {
            Logwrite "$vm registered in $DestinationVC"
        }
        else {
            Logwrite "ERROR: $vm not found in $DestinationVC"
        }
    }
    catch {
        $ErrorMessage = $_.Exception.Message
        Logwrite "ERROR: $vm not found in $DestinationVC"
        Logwrite "ERROR: $ErrorMessage"
        SendWhatsApp "ERROR move: $vm not found in $DestinationVC"
    }
}

#Test if VM responds to ping, if $PingVM = $True
if ($PingVM) {
    if (Test-Connection -comp $vmip -quiet) {
        LogWrite "Virtual Machine $vm responds to ping after move"
        SendWhatsApp "Virtual Machine $vm responds to ping after move"
    }
}

sleep 1
SendWhatsApp "Finished move action: $vm from $SourceVC to $DestinationVC"
Logwrite "Finished move action: $vm from $SourceVC to $DestinationVC"
I love PowerShell. I created a little script to deploy multiple VMs based on a Windows template through a CSV file.
It creates a computer account in the specified OU. It also creates a Domain Local group for management (this is used in the customization, which is not shown here).
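The full script isn’t included here, but the core loop looks roughly like the sketch below. Everything in it is illustrative: assumed CSV columns Name and OU, and assumed names for the template, customization spec, cluster, datastore and vCenter.

# Requires VMware PowerCLI and the ActiveDirectory module
Import-Module VMware.PowerCLI, ActiveDirectory
Connect-VIServer vcenter.lab.local | Out-Null

$rows = Import-Csv -Path .\vms.csv          # assumed columns: Name, OU
foreach ($row in $rows) {
    # Computer account in the OU from the CSV
    New-ADComputer -Name $row.Name -Path $row.OU

    # Domain Local group for management (used later by the customization)
    New-ADGroup -Name "LG-$($row.Name)-Mgmt" -GroupScope DomainLocal -Path $row.OU

    # Clone the Windows template and apply a customization spec
    $esx = Get-Cluster "Cluster01" | Get-VMHost | Get-Random
    New-VM -Name $row.Name -Template "Win2016-Template" -VMHost $esx `
        -Datastore "Datastore01" -OSCustomizationSpec "Win-DomainJoin" -RunAsync
}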
These products are not compatible with vSphere 6.7 at this time:
VMware NSX
VMware Integrated OpenStack (VIO)
VMware vSphere Integrated Containers (VIC)
VMware Horizon
Environments with these products should not be upgraded to vSphere 6.7 at this time. This article and the VMware Product Interoperability Matrixes will be updated when a compatible release is available.
Upgrade Considerations
Before upgrading your environment to vSphere 6.7, review these critical articles to ensure a successful upgrade.
It is not possible to upgrade directly from vSphere 5.5 to vSphere 6.7.
Upgrades to vSphere 6.7 are only possible from vSphere 6.0 or vSphere 6.5. If you are currently running vSphere 5.5, you must first upgrade to either vSphere 6.0 or vSphere 6.5 before upgrading to vSphere 6.7.
VMware is announcing vSphere 6.7, the latest release of the industry-leading virtualization and cloud platform. vSphere 6.7 is the efficient and secure platform for hybrid clouds, fueling digital transformation by delivering simple and efficient management at scale, comprehensive built-in security, a universal application platform, and seamless hybrid cloud experience.
vSphere 6.7 delivers key capabilities to enable IT organizations to address the following notable trends that are putting new demands on their IT infrastructure:
Explosive growth in quantity and variety of applications, from business critical apps to new intelligent workloads.
Rapid growth of hybrid cloud environments and use cases.
On-premises data centers growing and expanding globally, including at the Edge.
Security of infrastructure and applications attaining paramount importance.
Let’s take a look at some of the key capabilities in vSphere 6.7:
Simple and Efficient Management, at Scale
vSphere 6.7 builds on the technological innovation delivered by vSphere 6.5, and elevates the customer experience to an entirely new level. It provides exceptional management simplicity, operational efficiency, and faster time to market, all at scale.
vSphere 6.7 delivers an exceptional experience for the user with an enhanced vCenter Server Appliance (vCSA). It introduces several new APIs that improve the efficiency and experience to deploy vCenter, to deploy multiple vCenters based on a template, to make management of vCenter Server Appliance significantly easier, as well as for backup and restore. It also significantly simplifies the vCenter Server topology through vCenter with embedded platform services controller in enhanced linked mode, enabling customers to link multiple vCenters and have seamless visibility across the environment without the need for an external platform services controller or load balancers.
Moreover, with vSphere 6.7 vCSA delivers phenomenal performance improvements (all metrics compared at cluster scale limits, versus vSphere 6.5):
2X faster performance in vCenter operations per second
These performance improvements ensure a blazing fast experience for vSphere users, and deliver significant value, as well as time and cost savings in a variety of use cases, such as VDI, Scale-out apps, Big Data, HPC, DevOps, distributed cloud native apps, etc.
vSphere 6.7 improves efficiency at scale when updating ESXi hosts, significantly reducing maintenance time by eliminating one of two reboots normally required for major version upgrades (Single Reboot). In addition to that, vSphere Quick Boot is a new innovation that restarts the ESXi hypervisor without rebooting the physical host, skipping time-consuming hardware initialization.
Another key component that allows vSphere 6.7 to deliver a simplified and efficient experience is the graphical user interface itself. The HTML5-based vSphere Client provides a modern user interface experience that is both responsive and easy to use. With vSphere 6.7, it includes added functionality to support not only the typical workflows customers need but also other key functionality like managing NSX, vSAN, VUM as well as third-party components.
Comprehensive Built-In Security
vSphere 6.7 builds on the security capabilities in vSphere 6.5 and leverages its unique position as the hypervisor to offer comprehensive security that starts at the core, via an operationally simple policy-driven model.
vSphere 6.7 adds support for Trusted Platform Module (TPM) 2.0 hardware devices and also introduces Virtual TPM 2.0, significantly enhancing protection and assuring integrity for both the hypervisor and the guest operating system. This capability helps prevent VMs and hosts from being tampered with, prevents the loading of unauthorized components and enables guest operating system security features security teams are asking for.
Data encryption was introduced with vSphere 6.5 and very well received. With vSphere 6.7, VM Encryption is further enhanced and more operationally simple to manage. vSphere 6.7 simplifies workflows for VM Encryption, designed to protect data at rest and in motion, making it as easy as a right-click while also increasing the security posture of encrypting the VM and giving the user a greater degree of control to protect against unauthorized data access.
vSphere 6.7 also enhances protection for data in motion by enabling encrypted vMotion across different vCenter instances as well as versions, making it easy to securely conduct data center migrations, move data across a hybrid cloud environment (between on-premises and public cloud), or across geographically distributed data centers.
vSphere 6.7 introduces support for the entire range of Microsoft’s Virtualization Based Security technologies. This is a result of close collaboration between VMware and Microsoft to ensure Windows VMs on vSphere support in-guest security features while continuing to run performant and secure on the vSphere platform.
vSphere 6.7 delivers comprehensive built-in security and is the heart of a secure SDDC. It has deep integration and works seamlessly with other VMware products such as vSAN, NSX and vRealize Suite to provide a complete security model for the data center.
Universal Application Platform
vSphere 6.7 is a universal application platform that supports new workloads (including 3D Graphics, Big Data, HPC, Machine Learning, In-Memory, and Cloud-Native) as well as existing mission critical applications. It also supports and leverages some of the latest hardware innovations in the industry, delivering exceptional performance for a variety of workloads.
vSphere 6.7 further enhances the support and capabilities introduced for GPUs through VMware’s collaboration with Nvidia, by virtualizing Nvidia GPUs even for non-VDI and non-general-purpose-computing use cases such as artificial intelligence, machine learning, big data and more. With enhancements to Nvidia GRID™ vGPU technology in vSphere 6.7, instead of having to power off workloads running on GPUs, customers can simply suspend and resume those VMs, allowing for better lifecycle management of the underlying host and significantly reducing disruption for end-users. VMware continues to invest in this area, with the goal of bringing the full vSphere experience to GPUs in future releases.
vSphere 6.7 continues to showcase VMware’s technological leadership and fruitful collaboration with our key partners by adding support for a key industry innovation poised to have a dramatic impact on the landscape, which is persistent memory. With vSphere Persistent Memory, customers using supported hardware modules, such as those available from Dell-EMC and HPE, can leverage them either as super-fast storage with high IOPS, or expose them to the guest operating system as non-volatile memory. This will significantly enhance performance of the OS as well as applications across a variety of use cases, making existing applications faster and more performant and enabling customers to create new high-performance applications that can leverage vSphere Persistent Memory.
Seamless Hybrid Cloud Experience
With the fast adoption of vSphere-based public clouds through VMware Cloud Provider Program partners, VMware Cloud on AWS, as well as other public cloud providers, VMware is committed to delivering a seamless hybrid cloud experience for customers.
vSphere 6.7 introduces vCenter Server Hybrid Linked Mode, which makes it easy and simple for customers to have unified visibility and manageability across an on-premises vSphere environment running on one version and a vSphere-based public cloud environment, such as VMware Cloud on AWS, running on a different version of vSphere. This ensures that the fast pace of innovation and introduction of new capabilities in vSphere-based public clouds does not force the customer to constantly update and upgrade their on-premises vSphere environment.
vSphere 6.7 also introduces Cross-Cloud Cold and Hot Migration, further enhancing the ease of management across and enabling a seamless and non-disruptive hybrid cloud experience for customers.
As virtual machines migrate between different data centers or from an on-premises data center to the cloud and back, they likely move across different CPU types. vSphere 6.7 delivers a new capability that is key for the hybrid cloud, called Per-VM EVC. Per-VM EVC enables the EVC (Enhanced vMotion Compatibility) mode to become an attribute of the VM rather than the specific processor generation it happens to be booted on in the cluster. This allows for seamless migration across different CPUs by persisting the EVC mode per-VM during migrations across clusters and during power cycles.
Previously, vSphere 6.0 introduced provisioning between vCenter instances. This is often called “cross-vCenter provisioning.” The use of two vCenter instances introduces the possibility that the instances are on different release versions. vSphere 6.7 enables customers to use different vCenter versions while allowing cross-vCenter, mixed-version provisioning operations (vMotion, Full Clone and cold migrate) to continue seamlessly. This is especially useful for customers leveraging VMware Cloud on AWS as part of their hybrid cloud.
Learn More
As the ideal, efficient, secure universal platform for hybrid cloud, supporting new and existing applications, serving the needs of IT and the business, vSphere 6.7 reinforces your investment in VMware. vSphere 6.7 is one of the core components of VMware’s SDDC and a fundamental building block of your cloud strategy. With vSphere 6.7, you can now run, manage, connect, and secure your applications in a common operating environment, across your hybrid cloud.
This article only touched upon the key highlights of this release, but there are many more new features. To learn more about vSphere 6.7, please see the following resources.