Building an NSX-T 3.1 Home Lab: Step 1

I'm doing a mini-series on my NSX-T home lab setup. It's only for testing and building knowledge about NSX-T.

With NSX-T 3.1 and later, a couple of enhancements have been made that make the setup a lot easier, such as the move to a single N-VDS and the ability to run NSX on a vSphere Distributed Switch (VDS) in vCenter with VDS version 7.0.

In NSX-T 3.1 we got the ability to have the Edge TEP on the same subnet as the hypervisor TEP. A nice write-up of this feature can be found here: https://www.virten.net/2020/11/nsx-t-3-1-enhancement-shared-esxi-and-edge-transport-vlan-with-a-single-uplink/

Lab environment

First let’s have a quick look at the lab environment:

Compute

I have one ESXi server, a Dell R730. I use only one NIC for management and virtual machine traffic.

Network

My home network consists of a single VLAN:

VLAN | Subnet | Role | Virtual Switch
0 | 192.168.150.0/24 | Management/Virtual Machine Traffic | vSwitch0

Also ensure you enable the required security settings on the virtual switch to support nested virtualization: set Promiscuous mode, MAC address changes and Forged transmits to Accept.

Virtual Machines

I run a virtualized vSphere 7 cluster on my host.


The distributed virtual switches are running version 7.0.0, which lets us deploy NSX-T on the VDS directly.


Preparations

Check out the NSX-T Data Center Workflow for vSphere for details and documentation on the process.

IP Addresses and DNS records

Before deploying NSX-T in the environment I've prepared a few IP addresses and DNS records:

Role | IP
NSX Manager | 192.168.150.229
NSX-T Edge node 1 | 192.168.150.227
NSX-T Edge node 2 (currently not in use) | 192.168.150.228
NSX-T T0 GW Interface 1 | 192.168.99.2

Note that I’ve reserved addresses for a second Edge which I’m not going to use at the moment.

Deploy NSX manager appliance

VMware documentation reference

I downloaded the NSX Manager appliance and imported the OVF into the cluster. I won't go into details about this; I just followed the deployment wizard.

In my lab I've selected to deploy a Small appliance, which requires 4 vCPUs, 16 GB RAM and 300 GB of disk space. For more details about the NSX Manager requirements, look at the official documentation.

Note that I'll not be deploying an NSX Manager cluster in my setup. In a production environment you should naturally follow best practices and configure a cluster of NSX Managers.

NSX-T deployment

Now let’s get rocking with our NSX-T setup!

We'll start the NSX Manager and prepare it for configuring NSX in the environment.

Initial Manager config

After the first login I'll accept the EULA and optionally enable the CEIP.

License

Next I’ll add the license.

Add license

Import certificate

Imported certificates

IP Pools

Our tunnel endpoints (TEPs) will need IP addresses, and as mentioned I've set aside a range for this. In NSX Manager we'll add an IP pool with addresses from this range. (The IP pool I'm using is probably way larger than needed in a lab setup like this.)


TEP pool
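
For reference, here's a minimal sketch of how the same TEP pool could be created through the NSX-T Policy REST API (Python with requests). The pool name, allocation range and gateway are just examples carved out of my 192.168.150.0/24 management subnet; adjust them, and the credentials, to your own lab.

```python
import requests

NSX = "https://192.168.150.229"                 # NSX Manager
session = requests.Session()
session.auth = ("admin", "SuperSecret1!")       # example credentials
session.verify = False                          # lab only: self-signed certificate

# Create (or update) the IP pool object itself
session.put(f"{NSX}/policy/api/v1/infra/ip-pools/TEP-Pool",
            json={"display_name": "TEP-Pool"}).raise_for_status()

# Add a static subnet with the allocation range for the TEPs (example values)
subnet = {
    "resource_type": "IpAddressPoolStaticSubnet",
    "cidr": "192.168.150.0/24",
    "allocation_ranges": [{"start": "192.168.150.60", "end": "192.168.150.99"}],
    "gateway_ip": "192.168.150.1",              # placeholder gateway
}
session.put(f"{NSX}/policy/api/v1/infra/ip-pools/TEP-Pool/ip-subnets/TEP-Range",
            json=subnet).raise_for_status()
```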

Compute Manager

With all that sorted we’ll connect the NSX manager to our vCenter server so we can configure our ESXi hosts and deploy our edge nodes.

It's best to use a dedicated service account for this connection.


Compute Manager added
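
If you prefer to script this step, the compute manager can also be registered via the NSX-T API. A rough sketch is below; the vCenter FQDN, service account and thumbprint are placeholders, and NSX validates the SHA-256 thumbprint of the vCenter certificate, so you'll need to supply your own.

```python
import requests

NSX = "https://192.168.150.229"
session = requests.Session()
session.auth = ("admin", "SuperSecret1!")       # example credentials
session.verify = False                          # lab only

payload = {
    "display_name": "vcsa-lab",
    "server": "vcsa.lab.local",                 # placeholder vCenter FQDN
    "origin_type": "vCenter",
    "credential": {
        "credential_type": "UsernamePasswordLoginCredential",
        "username": "svc-nsx@vsphere.local",    # dedicated service account (placeholder)
        "password": "ChangeMe1!",
        "thumbprint": "AA:BB:...:ZZ",           # SHA-256 thumbprint of the vCenter certificate
    },
}
r = session.post(f"{NSX}/api/v1/fabric/compute-managers", json=payload)
r.raise_for_status()
print("Compute manager id:", r.json()["id"])
```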

Fabric configuration

Now we're ready to build out our network fabric, which will consist of the following:

• Transport Zones
  • Overlay
  • VLAN
• Transport Nodes
  • ESXi Hosts
  • Edge VMs
• Edge clusters

Take a look at this summary of the Key concepts in NSX-T to learn more about them.

Transport Zone

The first thing we'll create is the Transport Zones. These will be used on multiple occasions later on. A Transport Zone is a collection of hypervisor hosts that makes up the span of logical switches.

The defaults could be used, but I like to create my own.


Transport Zones
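
The two transport zones can also be created with a couple of API calls. A minimal sketch, assuming the standard NSX-T 3.1 manager API (the names are my own):

```python
import requests

NSX = "https://192.168.150.229"
session = requests.Session()
session.auth = ("admin", "SuperSecret1!")       # example credentials
session.verify = False                          # lab only

# One overlay and one VLAN transport zone
for name, tz_type in [("TZ-Overlay", "OVERLAY"), ("TZ-VLAN", "VLAN")]:
    r = session.post(f"{NSX}/api/v1/transport-zones",
                     json={"display_name": name, "transport_type": tz_type})
    r.raise_for_status()
    print(name, "->", r.json()["id"])
```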

Uplink Profiles

Uplink profiles will be used when we configure our Transport Nodes, both Hosts and Edge VMs. The profile defines how a Host Transport node (hypervisor) or an Edge Transport node (VM) will connect to the physical network.

Again I'm creating my own profile and leaving the default profiles as they are.


Uplink profile

In my environment I have only one Uplink to use. Note that I’ve set the Transport VLAN to 0 which also corresponds with the TEP VLAN mentioned previously.
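
Scripted, the same uplink profile would look roughly like the sketch below: a single active uplink in a failover-order teaming policy and transport VLAN 0. The profile and uplink names are my own choices.

```python
import requests

NSX = "https://192.168.150.229"
session = requests.Session()
session.auth = ("admin", "SuperSecret1!")       # example credentials
session.verify = False                          # lab only

profile = {
    "resource_type": "UplinkHostSwitchProfile",
    "display_name": "uplink-profile-lab",
    "transport_vlan": 0,                        # TEP VLAN, as noted above
    "teaming": {
        "policy": "FAILOVER_ORDER",
        "active_list": [{"uplink_name": "uplink-1", "uplink_type": "PNIC"}],
    },
}
session.post(f"{NSX}/api/v1/host-switch-profiles", json=profile).raise_for_status()
```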

Transport Node Profile

Although not strictly needed, I'm creating a Transport Node Profile, which will let me configure an entire cluster of hosts with the same settings instead of having to configure each and every host individually.

In the Transport Node profile we first select the type of Host switch. In my case I’m selecting the VDS option, which will let me select a specific switch in vCenter.

We'll also add in our newly created Transport Zones.


Creating Transport Node profile

We'll select our Uplink Profile and the IP Pool we created earlier; finally, we can set the mapping between the uplinks.

vCenter View

Creating Transport Node profile
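
The same profile expressed against the API would look something like the sketch below. The VDS UUID, transport zone IDs, uplink profile ID and pool ID all come from the earlier steps (or from GET calls against the respective endpoints); the values shown here are placeholders.

```python
import requests

NSX = "https://192.168.150.229"
session = requests.Session()
session.auth = ("admin", "SuperSecret1!")       # example credentials
session.verify = False                          # lab only

tnp = {
    "resource_type": "TransportNodeProfile",
    "display_name": "tnp-lab",
    "host_switch_spec": {
        "resource_type": "StandardHostSwitchSpec",
        "host_switches": [{
            "host_switch_type": "VDS",
            "host_switch_id": "50 1b 23 ...",   # UUID of the VDS in vCenter (placeholder)
            "host_switch_mode": "STANDARD",
            "host_switch_profile_ids": [
                {"key": "UplinkHostSwitchProfile", "value": "<uplink-profile-id>"}],
            "ip_assignment_spec": {
                "resource_type": "StaticIpPoolSpec", "ip_pool_id": "<tep-pool-id>"},
            "transport_zone_endpoints": [
                {"transport_zone_id": "<overlay-tz-id>"},
                {"transport_zone_id": "<vlan-tz-id>"}],
            # Map the NSX uplink from the uplink profile to a VDS uplink
            "uplinks": [{"uplink_name": "uplink-1", "vds_uplink_name": "Uplink 1"}],
        }],
    },
}
session.post(f"{NSX}/api/v1/transport-node-profiles", json=tnp).raise_for_status()
```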

Configure NSX on hosts

With our Transport Node Profile in place, we can go ahead and configure our ESXi hosts for NSX.

Configure cluster for NSX


Select profile

After selecting the profile NSX Manager will go ahead and configure our ESXi hosts.

Hosts configuring

After a few minutes our hosts should be configured and ready for NSX.


Hosts configured
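
Attaching the Transport Node Profile to the cluster can also be done through the API with a transport node collection; the compute collection ID of the cluster is looked up first. A sketch, with the profile ID and cluster name as placeholders:

```python
import requests

NSX = "https://192.168.150.229"
session = requests.Session()
session.auth = ("admin", "SuperSecret1!")       # example credentials
session.verify = False                          # lab only

# Find the compute collection (the vSphere cluster) discovered via the compute manager
clusters = session.get(f"{NSX}/api/v1/fabric/compute-collections").json()["results"]
cluster = next(c for c in clusters if c["display_name"] == "Cluster-01")  # placeholder name

tnc = {
    "resource_type": "TransportNodeCollection",
    "display_name": "tnc-lab",
    "compute_collection_id": cluster["external_id"],
    "transport_node_profile_id": "<tnp-id>",    # ID returned when the profile was created
}
session.post(f"{NSX}/api/v1/transport-node-collections", json=tnc).raise_for_status()
```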

Trunk segment

Next up is to create our Edge VMs which we will need for our Gateways and Services (NAT, DHCP, Load Balancer).

But before we deploy those, we'll have to create a segment for the uplink of the Edge VMs. This will be a trunk segment which we create in NSX. Initially I created a trunk port group on the VDS in vSphere, but that doesn't work: the trunk needs to be configured as a logical segment in NSX-T when using the same VLAN for both the hypervisor TEPs and the Edge VM TEPs.


Trunk segment
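
Here's a minimal sketch of that trunk segment as a Policy API call: the segment sits in the VLAN transport zone and trunks the full VLAN range. The transport zone UUID below is a placeholder, and the segment name is my own.

```python
import requests

NSX = "https://192.168.150.229"
session = requests.Session()
session.auth = ("admin", "SuperSecret1!")       # example credentials
session.verify = False                          # lab only

segment = {
    "display_name": "seg-edge-trunk",
    # Path of the VLAN transport zone created earlier (UUID is a placeholder)
    "transport_zone_path": "/infra/sites/default/enforcement-points/default"
                           "/transport-zones/<vlan-tz-uuid>",
    "vlan_ids": ["0-4094"],                     # trunk all VLANs towards the Edge uplink
}
session.put(f"{NSX}/policy/api/v1/infra/segments/seg-edge-trunk",
            json=segment).raise_for_status()
```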

Edge VM

Now we can deploy our Edge VM(s). I'm using Medium-sized VMs in my environment. Note that the Edge VMs are not strictly necessary for the test we'll perform later on (connecting two VMs), but if we want to use services like DHCP and load balancing later on, we'll need them.

Deploy edge VM

Deploy edge VM

Note the NSX config, where we set the switch name, the Transport Zones we created, the Uplink Profile and the IP pool, and finally use the newly created trunk segment for the Edge uplink.

NSX Edge config
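
For completeness, the whole Edge deployment can also be driven through the transport node API. The sketch below mirrors the wizard settings (form factor, management network, trunk segment for the data NICs, and the switch config with the transport zones, uplink profile and TEP pool). Every ID, MoRef and password is a placeholder, and the exact payload may differ slightly between NSX-T versions, so treat this as an outline rather than a copy-paste recipe.

```python
import requests

NSX = "https://192.168.150.229"
session = requests.Session()
session.auth = ("admin", "SuperSecret1!")       # example credentials
session.verify = False                          # lab only

edge = {
    "resource_type": "TransportNode",
    "display_name": "edge-01",
    "node_deployment_info": {
        "resource_type": "EdgeNode",
        "display_name": "edge-01",
        "ip_addresses": ["192.168.150.227"],    # management IP from the table above
        "deployment_type": "VIRTUAL_MACHINE",
        "deployment_config": {
            "form_factor": "MEDIUM",
            "node_user_settings": {"cli_password": "ChangeMe1!", "root_password": "ChangeMe1!"},
            "vm_deployment_config": {
                "placement_type": "VsphereDeploymentConfig",
                "vc_id": "<compute-manager-id>",
                "compute_id": "domain-c8",               # cluster MoRef (placeholder)
                "storage_id": "datastore-10",            # datastore MoRef (placeholder)
                "management_network_id": "dvportgroup-20",   # mgmt port group (placeholder)
                "data_network_ids": ["<trunk-segment-id>"],  # the NSX trunk segment
                "management_port_subnets": [
                    {"ip_addresses": ["192.168.150.227"], "prefix_length": 24}],
                "default_gateway_addresses": ["192.168.150.1"],  # placeholder gateway
            },
        },
    },
    "host_switch_spec": {
        "resource_type": "StandardHostSwitchSpec",
        "host_switches": [{
            "host_switch_name": "nsxvswitch",            # Edge switch name (my own choice)
            "host_switch_mode": "STANDARD",
            "host_switch_profile_ids": [
                {"key": "UplinkHostSwitchProfile", "value": "<uplink-profile-id>"}],
            "ip_assignment_spec": {
                "resource_type": "StaticIpPoolSpec", "ip_pool_id": "<tep-pool-id>"},
            "transport_zone_endpoints": [
                {"transport_zone_id": "<overlay-tz-id>"},
                {"transport_zone_id": "<vlan-tz-id>"}],
            "pnics": [{"device_name": "fp-eth0", "uplink_name": "uplink-1"}],
        }],
    },
}
session.post(f"{NSX}/api/v1/transport-nodes", json=edge).raise_for_status()
```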

Edge cluster

We'll also create an Edge cluster and add the Edge VM to it.

Edge cluster
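
Creating the Edge cluster and adding the Edge node can likewise be scripted; the member ID is the UUID of the Edge transport node created above (placeholder here).

```python
import requests

NSX = "https://192.168.150.229"
session = requests.Session()
session.auth = ("admin", "SuperSecret1!")       # example credentials
session.verify = False                          # lab only

cluster = {
    "display_name": "edge-cluster-01",
    "members": [{"transport_node_id": "<edge-transport-node-id>"}],
}
session.post(f"{NSX}/api/v1/edge-clusters", json=cluster).raise_for_status()
```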

Summary

Wow, this was a lot of configuring, but that was also the whole point of doing this blog post. Stuff like this is learned best by getting your hands dirty and doing some actual work. And I learn even better when I'm writing and documenting it as well.

In the next blog post we’ll test the fabric to see if what we’ve done is working. We’ll also try to get some external connectivity to our environment.

Hopefully this post can help someone, if not it has at least helped me.

Thanks for reading!

Special thanks to Rudi Martinsen for his blog post: https://rudimartinsen.com/2021/06/29/nsx-t-31-homelab/

VMware vSphere 6.7

VMware is announcing vSphere 6.7, the latest release of the industry-leading virtualization and cloud platform. vSphere 6.7 is the efficient and secure platform for hybrid clouds, fueling digital transformation by delivering simple and efficient management at scale, comprehensive built-in security, a universal application platform, and seamless hybrid cloud experience.

vSphere 6.7 delivers key capabilities to enable IT organizations to address the following notable trends that are putting new demands on their IT infrastructure:

  • Explosive growth in quantity and variety of applications, from business critical apps to new intelligent workloads.
  • Rapid growth of hybrid cloud environments and use cases.
  • On-premises data centers growing and expanding globally, including at the Edge.
  • Security of infrastructure and applications attaining paramount importance.

Let’s take a look at some of the key capabilities in vSphere 6.7:

Simple and Efficient Management, at Scale

vSphere 6.7 builds on the technological innovation delivered by vSphere 6.5, and elevates the customer experience to an entirely new level. It provides exceptional management simplicity, operational efficiency, and faster time to market, all at scale.

vSphere 6.7 delivers an exceptional experience for the user with an enhanced vCenter Server Appliance (vCSA). It introduces several new APIs that improve the efficiency and experience to deploy vCenter, to deploy multiple vCenters based on a template, to make management of vCenter Server Appliance significantly easier, as well as for backup and restore. It also significantly simplifies the vCenter Server topology through vCenter with embedded platform services controller in enhanced linked mode, enabling customers to link multiple vCenters and have seamless visibility across the environment without the need for an external platform services controller or load balancers.

Moreover, with vSphere 6.7 vCSA delivers phenomenal performance improvements (all metrics compared at cluster scale limits, versus vSphere 6.5):

  • 2X faster performance in vCenter operations per second
  • 3X reduction in memory usage
  • 3X faster DRS-related operations (e.g. power-on virtual machine)

These performance improvements ensure a blazing fast experience for vSphere users, and deliver significant value, as well as time and cost savings in a variety of use cases, such as VDI, Scale-out apps, Big Data, HPC, DevOps, distributed cloud native apps, etc.

vSphere 6.7 improves efficiency at scale when updating ESXi hosts, significantly reducing maintenance time by eliminating one of two reboots normally required for major version upgrades (Single Reboot). In addition to that, vSphere Quick Boot is a new innovation that restarts the ESXi hypervisor without rebooting the physical host, skipping time-consuming hardware initialization.

Another key component that allows vSphere 6.7 to deliver a simplified and efficient experience is the graphical user interface itself. The HTML5-based vSphere Client provides a modern user interface experience that is both responsive and easy to use. With vSphere 6.7, it includes added functionality to support not only the typical workflows customers need but also other key functionality like managing NSX, vSAN, VUM as well as third-party components.

Comprehensive Built-In Security

vSphere 6.7 builds on the security capabilities in vSphere 6.5 and leverages its unique position as the hypervisor to offer comprehensive security that starts at the core, via an operationally simple policy-driven model.

vSphere 6.7 adds support for Trusted Platform Module (TPM) 2.0 hardware devices and also introduces Virtual TPM 2.0, significantly enhancing protection and assuring integrity for both the hypervisor and the guest operating system. This capability helps prevent VMs and hosts from being tampered with, prevents the loading of unauthorized components and enables guest operating system security features security teams are asking for.

Data encryption was introduced with vSphere 6.5 and was very well received. With vSphere 6.7, VM Encryption is further enhanced and more operationally simple to manage. vSphere 6.7 simplifies workflows for VM Encryption, designed to protect data at rest and in motion, making it as easy as a right-click while also increasing the security posture of encrypting the VM and giving the user a greater degree of control to protect against unauthorized data access.

vSphere 6.7 also enhances protection for data in motion by enabling encrypted vMotion across different vCenter instances as well as versions, making it easy to securely conduct data center migrations, move data across a hybrid cloud environment (between on-premises and public cloud), or across geographically distributed data centers.

vSphere 6.7 introduces support for the entire range of Microsoft’s Virtualization Based Security technologies. This is a result of close collaboration between VMware and Microsoft to ensure Windows VMs on vSphere support in-guest security features while continuing to run performant and secure on the vSphere platform.

vSphere 6.7 delivers comprehensive built-in security and is the heart of a secure SDDC. It has deep integration and works seamlessly with other VMware products such as vSAN, NSX and vRealize Suite to provide a complete security model for the data center.

Universal Application Platform

vSphere 6.7 is a universal application platform that supports new workloads (including 3D Graphics, Big Data, HPC, Machine Learning, In-Memory, and Cloud-Native) as well as existing mission critical applications. It also supports and leverages some of the latest hardware innovations in the industry, delivering exceptional performance for a variety of workloads.

vSphere 6.7 further enhances the support and capabilities introduced for GPUs through VMware’s collaboration with Nvidia, by virtualizing Nvidia GPUs even for non-VDI and non-general-purpose-computing use cases such as artificial intelligence, machine learning, big data and more. With enhancements to Nvidia GRID™ vGPU technology in vSphere 6.7, instead of having to power off workloads running on GPUs, customers can simply suspend and resume those VMs, allowing for better lifecycle management of the underlying host and significantly reducing disruption for end-users. VMware continues to invest in this area, with the goal of bringing the full vSphere experience to GPUs in future releases.

vSphere 6.7 continues to showcase VMware’s technological leadership and fruitful collaboration with our key partners by adding support for a key industry innovation poised to have a dramatic impact on the landscape, which is persistent memory. With vSphere Persistent Memory, customers using supported hardware modules, such as those available from Dell-EMC and HPE, can leverage them either as super-fast storage with high IOPS, or expose them to the guest operating system as non-volatile memory. This will significantly enhance performance of the OS as well as applications across a variety of use cases, making existing applications faster and more performant and enabling customers to create new high-performance applications that can leverage vSphere Persistent Memory.

Seamless Hybrid Cloud Experience

With the fast adoption of vSphere-based public clouds through VMware Cloud Provider Program partners, VMware Cloud on AWS, as well as other public cloud providers, VMware is committed to delivering a seamless hybrid cloud experience for customers.

vSphere 6.7 introduces vCenter Server Hybrid Linked Mode, which makes it easy and simple for customers to have unified visibility and manageability across an on-premises vSphere environment running on one version and a vSphere-based public cloud environment, such as VMware Cloud on AWS, running on a different version of vSphere. This ensures that the fast pace of innovation and introduction of new capabilities in vSphere-based public clouds does not force the customer to constantly update and upgrade their on-premises vSphere environment.

vSphere 6.7 also introduces Cross-Cloud Cold and Hot Migration, further enhancing the ease of management across and enabling a seamless and non-disruptive hybrid cloud experience for customers.

As virtual machines migrate between different data centers or from an on-premises data center to the cloud and back, they likely move across different CPU types. vSphere 6.7 delivers a new capability that is key for the hybrid cloud, called Per-VM EVC. Per-VM EVC enables the EVC (Enhanced vMotion Compatibility) mode to become an attribute of the VM rather than the specific processor generation it happens to be booted on in the cluster. This allows for seamless migration across different CPUs by persisting the EVC mode per-VM during migrations across clusters and during power cycles.

Previously, vSphere 6.0 introduced provisioning between vCenter instances. This is often called “cross-vCenter provisioning.” The use of two vCenter instances introduces the possibility that the instances are on different release versions. vSphere 6.7 enables customers to use different vCenter versions while allowing cross-vCenter, mixed-version provisioning operations (vMotion, Full Clone and cold migrate) to continue seamlessly. This is especially useful for customers leveraging VMware Cloud on AWS as part of their hybrid cloud.

Learn More

As the ideal, efficient, secure universal platform for hybrid cloud, supporting new and existing applications, serving the needs of IT and the business, vSphere 6.7 reinforces your investment in VMware. vSphere 6.7 is one of the core components of VMware’s SDDC and a fundamental building block of your cloud strategy. With vSphere 6.7, you can now run, manage, connect, and secure your applications in a common operating environment, across your hybrid cloud.

This article only touched upon the key highlights of this release, but there are many more new features. To learn more about vSphere 6.7, please see the following resources.

Exchange 2016 Cumulative Update 9 and Exchange 2013 Cumulative Update 20

On March 20, 2018, Microsoft released two new quarterly updates:

  • Exchange 2016 Cumulative Update 9 (CU9)
  • Exchange 2013 Cumulative Update 20 (CU20)

TLS 1.2

There aren't too many new features in these CUs. The most important 'feature' is that TLS 1.2 is now fully supported (most likely you already have TLS 1.2 only on your load balancer). This is extremely important, since Microsoft will support only TLS 1.2 in Office 365 in the last quarter of this year (see the An Update on Office 365 Requiring TLS 1.2 Microsoft blog as well).

.NET Framework Support

Support for .NET Framework 4.7.1, or the ongoing story about the .NET Framework: .NET Framework 4.7.1 is fully supported by Exchange 2016 CU9 and Exchange 2013 CU20. Why is this important? For the upcoming CUs in three months (somewhere in June 2018) .NET Framework 4.7.1 is mandatory, so you need it to be installed in order to install those upcoming CUs.

Please note that .NET Framework 4.7 is NOT supported!

If you are currently running an older CU of Exchange, for example Exchange 2013 CU12, you have to make an intermediate upgrade to Exchange 2013 CU15. Then upgrade to .NET Framework 4.6.2 and then upgrade to Exchange 2013 CU20. If you are running Exchange 2016 CU3 or CU4, you can upgrade to .NET Framework 4.6.2 and then upgrade to Exchange 2016 CU9.

Schema changes

If you are coming from a recent Exchange 2013 CU, there are no schema changes since the schema version (rangeUpper = 15312) hasn’t changed since Exchange 2013 CU7. However, since there can be changes in (for example) RBAC, it’s always a good practice to run the Setup.exe /PrepareAD command. For Exchange 2016, the schema version (rangeUpper = 15332) hasn’t changed since Exchange 2016 CU7.

As always, check the new CUs in your lab environment before installing into your production environment!!

Exchange 2016 CU9 Information and download Links
Exchange 2013 CU20 Information and download Links

Exchange Server 2013 enters the Extended Support phase of product lifecycle on April 10th, 2018. During Extended Support, products receive only updates defined as Critical consistent with the Security Update Guide. For Exchange Server 2013, critical updates will include any required product updates due to time zone definition changes.

New Training Platform: Learn @ KEMP


Your gateway to becoming proficient in all things KEMP is here! We have recently launched our Learn @ KEMP Training Portal, making it easy for you to:

• Learn about KEMP’s Series of Load Balancers.
• Get certified at all levels ranging from Sales to Advanced Technical Training.
• Explore our wealth of resources, from our “Expert Series” webinars to detailed configuration templates.
• Engage with Support & Sales through chat, community forums, blogs, social media or just regular email.

Start your learning journey today! Register for Learn @ KEMP

Once you achieve certification at any level, you will see your Badge Status update in real time, and have access to your certificate in the “My Account” section. Moreover, you can share the news of you becoming KEMP certified on LinkedIn, Facebook, Twitter etc.

Learn at KEMP Training

If you are supporting, designing, implementing, configuring or managing a KEMP LoadMaster load balancer, consider making the KEMP Certified Engineer (KCE) your immediate certification goal.

However, if you are in sales and need to know just the basics you should aim to complete our Essentials training course and achieve your KEMP Certified Salesperson badge and certificate.

For the best learning experience, the Learn at KEMP training is structured so that you complete each course level and achieve your certifications, starting off with Essentials, before you move on to the next level.

Could you be one of the select few to achieve the GOLD standard of KEMP Certified Master?

Kemp LoadMaster for Free Awesome!! A Free Load Balancer for Any Workload

KEMP gives away the LoadMaster for free. Now the virtual appliance is available in a free edition too. It is available for all supported hypervisors (VMware, Hyper-V, etc.).

The free VLM has some limitations; for instance, the HA setup with an active and hot-standby unit is not supported. Another important limitation is that the free LoadMaster doesn't come with the awesome support paying customers receive. There are also some bandwidth and SSL TPS limitations, but all in all nothing major for most home, lab, testing and other non-production deployments.

The Free LoadMaster Includes:

  • Layer 4/7 load balancing
  • Content switching
  • Caching, compression engine
  • MS Exchange 2010/2013 optimized
  • Pre-configured virtual service templates
  • IPS engine
  • High Availability
  • Edge Security Pack (ESP) – a Microsoft TMG replacement
  • GSLB multi-site load balancing
  • RESTful API