HyperV Monitor 1.8 Released

HyperV_Mon screen shot

HyperV_Mon is my free tool for monitoring and diagnosing performance issues with Hyper-V.

After prompting from a user with a lot more processors than the test systems here, I updated the HyperV_Mon tool to handle resizing in general and to adjust the root partition processor display. This new version is a minor release (1.8.0).

A version 2.0 that looks at the Windows Server 2008 R2 SP1 changes is in the works and should come out in Q1 of next year.

Download it HERE

Exchange 2010 Tested Solutions


Microsoft has published documented examples of well-designed, cost-effective Exchange 2010 solutions deployed on hardware from Microsoft partners.

9000 Mailboxes in Two Sites Running Hyper-V on Dell M610 Servers, Dell EqualLogic Storage, and F5 Load Balancing Solutions

16000 Mailboxes in a Single Site Deployed on IBM and Brocade Hardware 

500 Mailboxes in a Single Site Running Hyper-V on Dell Servers

It’s really nice info to read if you are designing an Exchange 2010 solution for your company or customer.

VMMUpdate Script to Check if all Hyper-V hosts & SCVMM Server are Up to Date

Jonathan has created a nice script that checks which updates are missing from your Hyper-V hosts and SCVMM server.

What updates?

Updates are regularly released for SCVMM Server, Hosts, and the Admin Console. These updates must be applied to all Hosts no matter how many you have. Updates are also released for technologies SCVMM leverages:

  • Windows
  • Hyper-V
  • Failover Cluster

As well as components SCVMM cannot function without:

  • WinRM
  • BITS
  • WMI
  • VDS
  • VSS

The difficulty is in making sure all systems are fully updated. This is a time-consuming task.

WSUS takes care of this for me…

Not necessarily. There are certain Hotfixes that need to be downloaded manually, but for the most part Windows Update is the key. WSUS is Microsoft’s solution to distributing Windows Updates within an enterprise, and this pulls from Windows Update as well. Unfortunately, rules in WSUS are sometimes set up such that all updates required do not find their way to SCVMM systems. So, there are layers of complexity in keeping systems up to date.
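
If you just want to spot-check a single host by hand before running the full script, the built-in Get-HotFix cmdlet can tell you whether a specific update is installed. This is only a minimal sketch; the KB number and host name below are placeholders, not real values from the script:

# Is a specific hotfix installed on a given Hyper-V host?
# KB number and host name are placeholders only.
Get-HotFix -Id KB123456 -ComputerName HYPERV-HOST01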

Prevent problems with VMMUpdate

With this script you now know whether your Hyper-V hosts and SCVMM server are up to date.

To download the latest version, follow the link HERE


System Center Data Protection Manager 2010 Monitoring Management Pack

The management pack monitors the health status of System Center Data Protection Manager 2010 and its components. It alerts the administrator on critical health states and provides break-fix tasks for taking corrective action.

The following alerts are new in this release of the DPM 2010 Management Pack:

  • Backup metadata enumeration failed
  • Agent ownership required
  • Replica allocated and initial replication scheduled
  • Share path changed
  • Duplicate disks detected
  • VHD parent locator fix-up failed
  • Virtual machine metadata enumeration failed
  • VHD parent locator fix-up canceled
  • SharePoint Item Level Catalog failed
  • Backup without writer metadata
  • Customer Feedback opt-in
  • Backup SLA failed
  • Hyper-V Recovery Success
  • Global DPMDB Database Not Accessible alert notification
  • StagingAreaRestore in-progress
  • StagingAreaRestore success
  • StagingAreaRestore partial success
  • StagingAreaRestore failure
  • Auto Instance Protection failed
  • DPM Online Recovery Point creation failures
  • DPM Online Cache volume is missing
  • Partial Backup success
  • Library devices were disabled

You can download the System Center Data Protection Manager 2010 Monitoring Management Pack HERE

Data Protection Manager 2010 Operations Guide


Microsoft released a nice manual for monitoring and managing DPM servers and tape libraries, and protected computers that are running Microsoft Exchange Server, Microsoft SQL Server, Windows SharePoint Services, Microsoft Virtual Server, or the Hyper-V role in Windows Server 2008 or Windows Server 2008 R2. This guide also provides instructions for setting up protection of data on desktop computers that are connected to the network, and portable computers that are connected to the network intermittently, and for setting up disaster recovery.

Download the Manual

Storage Calculators for System Center Data Protection Manager 2010

Microsoft has released some new sizing calculators for DPM 2010.

DPM_2010_Storage_Calculator_for_Exchange_2010.xlsx

DPM_2010_Storage_Calculator_for_Hyper-V.xlsx

DPM_2010_Storage_Calculator_for_SharePoint.xlsx

VMware vSphere 4.1 Released

WHAT’S NEW:

Installation and Deployment

Storage

  • Boot from SAN. vSphere 4.1 enables ESXi boot from SAN (BFS). iSCSI, FCoE, and Fibre Channel boot are supported. Refer to the Hardware Compatibility Guide for the latest list of NICs and Converged Adapters that are supported with iSCSI boot. See the iSCSI SAN Configuration Guide and the Fibre Channel SAN Configuration Guide.
  • Hardware Acceleration with vStorage APIs for Array Integration (VAAI). ESX can offload specific storage operations to compliant storage hardware. With storage hardware assistance, ESX performs these operations faster and consumes less CPU, memory, and storage fabric bandwidth. See the ESX Configuration Guide and the ESXi Configuration Guide.
  • Storage Performance Statistics. vSphere 4.1 offers enhanced visibility into storage throughput and latency of hosts and virtual machines, and aids in troubleshooting storage performance issues. NFS statistics are now available in vCenter Server performance charts, as well as esxtop. New VMDK and datastore statistics are included. All statistics are available through the vSphere SDK. See the vSphere Datacenter Administration Guide.
  • Storage I/O Control. This feature provides quality-of-service capabilities for storage I/O in the form of I/O shares and limits that are enforced across all virtual machines accessing a datastore, regardless of which host they are running on. Using Storage I/O Control, vSphere administrators can ensure that the most important virtual machines get adequate I/O resources even in times of congestion. See the vSphere Resource Management Guide.
  • iSCSI Hardware Offloads. vSphere 4.1 enables 10Gb iSCSI hardware offloads (Broadcom 57711) and 1Gb iSCSI hardware offloads (Broadcom 5709). See the ESX Configuration Guide, the ESXi Configuration Guide, and the iSCSI SAN Configuration Guide.
  • NFS Performance Enhancements. Networking performance for NFS has been optimized to improve throughput and reduce CPU usage. See the ESX Configuration Guide and the ESXi Configuration Guide.

Network

Availability

  • Windows Failover Clustering with VMware HA. Clustered Virtual Machines that utilize Windows Failover Clustering/Microsoft Cluster Service are now fully supported in conjunction with VMware HA. See Setup for Failover Clustering and Microsoft Cluster Service.
  • VMware HA Scalability Improvements. VMware HA has the same limits for virtual machines per host, hosts per cluster, and virtual machines per cluster as vSphere. See Configuration Maximums for VMware vSphere 4.1 for details about the limitations for this release.
  • VMware HA Healthcheck and Operational Status. The VMware HA dashboard in the vSphere Client provides a new detailed window called Cluster Operational Status. This window displays more information about the current VMware HA operational status, including the specific status and errors for each host in the VMware HA cluster. See the vSphere Availability Guide.
  • VMware Fault Tolerance (FT) Enhancements. vSphere 4.1 introduces an FT-specific versioning-control mechanism that allows the Primary and Secondary VMs to run on FT-compatible hosts at different but compatible patch levels. vSphere 4.1 differentiates between events that are logged for a Primary VM and those that are logged for its Secondary VM, and reports why a host might not support FT. In addition, you can disable VMware HA when FT-enabled virtual machines are deployed in a cluster, allowing for cluster maintenance operations without turning off FT. See the vSphere Availability Guide.
  • DRS Interoperability for VMware HA and Fault Tolerance (FT). FT-enabled virtual machines can take advantage of DRS functionality for load balancing and initial placement. In addition, VMware HA and DRS are tightly integrated, which allows VMware HA to restart virtual machines in more situations. See the vSphere Availability Guide.
  • Enhanced Network Logging Performance. Fault Tolerance (FT) network logging performance allows improved throughput and reduced CPU usage. In addition, you can use vmxnet3 vNICs in FT-enabled virtual machines. See the vSphere Availability Guide.
  • Concurrent VMware Data Recovery Sessions. vSphere 4.1 provides the ability to concurrently manage multiple VMware Data Recovery appliances. See the VMware Data Recovery Administration Guide.
  • vStorage APIs for Data Protection (VADP) Enhancements. VADP now offers VSS quiescing support for Windows Server 2008 and Windows Server 2008 R2 servers. This enables application-consistent backup and restore operations for Windows Server 2008 and Windows Server 2008 R2 applications.

Management

  • vCLI Enhancements. vCLI adds options for SCSI, VAAI, network, and virtual machine control, including the ability to terminate an unresponsive virtual machine. In addition, vSphere 4.1 provides controls that allow you to log vCLI activity. See the vSphere Command-Line Interface Installation and Scripting Guide and the vSphere Command-Line Interface Reference.
  • Lockdown Mode Enhancements. VMware ESXi 4.1 lockdown mode allows the administrator to tightly restrict access to the ESXi Direct Console User Interface (DCUI) and Tech Support Mode (TSM). When lockdown mode is enabled, DCUI access is restricted to the root user, while access to Tech Support Mode is completely disabled for all users. With lockdown mode enabled, access to the host for management or monitoring using CIM is possible only through vCenter Server. Direct access to the host using the vSphere Client is not permitted. See the ESXi Configuration Guide.
  • Access Virtual Machine Serial Ports Over the Network. You can redirect virtual machine serial ports over a standard network link in vSphere 4.1. This enables solutions such as third-party virtual serial port concentrators for virtual machine serial console management or monitoring. See the vSphere Virtual Machine Administration Guide.
  • vCenter Converter Hyper-V Import. vCenter Converter allows users to point to a Hyper-V machine. Converter displays the virtual machines running on the Hyper-V system, and users can select a powered-off virtual machine to import to a VMware destination. See the vCenter Converter Installation and Administration Guide.
  • Enhancements to Host Profiles. You can use Host Profiles to roll out administrator password changes in vSphere 4.1. Enhancements also include improved Cisco Nexus 1000V support and PCI device ordering configuration. See the ESX Configuration Guide and the ESXi Configuration Guide.
  • Unattended Authentication in vSphere Management Assistant (vMA). vMA 4.1 offers improved authentication capability, including integration with Active Directory and commands to configure the connection. See VMware vSphere Management Assistant.
  • Updated Deployment Environment in vSphere Management Assistant (vMA). The updated deployment environment in vMA 4.1 is fully compatible with vMA 4.0. A significant change is the transition from RHEL to CentOS. See VMware vSphere Management Assistant.
  • vCenter Orchestrator 64-bit Support. vCenter Orchestrator 4.1 provides a client and server for 64-bit installations, with an optional 32-bit client. The performance of the Orchestrator server on 64-bit installations is greatly enhanced, as compared to running the server on a 32-bit machine. See the vCenter Orchestrator Installation and Configuration Guide.
  • Improved Support for Handling Recalled Patches in vCenter Update Manager. Update Manager 4.1 immediately sends critical notifications about recalled ESX and related patches. In addition, Update Manager prevents you from installing a recalled patch that you might have already downloaded. This feature also helps you identify hosts where recalled patches might already be installed. See the vCenter Update Manager Installation and Administration Guide.
  • License Reporting Manager. The License Reporting Manager provides a centralized interface for all license keys for vSphere 4.1 products in a virtual IT infrastructure and their respective usage. You can view and generate reports on license keys and usage for different time periods with the License Reporting Manager. A historical record of the utilization per license key is maintained in the vCenter Server database. See the vSphere Datacenter Administration Guide.
  • Power Management Improvements. ESX 4.1 takes advantage of deep sleep states to further reduce power consumption during idle periods. The vSphere Client has a simple user interface that allows you to choose one of four host power management policies. In addition, you can view the history of host power consumption and power cap information on the vSphere Client Performance tab on newer platforms with integrated power meters. See the vSphere Datacenter Administration Guide.

Platform Enhancements

  • Performance and Scalability Improvements. vSphere 4.1 includes numerous enhancements that increase performance and scalability.
    • vCenter Server 4.1 can support three times more virtual machines and hosts per system, as well as more concurrent instances of the vSphere Client and a larger number of virtual machines per cluster than vCenter Server 4.0. The scalability limits of Linked Mode, vMotion, and vNetwork Distributed Switch have also increased.
    • New optimizations have been implemented for AMD-V and Intel VT-x architectures, while memory utilization efficiency has been improved still further using Memory Compression. Storage enhancements have led to significant performance improvements in NFS environments. VDI operations, virtual machine provisioning and power operations, and vMotion have enhanced performance as well.

    See Configuration Maximums for VMware vSphere 4.1.

  • Reduced Overhead Memory. vSphere 4.1 reduces the amount of overhead memory required, especially when running large virtual machines on systems with CPUs that provide hardware MMU support (AMD RVI or Intel EPT).
  • DRS Virtual Machine Host Affinity Rules. DRS provides the ability to set constraints that restrict placement of a virtual machine to a subset of hosts in a cluster. This feature is useful for enforcing host-based ISV licensing models, as well as keeping sets of virtual machines on different racks or blade systems for availability reasons. See the vSphere Resource Management Guide.
  • Memory Compression. Compressed memory is a new level of the memory hierarchy, between RAM and disk. Slower than memory, but much faster than disk, compressed memory improves the performance of virtual machines when memory is under contention, because less virtual memory is swapped to disk. See the vSphere Resource Management Guide.
  • vMotion Enhancements. In vSphere 4.1, vMotion enhancements significantly reduce the overall time for host evacuations, with support for more simultaneous virtual machine migrations and faster individual virtual machine migrations. The result is a performance improvement of up to 8x for an individual virtual machine migration, and support for four to eight simultaneous vMotion migrations per host, depending on the vMotion network adapter (1GbE or 10GbE respectively). See the vSphere Datacenter Administration Guide.
  • ESX/ESXi Active Directory Integration. Integration with Microsoft Active Directory allows seamless user authentication for ESX/ESXi. You can maintain users and groups in Active Directory for centralized user management and you can assign privileges to users or groups on ESX/ESXi hosts. In vSphere 4.1, integration with Active Directory allows you to roll out permission rules to hosts by using Host Profiles. See the ESX Configuration Guide and the ESXi Configuration Guide.
  • Configuring USB Device Passthrough from an ESX/ESXi Host to a Virtual Machine. You can configure a virtual machine to use USB devices that are connected to an ESX/ESXi host where the virtual machine is running. The connection is maintained even if you migrate the virtual machine using vMotion. See the vSphere Virtual Machine Administration Guide.
  • Improvements in Enhanced vMotion Compatibility. vSphere 4.1 includes an AMD Opteron Gen. 3 (no 3DNow!™) EVC mode that prepares clusters for vMotion compatibility with future AMD processors. EVC also provides numerous usability improvements, including the display of EVC modes for virtual machines, more timely error detection, better error messages, and the reduced need to restart virtual machines. See the vSphere Datacenter Administration Guide.

Partner Ecosystem

  • vCenter Update Manager Support for Provisioning, Patching, and Upgrading EMC’s ESX PowerPath Module. vCenter Update Manager can provision, patch, and upgrade third-party modules that you can install on ESX, such as EMC’s PowerPath multipathing software. Using the capability of Update Manager to set policies using the Baseline construct and the comprehensive Compliance Dashboard, you can simplify provisioning, patching, and upgrade of the PowerPath module at scale. See the vCenter Update Manager Installation and Administration Guide.
  • User-configurable Number of Virtual CPUs per Virtual Socket. You can configure virtual machines to have multiple virtual CPUs reside in a single virtual socket, with each virtual CPU appearing to the guest operating system as a single core. Previously, virtual machines were restricted to having only one virtual CPU per virtual socket. See the vSphere Virtual Machine Administration Guide.
  • Expanded List of Supported Processors. The list of supported processors has been expanded for ESX 4.1. To determine which processors are compatible with this release, use the Hardware Compatibility Guide. Among the supported processors is the Intel Xeon 7500 Series processor, code-named Nehalem-EX (up to 8 sockets).

You can download VMware vSphere 4.1 HERE

Database Availability Group (DAG) in Exchange 2010

One of the new features of Exchange 2010 is the Database Availability Group (DAG). The customer where I work now wants Exchange 2010 in a DAG because they have a datacenter for failback.
Because I am going to implement Exchange 2010 at this customer, I created a test setup.

Configuration:

Server 1 – HYPERVDC-01
OS: Microsoft Windows 2008 R2 Standard x64
IP: 192.168.150.90
Roles: Active Directory / Hyper-V

Server 2 – CHEK10-01
OS: Microsoft Windows 2008 R2 Standard x64
IP: 192.168.150.91
Roles: Exchange 2010 HT / CAS

Server 3 – CHEK10-02
OS: Microsoft Windows 2008 R2 Standard x64
IP: 192.168.150.92
Roles: Exchange 2010 HT / CAS

Server 4 – DAGEK10-01
OS: Microsoft Windows 2008 R2 Enterprise x64
IP: 192.168.150.93
Roles: Exchange 2010 MBX

Server 5 – DAGEK10-02
OS: Microsoft Windows 2008 R2 Enterprise x64
IP: 192.168.150.94
Roles: Exchange 2010 MBX

Creating the DAG

clip_image002
clip_image004

Groupname: DAG01
Witness Server: CHEK10-01 (Microsoft says to use one of the CAS or Hub Transport servers. You cannot use a DAG member! If you want to use a non-Exchange 2010 server, you must add the Exchange Trusted Subsystem group to the local Administrators group.)
Witness Directory: C:\DAG01
clip_image006
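
For reference, the same DAG can also be created from the Exchange Management Shell instead of the wizard; a minimal sketch using the values above:

# Create the DAG with the witness settings shown above
New-DatabaseAvailabilityGroup -Name DAG01 -WitnessServer CHEK10-01 -WitnessDirectory C:\DAG01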

Add a Mailbox server to the DAG

clip_image008clip_image010

clip_image012clip_image014

clip_image016clip_image018

clip_image020
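
The same step can be done from the Exchange Management Shell; a sketch using the two Mailbox servers from the lab configuration above:

# Add both Mailbox servers to the DAG
Add-DatabaseAvailabilityGroupServer -Identity DAG01 -MailboxServer DAGEK10-01
Add-DatabaseAvailabilityGroupServer -Identity DAG01 -MailboxServer DAGEK10-02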

Setting an IP address on a Database Availability Group

With the following command you can assign an IP address to the DAG:

Set-DatabaseAvailabilityGroup -Identity DAG01 -DatabaseAvailabilityGroupIpAddresses 192.168.150.96
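
Afterwards you can verify the DAG configuration (witness, members and IP address) with a quick query, for example:

# Show the current DAG configuration and status
Get-DatabaseAvailabilityGroup -Identity DAG01 -Status | Format-List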

Configuring Client Access Array for Exchange 2010

When you want to use the Client Access Array function in Exchange 2010, you have two options:
1. Use the NLB function in Windows. Check the article I blogged: Configuring NLB for Exchange 2010 for CAS Load Balancing.
2. Use two physical load balancers in combination with a DAG.

I don’t have any pre-created CAS arrays in my hyper-v.local domain, but you should check whether any already exist. Run the command below; if you haven’t created a CAS array before, it will return nothing.

Get-ClientAccessArray
clip_image002

Then you should create a new Client Access Array. Run the cmdlet below in the Exchange Management Shell:

New-ClientAccessArray -Name "CasArray1" -Fqdn casarray.hyper-v.local -Site "Default-First-Site-Name"

clip_image004
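
You can re-run Get-ClientAccessArray to confirm that the array was created; for example (the property names in Format-List are a best guess at the most useful ones):

# Confirm the new CAS array exists
Get-ClientAccessArray | Format-List Name,Fqdn,Site,Members
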
Now we have finished creating the CAS array. Next we must associate the mailbox databases with it.
Use the cmdlet below to add the mailbox databases to the CAS array; we can attach all mailbox databases at once, as shown below:

Get-MailboxDatabase | Set-MailboxDatabase -RPCClientAccessServer "casarray.hyper-v.local"
clip_image006
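
To confirm that every database now points at the array, list the RpcClientAccessServer value per database; a quick check could look like this:

# Each database should now show casarray.hyper-v.local
Get-MailboxDatabase | Format-Table Name,RpcClientAccessServer -AutoSize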

Configuring NLB for Exchange 2010 for CAS Load Balancing

Exchange’s dependence on the Client Access Server (CAS) role has increased dramatically in Exchange 2010. This is because, in Exchange 2010, on-network Outlook MAPI connectivity now connects to a mailbox through the CAS role via the RPC Client Access service. As a result, high availability of the CAS role is crucial, since any failure of CAS could affect Outlook client connectivity. For smaller implementations, or those where the limitations of native Windows Network Load Balancing (NLB) are not a major problem, Windows NLB is a straightforward way to provide that high availability.
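
A quick way to see that the RPC Client Access service is up on the CAS servers (assuming the default service name MSExchangeRPC and the lab server names used in this post):

# Check the RPC Client Access service on both CAS servers
Get-Service -Name MSExchangeRPC -ComputerName CHEK10-01,CHEK10-02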

Once two or more Exchange 2010 servers (each with two NICs) with the CAS role installed have been deployed, you are ready to start configuring NLB to provide high availability and load balancing. First, you must allocate a dedicated private IP address and create an associated A record in DNS for the NLB cluster.

This IP address and name are what clients will connect to and against which the ClientAccessArray will be created. In this blog post, I will use 192.168.150.95 and casarray.hyper-v.local.
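
If you prefer to create that A record from the command line instead of the DNS console, dnscmd can do it. A sketch, assuming DNS runs on the domain controller HYPERVDC-01 and the zone is hyper-v.local:

# Create the A record for the NLB cluster name
dnscmd HYPERVDC-01 /recordadd hyper-v.local casarray A 192.168.150.95
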
To simplify the management of your NLB cluster members, I recommend that you name each NIC’s network connection so that it is easy to understand what function the NIC serves.  For example, as depicted below, I have named the connections “LAN” (used for communication with clients and servers on the network) and “NLB” (used for internal NLB heartbeat).  This process should be repeated on all NLB cluster members.

IP configuration:
Server 1:
LAN:
IP: 192.168.150.90
Subnetmask: 255.255.255.0
Gateway: 192.168.150.254
DNS: 192.168.150.1

Server 2:
LAN:
IP: 192.168.150.91
Subnetmask: 255.255.255.0
Gateway: 192.168.150.254
DNS: 192.168.150.1
clip_image002

 

Configuring NLB – First Member

On each NLB cluster member, NLB must be installed.  With Windows 2008 R2, this can be completed simply by running the command “ServerManagerCmd -i NLB” via a command prompt.  Once NLB has been installed, launch the Network Load Balancing Manager to continue the configuration process.
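
If you prefer PowerShell over ServerManagerCmd, the same installation can be done with the Server Manager module; a sketch, assuming RSAT-NLB is the feature name for the NLB management tools on Windows Server 2008 R2:

# Install Network Load Balancing and its management tools
Import-Module ServerManager
Add-WindowsFeature NLB, RSAT-NLB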

clip_image003[1]

To create your new cluster, you can right-click Network Load Balancing Clusters or simply click Cluster, New.  In the New Cluster wizard, enter the name of the first server in the NLB cluster (for example, CHEK10-01) and click Connect.  This will display the available NICs on the server, at which point the NLB NIC should be chosen before clicking Next.

clip_image005

Since this is the first member of the NLB cluster, you can leave all of the Host Parameters at their default values, as depicted below. Please note that the Priority value should be configured as 1 for the first member.

clip_image007

Next we must configure the IP address and subnet mask of the NLB cluster, which is the IP address for which we created a DNS A record at the very beginning of this process.  In this example, this would be 192.168.150.95 and 255.255.255.0, respectively.

clip_image009

For the Cluster Parameters, we want to enter the FQDN of the DNS A record we created at the very beginning of this process (casarray.hyper-v.local).  In addition, Unicast should be selected as the desired clustered operation mode.

clip_image011

I leave the Port Rules as they are and finish the wizard by clicking Finish.
clip_image013

Let the NLB cluster converge with its first member and you should eventually see the cluster report success.

clip_image015[1]

Now you can proceed with adding your second cluster member.

Configuring NLB – Second/Subsequent Member

After the configuration of the NLB cluster itself and the first NLB cluster member has been completed, you are ready to add additional members.  Provided that NLB has been installed, you can simply right-click on your NLB cluster in the Network Load Balancing Manager and click Add Host To Cluster.

Enter the name of the second NLB cluster member, for example CHEK10-02, and click Connect. Be sure to choose the NLB NIC and click Next.

clip_image017

On the Host Parameters screen, ensure that the Priority is set to 2 (or as appropriate, depending on how many cluster members you have) and click Next.

clip_image019

Confirm that your port rules are accurate and, if they are, click Finish to add your second NLB cluster member.

clip_image021

Let the NLB cluster converge with the new member and, eventually, it should report success.

clip_image023

At this point, you have an NLB cluster with two members!

Next, configure the CAS array as described in the Client Access Array section above.
