Hyper-V Cluster – Setup Guide – Part 1


This post is part of a series on creating and managing a Hyper-V Cluster. This part covers the initial installation and part of the configuration.

As a best practice, Microsoft recommends that Hyper-V be deployed on Windows Server Core to minimize overhead and reduce the attack surface. Starting with Server 2012, the GUI can be removed or installed on demand. This part of the guide aims to be on the easy side; as such, we will start with the GUI installed for the initial configuration and remove it later. An advanced guide that uses Server Core from the start, with the appropriate PowerShell commands, will follow later on.

This guide assumes that you have already verified that your hardware components are certified to run Microsoft Failover Clusters. Items to check include the SAN firmware, fiber switches, fiber HBAs, the latest certified BIOS/UEFI, NIC firmware, device drivers, SAN compatibility with ODX, and so on. Verify this information with your vendors.

For the initial setup, it is recommended to boot the server you are preparing with the vendor’s latest bootable media, such as IBM ServerGuide, HP SmartStart or the equivalent for your hardware vendor. These contain the latest drivers for your particular hardware.

For the operating system, always aim for the Datacenter edition of Windows Server for the Hyper-V cluster. It allows unlimited Windows guest licenses:

Windows Server Editions - Datacenter - Hyper-V Cluster

After setup finishes, configure the following:

Disable RDP Printer redirection: This can be configured either locally via GPEdit or deployed to all the Hyper-V Cluster nodes with a GPO. The option can be found at Computer Configuration –> Policies –> Administrative Templates –> Windows Components –> Remote Desktop Services –> Remote Desktop Session Host –> Printer Redirection –> Do not allow client printer redirection. Set this option to Enabled.

Group Policy setting for configuring Remote Desktop Printer Redirection
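If you would rather script this on a node than open GPEdit, the setting corresponds to the fDisableCpm value under the Terminal Services policy key. A minimal sketch:

# Local equivalent of enabling "Do not allow client printer redirection"
$key = "HKLM:\SOFTWARE\Policies\Microsoft\Windows NT\Terminal Services"
New-Item -Path $key -Force | Out-Null
New-ItemProperty -Path $key -Name fDisableCpm -Value 1 -PropertyType DWord -Force | Out-Null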

Restrict Paging File: Windows dynamically adjusts the paging file by default, based partly on the installed physical RAM. High-end Hyper-V cluster nodes will usually have several hundred GB of installed memory, so limit the paging file to a fixed size of 16 GB or 32 GB. Don’t forget to click Set at the end.

Windows Paging File setup for a Hyper-V Cluster node
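The same change can be scripted with WMI/CIM. A minimal sketch, assuming a 16 GB fixed page file on C: (if Win32_PageFileSetting returns nothing right after disabling automatic management, create the entry or reboot first):

# Disable automatic page file management, then set a fixed size (values in MB)
Get-CimInstance Win32_ComputerSystem | Set-CimInstance -Property @{AutomaticManagedPagefile = $false}
Get-CimInstance Win32_PageFileSetting -Filter "Name='C:\\pagefile.sys'" |
    Set-CimInstance -Property @{InitialSize = 16384; MaximumSize = 16384}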

Other settings (a PowerShell sketch for these follows the list):

  1. Remote Desktop settings to allow remote administrator login
  2. Power options set to High Performance in Control Panel
  3. Date and time, if not set earlier during initial hardware setup. Set the proper time zone
  4. Host name, preferably something descriptive such as “<country code><server role><version><node number>” or whatever standard you follow in your datacenters
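Most of these can be scripted as well. A rough sketch; the host name and time zone below are examples only:

powercfg /setactive SCHEME_MIN                                   # High Performance power plan
tzutil /s "W. Europe Standard Time"                              # set the proper time zone
Set-ItemProperty "HKLM:\SYSTEM\CurrentControlSet\Control\Terminal Server" -Name fDenyTSConnections -Value 0   # allow RDP
Enable-NetFirewallRule -DisplayGroup "Remote Desktop"            # open the RDP firewall rules
Rename-Computer -NewName "BEHV12N1" -Restart                     # descriptive host name, example only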

Networking:

One of the most important parts of creating a Hyper-V cluster is networking. I will discuss a simple implementation in this part of the guide. Below is the minimum number of NICs that I would consider:

Live Migration: 1
VM Network: 1
Management: 1
CSV: 1
Heartbeat: 1
iSCSI: 1

This is the bare minimum. If iSCSI is not used, the number goes down to five. Nevertheless, Hyper-V is very flexible in this respect, and a Hyper-V cluster node can be run on a single NIC if desired, although that setup is strongly discouraged.

I will briefly discuss the role of every interface and mention some further recommendations for each:

  • Live Migration: This will handle Live Migration traffic when VMs are moved from one host to another. Microsoft recommends that Live Migration have its own dedicated NIC; 10 Gbps is recommended.
  • VM Network: This will handle the traffic for the virtual machines via Virtual Switches. Depending on the workload and interface speed, more than one NIC will be needed. I would recommend a minimum of two NICs teamed for throughput and failover purposes. As above, the recommendation is that this functionality has its own NIC(s).
  • Management: All management tasks will be performed over this interface, such as Hyper-V Manager, VMM agent connectivity, Failover Cluster Management, etc. Parent partition traffic will also pass through this interface. 1 Gbps should be enough; teaming is preferred.
  • CSV: Redirected I/O traffic will pass through this interface. In normal conditions this interface will remain idle and all I/O should be in direct mode. Redirected I/O occurs when a host’s direct connectivity to the storage fails or a CSV is placed in backup mode. A lesser-known fact is that redirected I/O also occurs when a VM’s dynamic VHD needs to be expanded and the node currently hosting the VM is not the owner of the CSV where the VHD is stored. A dedicated 10 Gbps NIC is recommended.
  • Heartbeat: Cluster heartbeats will be sent over this interface. A dedicated NIC is recommended, though the recent trend has been to combine this functionality with the management interface.
  • iSCSI: If iSCSI will be used, this is the interface that will handle the IP storage traffic. Again, the recommendation is dedicated NICs for this connectivity, with a minimum of two NICs in MPIO mode over redundant paths. While supported, NIC teaming should be avoided for iSCSI. Consult your storage vendor for additional best practices specific to your hardware.

The trend these days is all about converged infrastructure, where the physical interfaces for storage and network are combined. I will stick with the traditional approach for this guide and discuss converged infrastructures in a later article. Note also that if teaming will be used, the teams need to be created before installing and configuring the Hyper-V role, and that teaming disables the NIC’s RDMA capability.

For the purpose of this guide, Fiber Channel for storage and four NICs for connectivity will be used. Additional configuration will need to be done on the network interfaces once the Hyper-V cluster is up and running.
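To keep things tidy, identify the four interfaces and rename them to match their roles before going further. A sketch; the original adapter names below are examples, so check your own Get-NetAdapter output first:

# List the adapters, then rename them to match their roles
Get-NetAdapter | Sort-Object Name | Format-Table Name, InterfaceDescription, LinkSpeed
Rename-NetAdapter -Name "Ethernet"   -NewName "Management"
Rename-NetAdapter -Name "Ethernet 2" -NewName "LM-CSV"
Rename-NetAdapter -Name "Ethernet 3" -NewName "VM1"
Rename-NetAdapter -Name "Ethernet 4" -NewName "VM2"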

Management/Heartbeat: Rename the interface to Management and set the IP address, gateway and DNS servers. Make sure that this NIC is listed first in the network binding order by going to Network Connections -> Change adapter settings -> Advanced -> Advanced Settings. DNS should use this NIC for dynamic registration, and both Client for Microsoft Networks and File and Printer Sharing for Microsoft Networks should be enabled:

Screenshot showing the binding order of the Management NIC for a Hyper-V cluster node
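The IP and DNS part can be scripted as well; the binding order and protocol bindings still need the GUI steps described above. A sketch with example addresses:

New-NetIPAddress -InterfaceAlias "Management" -IPAddress 10.0.1.11 -PrefixLength 24 -DefaultGateway 10.0.1.1
Set-DnsClientServerAddress -InterfaceAlias "Management" -ServerAddresses 10.0.1.5,10.0.1.6
Set-DnsClient -InterfaceAlias "Management" -RegisterThisConnectionsAddress $true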

VM Network: We will use two teamed NICs for this functionality. From Server Manager, open NIC Teaming, select the two NICs for this purpose and create a new team. Set it to Switch Independent, Hyper-V Port:

Screenshot showing the VM Network Team creation to be used for the Hyper-V Cluster
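The same team can be created from PowerShell. A sketch using the example adapter names from the renaming step earlier and an example team name:

New-NetLbfoTeam -Name "VMTeam" -TeamMembers "VM1","VM2" -TeamingMode SwitchIndependent -LoadBalancingAlgorithm HyperVPort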

In this mode, Hyper-V will load balance the traffic between the two physical interfaces based on the VM’s MAC address. Preferably, connect each NIC to a different physical switch for fault tolerance. The switch ports should be trunk ports, and Hyper-V will tag packets with the appropriately configured VLAN. Note that some switches support stacking, which combines several physical switches into one logical switch; this allows you to use one of the Switch Dependent modes (Static Team or LACP) across two separate physical paths, giving you fault tolerance as well as link aggregation. Discuss this with your network administrator.
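Once the Hyper-V role and the external virtual switch are in place (later in this series), per-VM VLAN tagging can be configured like this; the VM name and VLAN ID are examples only:

Set-VMNetworkAdapterVlan -VMName "TestVM" -Access -VlanId 20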

Set an IP address for the team interface and rename the interface to something more descriptive. DNS servers, the gateway and DNS dynamic registration should not be set. Moreover, NetBIOS, File and Printer Sharing for Microsoft Networks and Client for Microsoft Networks should be disabled. Virtual Machine Queues should be enabled, either from the NIC’s device properties in Device Manager or by using the PowerShell cmdlet:

# Adapter names are examples; run this against the physical NICs that are team members
Set-NetAdapterVmq -Name "VM1","VM2" -Enabled $true

Once VMQ is enabled for all the individual physical adapters that are part of the team, it will be enabled automatically for the teamed interface. Confirm it is enabled by running Get-NetAdapterVmq. At this point, it may show that VMQ is still disabled; this is normal since we have not installed Hyper-V yet. The correct status will be reflected once the Hyper-V role is installed:

Screenshot showing PowerShell results for Get-NetAdapterVMQ status of true and false

Live Migration/CSV: Starting with Windows Server 2012 and SMB 3.0, CSV traffic uses SMB Multichannel, and starting with Server 2012 R2, Live Migration can use SMB Multichannel as well. This leverages multiple interfaces for speed and resiliency, and it can also improve performance in single-NIC scenarios. Unlike teaming, RSS and RDMA can still be used if the NIC supports them. Dedicating one NIC to CSV and one to Live Migration, combined with SMB Multichannel and Live Migration capping, will provide fault tolerance and link aggregation for this traffic. More on that in a later article.

For now, let’s cap Live Migration traffic to avoid saturating the link. We will combine Live Migration with CSV functionality for this guide. Since we are using just one NIC for both, we lose some redundancy; this is still a Microsoft-supported scenario, though.

Set-SmbBandwidthLimit is a new cmdlet in Server 2012 R2 that can limit SMB traffic by category, Live Migration among them. Since the link is 10 Gbps (roughly 1,250 MB/s), we will cap Live Migration at about half of that, 600 MB/s:

Set-SmbBandwidthLimit -BytesPerSecond 600MB -Category LiveMigration

If the cmdlet is not available, install the SMB Bandwidth Limit feature from Server Manager. For Hyper-V clusters running on older versions of Windows Server, QoS policies can be used to achieve throttling, as discussed here.
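The feature can also be added from PowerShell; FS-SMBBW is the name it appears under on Server 2012 R2 (confirm with Get-WindowsFeature):

Install-WindowsFeature FS-SMBBW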

Set an appropriate IP address for the interface and rename it to something more descriptive. DNS servers, the gateway and DNS dynamic registration should not be set.
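A sketch for this interface with an example address; note that no gateway or DNS servers are set and dynamic registration is turned off:

New-NetIPAddress -InterfaceAlias "LM-CSV" -IPAddress 10.0.2.11 -PrefixLength 24
Set-DnsClient -InterfaceAlias "LM-CSV" -RegisterThisConnectionsAddress $false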

That’s it for this part. In the next part, I will finish the configuration and install the required roles and features. Following that, we will create and configure the Hyper-V Cluster.


