Hyper-V Failover Cluster Setup Prep for SCVMM Integration
Method of Procedure (MOP) for setting up a 2-node Hyper-V Failover Cluster with 2x10GB NICs in SET (Switch Embedded Teaming) for the VM Network, and 2x1GB NICs for Management.
MOP: Setting up a 2-Node Hyper-V Failover Cluster with SET for the VM Network and 1GB NICs for Management
1. Prerequisites:
· Two servers with Windows Server 2022 Datacenter, joined to the same Active Directory domain (standard for a failover cluster and required for Kerberos-based Live Migration).
· 4 network adapters per server:
o 2x10GB NICs for VM and Live Migration traffic.
o 2x1GB NICs for general management of the OS.
· Shared storage (for Cluster Shared Volumes, or CSV) or a shared network location.
· Network infrastructure that supports VLAN tagging (optional but recommended for traffic segregation).
· Administrative access to each Hyper-V host.
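Adapter names differ between servers and drivers, so before configuring anything it helps to inventory the NICs and give them predictable names. A minimal sketch, assuming you rename the two 10GB adapters to Ethernet0/Ethernet1 and the two 1GB adapters to MGMT1/MGMT2 (the hypothetical names used throughout this MOP; the -Name values below are placeholders for whatever your hardware reports):
# List all adapters with link speed so the 10GB and 1GB ports can be told apart
Get-NetAdapter | Sort-Object Name | Format-Table Name, InterfaceDescription, LinkSpeed, Status
# Hypothetical renames; match -Name to your actual adapter names from the output above
Rename-NetAdapter -Name "SLOT 2 Port 1" -NewName "Ethernet0"   # 10GB, SET member
Rename-NetAdapter -Name "SLOT 2 Port 2" -NewName "Ethernet1"   # 10GB, SET member
Rename-NetAdapter -Name "Embedded NIC 1" -NewName "MGMT1"      # 1GB, management
Rename-NetAdapter -Name "Embedded NIC 2" -NewName "MGMT2"      # 1GB, management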
2. Install Hyper-V and Failover Clustering Features
a. Install Hyper-V Role
On each node, run the following PowerShell command to install the Hyper-V role and required management tools:
Install-WindowsFeature -Name Hyper-V -IncludeManagementTools -Restart
b. Install Failover Clustering Feature
On each node, run the following PowerShell command to install the Failover Clustering feature along with its management tools (the Failover Clusters PowerShell module and Failover Cluster Manager):
Install-WindowsFeature -Name Failover-Clustering -IncludeManagementTools
c. Reboot the servers if prompted to complete the installation.
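Before moving on, you can confirm that both features landed on each node; a quick check:
# Verify the role and feature are installed on this node
Get-WindowsFeature -Name Hyper-V, Failover-Clustering | Format-Table Name, InstallState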
3. Configure Network Interfaces
a. Configure the Management Network (1GB NICs)
1. Assign IP addresses to the management NICs on both nodes. The interface alias must be one of the 1GB adapters; this MOP uses the hypothetical name MGMT1 from the renaming sketch in the prerequisites. For example, on the first node:
New-NetIPAddress -InterfaceAlias "MGMT1" -IPAddress 192.168.10.100 -PrefixLength 24 -DefaultGateway 192.168.10.1
This IP address will be used for management and cluster communication. Use a different address on the second node (e.g., 192.168.10.101).
2. Make sure the 1GB NICs are not made part of the SET switch.
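To finish the management NICs, point them at DNS and confirm reachability. A minimal sketch, assuming a hypothetical DNS server at 192.168.10.10 and the example addresses above:
# Hypothetical DNS server address; replace with your environment's DNS
Set-DnsClientServerAddress -InterfaceAlias "MGMT1" -ServerAddresses 192.168.10.10
# Confirm the gateway and the peer node respond over the management network
Test-NetConnection -ComputerName 192.168.10.1
Test-NetConnection -ComputerName 192.168.10.101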
b. Configure the VM Network (10GB NICs)
1. You will team the 2x10GB NICs into a single logical adapter for VM traffic. This is done using Switch Embedded Teaming (SET).
4. Configure SET Switch for VM Network (2x10GB NICs)
a. Create SET Switch
On each Hyper-V host, run the following command to create a SET switch using the 2x10GB NICs for the VM Network:
New-VMSwitch -Name "VM-Switch" -NetAdapterName "Ethernet0", "Ethernet1" -EnableEmbeddedTeaming $true -AllowManagementOS $false
· -Name "VM-Switch": Creates a virtual switch named "VM-Switch".
· -NetAdapterName "Ethernet0", "Ethernet1": Combines Ethernet0 and Ethernet1 into a SET team.
· -EnableEmbeddedTeaming $true: Makes the teaming explicit (it is implied whenever more than one adapter is supplied).
· -AllowManagementOS $false: Ensures that only VMs use this virtual switch (not the host's management OS).
b. Configure Load Balancing and Failover for the SET Team
The New-VMSwitch command above already created the SET team; there is no separate teaming step, and the legacy LBFO cmdlets (New-NetLbfoTeam) must not be combined with a Hyper-V virtual switch on Windows Server 2022. SET always operates in switch-independent mode, so no special physical-switch configuration is required. If needed, adjust the load balancing algorithm:
Set-VMSwitchTeam -Name "VM-Switch" -LoadBalancingAlgorithm HyperVPort
· -LoadBalancingAlgorithm HyperVPort: Distributes traffic by virtual switch port; Dynamic is the other supported option. HyperVPort is the default on Windows Server 2019 and later.
c. Verify the Team Configuration
Use the following command to verify the team setup:
Get-VMSwitchTeam -Name "VM-Switch"
This should show the SET team with both Ethernet0 and Ethernet1 listed as members.
5. Configure VLANs (Optional, but Recommended for Traffic Segmentation)
If you are using VLANs for traffic isolation, configure the appropriate VLAN IDs for each network (e.g., VM Network VLAN 10, Live Migration VLAN 20).
a. Assign VLAN to VM Network
For each VM network adapter, assign the VLAN ID:
Set-VMNetworkAdapterVlan -VMName "VM1" -VMNetworkAdapterName "Network Adapter" -Access -VlanId 10
· -Access -VlanId 10: Puts the adapter in access mode and tags its traffic with VLAN ID 10.
· -VMName "VM1": Name of the virtual machine.
· -VMNetworkAdapterName "Network Adapter": The default network adapter name in the VM.
Repeat this for each VM you want on the VM Network (VLAN 10), or use the bulk-assignment sketch below.
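If many VMs share the same VLAN, a loop avoids repeating the command per VM. A sketch, assuming every VM on this host belongs on VLAN 10:
# Tag every VM network adapter on this host with VLAN 10 (access mode)
Get-VM | Get-VMNetworkAdapter | Set-VMNetworkAdapterVlan -Access -VlanId 10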
6. Create the Failover Cluster
a. Validate the Cluster Configuration
Run the Test-Cluster command to validate the cluster configuration:
Test-Cluster -Node "Server1", "Server2"
b. Create the Failover Cluster
Once the validation passes, create the failover cluster. The static address becomes the cluster's own management IP, so it must be an unused address on the management subnet, not one of the node IPs (e.g., 192.168.10.110):
New-Cluster -Name "HyperV-Cluster" -Node "Server1", "Server2" -StaticAddress "192.168.10.110" -NoStorage
· -StaticAddress "192.168.10.110": Assigns a dedicated IP address to the cluster name resource.
· -NoStorage: We are not configuring shared storage in this step.
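After creation, it is worth confirming node membership and how the cluster classified each network; a quick check (the network names shown are defaults and can be renamed for clarity):
Get-ClusterNode                                    # both nodes should report Up
Get-ClusterNetwork | Format-Table Name, Role, Address
# Optional: give the cluster networks meaningful names
(Get-ClusterNetwork -Name "Cluster Network 1").Name = "Management"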
7. Configure Clustered Storage
a. Enable Cluster Shared Volumes (CSV)
How you bring storage into the cluster depends on what backs it. For traditional shared storage (iSCSI or Fibre Channel LUNs presented to both nodes), add the disks to the cluster and convert them to CSV:
Get-ClusterAvailableDisk | Add-ClusterDisk
Add-ClusterSharedVolume -Name "Cluster Disk 1"
(The disk name is whatever Get-ClusterResource reports.) Only if you are pooling local direct-attached disks with Storage Spaces Direct should you instead run Enable-ClusterStorageSpacesDirect. Either way, CSV volumes appear on every node under C:\ClusterStorage.
b. Add VMs to Shared Storage
Ensure that VMs are created on the shared storage (CSV) so they can migrate between nodes.
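As a sketch of placing a new VM on CSV and making it highly available (the VM name, volume path, and memory size are examples):
# Create the VM with all of its files under the CSV namespace
New-VM -Name "VM1" -MemoryStartupBytes 4GB -Generation 2 `
    -Path "C:\ClusterStorage\Volume1" -SwitchName "VM-Switch"
# Register the VM as a clustered role so it can fail over between nodes
Add-ClusterVirtualMachineRole -VMName "VM1"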
8. Enable Live Migration
Initially, Live Migration will be enabled for testing; a dedicated Live Migration network will be set up once basic Live Migration functions are working.
a. Configure Live Migration Settings
On each node, enable Live Migration and set the authentication and performance options:
Enable-VMMigration
Set-VMHost -VirtualMachineMigrationAuthenticationType Kerberos -VirtualMachineMigrationPerformanceOption Compression
Kerberos authentication requires constrained delegation on the host computer accounts in Active Directory; the CredSSP default works without it but only for migrations initiated from a session on the source host. Which cluster network Live Migration prefers is set in Failover Cluster Manager (right-click Networks > Live Migration Settings) or via PowerShell as sketched below.
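If you later want to pin Live Migration to specific cluster networks from PowerShell rather than Failover Cluster Manager, the cluster's Virtual Machine resource type exposes a MigrationExcludeNetworks parameter. A sketch, assuming the preferred cluster network is named "LiveMigration" (a hypothetical name; adjust to match your Get-ClusterNetwork output):
# Exclude every cluster network except the one named "LiveMigration"
Get-ClusterResourceType -Name "Virtual Machine" | Set-ClusterParameter -Name MigrationExcludeNetworks `
    -Value ([String]::Join(";", (Get-ClusterNetwork | Where-Object { $_.Name -ne "LiveMigration" }).Id))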
b. Test Live Migration
To test Live Migration of a clustered VM, move it from Server1 to Server2 with the cluster cmdlet (Move-VM applies to non-clustered hosts):
Move-ClusterVirtualMachineRole -Name "VM1" -Node "Server2" -MigrationType Live
Ensure that the migration completes smoothly and that the VM remains online throughout.
c. Set Up a Dedicated Live Migration Network on SET (Optional)
A physical NIC can belong to only one SET switch, so a second switch cannot be created from Ethernet0 and Ethernet1. Instead, carve a dedicated Live Migration path out of the existing SET switch as a host virtual NIC on its own VLAN:
Add-VMNetworkAdapter -ManagementOS -Name "LiveMigration" -SwitchName "VM-Switch"
Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "LiveMigration" -Access -VlanId 20
New-NetIPAddress -InterfaceAlias "vEthernet (LiveMigration)" -IPAddress 192.168.30.1 -PrefixLength 24
Repeat on the second node with a unique IP (e.g., 192.168.30.2), then prefer this network for Live Migration in Failover Cluster Manager (Networks > Live Migration Settings) or with the MigrationExcludeNetworks sketch above. There is no need to configure VLANs on each individual VM network adapter for this; the tag applies only to the host vNIC.
9. Test Failover and High Availability
a. Simulate Failover
Simulate a failover by stopping the cluster service on Server1 (one node) and verifying that the VMs automatically move to Server2:
Stop-ClusterNode -Name "Server1"
Ensure that the VMs come up and run on Server2. Note that an unplanned failover restarts the VMs on the surviving node, so a brief interruption is expected.
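To watch the failover from PowerShell and bring the node back afterwards, run the following from the surviving node:
Get-ClusterGroup                      # VM roles should show OwnerNode = Server2
Start-ClusterNode -Name "Server1"     # rejoin the stopped node to the cluster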
10. Final Validation
1. Check Cluster Health:
o Open Failover Cluster Manager and verify that the cluster shows as healthy.
o Ensure all nodes and shared volumes (CSV) are accessible.
2. Verify VM Availability:
o Confirm that VMs are properly configured, stored on shared storage, and accessible after a failover.
Summary Checklist:
1. Install Hyper-V and Failover Clustering on both nodes.
2. Configure the SET switch with 2x10GB NICs for the VM Network.
3. Set up the management network with 2x1GB NICs (isolated for management traffic).
4. Assign VLANs for the VM Network, Live Migration, and Management Network.
5. Create and validate the Failover Cluster and configure Cluster Shared Volumes (CSV).
6. Enable and configure Live Migration.
7. Test failover and Live Migration to ensure high availability.
8. Perform final validation of cluster health and VM availability.
Test Plan for 2-Server Hyper-V Failover Cluster
Objective:
To validate the installation and configuration of a 2-server Hyper-V Failover Cluster with SET switches, including network connectivity, Live Migration, and failover functionality.
Test Scope:
· Cluster functionality
· Network configuration (Management, VM, Live Migration)
· Virtual machine high availability and Live Migration
· Failover testing
Test Environment:
· Cluster Nodes: Server1 and Server2
· Operating System: Windows Server 2022 Datacenter
· Clustered Resources: Hyper-V, Virtual Machines, Storage
· Networks Configured:
o Management Network
o VM Network
o Integrated or independent Live Migration Network
Test Cases
1. Cluster Configuration Verification
Objective: Verify that the Hyper-V Failover Cluster is correctly configured and the cluster nodes are visible.
Test Steps:
1. Open Failover Cluster Manager.
2. Verify that Server1 and Server2 are listed as cluster nodes under the Nodes section.
3. Check the Roles section to ensure no errors are displayed.
4. Run the Validate a Configuration wizard, or launch PowerShell and run Test-Cluster to check the configuration for any issues.
Expected Results:
· Both nodes should appear without errors.
· The cluster should pass the validation tests with no critical errors.
2. Network Connectivity Check
Objective: Verify that the Management, VM, and Live Migration networks are functioning properly.
Test Steps:
1. Management Network:
o Ping 192.168.10.1 from both nodes.
o Check that you can access Cluster Shared Volumes (CSV, under C:\ClusterStorage) from both nodes.
2. VM Network:
o Check connectivity between VMs on both nodes by pinging each VM from one server to another.
3. Live Migration Network:
o Test connectivity between the two servers using the Live Migration IPs (e.g., 192.168.30.1).
Expected Results:
· Each node should successfully ping its respective IP address on the Management network.
· VMs on the VM Network should be able to communicate with each other.
· Live Migration IPs should be reachable from both nodes.
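The ping steps above can be scripted for repeatability; a sketch using the example addresses from this plan (gateway, peer management IP, and Live Migration peer):
Test-NetConnection -ComputerName 192.168.10.1
Test-NetConnection -ComputerName 192.168.10.101
Test-NetConnection -ComputerName 192.168.30.2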
3. Virtual Machine Creation and Validation
Objective: Ensure that virtual machines (VMs) can be created, started, and run properly on the cluster.
Test Steps:
1. Create a new VM on Server1 using Hyper-V Manager or PowerShell (see the creation sketch in step 7b of the MOP).
2. Start the VM on Server1 and verify that it runs without issues.
3. Check that the VM is listed in Failover Cluster Manager under Roles (see the sketch below).
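A quick PowerShell pass over the same checks (the VM name is an example; a clustered VM's role usually carries the VM's name):
Start-VM -Name "VM1"                           # power on the test VM
Get-VM -Name "VM1" | Format-Table Name, State  # State should be Running
Get-ClusterGroup -Name "VM1"                   # the VM should appear as a clustered role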
Expected Results:
· The VM should be successfully created and powered on.
· The VM should show as a clustered role in Failover Cluster Manager.
4. Live Migration of VM
Objective: Validate that Live Migration of VMs between nodes works without interruption.
Test Steps:
1. Move a running VM from Server1 to Server2 using Failover Cluster Manager or PowerShell:
Move-ClusterVirtualMachineRole -Name "VM-Name" -Node "Server2" -MigrationType Live
2. Check that the VM successfully migrates without any errors.
3. Verify that the VM continues to run without issues after migration.
Expected Results:
· The VM should migrate without errors.
· The VM should remain operational after migration.
5. Failover Testing
Objective: Test the failover capability of the cluster by simulating a failure scenario and ensuring that the resources are automatically moved to the other node.
Test Steps:
1. Simulate a failure on Server1 by shutting it down or stopping its cluster service.
2. Verify that the failover cluster automatically moves the VMs from Server1 to Server2.
3. Bring Server1 back online and check whether the VMs fail back to Server1; automatic failback happens only if it is enabled on the clustered role, and it is off by default (see the sketch below).
4. Ensure that Failover Cluster Manager displays the correct status for the moved VMs.
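Step 3 depends on the role's failback policy, which is disabled by default. A sketch of inspecting and enabling it for the test VM's role (names are examples):
# Inspect the current failback policy for the VM's clustered role
Get-ClusterGroup -Name "VM1" | Format-List AutoFailbackType, FailbackWindowStart, FailbackWindowEnd
# Allow automatic failback (0 = prevent, 1 = allow)
(Get-ClusterGroup -Name "VM1").AutoFailbackType = 1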
Expected Results:
· When Server1 fails, VMs should automatically move to Server2.
· When Server1 is restored, the VMs should fail back if failback is enabled on the role.
· Failover should occur without manual intervention and with minimal downtime (an unplanned failover restarts the VM on the surviving node, so a brief interruption is expected).
6. Storage Failover Validation
Objective: Ensure that the shared storage (CSV) is functioning correctly and can be accessed by both nodes.
Test Steps:
1. Disconnect the storage from Server1 (e.g., the iSCSI session or SMB share).
2. Check that Server2 can access the storage and continue to serve the VMs.
3. Reconnect the storage to Server1 and check that it can access the shared storage again.
Expected Results:
· Server2 should be able to continue accessing the shared storage during the failure.
· Once the storage is reconnected to Server1, it should be able to access the storage without issues.
7. Network Failover Testing
Objective: Verify that network failover occurs correctly for the Management, VM, and Live Migration networks.
Test Steps:
1. Disconnect one management NIC on Server1 and check that management and cluster communication continue over the remaining path (see the sketch below).
2. Repeat this process for the VM Network and Live Migration Network by disabling one SET member at a time, ensuring each network survives the loss of a single NIC without interrupting services.
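A controlled way to run step 1 without pulling cables is to disable one NIC at a time from PowerShell (run on Server1; adapter names follow this MOP's hypothetical examples):
Disable-NetAdapter -Name "MGMT1" -Confirm:$false      # drop one management path
# ... verify management/cluster traffic continues, then restore it
Enable-NetAdapter -Name "MGMT1"
# Repeat with one SET member to confirm the VM network survives on a single 10GB NIC
Disable-NetAdapter -Name "Ethernet0" -Confirm:$false
Enable-NetAdapter -Name "Ethernet0"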
Expected Results:
· Each network (Management, VM, Live Migration) should continue to operate after losing a single NIC.
· There should be minimal downtime during network failover, with VM traffic unaffected.
Test Conclusion and Reporting
· Test Completion: All tests should be completed within a specific time frame (e.g., 2 hours).
· Issues Identified: Any issues encountered during the test should be logged, with troubleshooting steps recorded.
· Verification: Once all tests are successful, verify the cluster's overall health and performance through Failover Cluster Manager and other monitoring tools.
· Sign-off: After successful testing, sign off on the deployment and move on to the next stage (e.g., adding additional VMs, SCVMM integration, etc.).
This Test Plan ensures that all key components of the 2-node Hyper-V Failover Cluster are properly validated, including Live Migration, failover functionality, and network resilience. By performing these tests, you will ensure that your environment is fully operational and resilient.