
Thursday, August 14, 2025

 

Hyper-V Failover Cluster Setup Prep for SCVMM Integration

Overview

This Method of Procedure (MOP) covers setting up a 2-node Hyper-V Failover Cluster with 2x10GB NICs in SET (Switch Embedded Teaming) for the VM network and 2x1GB NICs for management, in preparation for SCVMM integration.

MOP: Setting up a 2-Node Hyper-V Failover Cluster with SET for VM Network and 1G NICs for Management

1. Prerequisites:

·         Two servers with Windows Server 2022 Datacenter.

·         4 Network Adapters:

o    2x10GB NICs for VM and Live Migration traffic.

o    2x1GB NICs for general management of the OS.

·         Shared storage presented to both nodes for Cluster Shared Volumes (CSV), or an SMB 3.0 file share for VM storage.

·         Network infrastructure that supports VLAN tagging (optional but recommended for segregation).

·         Administrative access to each Hyper-V host.

2. Install Hyper-V and Failover Clustering Features

a. Install Hyper-V Role

On each node, run the following PowerShell command to install the Hyper-V role and required management tools:

Install-WindowsFeature -Name Hyper-V -IncludeManagementTools -Restart

b. Install Failover Clustering Feature

On each node, run the following PowerShell command to install the Failover Clustering feature:

Install-WindowsFeature -Name Failover-Clustering -IncludeManagementTools

c. Reboot the servers if prompted to complete the installation.
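If you would rather stage both features on both nodes from a single admin workstation, here is a minimal sketch, assuming PowerShell remoting is enabled and that "Server1" and "Server2" are the node names:

# Run from a management workstation; assumes WinRM/PowerShell remoting is enabled on both nodes
$nodes = "Server1", "Server2"
Invoke-Command -ComputerName $nodes -ScriptBlock {
    Install-WindowsFeature -Name Hyper-V -IncludeManagementTools
    Install-WindowsFeature -Name Failover-Clustering -IncludeManagementTools
}
# Reboot both nodes to finish the Hyper-V installation
Restart-Computer -ComputerName $nodes -Force -Wait -For PowerShell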

3. Configure Network Interfaces

a. Configure the Management Network (1GB NICs)

1.      Assign IP Addresses to Management NICs on both nodes:

New-NetIPAddress -InterfaceAlias "Ethernet2" -IPAddress 192.168.10.100 -PrefixLength 24 -DefaultGateway 192.168.10.1

This IP address will be used for management and cluster communication. Replace "Ethernet2" with the alias of one of your 1GB NICs (see the adapter-identification sketch after this list); it must not be one of the 10GB NICs that will join the SET switch in step 4. Use a unique address on each node (e.g., 192.168.10.100 on Server1 and 192.168.10.101 on Server2).

2.      Make sure the 1GB NICs are not part of the SET switch.
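Adapter aliases differ from server to server, so confirm which ports are the 1GB management NICs and which are the 10GB SET candidates before assigning addresses. A minimal sketch (the alias and DNS address are examples):

# Identify NICs by link speed so the right aliases go to management vs. the SET switch
Get-NetAdapter | Sort-Object Name | Format-Table Name, InterfaceDescription, LinkSpeed, Status

# Point the management NIC at your DNS servers (alias and addresses are examples)
Set-DnsClientServerAddress -InterfaceAlias "Ethernet2" -ServerAddresses 192.168.10.10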

b. Configure the VM Network (10GB NICs)

1.      You will team the 2x10GB NICs to create a single logical adapter for VM traffic, using Switch Embedded Teaming (SET).

4. Configure SET Switch for VM Network (2x10GB NICs)

a. Create SET Switch

On the Hyper-V host, run the following command to create a SET Switch using the 2x10GB NICs for VM Network:

New-VMSwitch -Name "VM-Switch" -NetAdapterName "Ethernet0", "Ethernet1" -EnableEmbeddedTeaming $true -AllowManagementOS $false

·         -Name "VM-Switch": This creates a virtual switch named "VM-Switch".

·         -NetAdapterName "Ethernet0", "Ethernet1": Combines the two 10GB NICs into a SET team behind the switch (adjust the aliases to match your 10GB adapters).

·         -EnableEmbeddedTeaming $true: Explicitly enables Switch Embedded Teaming (it is implied when multiple adapters are specified, but stating it makes the intent clear).

·         -AllowManagementOS $false: Ensures that only VMs use this virtual switch (not the host’s management OS).

b. Configure Load Balancing for the SET Team

Creating the switch in step 4a with two adapters already creates the SET team; do not use New-NetLbfoTeam here, because LBFO is a separate teaming technology and is not supported under a Hyper-V virtual switch on Windows Server 2022. Optionally, choose the load balancing algorithm for the SET team:

Set-VMSwitchTeam -Name "VM-Switch" -LoadBalancingAlgorithm HyperVPort

·         -LoadBalancingAlgorithm HyperVPort: Distributes traffic by virtual switch port (the default on recent Windows Server releases); Dynamic is the other supported option.

·         Teaming mode: SET always operates in switch-independent mode, so no switch-side teaming configuration is required.

c. Verify the Team Configuration

Use the following command to verify the team setup:

Get-VMSwitchTeam

This should show the VM-Switch team with both Ethernet0 and Ethernet1 as members.

5. Configure VLANs (Optional, but Recommended for Traffic Segmentation)

If you are using VLANs for traffic isolation, configure the appropriate VLAN IDs for each network (e.g., VM Network VLAN 10, Live Migration VLAN 20).

a. Assign VLAN to VM Network

For each VM Network Adapter, assign the VLAN ID:

Set-VMNetworkAdapterVlan -VMName "VM1" -VMNetworkAdapterName "Network Adapter" -Access -VlanId 10

·         -Access -VlanId 10: Puts the VM's network adapter in access mode on VLAN ID 10.

·         -VMName "VM1": Name of the virtual machine.

·         -VMNetworkAdapterName "Network Adapter": The default network adapter name in the VM.

Repeat this for each VM that should be on the VM Network (VLAN 10); the loop sketched below can apply the tag to several VMs at once.
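A minimal sketch for tagging several VMs in one pass (the "APP*" name filter is a placeholder for your own VM naming convention):

# Hypothetical example: put every VM whose name starts with "APP" onto VLAN 10
Get-VM -Name "APP*" | ForEach-Object {
    Set-VMNetworkAdapterVlan -VMName $_.Name -VMNetworkAdapterName "Network Adapter" -Access -VlanId 10
}
# Confirm the tagging
Get-VMNetworkAdapterVlan -VMName "APP*"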

6. Create the Failover Cluster

a. Validate the Cluster Configuration

Run the Test-Cluster command to validate the cluster configuration:

Test-Cluster -Node "Server1", "Server2"

b. Create the Failover Cluster

Once the validation passes, create the failover cluster:

New-Cluster -Name "HyperV-Cluster" -Node "Server1", "Server2" -StaticAddress "192.168.10.110" -NoStorage

·         -StaticAddress "192.168.10.110": Assigns an IP address to the cluster itself. It must be a free address on the management subnet, distinct from both nodes' management addresses.

·         -NoStorage: We are not configuring shared storage in this step; a quorum witness and a quick check of the detected cluster networks are sketched below.
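A two-node cluster has no majority when one node is down, so it is strongly recommended to add a quorum witness right after creating the cluster. A minimal sketch, assuming a file share "\\fileserver\ClusterWitness" is available (the path is a placeholder; a cloud witness is an equally valid choice):

# Add a file share witness so the surviving node keeps quorum when the other node is down
Set-ClusterQuorum -Cluster "HyperV-Cluster" -NodeAndFileShareMajority "\\fileserver\ClusterWitness"

# Review the cluster networks the wizard detected (useful when setting Live Migration preferences later)
Get-ClusterNetwork | Format-Table Name, Role, Address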

7. Configure Clustered Storage

a. Enable Cluster Shared Volumes (CSV)

To use Cluster Shared Volumes (CSV) for storing VMs, add the shared cluster disk (iSCSI, Fibre Channel, or other shared storage already presented to both nodes) to CSV:

Add-ClusterSharedVolume -Name "Cluster Disk 1"

The volume then appears on both nodes under C:\ClusterStorage. Use the disk name shown by Get-ClusterResource ("Cluster Disk 1" is the typical default). Note that Enable-ClusterStorageSpacesDirect applies only if you are building Storage Spaces Direct from local drives; it is not needed for iSCSI or SAN-based shared storage, and SMB 3.0 shares are used directly by UNC path rather than as CSVs.

b. Add VMs to Shared Storage

Ensure that the VMs are created on the shared storage (CSV) to allow migration between nodes.
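As an illustration, here is a minimal sketch that creates a VM directly on a CSV path and registers it as a clustered role (the names, sizes, and Volume1 path are examples):

# Create a VM on the CSV path so either node can own it (names/paths/sizes are examples)
New-VM -Name "VM1" -Generation 2 -MemoryStartupBytes 4GB -SwitchName "VM-Switch" `
    -Path "C:\ClusterStorage\Volume1\VM1" `
    -NewVHDPath "C:\ClusterStorage\Volume1\VM1\VM1.vhdx" -NewVHDSizeBytes 60GB

# Make the VM highly available as a clustered role
Add-ClusterVirtualMachineRole -VMName "VM1" -Cluster "HyperV-Cluster"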


8. Enable Live Migration

Live Migration will initially be enabled over the existing networks for testing; a separate Live Migration network can be added once basic Live Migration is confirmed to work (see step 8c).

a. Configure Live Migration Settings

On each node, enable Live Migration:

Enable-VMMigration

Which network carries Live Migration is selected per cluster network, not per virtual switch: in Failover Cluster Manager, open Networks > Live Migration Settings and order the networks as desired (or restrict them via the Virtual Machine resource type's MigrationExcludeNetworks parameter). For initial testing, allowing migration over the existing networks is sufficient.

b. Test Live Migration

To test Live Migration, run the following command to move a VM from Server1 to Server2:

Move-VM -Name "VM1" -DestinationHost "Server2"

Ensure that the migration happens smoothly and that the VM remains online.
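Because the VM is a clustered role, the cluster-aware cmdlet is an equivalent way to drive the same live migration:

# Cluster-aware live migration of the clustered VM role
Move-ClusterVirtualMachineRole -Name "VM1" -Node "Server2" -MigrationType Live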

 

 

c. Set Up a Dedicated Live Migration Network on SET (Optional)

Once basic Live Migration works over the existing networks, you can separate Live Migration traffic from VM traffic. Keep in mind that a physical NIC can belong to only one SET switch, so the two 10GB NICs behind "VM-Switch" cannot also back a second switch. Options:

  1. If spare NICs are available, create a dedicated SET switch from them (adapter names are examples) and allow the management OS to use it, since Live Migration traffic originates from the hosts:

New-VMSwitch -Name "LiveMigration-Switch" -NetAdapterName "Ethernet4", "Ethernet5" -AllowManagementOS $true

  2. If no spare NICs exist, add a host virtual network adapter for Live Migration on the existing SET switch and tag it with the Live Migration VLAN (see the sketch after this list).

  3. Finally, prioritize the Live Migration network in the cluster: in Failover Cluster Manager, open Networks > Live Migration Settings and move the Live Migration network to the top of the list.
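For option 2 above, a minimal sketch that adds a host vNIC for Live Migration on the existing SET switch and tags it with VLAN 20 (the VLAN and addresses follow the examples used earlier; adjust per node):

# Add a host vNIC for Live Migration on the existing SET switch and tag it with VLAN 20
Add-VMNetworkAdapter -ManagementOS -Name "LiveMigration" -SwitchName "VM-Switch"
Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "LiveMigration" -Access -VlanId 20

# Give the vNIC an address on the Live Migration subnet (use .1 on Server1, .2 on Server2)
New-NetIPAddress -InterfaceAlias "vEthernet (LiveMigration)" -IPAddress 192.168.30.1 -PrefixLength 24

# Enable live migration and restrict it to the Live Migration subnet on each host
Enable-VMMigration
Add-VMMigrationNetwork 192.168.30.0/24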

9. Test Failover and High Availability

a. Simulate Failover

Simulate a failover by shutting down Server1 (one node) and verifying that the VM automatically moves to Server2:

Stop-ClusterNode -Name "Server1"

Ensure that the VM is up and running on Server2.
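For a planned-maintenance variant of this test, drain the node first so roles live-migrate off instead of failing over abruptly; a minimal sketch:

# Drain Server1 (roles live-migrate off), then bring it back and optionally fail roles back
Suspend-ClusterNode -Name "Server1" -Drain -Wait
Resume-ClusterNode -Name "Server1" -Failback Immediate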

10. Final Validation

1.      Check Cluster Health:

o    Open Failover Cluster Manager and verify that the cluster shows as healthy.

o    Ensure all nodes and shared volumes (CSV) are accessible.

2.      Verify VM Availability:

o    Confirm that VMs are properly configured, stored on shared storage, and accessible after a failover (the PowerShell checks below give a quick summary).
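The same checks can be scripted; a minimal sketch using the standard FailoverClusters cmdlets:

# Quick PowerShell equivalents of the Failover Cluster Manager checks above
Get-ClusterNode | Format-Table Name, State
Get-ClusterNetwork | Format-Table Name, Role, State
Get-ClusterSharedVolume | Format-Table Name, State
Get-ClusterGroup | Format-Table Name, OwnerNode, State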


 

Summary Checklist:

1.      Install Hyper-V and Failover Clustering on both nodes.

2.      Configure SET Switch with 2x10GB NICs for VM Network.

3.      Set up management network with 2x1GB NICs (isolated for management traffic).

4.      Assign VLANs for VM Network, Live Migration, and Management Network.

5.      Create and validate Failover Cluster and configure Cluster Shared Volumes (CSV).

6.      Enable and configure Live Migration.

7.      Test Failover and Live Migration to ensure high availability.

8.      Final Validation of the cluster health and VM availability.


 

Test Plan for 2-Server Hyper-V Failover Cluster

Objective:

To validate the installation and configuration of a 2-server Hyper-V Failover Cluster with SET switches, including network connectivity, live migration, and failover functionality.

Test Scope:

·         Cluster Functionality

·         Network Configuration (Management, VM, Live Migration)

·         Virtual Machine High Availability and Live Migration

·         Failover Testing

Test Environment:

·         Cluster Nodes: Server 1 and Server 2

·         Operating System: Windows Server 2022 Datacenter

·         Clustered Resources: Hyper-V, Virtual Machines, Storage

·         Networks Configured:

o    Management Network

o    VM Network

o    Live Migration Network (shared with the VM switch or on a dedicated switch)

Test Cases

1. Cluster Configuration Verification

Objective: Verify that the Hyper-V Failover Cluster is correctly configured and the cluster nodes are visible.

Test Steps:

1.      Open Failover Cluster Manager.

2.      Verify that Server1 and Server2 are listed as cluster nodes under the Nodes section.

3.      Check the Clustered Roles section to ensure no errors are displayed.

4.      Run the Validate a Configuration wizard:

o    Launch PowerShell and run Test-Cluster to check the configuration for any issues (a report-saving variant is sketched after the expected results).

Expected Results:

·         Both nodes should appear without errors.

·         The cluster should pass the validation tests with no critical errors.
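For the sign-off package, the validation run in step 4 can also save its HTML report to a known location; a minimal sketch (the report path is a placeholder):

# Run full validation and keep the HTML report as test evidence
Test-Cluster -Node "Server1", "Server2" -ReportName "C:\Reports\ClusterValidation"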

2. Network Connectivity Check

Objective: Verify that the Management, VM, and Live Migration networks are functioning properly.

Test Steps:

1.      Management Network:

o    Ping 192.168.10.1 from both nodes.

o    Check that you can access Cluster Shared Volumes (CSV) over this network.

2.      VM Network:

o    Check connectivity between VMs on both nodes by pinging each VM from one server to another.

3.      Live Migration Network:

o    Test connectivity between the two servers using the Live Migration IPs (e.g., 192.168.30.1); these checks can be scripted as sketched after the expected results.

Expected Results:

·         Each node should successfully ping its respective IP address on the Management network.

·         VMs on the VM Network should be able to communicate with each other.

·         Live Migration IPs should be reachable from both nodes.
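A minimal sketch of these checks, run from either node and using the example addresses in this plan:

# Management gateway and the other node's Live Migration address (example IPs from this plan)
Test-Connection -ComputerName 192.168.10.1 -Count 4
Test-Connection -ComputerName 192.168.30.1 -Count 4
# Management path between the nodes
Test-NetConnection -ComputerName Server2 -CommonTCPPort SMB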

 

3. Virtual Machine Creation and Validation

Objective: Ensure that Virtual Machines (VMs) can be created, started, and run properly on the cluster.

Test Steps:

1.      Create a new VM on Server 1 using Hyper-V Manager or PowerShell.

2.      Start the VM on Server 1 and verify that it runs without issues.

3.      Check that the VM is listed in the Failover Cluster Manager under Clustered Roles.

Expected Results:

·         The VM should be successfully created and powered on.

·         The VM should show as a clustered role in Failover Cluster Manager.

 

4. Live Migration of VM

Objective: Validate that Live Migration of VMs between nodes works without interruptions.

Test Steps:

1.      Move a running VM from Server 1 to Server 2 using Hyper-V Manager or PowerShell:

Move-VM -Name "VM-Name" -DestinationHost "Server2"

2.      Check that the VM successfully migrates without any errors.

3.      Verify that the VM continues to run without issues after migration.

Expected Results:

·         The VM should migrate without errors.

·         The VM should remain operational after migration.

5. Failover Testing

Objective: Test the failover capability of the cluster by simulating a failure scenario and ensuring that the resources are automatically moved to the other node.

Test Steps:

1.      Simulate a failure on Server 1 by shutting down or disconnecting the network interface.

2.      Verify that the Failover Cluster automatically moves the VMs from Server 1 to Server 2 without downtime.

3.      If Server 1 is brought back online, check whether the VMs fail back to Server 1 (failback is off by default and only occurs automatically if it has been enabled on the clustered role).

4.      Ensure that the Failover Cluster Manager displays the correct status for the moved VMs.

Expected Results:

·         When Server 1 fails, VMs should automatically move to Server 2.

·         When Server 1 is restored, the VMs should fail back automatically if failback is enabled on the role; otherwise they remain on Server 2 until moved manually.

·         Failover should occur without manual intervention and minimal downtime.

 

6. Storage Failover Validation

Objective: Ensure that the shared storage (CSV) is functioning correctly and can be accessed by both nodes.

Test Steps:

1.      Disconnect the storage from Server 1 (e.g., iSCSI or SMB share).

2.      Check that Server 2 can still access the storage and that the VMs keep running (CSV should switch to redirected I/O for the affected node).

3.      Reconnect the storage to Server 1 and check that it can access the shared storage again.

Expected Results:

·         Server 2 should be able to continue accessing the shared storage during the failure.

·         Once the storage is reconnected to Server 1, it should be able to access the storage without issues.

7. Network Failover Testing

Objective: Verify that network failover occurs correctly between the Management, VM, and Live Migration networks.

Test Steps:

1.      Disconnect the Management network on Server 1 and check that cluster communication continues over the remaining cluster networks and that the node stays online in the cluster.

2.      Repeat this process for the VM Network and Live Migration Network, ensuring each network can failover to the other server without interrupting services.

Expected Results:

·         Losing any single network (Management, VM, or Live Migration) on one node should not break cluster membership or VM traffic; communication should continue over the remaining networks and NICs.

·         There should be minimal downtime during network failover, with the VM traffic unaffected.


 

Test Conclusion and Reporting

·         Test Completion: All tests should be completed within a specific time frame (e.g., 2 hours).

·         Issues Identified: Any issues encountered during the test should be logged, with troubleshooting steps recorded.

·         Verification: Once all tests are successful, verify the cluster's overall health and performance through Failover Cluster Manager and other monitoring tools.

·         Sign-off: After successful testing, sign off on the deployment and move on to the next stage (e.g., adding additional VMs, SCVMM integration, etc.).

 

This Test Plan ensures that all key components of the 2-node Hyper-V Failover Cluster are properly validated, including Live Migration, failover functionality, and network resilience. By performing these tests, you will ensure that your environment is fully operational and resilient.

 

 

 

Citrix VPX Migration Advisory for Load Balancer Modernization

Method of Procedure

1. Pre-Migration Planning

Objective: Ensure all necessary preparations are in place before starting the migration.

1.1 Stakeholder Engagement

·         Step 1: Schedule a meeting with VPX workload owners to discuss migration goals and expectations.

·         Step 2: Identify critical applications and determine migration priorities.

·         Step 3: Confirm migration windows, downtime expectations, and business continuity requirements.

·         Step 4: Document stakeholder feedback and agreed-upon timelines.

1.2 Inventory and Configuration Review

·         Step 1: Perform a complete audit of current SDX appliances (SDX-14020 and SDX-15030-25G).

·         Step 2: Verify that the SDX-16240 appliances are properly racked, powered, and networked.

·         Step 3: Document current network configurations, load balancing policies, and VPX instances on legacy SDX appliances.

·         Step 4: Review the Flex Licensing setup for compatibility with SDX-16240.

1.3 Backup and Validation

·         Step 1: Back up the current configurations of all VPX instances on SDX-14020 and SDX-15030-25G.

·         Step 2: Verify the backup by performing a test restore to a lab environment.

·         Step 3: Take snapshots of the current SDX appliances to preserve the state prior to migration.

1.4 Review Migration Strategy

·         Step 1: Define the phased migration plan (2-3 weeks).

·         Step 2: Schedule weekly progress check-ins with stakeholders to validate progress.

·         Step 3: Ensure that rollback plans and contingency measures are in place.


 

2. Migration Execution

Objective: Migrate Citrix VPX instances to the new SDX-16240 appliances.

2.1 Set Up the New SDX-16240 Appliances

·         Step 1: Confirm the physical setup of SDX-16240 appliances (racking, power, and network).

·         Step 2: Configure High Availability (HA) mode on the SDX-16240 appliances using the Citrix ADC console.

·         Step 3: Verify network connectivity, including VLAN configurations and routing on SDX-16240.

·         Step 4: Ensure that Flex Licensing is enabled and configured on the SDX-16240 appliances.

2.2 Initial Migration (Pilot Phase)

·         Step 1: Select 1-2 low-priority VPX instances for the pilot migration.

·         Step 2: Deploy the selected VPX instances to the new SDX-16240 appliances.

·         Step 3: Upgrade the VPX instances to the latest supported software version.

·         Step 4: Validate Flex Licensing integration by verifying license application.

·         Step 5: Test load balancing functionality for the migrated VPX instances (ensure correct traffic distribution).

·         Step 6: Conduct application tests to verify that the migrated VPX instances perform as expected.

2.3 Full Migration (Batch Phases)

·         Step 1: Migrate Phase 1 (5-7 VPX instances) from SDX-14020 and SDX-15030-25G appliances based on the established priority list.

·         Step 2: For each VPX migration:

o    Migrate the VPX instance to the SDX-16240.

o    Upgrade to the latest supported software version.

o    Verify that Flex Licensing is applied correctly.

o    Apply relevant load balancing policies to each migrated instance.

·         Step 3: Validate that migrated instances function correctly, ensuring no disruption to end-users.

·         Step 4: Repeat for Phase 2 (migrate remaining VPX instances).

o    Confirm that the HA configuration is properly maintained throughout all migrations.


 

2.4 Verification After Each Migration

·         Step 1: Conduct post-migration checks on each VPX instance:

o    Ensure network connectivity (ping, routing).

o    Verify session stability and load balancing.

·         Step 2: Stress-test the migrated VPX instances to ensure they can handle expected loads.

·         Step 3: Conduct application validation for each migrated instance to ensure functionality.

3. Post-Migration Validation

Objective: Confirm all systems are functioning optimally after the migration.

3.1 Network and Application Testing

·         Step 1: Test network performance across the SDX-16240 appliances.

o    Ensure VLANs and routing configurations are correct.

o    Verify network latency and throughput are within acceptable ranges.

·         Step 2: Conduct end-to-end application testing to confirm migrated applications are working as expected.

o    Validate that load balancing policies are being enforced correctly.

3.2 Licensing Validation

·         Step 1: Confirm that Citrix Flex Licensing is fully functional and applied to the SDX-16240 appliances.

·         Step 2: Verify that licenses are distributed appropriately across the VPX instances.

3.3 Performance Benchmarking

·         Step 1: Compare performance metrics before and after migration (latency, throughput, resource utilization).

·         Step 2: Conduct load testing to validate the system can handle the anticipated traffic volume.


 

4. Knowledge Transfer and Handover

Objective: Ensure the operational team can effectively manage the new SDX-16240 appliances.

4.1 Documentation

·         Step 1: Provide detailed documentation of the new infrastructure (networking, VPX configurations, Flex Licensing, HA setup).

·         Step 2: Include configuration files, migration logs, and troubleshooting steps.

4.2 Training

·         Step 1: Provide training sessions for the operational team on managing SDX-16240 appliances.

o    Topics should include load balancing, HA management, and Citrix Flex Licensing.

·         Step 2: Include troubleshooting guidance in the training, such as session recovery and traffic rerouting.

4.3 Final Review

·         Step 1: Conduct a post-migration review meeting to confirm all steps were followed and that objectives were met.

·         Step 2: Address any issues or concerns raised during the migration and ensure they are resolved.

5. Risk Mitigation and Contingency Planning

Objective: Prepare for unforeseen issues and ensure a smooth rollback if necessary.

5.1 Rollback Plan

·         Step 1: Define a rollback procedure to restore VPX instances to the SDX-14020/SDX-15030-25G appliances if needed.

·         Step 2: Maintain the backup configurations from the pre-migration step to restore in case of critical failure.

5.2 Monitoring and Alerts

·         Step 1: Set up monitoring tools to track appliance health, performance, and licensing status.

·         Step 2: Configure alerts for critical issues like resource over-utilization, session drops, or license mismatches.

6. Finalization and Closure

Objective: Officially close the migration project and handover to operations.

6.1 Final Validation

·         Step 1: Conduct a final round of validation to ensure all VPX instances are functional and fully migrated.

·         Step 2: Ensure all services are stable and performing optimally.

6.2 Project Closure

·         Step 1: Finalize all documentation and ensure operational teams are comfortable with the new setup.

·         Step 2: Officially close the project and transition management responsibilities to the operations team.


 

References

 

https://www.citrix.com/products/citrix-adc/

https://docs.citrix.com/en-us/citrix-adc/current-release/deployment-guides.html

https://docs.citrix.com/en-us/citrix-adc/current-release/migrate/migration-vpx-to-sdx.html

https://docs.citrix.com/en-us/citrix-adc/current-release/high-availability.html

https://docs.citrix.com/en-us/citrix-adc/current-release/backup-and-restore.html

https://docs.citrix.com/en-us/citrix-adc/current-release/load-balancing.html

https://docs.citrix.com/en-us/citrix-adc/current-release/networking.html