Objective 3.3 is broken down as follows
- Configure appropriate NIC teaming failover type and related physical network settings
- Determine and apply failover settings according to a deployment plan
- Configure and manage Network I/O Control 3
- Determine and configure vDS port binding settings according to a deployment plan
Configure appropriate NIC teaming failover type and related physical network settings
Determine and apply failover settings according to a deployment plan
To increase the capacity of a virtual switch and add redundancy, teaming and failover policies must be configured. The load balancing algorithm determines how the virtual switch distributes traffic between the physical NICs in the team; the failover order determines how many and which physical NICs belong to the team and how traffic is rerouted after a failure.
NIC teaming policy can be set at the switch or port group level on a vSS and at the port group level on a vDS. All physical switch ports connected to NICs in the same team must be in the same L2 broadcast domain.
To create a NIC team, two or more physical NICs must be assigned to the switch. The failover order then determines whether each of these NICs is active or standby. For a vSS this can be set at the switch level and overridden at the port group level, so a port group can have different settings from the switch. For a vDS the failover order is set at the port group. To change the failover order - Web Client - Host - Manage - Networking - Virtual Switches - vSS - Port Group - Edit Settings - Teaming and Failover
Simply move the vmnics into the required order. The following options can be configured for failover (a scripted example follows the list below)
Network Failure Detection
- Link Status Only - relies only on the link status that the network adapter provides. It detects failures such as removed cables and physical switch power failures, but not configuration errors such as a switch port blocked by spanning tree or assigned to the wrong VLAN.
- Beacon Probing - sends out and listens for Ethernet broadcast (beacon) frames that the physical NICs in the team exchange to detect link failures. The host sends beacons every second. This is useful for detecting failures on the physical switch that do not cause a link-down event. Use this with three or more NICs in the team.
Notify Switches
- Yes - when a physical NIC connects to the virtual switch, or when traffic is rerouted through a different physical NIC in the team, the virtual switch sends notifications over the network to update the lookup tables on the physical switches, lowering latency when a failover or migration occurs.
- No
Failback Policy
- Yes - if a failed physical NIC comes back online, the virtual switch sets it back to active, replacing the standby NIC that took over its slot. If the first NIC in the order fails intermittently this leads to frequent changes, so the physical switch ports should follow the usual guidelines to minimise issues: Spanning Tree Protocol (STP) should be disabled on the host-facing ports (or PortFast mode enabled) and trunking negotiation should be disabled.
- No
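For those who prefer to script these settings rather than click through the Web Client, here is a minimal pyVmomi (Python) sketch that applies the same options to a vSS: active/standby order, link-status failure detection, Notify Switches and Failback. The vCenter address, host name, vSwitch name and vmnic names are placeholders for illustration only.

import ssl
from pyVim.connect import SmartConnect
from pyVmomi import vim

# Placeholder connection details - adjust for your environment.
si = SmartConnect(host='vcsa.lab.local', user='administrator@vsphere.local',
                  pwd='password', sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()

# Find the host and its standard switch (names are assumptions).
view = content.viewManager.CreateContainerView(content.rootFolder, [vim.HostSystem], True)
host = next(h for h in view.view if h.name == 'esxi01.lab.local')
view.DestroyView()
net_sys = host.configManager.networkSystem
vss = next(s for s in net_sys.networkInfo.vswitch if s.name == 'vSwitch0')

teaming = vss.spec.policy.nicTeaming               # assumes the policy is populated on read
teaming.nicOrder.activeNic = ['vmnic0', 'vmnic1']  # active adapters, in failover order
teaming.nicOrder.standbyNic = ['vmnic2']           # standby adapter
teaming.failureCriteria.checkBeacon = False        # False = Link Status Only, True = Beacon Probing
teaming.notifySwitches = True                      # Notify Switches = Yes
teaming.rollingOrder = False                       # False = Failback Yes, True = Failback No
net_sys.UpdateVirtualSwitch(vswitchName='vSwitch0', spec=vss.spec)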
To use more than one active NIC in the team, the load balancing algorithm must be configured correctly to determine how the traffic is distributed between the NICs. When the ports are teamed on the physical switch, the virtual switch's load balancing algorithm must support it; for instance the physical switch could be set up with a Cisco EtherChannel. The following options are available
- Route based on IP hash – selects an uplink based on a hash of the source and destination IP addresses of each packet. Requires the physical switch to be configured with EtherChannel
- Route based on the originating virtual port – selects an uplink based on the virtual port IDs on the switch. After the virtual switch selects an uplink for a virtual machine or a VMkernel adapter, it always forwards traffic through the same uplink for this virtual machine or VMkernel adapter.
- Route based on source MAC hash – selects an uplink based on a hash of the source Ethernet MAC address
- Use explicit failover order – no load balancing is performed; the switch always uses the highest-order uplink from the list of active adapters that passes the failover detection criteria
- Route Based on Physical NIC Load - the virtual switch checks the actual load of the uplinks and takes steps to reduce it on overloaded uplinks. Only available with vDS
To change the load balancing algorithm for a vSS
Web Client - Host - Manage - Networking - Virtual Switches - vSS - Port Group - Edit Settings - Teaming and Failover
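The same change can be sketched programmatically with pyVmomi, overriding the load balancing policy on a vSS port group. The valid policy strings are loadbalance_srcid, loadbalance_ip, loadbalance_srcmac and failover_explicit; the vCenter, host and port group names below are placeholders.

import ssl
from pyVim.connect import SmartConnect
from pyVmomi import vim

si = SmartConnect(host='vcsa.lab.local', user='administrator@vsphere.local',
                  pwd='password', sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(content.rootFolder, [vim.HostSystem], True)
host = next(h for h in view.view if h.name == 'esxi01.lab.local')
view.DestroyView()
net_sys = host.configManager.networkSystem

# Override the inherited teaming policy on the 'VM Network' port group.
pg = next(p for p in net_sys.networkInfo.portgroup if p.spec.name == 'VM Network')
pg_spec = pg.spec
if pg_spec.policy.nicTeaming is None:
    pg_spec.policy.nicTeaming = vim.host.NetworkPolicy.NicTeamingPolicy()
pg_spec.policy.nicTeaming.policy = 'loadbalance_ip'   # Route based on IP hash
net_sys.UpdatePortGroup(pgName='VM Network', portgrp=pg_spec)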
To change for a vDS
Web Client - Networking - vDS - Port Group - Edit Settings - Teaming and Failover
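The equivalent change on a distributed port group can be sketched with pyVmomi as below, setting Route Based on Physical NIC Load ('loadbalance_loadbased') in the port group's default port configuration. The switch and port group names are assumptions.

import ssl
from pyVim.connect import SmartConnect
from pyVmomi import vim

si = SmartConnect(host='vcsa.lab.local', user='administrator@vsphere.local',
                  pwd='password', sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.dvs.DistributedVirtualPortgroup], True)
dvpg = next(p for p in view.view if p.name == 'dvPG-Production')
view.DestroyView()

teaming = vim.dvs.VmwareDistributedVirtualSwitch.UplinkPortTeamingPolicy()
teaming.inherited = False
teaming.policy = vim.StringPolicy(inherited=False, value='loadbalance_loadbased')

pg_spec = vim.dvs.DistributedVirtualPortgroup.ConfigSpec()
pg_spec.configVersion = dvpg.config.configVersion      # required for a reconfigure
pg_spec.defaultPortConfig = vim.dvs.VmwareDistributedVirtualSwitch.VmwarePortConfigPolicy()
pg_spec.defaultPortConfig.uplinkTeamingPolicy = teaming
dvpg.ReconfigureDVPortgroup_Task(spec=pg_spec)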
Configure and manage Network I/O Control 3
vSphere Network I/O Control (NIOC) version 3 introduces a mechanism to reserve bandwidth for system traffic based on the capacity of the physical adapters on a host. It enables fine-grained resource control at the VM network adapter level similar to allocating CPU and memory resources. Version 3 of NIOC offers improved network reservations across the entire switch.
Resource management can be configured for system traffic and for VM traffic. System traffic is associated with a host whereas VM traffic can change hosts when a VM is migrated.
To enable NIOC - Web Client - Networking - vDS - Edit Settings. Make sure the drop-down for Network I/O Control is set to Enabled
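If you want to script this step, the pyVmomi sketch below enables NIOC on the vDS and prints which NIOC version the switch reports; the vDS name is a placeholder.

import ssl
from pyVim.connect import SmartConnect
from pyVmomi import vim

si = SmartConnect(host='vcsa.lab.local', user='administrator@vsphere.local',
                  pwd='password', sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.dvs.VmwareDistributedVirtualSwitch], True)
dvs = next(s for s in view.view if s.name == 'dvSwitch01')
view.DestroyView()

dvs.EnableNetworkResourceManagement(enable=True)      # switch NIOC on
print(dvs.config.networkResourceManagementEnabled)    # should now print True
print(dvs.config.networkResourceControlVersion)       # e.g. 'version3' on a 6.x vDS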
To view the default Resource Allocation go to - Web Client - Networking - vDS - Manage - Resource Allocation - System Traffic
Select any traffic type and click Edit to change its allocation
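As a read-only pyVmomi sketch, the snippet below dumps the same system traffic table from the API; the vDS name is a placeholder and the property layout is as documented for the vSphere 6.0 API.

import ssl
from pyVim.connect import SmartConnect
from pyVmomi import vim

si = SmartConnect(host='vcsa.lab.local', user='administrator@vsphere.local',
                  pwd='password', sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.dvs.VmwareDistributedVirtualSwitch], True)
dvs = next(s for s in view.view if s.name == 'dvSwitch01')
view.DestroyView()

# One entry per system traffic type (management, vmotion, vsan, virtualMachine, ...).
# Limits and reservations are in Mbit/s; a limit of -1 means unlimited.
for res in dvs.config.infrastructureTrafficResourceConfig or []:
    alloc = res.allocationInfo
    print(res.key, alloc.shares.level, alloc.shares.shares, alloc.reservation, alloc.limit)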
To set resource allocations per VM - Web Client - VMs and Templates - VM - Edit Settings - Network Adapter. Set the Shares / Reservation / Limit here
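The same per-VM setting can be sketched with pyVmomi by editing the resourceAllocation of the VM's network adapter; the VM name and the bandwidth values below are placeholders.

import ssl
from pyVim.connect import SmartConnect
from pyVmomi import vim

si = SmartConnect(host='vcsa.lab.local', user='administrator@vsphere.local',
                  pwd='password', sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(content.rootFolder, [vim.VirtualMachine], True)
vm = next(v for v in view.view if v.name == 'app01')
view.DestroyView()

# Take the VM's first network adapter and give it a reservation, limit and shares.
nic = next(d for d in vm.config.hardware.device
           if isinstance(d, vim.vm.device.VirtualEthernetCard))
nic.resourceAllocation = vim.vm.device.VirtualEthernetCard.ResourceAllocation(
    reservation=100,    # guaranteed bandwidth in Mbit/s
    limit=1000,         # upper cap in Mbit/s, -1 = unlimited
    share=vim.SharesInfo(level=vim.SharesInfo.Level.normal, shares=50))

change = vim.vm.device.VirtualDeviceSpec(
    operation=vim.vm.device.VirtualDeviceSpec.Operation.edit, device=nic)
vm.ReconfigVM_Task(spec=vim.vm.ConfigSpec(deviceChange=[change]))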
To create a new custom resource pool that can be assigned to vDS port groups - Web Client - Networking - vDS - Manage - Resource Allocation - Network Resource Pools - Add
The new pool can then be assigned by editing the vDS port group - Web Client - Networking - vDS - Port Group - Edit Settings - General - Network Resource Pool
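As a rough pyVmomi sketch of the same idea (not verified in a lab), the snippet below lists the user-defined network resource pools on the vDS and attaches one to a distributed port group by key. The vmVnicNetworkResourcePool and vmVnicNetworkResourcePoolKey property names are my reading of the vSphere 6.0 API for NIOC v3, and all object names are placeholders.

import ssl
from pyVim.connect import SmartConnect
from pyVmomi import vim

si = SmartConnect(host='vcsa.lab.local', user='administrator@vsphere.local',
                  pwd='password', sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.dvs.VmwareDistributedVirtualSwitch], True)
dvs = next(s for s in view.view if s.name == 'dvSwitch01')
view.DestroyView()

# User-defined (VM vNIC) network resource pools created under Resource Allocation.
pools = dvs.config.vmVnicNetworkResourcePool or []
for pool in pools:
    print(pool.key, pool.name)

# Attach the first pool to a distributed port group by its key.
dvpg = next(p for p in dvs.portgroup if p.name == 'dvPG-Production')
pg_spec = vim.dvs.DistributedVirtualPortgroup.ConfigSpec(
    configVersion=dvpg.config.configVersion,
    vmVnicNetworkResourcePoolKey=pools[0].key)
dvpg.ReconfigureDVPortgroup_Task(spec=pg_spec)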
For more details about reservations, limits, shares and resource pools, see the VMware vSphere Networking guide.
Determine and configure vDS port binding settings according to a deployment plan
The final section relates to vDS port bindings. To change these settings - Web Client - Networking - vDS - Port Group - Edit Settings - General. The options here are as follows
Port Binding
- Static Binding - Assign a port to a virtual machine when the virtual machine connects to the distributed port group.
- Dynamic Binding - Assign a port to a virtual machine the first time the virtual machine powers on after it is connected to the distributed port group. Dynamic binding has been deprecated since ESXi 5.0.
- Ephemeral - No port binding. With ephemeral port binding a virtual machine can be assigned to the distributed port group even when connected directly to the host, which is useful when vCenter Server is unavailable.
Port Allocation
- Elastic - The default number of ports is eight; when all ports are assigned, a new set of eight ports is created. Elastic is the default allocation type.
- Fixed - The default number of ports is set to eight. No additional ports are created when all ports are assigned.
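To round the section off, here is a pyVmomi sketch that creates a distributed port group with static binding and elastic allocation. In the API, 'earlyBinding' corresponds to static binding, 'lateBinding' to the deprecated dynamic binding and 'ephemeral' to no binding, while autoExpand controls elastic allocation; the vDS and port group names below are placeholders.

import ssl
from pyVim.connect import SmartConnect
from pyVmomi import vim

si = SmartConnect(host='vcsa.lab.local', user='administrator@vsphere.local',
                  pwd='password', sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.dvs.VmwareDistributedVirtualSwitch], True)
dvs = next(s for s in view.view if s.name == 'dvSwitch01')
view.DestroyView()

pg_spec = vim.dvs.DistributedVirtualPortgroup.ConfigSpec(
    name='dvPG-StaticElastic',
    type='earlyBinding',    # static binding ('lateBinding' = dynamic, 'ephemeral' = none)
    numPorts=8,             # starting number of ports
    autoExpand=True)        # elastic: more ports are added automatically when needed
dvs.AddDVPortgroup_Task(spec=[pg_spec])   # the spec parameter takes a list of port group specs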