VCAP6-DCV Deployment – Objective 4.2 – Implement and Manage Complex DRS Solutions



Objective 4.2 is broken down as follows:

  • Configure DPM, including appropriate DPM threshold
  • Configure / Modify EVC mode on an existing DRS cluster
  • Create DRS and DPM alarms
  • Configure applicable power management settings for ESXi hosts
  • Configure DRS cluster for efficient/optimal load distribution
  • Properly apply virtual machine automation levels based upon application requirements
  • Administer DRS / Storage DRS
  • Create DRS / Storage DRS affinity and anti-affinity rules
  • Configure advanced DRS / Storage DRS settings
  • Configure and Manage vMotion / Storage vMotion
  • Create and manage advanced resource pool configurations

Configure DPM, including appropriate DPM threshold

The vSphere Distributed Power Management (DPM) feature allows a DRS-enabled cluster to reduce power consumption by automatically powering hosts on and off as they are needed. DPM can be configured to use hardware IPMI / iLO interfaces, provided the hardware supports them. DPM can also use Wake on LAN; for this to work, the physical NIC backing the VMkernel adapter configured for vMotion must support Wake on LAN. Support can be checked from the Web Client - Host - Manage - Networking - Physical Adapters.

To test Wake on LAN, simply right click a host - Power - Enter Standby Mode, wait for it to shut down, then right click the host again and select Power On. If the host powers on successfully then Wake on LAN can be used for DPM.
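
This standby round trip can also be scripted. Below is a minimal pyVmomi sketch of the same test; the vCenter address, credentials and host name (vcsa.lab.local, esxi01.lab.local) are placeholders from my lab.

```python
from pyVim.connect import SmartConnect
from pyVim.task import WaitForTask
from pyVmomi import vim

# Placeholder vCenter details (pyVmomi 7.0+; older releases use SmartConnectNoSSL).
si = SmartConnect(host='vcsa.lab.local', user='administrator@vsphere.local',
                  pwd='VMware1!', disableSslCertValidation=True)
content = si.RetrieveContent()

# Find the host by name.
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.HostSystem], True)
host = next(h for h in view.view if h.name == 'esxi01.lab.local')

# Enter standby (the same operation DPM uses), then bring the host back.
WaitForTask(host.EnterStandbyMode_Task(timeoutSec=300))
WaitForTask(host.ExitStandbyMode_Task(timeoutSec=300))
print(host.runtime.powerState)  # expect 'poweredOn' if Wake on LAN works
```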

To configure IPMI / iLO, the network and credential details for the interface must be added. Web Client - Host - Manage - Settings - System - Power Management
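
The same IPMI details can be pushed with the HostSystem.UpdateIpmi API. A short sketch, reusing the connection and host lookup from the previous example; the BMC address, MAC and credentials are placeholders.

```python
# Reuses 'host' from the previous sketch; all BMC details below are placeholders.
ipmi = vim.host.IpmiInfo(bmcIpAddress='10.0.0.50',
                         bmcMacAddress='aa:bb:cc:dd:ee:ff',
                         login='ipmi-admin',
                         password='ipmi-secret')
host.UpdateIpmi(ipmi)
```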

Once you know the method the hosts will use to power on and off, DPM can be configured. Web Client - Host and Clusters - Cluster - Manage - Settings - vSphere DRS - Edit. From here select whether vCenter will apply DPM recommendations automatically by setting Automatic, or whether vCenter should only recommend that a host be powered down by setting this to Manual. Also set how aggressive you want DPM to be using the DPM threshold slider.
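
The same settings can be applied through the API. Here's a rough pyVmomi sketch that enables DPM on a cluster in automatic mode with the threshold at its default; Cluster01 and the connection details are placeholders.

```python
from pyVim.connect import SmartConnect
from pyVim.task import WaitForTask
from pyVmomi import vim

si = SmartConnect(host='vcsa.lab.local', user='administrator@vsphere.local',
                  pwd='VMware1!', disableSslCertValidation=True)
content = si.RetrieveContent()

# Find the cluster by name.
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.ClusterComputeResource], True)
cluster = next(c for c in view.view if c.name == 'Cluster01')

spec = vim.cluster.ConfigSpecEx(
    dpmConfig=vim.cluster.DpmConfigInfo(
        enabled=True,
        defaultDpmBehavior='automated',  # 'manual' = recommendations only
        hostPowerActionRate=3))          # DPM threshold, 1-5 (3 is the default)
WaitForTask(cluster.ReconfigureComputeResource_Task(spec, modify=True))
```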


Configure / Modify EVC mode on an existing DRS cluster

Enhanced vMotion Compatibility (EVC) is used to ensure vMotion compatibility across all hosts in a cluster: it ensures all hosts present the same CPU feature set to VMs even when the hosts have different CPUs. CPU compatibility masks are applied to individual VMs to hide certain CPU features, allowing vMotion across hosts with different CPU types.

If the CPUs match, EVC can be disabled, but if it needs to be enabled, different modes exist: first choose Intel or AMD, then pick the mode depending on the generation / features of the CPUs. VMware has a great KB article that covers the available modes in detail. Web Client - Host and Clusters - Cluster - Manage - Settings - VMware EVC - Edit
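
EVC can also be set via the cluster's EVC manager. A sketch, reusing the cluster lookup from the DPM example; 'intel-sandybridge' is just an example baseline, so list the supported modes first and pick the one that matches your oldest CPUs.

```python
# Reuses 'si' and 'cluster' from the DPM sketch above.
# List the EVC baselines this vCenter knows about.
for mode in si.capability.supportedEVCMode:
    print(mode.key)

evc_mgr = cluster.EvcManager()
WaitForTask(evc_mgr.ConfigureEvcMode_Task('intel-sandybridge'))  # example key
# If all CPUs already match, EVC can be disabled instead:
# WaitForTask(evc_mgr.DisableEvcMode_Task())
```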


Create DRS and DPM alarms

DPM and DRS can be monitored using event-based alarms in vCenter. For DPM you can create alarms that trigger on the following events:

  • DrsEnteringStandbyModeEvent
  • DrsEnteredStandbyModeEvent
  • DrsExitingStandbyModeEvent
  • DrsExitedStandbyModeEvent

vCenter also has a built-in alarm, Exit Standby Error, to alert if a host cannot exit standby mode.

Two alarms that can be used for Storage DRS are Storage DRS Recommendation and Storage DRS is not supported on a host.
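
These are easy to create in the Web Client (Alarms - New Alarm Definition, with the trigger set to the event), but they can be scripted too. A sketch that defines an alarm on one of the DPM events, reusing the connection from the earlier examples; the alarm name and yellow status are my own choices, and I believe the eventTypeId form is the full event class name, but verify it against the API reference.

```python
# Reuses 'si' and 'content' from the earlier sketches.
spec = vim.alarm.AlarmSpec(
    name='DPM host entered standby',
    description='Fires when DRS/DPM puts a host into standby mode',
    enabled=True,
    expression=vim.alarm.OrAlarmExpression(expression=[
        vim.alarm.EventAlarmExpression(
            eventTypeId='vim.event.DrsEnteredStandbyModeEvent',  # check ID form
            objectType=vim.HostSystem,
            status='yellow')]),  # gray / green / yellow / red
    setting=vim.alarm.AlarmSetting(reportingFrequency=0, toleranceRange=0))

# Defined at the root folder so it applies to every host in the inventory.
content.alarmManager.CreateAlarm(content.rootFolder, spec)
```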


Configure applicable power management settings for ESXi hosts

Power management settings can be configured on each host to tune power efficiency; the higher the performance policy, the lower the power efficiency. Web Client - Host - Manage - Settings - Hardware - Power Management
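
Host power policies can be queried and set through the host's power system. A sketch; the host name is a placeholder and the policy key should be taken from the printed list rather than assumed.

```python
# Reuses 'content' from the earlier sketches.
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.HostSystem], True)
host = next(h for h in view.view if h.name == 'esxi01.lab.local')

ps = host.configManager.powerSystem
print('current:', ps.info.currentPolicy.shortName)
for policy in ps.capability.availablePolicy:
    print(policy.key, policy.shortName)  # e.g. static, dynamic, low, custom

# Pick the key that maps to the policy you want from the list above.
ps.ConfigurePowerPolicy(key=2)
```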


Configure DRS cluster for efficient/optimal load distribution

Properly apply virtual machine automation levels based upon application requirements

Configuring DRS is very much the same as DPM: Web Client - Host and Clusters - Cluster - Manage - Settings - vSphere DRS - Edit. From here select whether vCenter will apply migration recommendations automatically by setting Fully Automated, whether it should only place VMs automatically at power-on by setting Partially Automated, or whether it should only make recommendations by setting Manual. Also set how aggressive you want DRS to be with the migration threshold. Individual VM automation levels can also be overridden here, which is useful when an application (a latency-sensitive VM, for example) should not be moved automatically.
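
The whole edit can be made in one reconfigure call, including a per-VM override. A rough sketch; Cluster01 and app-db01 are placeholders, and the override is just an example of keeping a sensitive application on manual.

```python
# Reuses 'si', 'content' and the 'cluster' lookup from the DPM sketch.
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True)
vm = next(v for v in view.view if v.name == 'app-db01')

spec = vim.cluster.ConfigSpecEx(
    drsConfig=vim.cluster.DrsConfigInfo(
        enabled=True,
        defaultVmBehavior='fullyAutomated',  # or 'partiallyAutomated' / 'manual'
        vmotionRate=3),                      # migration threshold, 1-5 (3 is the default)
    drsVmConfigSpec=[vim.cluster.DrsVmConfigSpec(
        operation='add',
        info=vim.cluster.DrsVmConfigInfo(
            key=vm,
            enabled=True,
            behavior='manual'))])  # this VM only gets recommendations
WaitForTask(cluster.ReconfigureComputeResource_Task(spec, modify=True))
```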


Administer DRS / Storage DRS

Some of the objectives in this section overlap, this being one of them; administering DRS is fairly simple and is shown above. For Storage DRS go to Web Client - Storage - SDRS Cluster - Manage - Settings - Storage DRS - Edit. Here choose the level of automation or leave it manual. Storage DRS can be configured to move VMs and disks around once a space utilization threshold has been exceeded or high I/O latency has been detected.
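
Storage DRS is configured against the datastore cluster (a StoragePod) through the StorageResourceManager. A sketch under the assumption that the SDRS cluster is called SDRS-Cluster01; the 80% space and 15 ms latency thresholds shown are the vSphere defaults.

```python
# Reuses 'si' and 'content' from the earlier sketches.
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.StoragePod], True)
pod = next(p for p in view.view if p.name == 'SDRS-Cluster01')

spec = vim.storageDrs.ConfigSpec(
    podConfigSpec=vim.storageDrs.PodConfigSpec(
        enabled=True,
        defaultVmBehavior='automated',  # or 'manual' for recommendations only
        ioLoadBalanceEnabled=True,
        spaceLoadBalanceConfig=vim.storageDrs.SpaceLoadBalanceConfig(
            spaceUtilizationThreshold=80),   # % full before moves are considered
        ioLoadBalanceConfig=vim.storageDrs.IoLoadBalanceConfig(
            ioLatencyThreshold=15)))         # ms latency before moves are considered
WaitForTask(content.storageResourceManager.ConfigureStorageDrsForPod_Task(
    pod=pod, spec=spec, modify=True))
```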


Create DRS / Storage DRS affinity and anti-affinity rules

DRS affinity rules can be configured to keep VMs together on the same host, known as VM-VM affinity rules, or to keep them apart on different hosts using VM-VM anti-affinity rules. It is also possible to set up VM-Host affinity and anti-affinity rules to pin a group of VMs to (or keep it away from) a group of hosts.

Setting up a VM-VM affinity rule is fairly simple - Web Client - Host and Clusters - Cluster - Manage - Settings - VM/Host Rules - Add. I create a new rule and give it a name; for an affinity rule I select Keep Virtual Machines Together, then add the VMs I want.

To create an anti-affinity rule I go through the same process but this time choose Separate Virtual Machines.
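
Both rule types can be pushed in one cluster reconfigure. A sketch using placeholder VM names (web01/web02 kept together, dc01/dc02 kept apart), reusing the connection and cluster lookup from earlier.

```python
# Reuses 'si', 'content' and 'cluster' from the earlier sketches.
def find_vm(name):
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    return next(v for v in view.view if v.name == name)

spec = vim.cluster.ConfigSpecEx(rulesSpec=[
    # VM-VM affinity: keep these VMs on the same host.
    vim.cluster.RuleSpec(operation='add', info=vim.cluster.AffinityRuleSpec(
        name='keep-web-tier-together', enabled=True,
        vm=[find_vm('web01'), find_vm('web02')])),
    # VM-VM anti-affinity: keep these VMs on different hosts.
    vim.cluster.RuleSpec(operation='add', info=vim.cluster.AntiAffinityRuleSpec(
        name='separate-dcs', enabled=True,
        vm=[find_vm('dc01'), find_vm('dc02')]))])
WaitForTask(cluster.ReconfigureComputeResource_Task(spec, modify=True))
```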

To create a VM-Host rule there are a few more steps. I must first create a VM group and a host group; the rules are then created using the groups. Web Client - Host and Clusters - Cluster - Manage - Settings - VM/Host Groups - Add. First I create a VM Group, give it a name and add the VM/VMs to it.

Then I go through the same process but this time create a Host Group and add the host/hosts.

Now the new groups are listed

I then add a rule in the same way as a VM-VM rule, but this time select Virtual Machines to Hosts from the Type drop-down box. This brings up the options to select the groups I have created and what sort of rule it will be.

Change the rule type to match the requirements (Should run on hosts in group / Must run on hosts in group, or the equivalent "not run" anti-affinity options).
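
The groups and the rule can also be created in a single reconfigure call. A sketch with placeholder names; mandatory=True gives a "Must run" rule, False a "Should run" one, and swapping affineHostGroupName for antiAffineHostGroupName turns it into an anti-affinity rule.

```python
# Reuses 'si', 'content', 'cluster' and find_vm() from the earlier sketches.
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.HostSystem], True)
host = next(h for h in view.view if h.name == 'esxi01.lab.local')

spec = vim.cluster.ConfigSpecEx(
    groupSpec=[
        vim.cluster.GroupSpec(operation='add', info=vim.cluster.VmGroup(
            name='prod-vms', vm=[find_vm('app01')])),
        vim.cluster.GroupSpec(operation='add', info=vim.cluster.HostGroup(
            name='licensed-hosts', host=[host]))],
    rulesSpec=[
        vim.cluster.RuleSpec(operation='add', info=vim.cluster.VmHostRuleInfo(
            name='pin-prod-to-licensed',
            enabled=True,
            mandatory=True,                # True = Must run, False = Should run
            vmGroupName='prod-vms',
            affineHostGroupName='licensed-hosts'))])
WaitForTask(cluster.ReconfigureComputeResource_Task(spec, modify=True))
```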


Configure and Manage vMotion / Storage vMotion

vMotion and Storage vMotion require a VMkernel interface; to create and manage VMkernel interfaces go to Web Client - Host - Manage - Networking - VMkernel Adapters. If you are studying for VCAP you will be familiar with this process. For vMotion, the vMotion traffic service must be enabled on the adapter.
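
Tagging an existing VMkernel adapter for vMotion can be scripted through the host's virtual NIC manager. A sketch assuming vmk1 already exists on the placeholder host:

```python
# Reuses 'content' from the earlier sketches.
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.HostSystem], True)
host = next(h for h in view.view if h.name == 'esxi01.lab.local')

vnic_mgr = host.configManager.virtualNicManager
# Equivalent to ticking the vMotion service box on vmk1 in the Web Client.
vnic_mgr.SelectVnicForNicType(nicType='vmotion', device='vmk1')

# Confirm which vmknics now carry vMotion traffic.
print(vnic_mgr.QueryNetConfig('vmotion').selectedVnic)
```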


Create and manage advanced resource pool configurations

Resource pools allow for flexible management of resources; they can be grouped into hierarchies to partition the available CPU and memory resources in a cluster. A resource pool can contain child resource pools, VMs or both.

When a new resource pool is created you can configure the following

  • Shares - Specify shares for this resource pool with respect to the parent’s total resources. Sibling resource pools share resources according to their relative share values bounded by the reservation and limit.
  • Reservation - Specify a guaranteed CPU or memory allocation for this resource pool. Defaults to 0. A nonzero reservation is subtracted from the unreserved resources of the parent (host or resource pool). The resources are considered reserved, regardless of whether virtual machines are associated with the resource pool.
  • Expandable Reservation - When the check box is selected (which it is by default), expandable reservations are considered during admission control. If you power on a virtual machine in this resource pool, and the combined reservations of the virtual machines are larger than the reservation of the resource pool, the resource pool can use resources from its parent.
  • Limit - Specify the upper limit for this resource pool’s CPU or memory allocation. You can usually accept the default (which is unlimited). To specify a limit, deselect the Unlimited check box.

In this example I will create a Production resource pool with 8000 CPU and memory shares and a Dev resource pool with 2000 CPU and memory shares to represent an 80/20 split. From the Actions menu on the cluster I select New Resource Pool, give the pool a name, leave limits and reservations alone, and manually set custom share values - 8000 for Production and 2000 for Dev.
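
The same 80/20 split can be built with pyVmomi by creating the pools under the cluster's root resource pool. A sketch, reusing the cluster lookup from earlier; the names, share values and unlimited limit (-1) mirror the example above.

```python
# Reuses 'si' and 'cluster' from the earlier sketches.
def pool_spec(shares):
    """CPU and memory allocation with custom shares, no reservation, no limit."""
    def alloc():
        return vim.ResourceAllocationInfo(
            reservation=0, expandableReservation=True, limit=-1,
            shares=vim.SharesInfo(level='custom', shares=shares))
    return vim.ResourceConfigSpec(cpuAllocation=alloc(), memoryAllocation=alloc())

root = cluster.resourcePool  # the cluster's invisible root resource pool
prod = root.CreateResourcePool(name='Production', spec=pool_spec(8000))
dev = root.CreateResourcePool(name='Dev', spec=pool_spec(2000))
```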

The absolute numbers don't mean anything on their own; each pool's entitlement is its share value divided by the total shares of its siblings, so 8000 / (8000 + 2000) gives Production 80%. Bear in mind shares only apply under contention, and be aware of the number of VMs you assign to a resource pool - using my example, if Production has 100 VMs sharing 80% of the resources but Dev has only 2 VMs sharing 20%, each Dev VM is entitled to far more than each Production VM and you won't get the desired effect.

There is a lot of good information out there regarding resource pools. For the exam you will most likely be given a set of requirements to implement; I showed a brief example, but if you are unfamiliar with resource pools go and lab it until you are.
