Knowledge
- Compare and contrast virtual and physical hardware resources
- Identify VMware memory management techniques
- Identify VMware CPU load balancing techniques
- Identify prerequisites for Hot Add features
Skills and Abilities
- Calculate available resources
- Properly size a Virtual Machine based on application workload
- Configure large memory pages
- Understand appropriate use cases for CPU affinity
Tools
- vSphere Resource Management Guide
- vSphere Command-Line Interface Installation and Scripting Guide
- Understanding Memory Resource Management in VMware® ESX™ Server 4.1
- VMware vSphere™: The CPU Scheduler in VMware® ESX™ 4.1
- vSphere Client
- Performance Charts
- vSphere CLI
- resxtop/esxtop
Notes
Identify VMware memory management techniques
ESX reclaims and manages guest memory through transparent page sharing, ballooning, memory compression (new in ESX/ESXi 4.1), and hypervisor-level swapping; the paper below covers each technique in depth.
http://www.vmware.com/files/pdf/perf-vSphere-memory_management.pdf
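To see these techniques in action, the memory screen in resxtop/esxtop shows a host-wide counter for each reclamation mechanism (the --server value below is a placeholder):

    # Connect to a host with resxtop (vSphere CLI), then press 'm' for the memory screen
    resxtop --server esx01.lab.local
    # In the screen header, watch:
    #   PSHARE/MB - transparent page sharing savings
    #   MEMCTL/MB - memory reclaimed by the balloon driver
    #   ZIP/MB    - compressed memory (ESX/ESXi 4.1)
    #   SWAP/MB   - hypervisor-level swap usage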
Identify VMware CPU load balancing techniques
See the VMware vSphere: The CPU Scheduler in VMware ESX 4.1 whitepaper and page 73 of the vSphere Resource Management Guide. From the Guide:
NUMA Systems
In a NUMA (Non-Uniform Memory Access) system there are multiple NUMA nodes, each consisting of a set of processors and their local memory. The NUMA load balancer in ESX assigns each virtual machine a home node, and the virtual machine's memory is allocated from that home node. Because the virtual machine rarely migrates away from its home node, its memory accesses are mostly local. Note that all vCPUs of the virtual machine are scheduled within the home node.
If a virtual machine's home node is more heavily loaded than others, migrating it to a less loaded node generally improves performance, even though it initially suffers remote memory accesses. Memory may also be migrated to the new node to restore locality; this happens gradually, because copying memory has high overhead.
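On NUMA hardware, the 4.1 esxtop memory screen exposes per-VM NUMA statistics that make this behavior visible; a quick sketch of what to check:

    # In resxtop/esxtop press 'm' for memory, then 'f' and enable the NUMA STATS field
    #   NHN  - the VM's current home node
    #   NMIG - number of NUMA migrations since power-on
    #   N%L  - percentage of the VM's memory that is local
    # A sustained N%L well below 100 means the VM is paying for remote memory access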
Hyperthreaded Systems
Hyperthreading enables concurrently executing instructions from two hardware contexts in one processor. Although it may achieve higher performance from thread-level parallelism, the improvement is limited as the total computational resource is still capped by a single physical processor. Also, the benefit is heavily workload dependent.
A wholly idle processor (both hardware threads idle) provides more CPU resources than a single idle hardware thread whose sibling thread is busy. The ESX CPU scheduler therefore prefers the former as the destination of a migration.
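A related per-VM knob is the hyperthreaded core sharing mode (VM Settings > Resources > Advanced CPU in the vSphere Client), which controls whether a VM's vCPUs may share a physical core with other contexts; valid values are any (the default), none, and internal. The equivalent .vmx entry looks like this:

    sched.cpu.htsharing = "none"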
Identify prerequisites for Hot Add features
A couple of good blog posts by David Davis and Jason Boche outline what Hot Add/Hot Plug is and how to use it. Guest OS support for using these features without a reboot is extremely limited. On the Microsoft side, Windows Server 2008 Datacenter is necessary to support both features without a reboot, while Windows Server 2008 Enterprise does not require a reboot for hot adding memory only. Removing hot-added memory or hot-plugged CPUs requires a reboot on all Windows guest operating systems.
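As for the prerequisites themselves: virtual hardware version 7, a guest OS that supports the feature, and the VM must be powered off when the feature is enabled (VM Settings > Options > Memory/CPU Hotplug in the vSphere Client). Enabling the features writes .vmx entries like these:

    mem.hotadd = "TRUE"
    vcpu.hotadd = "TRUE"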
Properly size a Virtual Machine based on application workload
Most physical machines do not need the 8 cores and 16 GB of memory (or thereabouts) assigned to them. When bringing a physical system over, take note of what is assigned, measure what is actually used, and allocate for the real workload.
For memory, make sure there is enough to run the applications on the server. Avoid memory swapping, but also avoid allocating more memory than is needed: overallocation increases the virtual machine's memory overhead, taking away resources that other virtual machines could use.
The same concept applies when sizing the number of processors. If the application(s) cannot utilize more than two CPUs, there is little benefit in assigning more than two vCPUs, and idle vCPUs add scheduling overhead.
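One way to sanity-check vCPU sizing after the fact is CPU ready time in resxtop/esxtop (the --server value is a placeholder):

    # Press 'c' for the CPU screen and 'e' to expand a VM's worlds
    resxtop --server esx01.lab.local
    # %RDY is the time a vCPU was ready to run but could not get scheduled;
    # consistently high %RDY (rule of thumb: over ~10% per vCPU) on a multi-vCPU VM
    # suggests it has more vCPUs than the host or workload can use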
Configure large memory pages
http://www.vmware.com/pdf/Perf_Best_Practices_vSphere4.1.pdf
Large Memory Pages for Hypervisor and Guest Operating System
In addition to the usual 4KB memory pages, ESX also makes 2MB memory pages available (commonly referred to as “large pages”). By default ESX assigns these 2MB machine memory pages to guest operating systems that request them, giving the guest operating system the full advantage of using large pages. The use of large pages results in reduced memory management overhead and can therefore increase hypervisor performance.
If an operating system or application can benefit from large pages on a native system, that operating system or application can potentially achieve a similar performance improvement on a virtual machine backed with 2MB machine memory pages. Consult the documentation for your operating system and application to determine how to configure them each to use large memory pages.
More information about large page support can be found in the performance study entitled Large Page Performance (available at http://www.vmware.com/files/pdf/large_pg_performance.pdf).
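On the host side, large page backing is controlled by the Mem.AllocGuestLargePage advanced setting (1 = enabled, which is the default). A sketch of checking and changing it with the vSphere CLI (the --server value is a placeholder):

    # Query the current value
    vicfg-advcfg --server esx01.lab.local -g Mem.AllocGuestLargePage
    # Disable large page backing (set it back to 1 to re-enable)
    vicfg-advcfg --server esx01.lab.local -s 0 Mem.AllocGuestLargePage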
Enabling Large Page Support in Windows Server 2003
To enable large page support in Windows Server 2003, the system administrator must grant appropriate users the privilege to “Lock pages in memory.” This privilege is not enabled by default when Windows is installed.
To grant this privilege, take the following steps:
- Choose Start > Control Panel > Administrative Tools > Local Security Policy.
- In the left pane of the Local Security Settings window, expand Local Policies and choose User Rights Assignment.
- In the right pane of the Local Security Settings window, choose Lock pages in memory and choose Action > Properties. The Local Security Setting dialog box opens.
- In the Local Security Setting dialog box, click Add User or Group.
- Enter the appropriate user name, then click OK to close the Select Users or Groups dialog box.
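The privilege takes effect at the user's next logon. Assuming whoami.exe is present on the system, a quick way to verify it was granted is:

    REM Run as the user who was granted the right; look for SeLockMemoryPrivilege
    whoami /priv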
Understand appropriate use cases for CPU affinity
- A CPU-intensive app can benefit from affinity that moves it away from CPU 0, which the ESX Service Console uses (see the .vmx sketch below)
- A good example of an application that requires this is Cisco's Unity
- No HA if one of the VMs has CPU affinity set.
A must-read on this topic is this article from Duncan Epping.
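For reference, scheduling affinity is set per VM under VM Settings > Resources > Advanced CPU, which results in a .vmx entry like the following (the CPU numbers are example values):

    sched.cpu.affinity = "2,3"

Keep in mind that affinity ties the VM to its host (it blocks vMotion), and the option is not available for VMs in a DRS cluster.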