04 Sep 2010 by Simon Greaves
Memory in a virtualised environment is split into three types.
Virtual memory: allocated by applications through system calls to the operating system. This works at the application level in the same way on a virtual machine as on a physical machine.
Physical memory: managed at the guest OS level. In simplistic terms the OS keeps an 'allocated' list and a 'free' list; when an application asks for memory, the OS moves pages from the free list to the allocated list.
Machine memory: the memory physically installed in the host, managed at the hypervisor level.
A VM starts with no machine memory allocated; the hypervisor allocates machine memory to back guest physical memory as the guest touches it. When the guest OS later frees that memory it simply goes back on the guest's free list, and the machine memory backing it is not automatically returned to the hypervisor.
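A minimal sketch of that lifecycle as a toy Python model (the class and page counts are invented for illustration and are not a VMware API): the hypervisor backs guest physical pages on first touch, and a guest-side free only updates the guest's own lists.

    # Toy model of guest physical memory backed lazily by machine memory.
    # All names here (GuestMemory, touch, release) are illustrative only.

    class GuestMemory:
        def __init__(self, total_pages):
            self.free = set(range(total_pages))   # guest free list
            self.allocated = set()                # guest allocated list
            self.backed = set()                   # pages the hypervisor has backed

        def touch(self, page):
            """Guest allocates and writes a page; the hypervisor backs it on first touch."""
            self.free.discard(page)
            self.allocated.add(page)
            self.backed.add(page)                 # machine memory assigned here

        def release(self, page):
            """Guest frees a page: it moves back to the guest free list,
            but the machine memory backing it is NOT returned to the hypervisor."""
            self.allocated.discard(page)
            self.free.add(page)
            # self.backed is deliberately left unchanged


    mem = GuestMemory(total_pages=4)
    mem.touch(0)
    mem.release(0)
    print(len(mem.backed))   # still 1: the backing persists after the guest free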
A more in-depth way of looking at this is that memory is generally administered by what is known as software-based memory virtualisation.
Each virtual machine's (VM) memory is controlled by its virtual machine monitor (VMM).
The VMM for each VM maintains a mapping from the guest OS's memory pages, called physical pages, to the memory pages of the underlying host, called machine pages.
Each VM sees a contiguous, zero-based addressable memory space; however, the underlying machine memory may not be contiguous, because the host may be running more than one VM at a time.
The VMM intercepts virtual machine instructions that manipulate guest operating system memory management structures so that the actual memory management unit (MMU) on the processor is not updated directly by the virtual machine.
The ESX/ESXi host maintains the virtual-to-machine page mappings in a shadow page table that is kept up to date with the physical-to-machine mappings (maintained by the VMM).
The translation lookaside buffer (TLB) on the processor caches these direct virtual-to-machine mappings, read from the shadow page tables.
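To make the two layers of mapping concrete, here is a toy Python sketch (the names and page numbers are invented; this is a conceptual model, not how the VMM is actually implemented): the guest maintains virtual-to-physical mappings, the VMM maintains physical-to-machine mappings, and the shadow table precomputes the combined virtual-to-machine translation for the hardware MMU and TLB to use.

    # Conceptual model of software-based memory virtualisation.
    # guest_pt:  guest virtual page  -> guest "physical" page (maintained by the guest OS)
    # pmap:      guest physical page -> machine page          (maintained by the VMM)
    # shadow_pt: guest virtual page  -> machine page          (what the MMU actually walks)

    guest_pt = {0x10: 0x2, 0x11: 0x3}     # guest page table entries
    pmap     = {0x2: 0x7A, 0x3: 0x7B}     # physical-to-machine mapping

    def rebuild_shadow(guest_pt, pmap):
        """Recompute the shadow page table after the guest changes its page tables.
        Keeping this synchronised is the overhead of software virtualisation."""
        return {vpn: pmap[ppn] for vpn, ppn in guest_pt.items()}

    shadow_pt = rebuild_shadow(guest_pt, pmap)
    print(hex(shadow_pt[0x10]))   # 0x7a: a direct virtual-to-machine translation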
Some CPUs, such as AMD SVM-V and the Intel Xeon 5500 series, support hardware-assisted memory virtualisation. These CPUs provide two layers of page tables: one for the virtual-to-physical translations and one for the physical-to-machine translations.
Although hardware-assisted memory virtualisation eliminates the overhead associated with software virtualisation, namely keeping the shadow page tables synchronised with the guest page tables, the TLB miss latency is significantly higher. As a result, workloads with little page table activity see no detriment from software virtualisation, whereas workloads with a lot of page table activity are likely to benefit from hardware assistance.
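The trade-off can be sketched in the same toy model (again purely illustrative, with invented values): with two layers of hardware page tables there is no shadow table to keep in sync, but a TLB miss has to walk both layers instead of one.

    # Conceptual contrast between the two approaches on a TLB miss.
    # The tables reuse the toy guest_pt/pmap/shadow_pt idea above; all numbers are invented.

    guest_pt  = {0x10: 0x2}
    pmap      = {0x2: 0x7A}
    shadow_pt = {0x10: 0x7A}   # kept in sync by the VMM (software virtualisation)

    def tlb_miss_software(vpn):
        # One lookup: the shadow table already holds virtual-to-machine translations.
        return shadow_pt[vpn]

    def tlb_miss_hardware(vpn):
        # Two lookups (a nested walk): guest table first, then the second-level table.
        # No shadow table to maintain, but the miss itself costs more.
        return pmap[guest_pt[vpn]]

    assert tlb_miss_software(0x10) == tlb_miss_hardware(0x10) == 0x7A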
Transparent page sharing is on-the-fly de-duplication of memory: the hypervisor scans for identical memory pages and keeps only one copy, giving the impression that more memory is available to the virtual machines. You can set the scan rate with Mem.ShareScanTime and Mem.ShareScanGHz in the advanced settings. Disable it for an individual VM by setting sched.mem.pshare.enable to false in that VM's advanced configuration. Use resxtop or esxtop to view the PSHARE field in the memory view of interactive mode.
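Here is a hedged sketch of the idea behind page sharing, as a toy Python model with invented page contents (not the ESX implementation, which also compares candidate pages bit-for-bit and marks shared pages copy-on-write):

    import hashlib

    # Toy model of transparent page sharing: pages with identical content are
    # collapsed to a single backing copy. All data here is invented for illustration.
    pages = {
        ("vm1", 0): b"\x00" * 4096,                    # zero page in VM1
        ("vm2", 0): b"\x00" * 4096,                    # identical zero page in VM2
        ("vm2", 1): b"guest data" + b"\x00" * 4086,    # unique page
    }

    shared = {}      # content hash -> single backing copy
    mapping = {}     # (vm, page) -> content hash it is backed by

    for key, content in pages.items():
        digest = hashlib.sha1(content).hexdigest()
        # A real implementation also compares the full contents to rule out
        # hash collisions before sharing the page.
        shared.setdefault(digest, content)
        mapping[key] = digest

    print(f"{len(pages)} guest pages backed by {len(shared)} machine pages")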
When creating resource pools, the system uses admission control to make sure that you cannot reserve resources that are not available.
A pool's reserved resources are considered committed regardless of whether any VMs are associated with the pool.
The pool can use resources from its parent or ancestors if the Expandable Reservation check box is selected.
When you move a VM into a resource pool, its existing reservation and limit do not change. However, its shares adjust to reflect the total number of shares in use in the new resource pool.
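As an illustration of the admission-control check, here is a toy Python sketch with invented capacities (an expandable reservation can also borrow unreserved capacity from the parent or ancestors):

    # Toy admission control for resource pool reservations (all numbers are invented).

    def can_reserve(request_mb, pool_reservation_mb, pool_reserved_mb,
                    expandable=False, parent_unreserved_mb=0):
        """Return True if a new reservation of request_mb fits in the pool.
        With an expandable reservation the pool may also borrow unreserved
        capacity from its parent (or ancestors)."""
        available = pool_reservation_mb - pool_reserved_mb
        if expandable:
            available += parent_unreserved_mb
        return request_mb <= available

    # Pool reserves 4096 MB, of which 3072 MB is already reserved by VMs.
    print(can_reserve(2048, 4096, 3072))                            # False: only 1024 MB left
    print(can_reserve(2048, 4096, 3072,
                      expandable=True, parent_unreserved_mb=2048))  # True: borrows from parent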
Further information can be found in the vSphere Resource Management Guide.