Last Updated: 07/20/2018

Is Your Virtual Environment Running on the Right Hardware?

by Ben Yampolsky

By now, many of us have firsthand experience with the benefits of virtualization. Whether eliminating environment complexity, increasing flexibility, maximizing resources, or saving money, virtualization has become the standard rather than the cutting edge. Before you jump on the virtualization bandwagon, though, there are a few hardware choices to make to ensure the foundation is strong. Future-proofing your hardware will go a long way toward saving long IT hours when production is on the line. Here’s how to meet the infrastructure challenge that virtualization throws your way.

#1: RAM’s the Name of the Game

In modern hardware environments, where the current generation of CPU sports double the cores of the last, the first physical resource you will run out of is RAM. System memory is one of the key considerations when selecting the underlying host hardware, and it will more often than not end up as the limiting factor for the total number of VMs your environment can support. It’s no wonder that, in the midst of its licensing debacle of 2011 - 2012, VMware attempted to charge based on the amount of RAM allocated to VMs (the infamous vRAM entitlement) - before the masses rebelled.

We recommend workhorse systems from Dell or HP that use DDR3 ECC RAM, such as Dell’s 11th-generation (and later) or HP’s G6/G7 servers. To determine the total amount of system memory to shoot for, use the following rule of thumb: 6 - 8GB of memory per Intel CPU core, or 2 - 4GB per AMD CPU core. For example, if the server platform you are looking at supports dual 6-core Intel CPUs, multiply 2 x 6 x 8 to arrive at 96GB of system RAM.
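If you would rather script that arithmetic, here is a minimal Python sketch of the sizing rule above; the function name and per-core figures simply restate the rule of thumb, not vendor-published numbers.

```python
# Back-of-envelope host RAM sizing based on the rule of thumb above.
# The per-core figures are this article's guidance, not vendor specs.

GB_PER_CORE = {"intel": (6, 8), "amd": (2, 4)}  # (low, high) GB per core

def host_ram_gb(sockets: int, cores_per_socket: int, vendor: str) -> tuple:
    """Return the (minimum, maximum) recommended system RAM in GB."""
    low, high = GB_PER_CORE[vendor.lower()]
    total_cores = sockets * cores_per_socket
    return total_cores * low, total_cores * high

# Example from the article: dual 6-core Intel CPUs -> aim for up to 96GB.
low, high = host_ram_gb(sockets=2, cores_per_socket=6, vendor="intel")
print(f"Target system RAM: {low} - {high}GB")  # Target system RAM: 72 - 96GB
```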

#2: Compute This

Intel and AMD have developed technologies for optimal CPU performance in virtualized environments. Examples include hardware-assisted CPU virtualization (Intel VT-x and AMD-V) and Memory Management Unit (MMU) virtualization (Intel EPT and AMD RVI). Using these technologies, VMs can share CPUs the way vacationers share timeshares. Keep in mind that Intel CPUs use hyperthreading, which presents 2 logical CPUs per physical CPU core. A good rule of thumb: allow 1 - 2 VMs per physical CPU core for compute-intensive VMs, and 2 - 4 VMs per physical CPU core for VMs with intermittent use or low workload averages. For example, if you are virtualizing server operating systems that will support multiple users, a physical host with dual 6-core Intel CPUs could support a maximum of 2 x 6 x 2 x 2 = 48 VMs.
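Here is a quick sketch of that capacity math in Python; the factors come from the rule of thumb above, and the vms_per_logical_cpu knob is an assumption you should tune to your own workload profile.

```python
# Rough VM-per-host ceiling based on the rule of thumb above.
# Assumes Intel hyperthreading (2 logical CPUs per physical core);
# vms_per_logical_cpu is this article's guidance, not a hard spec.

def max_vms(sockets: int, cores_per_socket: int,
            hyperthreading: bool = True,
            vms_per_logical_cpu: int = 2) -> int:
    """Estimate the maximum VM count a host can comfortably support."""
    logical_cpus = sockets * cores_per_socket * (2 if hyperthreading else 1)
    return logical_cpus * vms_per_logical_cpu

# Example from the article: dual 6-core Intel CPUs with hyperthreading.
print(max_vms(sockets=2, cores_per_socket=6))  # 48
```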

#3: Networking Isn’t Just for Socialites

Networking hardware is in a transitional state - the majority of networks still run 1Gbps hardware, while many newer servers and networking products have moved to 10Gbps or faster. The key to deciding whether the faster hardware is a better investment is to look at the newer software-layer storage technologies flooding the market from Microsoft, VMware, and other vendors. These OEMs are banking on the new 10Gbps standard to enable efficient network multi-tenancy and interchangeable network storage endpoints. These technologies unlock a new world of storage options and address a key bottleneck of high-capacity VM servers: how to squeeze so many data streams onto a single physical pipe.
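To see where a single 1Gbps pipe becomes the bottleneck, here is a rough back-of-envelope check; the VM count and per-VM throughput below are illustrative guesses, not measurements - plug in your own monitoring data.

```python
# Quick sanity check: will the host's uplink handle the aggregate VM traffic?

def uplink_utilization(vm_count: int, avg_mbps_per_vm: float,
                       link_gbps: float) -> float:
    """Return estimated utilization of a single physical uplink (0.0 - 1.0)."""
    demand_mbps = vm_count * avg_mbps_per_vm
    return demand_mbps / (link_gbps * 1000)

# 48 VMs averaging 30Mbps each saturate a 1Gbps pipe but fit easily in 10Gbps.
print(f"1Gbps:  {uplink_utilization(48, 30, 1):.0%}")   # 1Gbps:  144%
print(f"10Gbps: {uplink_utilization(48, 30, 10):.0%}")  # 10Gbps: 14%
```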

#4: It’s Not a Bug Spray

When moving massive amounts of data internally from RAM to storage, a high-end RAID controller will pay dividends in overall server performance. Generous RAID cache, combined with technologies that take advantage of storage tiering - like CacheCade - will often double overall server performance and deliver a significantly higher contribution per dollar spent than virtually any other system component.

Many VM workloads are very sensitive to the latency of I/O operations. It is therefore crucial to spread I/O over multiple available paths to storage, whether that be multiple HBAs to external storage or a higher number of spinning disks in a high-performance RAID configuration such as RAID 10. Capacity needs will vary greatly depending on VM applications, but adding capacity later is easier than improving the performance of the underlying storage subsystem once the host is deployed. Don’t forget to account for future data growth by figuring in a minimum of 20 - 30% additional usable capacity for VM files and snapshots. Figure on a mix of flash, SAS, and midline drives in your storage lineup to properly match your I/O workload with the underlying spindle or flash performance.
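Here is a minimal sketch of that headroom math, assuming RAID 10 (mirrored pairs); the disk counts, drive sizes, and data volumes are examples only.

```python
# Capacity planning with the 20 - 30% headroom the article recommends.
# RAID 10 halves raw capacity; the figures below are illustrative.

def usable_after_raid10(disks: int, disk_tb: float) -> float:
    """Usable TB from a RAID 10 set (mirrored pairs, so half the raw space)."""
    return disks * disk_tb / 2

def required_capacity(vm_data_tb: float, headroom: float = 0.30) -> float:
    """Capacity to provision, including headroom for growth and snapshots."""
    return vm_data_tb * (1 + headroom)

need = required_capacity(vm_data_tb=8)           # 8TB of VM files -> 10.4TB
have = usable_after_raid10(disks=24, disk_tb=1)  # 24 x 1TB drives -> 12TB
print(f"Need {need:.1f}TB, have {have:.1f}TB usable")
```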


