Storage and Networking Convergence
Last Updated: 07/20/2018

Last month at the HP Discover conference in Las Vegas, HP unveiled a datacenter switch offering all 40GbE ports with optional 100GbE interfaces. This ties in well with hypervisors that have added 10Gb Ethernet requirements to support the higher uplink I/O resulting from higher VM density. Here at Aventis Systems, we are seeing the leading edge of the 40GbE adoption wave. While HP's new product offering has the spotlight in the Ethernet arena, a lesser-known company is making waves with its InfiniBand products.

Mellanox is a small Israeli company making big waves in the 40Gb/56Gb networking space with its line of managed and unmanaged InfiniBand switches. What makes its offering interesting is InfiniBand's suitability for clustering VM hosts. Because InfiniBand guarantees in-order data delivery, the protocol carries significantly less overhead than Ethernet, dropping switch latency into the nanoseconds. This low latency, combined with the higher-speed links, makes InfiniBand ideally suited for interconnecting VM hosts.

The current editions of hypervisors from both VMware and Microsoft recommend 10GbE networking between hosts to support Storage vMotion. As customers investigate running storage appliances that use in-host spindles in place of expensive traditional storage arrays, networking becomes the critical point of data convergence: host NICs provide shared paths for both iSCSI/FCoE and application traffic.

Enter 40Gb InfiniBand switches, which alleviate this congestion by providing host-to-host networking many times faster than the aging 1Gb NIC interfaces still found on the majority of servers. This type of configuration allows storage appliance software such as VMware's VSAN to take advantage of all that extra bandwidth.
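To put "many times faster" in perspective, here is a rough back-of-the-envelope sketch. The 64 GiB migration payload and 80% effective link utilization are illustrative assumptions, not figures from this article:

```python
def transfer_seconds(payload_gib, link_gbps, efficiency=0.8):
    """Estimate time to move a payload over a link.

    payload_gib: payload size in GiB (illustrative assumption)
    link_gbps:   nominal link speed in Gb/s
    efficiency:  assumed effective utilization of the link
    """
    payload_bits = payload_gib * 8 * 1024**3      # GiB -> bits
    effective_bps = link_gbps * 1e9 * efficiency  # usable bits per second
    return payload_bits / effective_bps

# Compare moving a hypothetical 64 GiB VM image across link generations.
for label, gbps in [("1GbE", 1), ("10GbE", 10), ("40Gb InfiniBand", 40)]:
    print(f"{label:>16}: {transfer_seconds(64, gbps):7.1f} s")
```

Under these assumptions, the same migration that ties up a 1Gb NIC for over eleven minutes completes in well under half a minute on a 40Gb fabric, which is why moving cluster traffic off the 1Gb NICs pays off so quickly.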

vMotion, Storage vMotion, fault tolerance, and iSCSI/FCoE traffic get a fast lane that speeds up critical cluster operations. Meanwhile, server NICs get a break, left to supply dedicated paths for application data.

If your current infrastructure relies on multiple dual- or quad-port NIC arrangements to keep up with all of your VM traffic, you should invest in the next generation of networking devices. Not only has Mellanox leapfrogged the 10GbE switches from other OEMs in speed, throughput, and latency, but (and I've saved the best for last) its InfiniBand switches are also remarkably undervalued at their current price points.

Related Articles
 > Consensus On Hybrid Cloud
 > The Desktop Peacefully Passes
 > Tiered Storage
 > Big Data? Big Whoop!
 > Preparing To Switch
 > The Next vWave
 > Maximize Utility
 > Destruction Via Encryption
 > Ubiquitous Communication Through WebRTC
 > Windows XP, Target Data Breach, and Cautionary Tales For CIO Hopefuls
 > Windows XP PC's a.k.a. The Walking Dead
 > Top Questions for Your Next Storage Vendor
 > If You Don’t Have Solid-State Drives, You’re Missing Out
 > Is Your Virtual Environment Running on the Right Hardware?