Networking will take cloud to new levels of efficiency
There are a few parameters that IT managers need to consider this year when choosing the right networking for their next-generation deployments.
The first, according to Motti Beck, Director of Enterprise Datacenter Marketing at Mellanox Technologies, is networking speed.
"Although 10GbE is the most popular speed today, the industry has already started to realise that 25GbE provides higher efficiency and is fast becoming its successor. 25GbE will become the new 10GbE in 2017," states Beck.
The main reason behind this prediction is the much higher bandwidth that flash-based storage, such as NVMe, SAS and SATA SSDs, can provide. In addition, a single 25GbE port can deliver almost the same bandwidth as three 10GbE ports.
"This, by itself, can cut the networking cost threefold, enabling the use of a single switch port, a single NIC port and a single cable instead of three of each," continues Beck. "As IT managers start looking at budget, they know this is where they should be allocating their networking rands."
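As a back-of-the-envelope check on the consolidation claim, the arithmetic can be sketched as follows; the per-link component counts mirror Beck's "one switch port, one NIC port and one cable instead of three of each", while any cost interpretation beyond counting components is an assumption of this sketch:

```python
# Back-of-the-envelope comparison: three 10GbE links vs one 25GbE link.
# Component counts follow the article; no vendor pricing is assumed.

links_10gbe = 3
bw_10gbe = 10   # Gb/s per 10GbE link
bw_25gbe = 25   # Gb/s per 25GbE link

# Aggregate bandwidth: 30 Gb/s over three links vs 25 Gb/s over one,
# hence a single 25GbE port can "almost" replace three 10GbE ports.
total_bw_10gbe = links_10gbe * bw_10gbe
print(f"3 x 10GbE aggregate: {total_bw_10gbe} Gb/s vs 1 x 25GbE: {bw_25gbe} Gb/s")

# Components per server: switch ports + NIC ports + cables.
components_10gbe = links_10gbe * 3  # 3 switch ports + 3 NIC ports + 3 cables
components_25gbe = 1 * 3            # 1 switch port + 1 NIC port + 1 cable
print(f"Components: {components_10gbe} vs {components_25gbe} "
      f"({components_10gbe // components_25gbe}x reduction)")
```

The threefold reduction in ports, NICs and cables is where the "cut the networking cost threefold" figure comes from.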
He points to further examples where higher bandwidth enables higher efficiency. This will happen more and more in VDI deployments, where a 25GbE solution cuts the cost per virtual desktop in half, and in new infrastructure built for modern media and entertainment requirements, where 10GbE cannot deliver the additional speed needed to support the number of streams required for today's high-definition resolutions.
"As IT managers revisit those all-important 2017 budgets, ROI will become more and more important as companies are increasingly unwilling to accept the performance-cost trade-off. Essentially, this year, IT managers and their companies want to have their networking cake and eat it too," says Beck.
However, deploying higher networking speeds is just one way that IT managers can take their cloud efficiency to the next level. Beck highlights that as IT managers consider their options, they should also use networking products that offload specific networking functions from the CPU to the IO controller itself. "By choosing this solution, more CPU cycles are freed to run the applications, which accelerates job completion and enables using fewer CPUs, or CPUs with fewer cores. Ultimately, in 2017, the overall licence fees for the OS and the hypervisor will be lower – both, of course, will increase the overall cloud efficiency," he says.
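The licensing argument can be illustrated with hypothetical numbers; the overhead share, core count and per-core fee below are assumptions for the sake of the example, not figures from Beck:

```python
# Hypothetical illustration of how offloading network processing to the
# IO controller can lower per-core licence fees. All figures are assumed.
import math

cores_without_offload = 32      # cores the workload needs today (assumed)
network_overhead_share = 0.25   # CPU fraction spent on packet processing (assumed)
licence_fee_per_core = 100      # per-core, per-year licence cost (assumed)

# Offloading the network stack frees that share of CPU, so fewer cores
# are needed to run the same applications.
cores_with_offload = math.ceil(cores_without_offload * (1 - network_overhead_share))

saving = (cores_without_offload - cores_with_offload) * licence_fee_per_core
print(f"Cores needed: {cores_without_offload} -> {cores_with_offload}, "
      f"annual licence saving: {saving}")
```

Under these assumed numbers, a quarter of CPU time reclaimed from packet processing translates directly into eight fewer licensed cores.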
There are several network functions that have already been offloaded to the IO controller. One of the most widely used is RDMA (Remote Direct Memory Access), which offloads the transport layer to the NIC instead of running the heavy, CPU-demanding TCP/IP protocol stack on the CPU. "This is the main reason why IT managers should consider deploying RoCE (RDMA over Converged Ethernet)," continues Beck.
"Using RoCE makes data transfers more efficient and enables fast data movement between servers and storage without involving the server's CPU. Throughput is increased, latency reduced, and CPU power freed up for running the real applications. RoCE is already widely used for efficient data transfer in render farms and in large cloud deployments such as Microsoft Azure. Moreover, it has proven superior efficiency versus TCP/IP and thus will be utilised in 2017 more than ever before," maintains Beck.
Offloading overlay network technologies such as VXLAN, NVGRE and the new Geneve standard onto the NIC, or VTEP offload on the switch, enables another significant cloud acceleration. "This is another typical stateless networking function that, when offloaded, significantly shortens the job's execution time. A good example is the comparison that Dell published, running typical enterprise applications over its PCS appliance with and without NVGRE offloading. It shows that offloading accelerated the applications by more than 2.5 times on the same system, which, of course, increased the overall system efficiency by the same amount," adds Beck.
There are several other offloads supported by networking components, such as the offloading of security functions like IPsec, or of erasure coding, which is used in storage systems.
In addition, a couple of IO solutions providers have already announced that their next-generation products will include new offload functions, such as vSwitch offloading, which will accelerate virtualisation, and NVMe-oF offload. Last year, Mellanox announced its ConnectX-5 solution, the next generation of 100G InfiniBand and Ethernet smart interconnect adapter. Meanwhile, new networking capabilities have already been added to the leading OSes and hypervisors.
At VMworld 2016, VMware announced support for 10GbE, 25GbE, 40GbE, 50GbE and 100GbE networking speeds, as well as VM-to-VM communication over RoCE, in vSphere 6.5. Microsoft, at its Ignite 2016 conference, announced support for speeds of up to 100GbE, and recommends running production deployments of Storage Spaces Direct over RoCE. It has also published superior SQL Server 2016 performance results when running over a network that supports the highest speeds and RoCE. These capabilities have long been included in Linux as well.
Throughout Africa, Networks Unlimited distributes Mellanox's leading range of high-performance, end-to-end interconnect solutions for data centre servers and storage systems. "The technology is available in our region," says Anton Jacobsz, managing director at Networks Unlimited. "Now, it is up to the continent's IT managers and architects to choose the ideal networking speeds and offloads, enabling their cloud efficiency to soar this year."