Eyal Waldman, CEO, Mellanox Technologies

Mellanox has unveiled its latest line-up of networking technology and looks set to shake up the market as it sets out to supercharge the network.

The four main components inside any computing environment are CPU, memory, storage and network. CPU performance has taken a significant jump recently with lower power consumption, higher core counts and greater clock speeds. Memory is getting both faster and denser. Storage has seen a steady move from traditional spinning disk to flash-based systems. The network, however, has struggled to keep pace.

To address this, Mellanox introduced two new product families at the launch, which took place at One World Observatory in New York. The first of these is the Spectrum switch family and the second is the next-generation ConnectX-4 Lx.

Mellanox Spectrum built for speed

The Mellanox Spectrum is an Open Ethernet-based switch capable of delivering network speeds of up to 100 Gigabits/second. The target for this switch is datacentres, hyperscale cloud environments and storage systems.

With the introduction of the Spectrum, Mellanox claims to be the only company capable of delivering 10, 25, 40, 50 and 100 Gigabits/second across its network devices. This difference is important because the move to support 100 Gigabits/second puts Mellanox ahead of its rivals in this space. When asked how far ahead, Eyal Waldman, CEO of Mellanox Technologies, told press and analysts: “We are now a product generation ahead of everyone.”

One of the challenges of increasing network speed over the last few generations of technology has been rising power consumption. Waldman said: “We are delivering this performance at a lower power consumption than any of our competitors. We are able to deliver 100GbE at just 135 watts. Our competitors are still at 200 watts. This gives us a significant advantage inside the datacentre.”

Another key benefit called out by Waldman is that Mellanox has decided to give customers the choice of copper as well as fibre optics. “We can support copper cables of up to 6m. This lowers the cost to customers and improves resilience over the need for fibre optics. With copper we no longer need additional components and avoid the fragility of fibre optic cables inside the datacentre.”

The press release lists a range of other features delivered by the Spectrum:

  • Compliant with 100GbE, 40GbE and 10GbE. Full support for the 25G and 50G Ethernet Consortium specification. Full support for 56GbE operation
  • Integrated 32 100GbE ports, 32 40/56GbE ports, 64 10GbE ports, 64 25GbE ports and 64 50GbE ports.
  • World’s first non-blocking 100GbE switch across all packet sizes
  • Lowest power consumption: 135W at full 100GbE line rate on all ports.
  • Supports twice the number of Virtual Machines versus competition
  • Sub 300ns cut-through port-to-port latency in full load scenarios.
  • Overlay gateway and tunneling support, including VXLAN, NVGRE, Geneve, MPLS, IPinIP
  • Integrated, dynamically-configurable packet buffer
  • Embedded low-latency RoCEv2 support for high performance storage and compute fabrics

Mellanox supporting open standards

For a number of people, the most interesting part of this announcement is not the speed or lower power consumption that the Spectrum delivers. Waldman announced that the Spectrum family are Open Ethernet switches. This is a very disruptive move and one that could pay off handsomely for Mellanox.

Those outside the network team tend to look at Ethernet and assume the network is already an open environment. While you can easily mix and match Ethernet hardware, the problem is management and effectiveness, because much of the intelligence in the network is implemented in software rather than in the physical interconnect. The result is often a single-vendor network architecture, with all the attendant risks of being locked in.

By delivering Open Ethernet support with the Spectrum, Mellanox is hooking into a move we have already seen with the emergence of cloud computing. OpenStack now dominates the cloud platform space, and the recent announcements from the Vancouver summit are aimed at making it fully open and removing any proprietary technology vendors have tried to sneak in.

The same is true in the networking space. OpenDaylight aims to deliver an open source, standardised Software Defined Networking (SDN) stack. Part of that is OpenFlow, which controls how data is routed. Mellanox is supporting both and has built OpenFlow support into the Spectrum.
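To make that routing control concrete, the short sketch below shows how an SDN controller pushes a forwarding rule to an OpenFlow-capable switch. It uses the open source Ryu controller framework purely as an illustration; Ryu, the match fields and the priority value are assumptions and are not part of the Mellanox announcement.

```python
from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import CONFIG_DISPATCHER, set_ev_cls
from ryu.ofproto import ofproto_v1_3


class ForwardWebTraffic(app_manager.RyuApp):
    """Illustrative controller app: steer HTTP traffic out of port 2."""
    OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

    @set_ev_cls(ofp_event.EventOFPSwitchFeatures, CONFIG_DISPATCHER)
    def on_switch_connect(self, ev):
        dp = ev.msg.datapath
        ofp, parser = dp.ofproto, dp.ofproto_parser

        # Match IPv4/TCP traffic destined for port 80...
        match = parser.OFPMatch(eth_type=0x0800, ip_proto=6, tcp_dst=80)
        # ...and forward it out of physical port 2 on the switch.
        actions = [parser.OFPActionOutput(2)]
        inst = [parser.OFPInstructionActions(ofp.OFPIT_APPLY_ACTIONS, actions)]

        dp.send_msg(parser.OFPFlowMod(datapath=dp, priority=10,
                                      match=match, instructions=inst))
```

Run with ryu-manager against any OpenFlow 1.3 switch, the rule is installed the moment the switch connects, which is exactly the kind of software-defined control an open switch platform makes possible.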

But Open Ethernet is about more than supporting a handful of open standards. What it means is that the device manufacturer delivers the hardware and an OS kernel; everything else is software that uses an open API in order to work with the hardware. While Open Ethernet allows the device manufacturer to control the OS kernel, Mellanox has already been working with other companies to deliver their operating systems on top of its hardware.

Waldman mentioned their ongoing work with Facebook but was unable to say much because of commercial confidentiality. Meanwhile Canonical have announced that Ubuntu Core will run on the Spectrum switches. All of this is extremely exciting for the open standards community.

Mellanox ConnectX-4 Lx a direct replacement for 10 GbE adapters

The second big announcement of the night was the ConnectX-4 Lx Ethernet adapter capable of supporting 10, 25, 40 and 50 GbE. One of the design goals here was to ensure that customers did not have to swap out existing cabling and connectors as they move to the ConnectX-4 Lx. As a result it supports the current SFP and SFP+ standards.

What this means for many customers is that they can drop this right into their existing server estate, either as an upgrade to existing Network Interface Cards (NICs) or as a supplemental card. This ability to deploy simply will appeal to many customers who want to increase performance but don’t want to buy new servers or go through a costly recabling of their datacentre.

One of the challenges of previous generations of network technology has been taking advantage of the performance offered by the card. To ensure that the card can be used to its maximum capability Mellanox has ensured that it supports Multi-Host technology.

This means that multiple compute and storage hosts can talk to a single adapter. At present, most networking architectures require each compute or storage host to have its own physical or virtual NIC. There are several benefits to this approach, such as a reduction in networking overhead and the ability to take advantage of the full bandwidth offered by the NIC.

One thing not talked about at the launch was the fact that multi-host technology is not only part of the Open Compute platform but was given more prominence as part of the OpenStack Juno release. With the support for Open Ethernet inside Spectrum switches and now Multi-Host inside the ConnectX-4 Lx, Mellanox seems determined to be the key platform for those who want an open network.

There are a number of other technologies delivered with the ConnectX-4 Lx including RDMA over Converged Ethernet (RoCE), stateless offload engines and GPUDirect.

The network as part of big data and analytics

RDMA is a technology used in massively parallel computer clusters to move data directly from the memory of one computer to another without involving the operating system of either machine. A more common use of RDMA is data replication between servers, especially in the financial services market for high-frequency trading.
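As a rough illustration of what bypassing the operating system means in practice, the sketch below registers a buffer with an RDMA-capable NIC using pyverbs, the Python bindings for libibverbs. The device name and buffer size are assumptions, and a real application would go on to exchange keys with a peer and post RDMA read/write work requests.

```python
from pyverbs.device import Context
from pyverbs.pd import PD
from pyverbs.mr import MR
import pyverbs.enums as e

# Open an RDMA-capable device (the device name is an assumption; list
# the devices on a real system with `ibv_devices`).
ctx = Context(name='mlx5_0')
pd = PD(ctx)  # protection domain grouping the resources below

# Register a 1 MB buffer so the NIC can DMA into and out of it directly,
# without the kernel touching the data path.
mr = MR(pd, 1 << 20,
        e.IBV_ACCESS_LOCAL_WRITE |
        e.IBV_ACCESS_REMOTE_WRITE |
        e.IBV_ACCESS_REMOTE_READ)

# A peer that learns this key and the buffer's address can read or write
# the memory remotely, which is the basis of RDMA data movement.
print('rkey:', mr.rkey)
```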

Inside the enterprise, RDMA is beginning to gain ground with the introduction of in-memory databases. These databases seek to write as little as possible to disk, but if data needs to be moved between compute hosts there can be a significant lag between writing out the data on machine one and importing it into machine two.

To get around this we have seen technologies such as IBM’s Coherent Accelerator Processor Interface (CAPI). This allows FPGAs to appear as if they were part of the main CPU so that data can be read from or written to them without using the system bus. As a member of the OpenPOWER Foundation, Mellanox already provides CAPI support. With RDMA it can go much further by creating machine clusters where the in-memory data is shared across all the computers.

Most software cannot take advantage of RDMA because it wants exclusive control of the data it holds in memory. One big data and analytics solution that should be able to exploit this capability is Spark, which was given a major boost last week by IBM. It uses a technique known as the Directed Acyclic Graph (DAG), which means multiple queries can access the same data once it is loaded into memory. Combine RDMA and Spark and it is possible to see the next generation of analytics solutions using multiple physical hosts to solve complex queries.
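A simple way to picture that sharing is the PySpark sketch below: the dataset is cached in memory once and several independent queries then reuse it, each planned by Spark as a DAG over the same cached data. The file path and column names are invented purely for illustration.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("shared-in-memory-demo").getOrCreate()

# Hypothetical event data; the path and columns are placeholders.
events = spark.read.parquet("/data/events.parquet")
events.cache()  # keep the dataset in memory after the first action

# Three independent queries, each planned as its own DAG, all reading
# the same cached in-memory dataset rather than reloading it from disk.
by_user = events.groupBy("user_id").count()
by_hour = events.groupBy("hour").agg(F.sum("bytes").alias("total_bytes"))
errors = events.filter(F.col("status") >= 500).count()

by_user.show()
by_hour.show()
print("server errors:", errors)
```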

Providing developers with a platform to build on

One of the big challenges with the explosion of big data and the Internet of Things is the volume of data that has to be moved around the network. With speeds of up to 100 Gigabits/second, the time taken to move it is being reduced. But there is another option that is increasingly being looked at yet not fully exploited: moving the processing to the edge of the network.

The previous-generation Mellanox ConnectX-3 family includes a card with up to four FPGAs on board. This means developers are already able to write applications that can be pushed out to the network card in order to speed things up. In an interview, Waldman agreed that there is an opportunity to make this available to the OpenPOWER SuperVessel development platform that was announced last week.

So what could we see? Much of the data that comes into the network from the Internet of Things has limited or even no value. The challenge is how to separate the signal from the noise. Analytics applications sitting directly on the network interface would be able to refine that data and move only what is important across the corporate network.
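In code, that kind of edge filtering can be as simple as the sketch below: a reading is forwarded only if it differs meaningfully from the last value sent for that sensor, so only the signal crosses the corporate network. The reading format and threshold are assumptions made purely for illustration.

```python
def filter_signal(readings, threshold=0.5):
    """Yield only readings that change meaningfully from the last one sent.

    `readings` is an iterable of (sensor_id, value) pairs; the threshold
    is an arbitrary illustrative cut-off.
    """
    last_sent = {}
    for sensor_id, value in readings:
        previous = last_sent.get(sensor_id)
        if previous is None or abs(value - previous) >= threshold:
            last_sent[sensor_id] = value
            yield sensor_id, value  # forward the "signal"
        # otherwise drop the reading as "noise" at the edge


# Example: only the first and last readings are forwarded upstream.
samples = [("temp-1", 21.0), ("temp-1", 21.1), ("temp-1", 22.4)]
print(list(filter_signal(samples)))  # [('temp-1', 21.0), ('temp-1', 22.4)]
```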

There are other possibilities here. The major security companies are focusing on security analytics and security profiling. These two approaches are designed to do real-time analysis of network traffic and determine whether that traffic is normal for a given service, application or user. If not, it may indicate data being exfiltrated or a cybersecurity attack in progress.

Today, much of the processing behind that analysis requires additional servers that examine the data as it flows through them. Even with the network currently running slower than the servers, this is difficult to achieve in real time without creating network performance issues. At network speeds of 100 Gigabits/second it would require a great deal of rerouting of data.

By using FPGAs and embedding compute capability on the NIC itself, Mellanox is making it possible for security teams to examine data in real time without impacting performance. It also opens up the possibility of other types of applications being deployed inside the network. One of those is SSL packet inspection. At present this is offloaded to a separate machine, but if it can be done on the NIC then it will inevitably be faster.

Another alternative is to perform encryption at the NIC. This means that instead of using CPU and memory cycles inside a server, the NIC can encrypt data before it is transmitted and decrypt it when it arrives back at the server or even the end-user device. This would significantly improve security, as it would remove the risk of encryption being bypassed in order to speed up applications and processing.
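For context, the sketch below shows the kind of per-message AES-GCM encryption an application performs in software today, using the Python cryptography package; this is the CPU work that NIC-based encryption would take over. The key handling is deliberately simplified and is an illustrative assumption, not how a production system should manage keys.

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Illustrative key management only: a real deployment would use a proper
# key-exchange or key-management system, not a random in-process key.
key = AESGCM.generate_key(bit_length=256)
aead = AESGCM(key)

def encrypt_for_wire(payload: bytes) -> bytes:
    """Encrypt a payload before transmission; this is the CPU work that
    encryption on the NIC would remove from the server."""
    nonce = os.urandom(12)  # 96-bit nonce, unique per message
    return nonce + aead.encrypt(nonce, payload, None)

def decrypt_from_wire(message: bytes) -> bytes:
    nonce, ciphertext = message[:12], message[12:]
    return aead.decrypt(nonce, ciphertext, None)

wire = encrypt_for_wire(b"sensor reading: 22.4")
assert decrypt_from_wire(wire) == b"sensor reading: 22.4"
```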

Conclusion

Mellanox is certainly going to shake up the network market with these announcements and the speed improvements they bring. In the long term, however, the ability to deliver applications to the network card may prove to be the bigger opportunity.

Waldman closed the launch with the tagline: “25 Gb/s is the new 10 Gb/s, 50 Gb/s is the new 40 Gb/s and 100 Gb/s is today.” The only question now is how long it will take its competitors to catch up.
