IBM and Rambus to boost hybrid memory 

IBM and Rambus are to collaborate on research into hybrid memory systems. At the moment, DRAM is the fastest way to deliver data to CPUs but it is expensive and limited in size. Cheaper memory options such as Flash (NAND) have greater capacities but are much slower.

According to Laura Stark, senior vice president and general manager of the Emerging Solutions Division at Rambus: “The exploding volume of data and rapidly evolving workloads for Big Data applications are placing tremendous pressure on data center memory systems for increased performance and capacity. This project with IBM demonstrates our ongoing collaboration with the industry to accelerate the development and adoption of advanced memory solutions.”

The demand for faster memory

Over the last few years there has been a great deal of work to speed up memory and I/O systems so they can deliver data as fast as processors can consume it. Big data, artificial intelligence (AI) and machine learning workloads are all memory hungry. They often become memory bound, leaving large numbers of processor cycles idle.

One of the specific challenges is that as the thread count on POWER-based chips increases, there is often not enough memory available to keep every thread fed with data. Add in GPUs, which run thousands of threads in parallel, and the aggregate memory demand quickly outstrips what DRAM alone can provide, which is why other memory technologies are required.
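A rough back-of-envelope calculation illustrates the squeeze. The figures below are assumed for illustration only, not vendor specifications: a server with 512 GB of DRAM, a 24-core chip running eight hardware threads per core, and four accelerators each exposing thousands of threads.

```python
# Hypothetical figures for illustration -- not vendor specifications.
dram_gb = 512                 # total DRAM in the server
cpu_threads = 24 * 8          # 24 cores x 8 hardware threads per core
gpu_threads = 4 * 5000        # four accelerators, thousands of threads each

# DRAM available per CPU thread alone
per_cpu_thread_gb = dram_gb / cpu_threads

# DRAM available per thread once GPU threads are counted too
per_thread_mb = dram_gb * 1024 / (cpu_threads + gpu_threads)

print(f"Per CPU thread: {per_cpu_thread_gb:.2f} GB")
print(f"Per thread including GPUs: {per_thread_mb:.2f} MB")
```

On these assumed numbers, each CPU thread gets a few gigabytes, but once GPU threads are included the share per thread collapses to tens of megabytes, which is the pressure on DRAM the article describes.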

To overcome this, IBM developed CAPI (Coherent Accelerator Processor Interface), which allows non-DRAM memory devices to be attached directly to the processor. It provides a high-speed bus that avoids some of the latency of reaching NAND over the PCI bus. Combined with NVLink, CAPI also allows GPUs to access memory directly over their own bus links. This puts further pressure on DRAM capacities and highlights the need for a faster hybrid solution.

IBM and Rambus are now looking to take advantage of OpenCAPI to deliver much larger memory capacities at speeds as close to DRAM as they can get. As part of this announcement, Rambus has become the latest company to join the OpenCAPI Consortium. It has also become a member of the OpenPOWER Foundation.

Focusing on OpenCAPI means this is a solution aimed at IBM POWER-based systems; it is not yet platform agnostic. However, the involvement of Rambus suggests there is an opportunity to bring a solution to other architectures.

What does this mean?

System design is all about compromise. Whatever you do, there will always be a bottleneck; the art of systems engineering is moving the bottlenecks around to where they do the least damage.

When IBM introduced CAPI, it opened a door to overcoming the shortage of DRAM in a system. Allowing flash storage to be connected directly to the CPU was a major step. Since then, other vendors have introduced solutions: Intel Optane and Samsung Z-SSD devices are being used by companies to work around DRAM limits. However, they in turn are bound by the PCIe bus.

IBM and Rambus are already looking beyond NAND-based devices connected through OpenCAPI. They intend to examine a combination of other memory technologies, including phase change memory (PCM), resistive RAM (ReRAM) and spin-transfer torque magnetic RAM (STT-MRAM). These are all higher-capacity, lower-cost alternatives to DRAM. If they can get close to DRAM speeds using OpenCAPI, it should give a significant boost to sales of IBM's POWER-based systems.

