There is no doubting that technology is designed to be an enabler: to do things faster, more easily and more cheaply. But not all technologies roll out of a box, get installed and immediately reinvent a company.
One technology area to which this most definitely applies is the world of In-Memory Computing (IMC) architectures. Boiled down, the idea is a simple one: build a distributed computing platform that can scale out to handle hundreds of millions of transactions per second, across data sets that run to petabytes.
So why do this? Quite simply, to cut query times dramatically without the need to change the underlying data or application layer. Sounds wonderful, doesn’t it? But, and it is a big BUT, it isn’t easy to implement.
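To make that “no change to the data or application layer” claim concrete, here is a minimal sketch, assuming the Apache Ignite API (the open-source project beneath GridGain), of the pattern most IMC platforms lean on: an in-memory cache slotted in front of the existing database, with read-through on a miss. The CustomerStore class and cache name are illustrative, and the JDBC call to the system of record is stubbed out.

```java
import javax.cache.Cache;
import javax.cache.configuration.FactoryBuilder;
import javax.cache.integration.CacheLoaderException;

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.store.CacheStoreAdapter;
import org.apache.ignite.configuration.CacheConfiguration;

// Hypothetical read-through store: on a cache miss, Ignite calls load(),
// which fetches the row from the existing database. The application code
// and the underlying database schema stay exactly as they were.
public class CustomerStore extends CacheStoreAdapter<Long, String> {
    @Override public String load(Long key) throws CacheLoaderException {
        // In a real deployment this would be a JDBC query against the
        // system of record; hard-coded here to keep the sketch runnable.
        return "customer-" + key;
    }

    @Override public void write(Cache.Entry<? extends Long, ? extends String> e) {
        // Write-through to the underlying database would go here.
    }

    @Override public void delete(Object key) {
        // Deletion from the underlying database would go here.
    }

    public static void main(String[] args) {
        CacheConfiguration<Long, String> cfg = new CacheConfiguration<>("customers");
        cfg.setReadThrough(true); // cache misses fall through to load()
        cfg.setCacheStoreFactory(FactoryBuilder.factoryOf(CustomerStore.class));

        try (Ignite ignite = Ignition.start()) {
            IgniteCache<Long, String> cache = ignite.getOrCreateCache(cfg);
            System.out.println(cache.get(42L)); // first hit loads from the store
            System.out.println(cache.get(42L)); // second hit is served from memory
        }
    }
}
```

The point of the pattern is that the calling code still performs a plain lookup; whether the answer comes from RAM or from the database underneath is the platform’s problem, not the application’s.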
“The biggest growth inhibitor is the inherent complexity of this type of software. Not only is it distributed in nature, but it also requires a significant shift in how one thinks about building applications. You’ll win in the end – but it will come at a cost,” explains Nikita Ivanov, Chief Technology Officer of GridGain.
If you push Ivanov on the adoption of IMC, he will tell you quite openly that explaining the technology to IT folk was once a hardship, but that now it is “much less of a problem.” He is quick to add, though, that making the business case for IMC is where the problems arise.
“This is a lot more of a complicated issue. Most In-Memory Computing software is still sold to tech/IT people first,” adds Ivanov, alluding to the fact that it is the techies who buy into IMC and not the managers signing off the projects.
Whilst it is always fun to get a techie view on a business, it is also prudent to ask the CEO of a vendor what they think. Abe Kleinfeld, CEO of GridGain, has this to add:
“What has held IMC back is largely perception. For some, it is perceived to be esoteric, expensive, and many people still perceive it to be simple caching, rather than a system of record that you run your business on. However, these perceptions are quickly fading away, driven largely by the speed and scale requirements of modern applications and the demands of digital transformation projects.”
So, what about the hurdle of getting a project signed off?
Kleinfeld answers, “It’s not hard nowadays. IMC is becoming quite common and so it’s more about explaining why one IMC approach is better than another.”
And as to the business case, Kleinfeld has this to say: “It’s not hard because IMC comes into play when all the easier alternatives (SSDs, database optimisations, etc.) have been exhausted. When all the usual approaches have failed to meet SLAs, IMC becomes a very obvious choice. Also, the big analyst firms (Gartner, etc.) all promote the use of IMC now.”
So, where is IMC being adopted?
It does not take a rocket scientist to tell you that IMC is being embraced by the financial services and tech sectors. As more companies proceed through their digital transformations, it will, if you listen to GridGain, become the normal way to do computing.
Faster adoption can be seen in sectors like telecom, retail, advertising, logistics and transportation, and healthcare. What often drives IMC adoption is competitive pressure, and most often customer experience is the key driver for IMC speed and scale. That means that once an early adopter begins to see business results, competitors will quickly follow suit.
Kleinfeld adds that regulatory compliance is now driving banks to assess risk in real time on every transaction, something that requires IMC’s immense speed and scale. And in tech (particularly cloud and web-scale SaaS solutions), scale requirements can quickly overwhelm vendors and the only way to address their needs is via distributed, in-memory architectures.
What are the inherent weaknesses of IMC?
When you pose this question to a CTO like Ivanov, he leaps into action with a raw techie answer. “I don’t think there is a problem or problems that, when fixed, will all of a sudden boost the adoption of IMC. As ugly as it is – this is the only game in town when you need 10-100x performance improvements – quite literally there’s nothing else at all.
“This means IMC adoption will grow more naturally as more and more apps and systems are facing significant performance problems with traditional RDBMS technologies.”
Now put the same question to the CEO:
“Virtually all of the historical issues with IMC have been resolved by now. Loss of data due to power outage, high availability, etc. are all now behind us. The last major issue was recovery time – the time it takes to load data into memory from a cold start. That time could be substantial with large data sets. But now we can operate off disk like a traditional DB until the data is loaded into memory.”
“So, the system will run at the speed of SSD or disk initially, but quickly speed up as the memory warms up. Today, despite many continued misperceptions, virtually every architectural weakness that IMC once had has been addressed. It can absolutely be used as the system of record for mission critical transactional, analytical and hybrid use cases. It works perfectly well on-premises and in the cloud. It can support five-nines of availability. And because of open source and commodity hardware, it is highly cost effective.”
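In Apache Ignite terms, the behaviour Kleinfeld describes corresponds to native persistence: with it enabled, a cold-started node answers queries from disk straight away and promotes data into RAM as it is touched. A minimal sketch of the switch that turns it on, assuming Ignite 2.x defaults:

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.DataStorageConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;

public class WarmStart {
    public static void main(String[] args) {
        // Turn on native persistence for the default data region: data is
        // written through to disk, so a cold-started node can answer queries
        // immediately and warm its memory pages as they are accessed.
        DataStorageConfiguration storage = new DataStorageConfiguration();
        storage.getDefaultDataRegionConfiguration().setPersistenceEnabled(true);

        IgniteConfiguration cfg =
            new IgniteConfiguration().setDataStorageConfiguration(storage);

        try (Ignite ignite = Ignition.start(cfg)) {
            // With persistence enabled, a new cluster starts inactive and
            // must be activated before caches accept operations.
            ignite.cluster().active(true);
        }
    }
}
```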
Kleinfeld was starting to sound more techie than CEO with his answer, but that rather sums up IMC. It is a very technical subject with a very real business impact. You simply cannot avoid sounding technical when you set out to explain why the architecture is so important.
Why does IMC matter?
If you look at the IMC industry as a whole, there are many significant players who will tell you their architecture is the best and the easiest to deploy. For example, Oracle recently used a benchmark it wrote itself to justify the performance of its own TimesTen Scaleout in-memory database.
GridGain has built its distributed architecture on top of the open-source Apache Ignite platform and is experiencing great success with its software, but it is still one of the smaller players in the industry. You could therefore speculate that the company and its technology are ripe for acquisition. When you put that to CEO Kleinfeld, he answers very openly:
“That’s always a hard question to answer. For the most part, every company has a price. If investors’ perceived valuation is met or exceeded by an acquirer’s offer, then often a transaction can take place. But other considerations come into play. For example, if the company is growing quickly, executing well and the market still has substantial room to expand, investors may choose to not sell because value creation is happening at a rapid pace and it makes financial sense to continue to build that value.
“Other strategic alternatives also come into play, i.e., rather than selling the company, perhaps acquiring other companies to grow the product or revenue footprint faster can make sense. So, there’s never a simple answer to this. As CEO I’ve found it is best to focus on building a successful business, and inevitably great opportunities and outcomes will reveal themselves.”
Time to leave the last word to CTO Ivanov, and allow him to comment on the Ferrari/Hyundai headline for this article:
“The analogy is pretty simple and colourful at the same time… In a world where everybody is driving Hyundai and struggling with performance – we are selling a Ferrari engine and asking everyday drivers to buy this expensive new engine and just swap it. When it’s all said and done – you’ll be going really fast. But the process of dropping a Ferrari V12 into a Hyundai chassis is nothing short of complex. And here lies our perpetual problem…”
And we are all the wiser for that…
Note: Apache Ignite is a memory-centric distributed database, caching, and processing platform designed for high-volume transactional, analytical, and streaming workloads, delivering in-memory speeds at petabyte scale. Ignite is developed by the Apache Software Foundation.
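For a flavour of the “database” half of that description, here is a hedged sketch of Ignite’s ANSI-style SQL running over an in-memory table. It assumes the ignite-indexing module is on the classpath, and the cache and table names are purely illustrative.

```java
import java.util.List;

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.query.SqlFieldsQuery;
import org.apache.ignite.configuration.CacheConfiguration;

public class IgniteSqlDemo {
    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start()) {
            // Any cache can host SQL tables; this one exists only to give
            // the DDL/DML statements a home in the PUBLIC schema.
            IgniteCache<?, ?> cache = ignite.getOrCreateCache(
                new CacheConfiguration<>("demo").setSqlSchema("PUBLIC"));

            cache.query(new SqlFieldsQuery(
                "CREATE TABLE IF NOT EXISTS city (id LONG PRIMARY KEY, name VARCHAR)")).getAll();
            cache.query(new SqlFieldsQuery(
                "INSERT INTO city (id, name) VALUES (?, ?)").setArgs(1L, "London")).getAll();

            List<List<?>> rows = cache.query(
                new SqlFieldsQuery("SELECT name FROM city")).getAll();
            System.out.println(rows); // [[London]]
        }
    }
}
```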