This week is the inaugural OmniSci Converge user conference, held at the Computer History Museum in Palo Alto. OmniSci, previously known as MapD, is a big data and analytics player. Its primary focus is on ultra-large data sets, which are a growing challenge for a number of industry verticals, from oil & gas to retail, pharmaceuticals, defence and aerospace. Key to its ability to handle them is its architecture.
To understand a little more about that architecture and how the platform works, Enterprise Times talked with Ashish Bambroo, Vice President of Business Development. Big data and analytics is an increasingly crowded space. Differentiating one solution from another is not simple. Bambroo said: “Large is subjective.” He went on to clarify this by saying: “In our definition, large starts at hundreds of millions of records and when it’s in the billions, that’s when our product performs well.”
This is why OmniSci has built its solution for the GPU architecture. The sheer number of cores delivers massively parallel computing, and with ultra-high-speed local memory there is no need to pre-process data or even to work with subsets. That pre-processing is precisely where many legacy solutions struggle. Getting the data to the GPU, however, is not easy, and Bambroo talked about how OmniSci is looking at different memory fabrics. It is also a distributed solution, allowing it to spread the load across multiple machines and so increase the number of GPUs it can take advantage of.
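The massively parallel pattern Bambroo describes can be sketched in ordinary Python. This is an illustration only, not OmniSci's actual engine, which runs compiled queries across thousands of GPU cores rather than a handful of CPU threads: a large column is split into chunks, each chunk is aggregated independently, and the partial results are combined.

```python
# Hedged sketch of data-parallel aggregation, the pattern GPU analytics
# engines exploit at much larger scale. Names here are illustrative.
from concurrent.futures import ThreadPoolExecutor

def parallel_sum(values, workers=4):
    """Sum a large sequence by aggregating fixed-size chunks in parallel."""
    chunk = max(1, len(values) // workers)
    # Split the data into roughly equal chunks, one per worker.
    pieces = [values[i:i + chunk] for i in range(0, len(values), chunk)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        partials = pool.map(sum, pieces)  # one partial aggregate per chunk
    return sum(partials)  # final combine step

print(parallel_sum(range(1_000_000)))  # matches sum(range(1_000_000))
```

On a GPU the same split-aggregate-combine shape runs with one thread per element rather than per chunk, which is why the approach scales into the billions of records Bambroo mentions.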
Bambroo also talked about why OmniSci is delivering a CPU-only version of its product and about its partnership with Intel.
To hear what else Bambroo had to say, listen to the podcast.
Where can I get it?
obtain it for Android devices from play.google.com/music/podcasts
use the Enterprise Times page on Stitcher
listen to the Enterprise Times channel on Soundcloud
listen to the podcast (below) or download the podcast to your local device and then listen there