Memgraph 3.0 to streamline the creation of AI solutions (Image Credit: ave-calvar-GUBfjKKkejY-unsplash)

Memgraph has announced a major expansion to its technology stack, making it easier to develop enterprise AI applications. The expansion is centred on the core technology in its open-source in-memory graph database. It includes how data is optimised, the use of GraphRAG, and new algorithms. However, the company has stopped short of saying how much time will be saved.

Dominik Tomicevic, CEO, Memgraph (Image Credit: LinkedIn)

Dominik Tomicevic, CEO, Memgraph, said, “For the AI developer, Memgraph 3.0 addresses all the limitations of LLMs (large language models), like hallucinations and inability to keep up with business change.

“With Memgraph 3.0, it’s never been easier to build AI applications. It’s all bundled, off the shelf, and open source. Developers can dive in without hesitation and start creating chatbots and AI agents today.”

Who is Memgraph?

Memgraph is a fully managed, cloud-hosted, open-source graph database-as-a-service solution. It has positioned itself as highly performant and easy to use. Since its formation in 2016, the company has completed just two funding rounds, raising a total of $11.5 million; the most recent, a $9.34 million seed round, closed in 2021.

The last major release was in October 2021. Since then, the company has issued a number of minor releases, adding features and bug fixes. A lot of that attention has been on ease of use and making graph databases more accessible and understood. There have also been improvements in security and performance.

This is the company’s first major release in over three years. It has decided to focus on better supporting its customers’ AI demands, which requires changes to the core technology. For example, AI demands more reads and writes, which requires better I/O from the database.

Another challenge that Memgraph has highlighted is the size of the context window relative to the size of the dataset. Few database vendors discuss this challenge. It says, “Unfortunately, LLMs have clear limitations, most notably the context window. Enterprise data and knowledge bases that users want to query are many orders of magnitude larger than the size of the context window. They cannot do that as they cannot process vast datasets without losing precision or hallucinating.”

Enter GraphRAG

To solve the context window problem, Memgraph is adopting GraphRAG. Retrieval Augmented Generation (RAG) was developed to allow the search of vector databases to identify relevant information. It relies on vector similarity to find content related to a query. That data is then extracted and passed to the LLM to generate a response.
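The retrieval step described above can be sketched in a few lines of Python. This is a minimal illustration, not Memgraph's implementation: the corpus, the three-dimensional "embeddings", and the function names are all hypothetical, and a real system would produce embeddings with a trained model.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def retrieve(query_vec, corpus, k=2):
    """Return the k passages whose embeddings are most similar to the query."""
    scored = sorted(
        corpus,
        key=lambda item: cosine_similarity(query_vec, item["embedding"]),
        reverse=True,
    )
    return [item["text"] for item in scored[:k]]

# Toy corpus with hand-picked 3-dimensional embeddings (illustrative only).
corpus = [
    {"text": "Graph databases store relationships.", "embedding": [0.9, 0.1, 0.0]},
    {"text": "Vectors encode semantic similarity.",  "embedding": [0.1, 0.9, 0.1]},
    {"text": "Cooking pasta takes ten minutes.",     "embedding": [0.0, 0.1, 0.9]},
]

# A query vector close to the first passage; the retrieved text would
# then be placed into the LLM prompt as context.
context = retrieve([0.8, 0.2, 0.0], corpus, k=1)
```

The key limitation RAG inherits is visible here: retrieval is purely similarity-based, with no notion of how passages relate to one another.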

GraphRAG takes a different approach. It uses a knowledge graph, which models complex data that is often spread across multiple files. An LLM is used to extract the knowledge graph from documents. Using the semantic structure of the data, GraphRAG looks for connected data nodes at different granularities, which are then used to assist the LLM in responding to the query.
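The "connected data nodes at different granularities" idea can be illustrated with a breadth-first expansion over a toy knowledge graph. This is a simplified sketch under stated assumptions: the graph contents are hypothetical, and real GraphRAG pipelines score and filter nodes rather than taking everything within a hop limit.

```python
from collections import deque

def expand(graph, seeds, hops):
    """Collect every node reachable within `hops` edges of the seed nodes.
    The hop count acts as a crude granularity dial: more hops, wider context."""
    seen = set(seeds)
    frontier = deque((s, 0) for s in seeds)
    while frontier:
        node, depth = frontier.popleft()
        if depth == hops:
            continue
        for neighbour in graph.get(node, []):
            if neighbour not in seen:
                seen.add(neighbour)
                frontier.append((neighbour, depth + 1))
    return seen

# Hypothetical knowledge graph, as an LLM might extract it from documents.
graph = {
    "Memgraph": ["GraphRAG", "vector search"],
    "GraphRAG": ["knowledge graph", "LLM"],
    "knowledge graph": ["documents"],
}

narrow = expand(graph, {"Memgraph"}, hops=1)  # immediate neighbours only
wide = expand(graph, {"Memgraph"}, hops=2)    # two-hop context
```

Because the context is assembled from explicit edges rather than similarity alone, each retrieved node can be traced back to the relationship that pulled it in.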

In addition to allowing the LLM to access much larger, more complex, and richer data sets, GraphRAG reduces hallucinations. It also provides a means for users to drill down through the LLM's response, thanks to the different granularities of data connections that knowledge graphs create.

What’s new in Memgraph 3.0?

In addition to GraphRAG, Memgraph has also added vector search into this release. This allows customers to take advantage of both graphs and vectors. It says, “Graphs and vectors are a perfect match: graphs provide explicit relationships, while vectors encode semantic similarity. Together, they create a powerful retrieval layer, enabling multi-hop reasoning, fast similarity search, and dynamic context refinement.”
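The combination Memgraph describes can be sketched as a two-stage retriever: vector similarity chooses the entry point, and graph edges supply the explicitly related context. Again, this is a minimal illustration with hypothetical node names and hand-picked embeddings, not Memgraph's actual retrieval layer.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

# Hypothetical node embeddings (semantic similarity) and edges (explicit relationships).
embeddings = {
    "pricing": [0.9, 0.1],
    "support": [0.1, 0.9],
}
edges = {
    "pricing": ["invoices", "discounts"],
    "support": ["tickets"],
}

def hybrid_retrieve(query_vec):
    """Stage 1: vector search picks the most similar node.
    Stage 2: graph edges add its explicitly connected neighbours."""
    seed = max(embeddings, key=lambda n: cosine(query_vec, embeddings[n]))
    return [seed] + edges.get(seed, [])

# A query vector close to "pricing" pulls in its related nodes as well.
result = hybrid_retrieve([0.8, 0.2])
```

The design point is that neither stage alone suffices: similarity search finds where to look, while the graph says what else belongs in the answer, which is what enables the multi-hop reasoning the company describes.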

For more details on GraphRAG with Memgraph, the company has updated its knowledge base.

It has also made significant improvements to its GraphChat product. GraphChat is a natural language interface that allows users to create conversational queries of the database. It is also part of how Memgraph allows LLMs to query graph databases.

Memgraph Lab now supports DeepSeek models, which can also be used with GraphChat.

There are several performance and reliability updates. Replication recovery has been optimized for more efficient failover handling, ensuring greater system resilience. Query execution is now faster, with improved abort times and better performance under load.

Memgraph 3.0 also contains several security updates. These include updated Python libraries in the Docker package to enhance overall system safety.

For more details on all the updates in Memgraph 3.0, the release notes are here.

Enterprise Times: What does this mean?

This is more than just a major version bump for Memgraph. Its use of GraphRAG and the addition of vector search are interesting and should give the company a significant advantage over its competitors until they add similar capabilities.

It would have been helpful to see some indication of the increased speeds the company talks about. Maybe benchmark tests will later reveal those, but customers will want to see them before making changes.

Customers will also want assurance that combining GraphRAG and vector search will reduce hallucinations. Many organisations are concerned about this, probably more than about increasing performance.

It will be interesting to see how quickly customers upgrade and how much work is required to upgrade.
