Neo4j has integrated native vector search into its core database capabilities, giving customers better insights from their semantic search and generative AI applications. The company also claims that it will “serve as long-term memory for LLMs, all while reducing hallucinations.”

Emil Eifrem, Co-Founder and CEO, Neo4j (Image Credit: LinkedIn)

Emil Eifrem, Co-Founder and CEO, Neo4j, said, “We see value in combining the implicit relationships uncovered by vectors with the explicit and factual relationships and patterns illuminated by graph.

“Customers when innovating with generative AI also need to trust that the results of their deployments are accurate, transparent, and explainable. With LLMs evolving so dynamically, Neo4j has become foundational for enterprises seeking to push the envelope on what’s possible for their data and their business.”

The new capability is being rolled out in two products, Neo4j AuraDB and Neo4j Graph Database.

Why do we want implicit and explicit relationships?

The short answer is better quality and accuracy for those building LLM applications on data stored in Neo4j’s graph database. Why? It’s all about the patterns and context that LLMs rely on when delivering responses.

According to a Neo4j blog, “Vector search, combined with knowledge graphs, is a critical capability for grounding LLMs to improve the accuracy of responses. Grounding is the process of providing the LLM relevant information about the answers to user questions before it creates and returns a response.

“Grounding LLMs with a Neo4j knowledge graph improves accuracy, context, and explainability by bringing factual responses (explicit) and contextually relevant (implicit) responses to the LLM. This combination provides users with the most relevant and contextually accurate responses.”
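The grounding loop the blog describes can be sketched in a few lines of plain Python. This is an illustrative outline only, not Neo4j’s API: `embed` and `ask_llm` are hypothetical stand-ins for whatever embedding model and LLM you use, and the fact store is just a list of (text, vector) pairs.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def ground_and_ask(question, fact_store, embed, ask_llm, k=3):
    """Retrieve the k facts most similar to the question and prepend
    them to the prompt, so the LLM answers from known data."""
    q_vec = embed(question)
    ranked = sorted(fact_store, key=lambda fv: cosine(q_vec, fv[1]), reverse=True)
    context = "\n".join(fact for fact, _ in ranked[:k])
    prompt = f"Answer using only these facts:\n{context}\n\nQuestion: {question}"
    return ask_llm(prompt)
```

The retrieval step is what makes the responses “factual (explicit)”: the model is handed known statements rather than left to improvise.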

Understanding the difference between explicit and implicit is important, as it determines how we treat information. Take the statement, made by an adult, “I have a new car.” At face value, it implies that I have purchased a new car, but that is an assumption. It might be a car I have been given or loaned. If I modify the statement to “I have bought a new car”, the statement is explicit. The word ‘bought’ is the key because nothing is implied or assumed.

Graph databases show the same distinction. The results in a knowledge graph are explicit, as they are defined by the known relationships between data objects. But all this gives us is a representation of what we already know. If we use vector search to capture implied relationships, we get a whole new set of connections. Neo4j believes that this will deliver enhanced fraud detection.
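A toy sketch makes the distinction concrete. This is plain Python invented for illustration, not Neo4j’s query language: explicit relationships are edges we actually stored and can traverse, while implicit ones surface when two nodes’ embeddings sit close together even though no edge connects them, which is the pattern behind the fraud-detection claim.

```python
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# Explicit: relationships we recorded, answered by traversing edges.
edges = {"AcmeCorp": ["Account-17"], "Account-17": ["Account-23"]}

def explicit_neighbours(node):
    return edges.get(node, [])

# Implicit: no stored edge, but the embeddings are suspiciously similar.
embeddings = {
    "Account-17": [0.90, 0.10, 0.30],
    "Account-23": [0.10, 0.90, 0.20],
    "Account-99": [0.88, 0.12, 0.31],  # behaves like Account-17
}

def implicit_neighbours(node, threshold=0.99):
    target = embeddings[node]
    return [other for other, vec in embeddings.items()
            if other != node and cosine(target, vec) >= threshold]
```

Here `Account-99` has no recorded link to `Account-17`, yet vector similarity flags it as behaving the same way, exactly the kind of pattern a traversal alone would miss.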

Digging deeper

To understand more about all of this, Enterprise Times talked with Sudhir Hasbe, Senior Director of Product Marketing and CPO at Neo4j. We started by asking why vectors are important when it comes to Generative AI and LLMs.

Sudhir Hasbe, Senior Director of Product Marketing and CPO, Neo4j (Image Credit: Neo4j)

Hasbe replied, “Like all Generative AI, large language models are dependent on infrastructure that doesn’t store context. It only has knowledge of the things it is interacting with you about, and only in the context of when you’re interacting with it.

“The problem is, when you have long conversations or in enterprises, and when you want to go out and get consistent results over a period of time, you need long-term memory. How do you store the things that you had asked before or things that you wanted to know and get the same consistent results every time? That becomes a big challenge.

“In that case, vectors are the way the whole system is underpinned in large language models. Everything is stored as vectors and is there. Having an external storage system, where you can go ahead and store those vectors and retrieve it later, gives that long-term memory and consistency for applications.”
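Hasbe’s point about an external vector store acting as long-term memory can be illustrated with a minimal sketch. The class below is hypothetical, plain Python rather than any Neo4j API: it stores past exchanges alongside their embedding vectors and recalls the closest ones for a new query, which is what gives an application the consistency he describes.

```python
import math

class VectorMemory:
    """Toy external 'long-term memory': store past exchanges as vectors,
    retrieve the most similar ones for a new query. Illustrative only."""

    def __init__(self):
        self.items = []  # list of (text, vector) pairs

    def remember(self, text, vector):
        self.items.append((text, vector))

    def recall(self, query_vector, k=2):
        def score(item):
            _, vec = item
            dot = sum(x * y for x, y in zip(query_vector, vec))
            norm = (math.sqrt(sum(x * x for x in query_vector)) *
                    math.sqrt(sum(x * x for x in vec)))
            return dot / norm
        # Highest cosine similarity first; return only the stored text.
        return [text for text, _ in
                sorted(self.items, key=score, reverse=True)[:k]]
```

Because the memory lives outside the model, the same question recalls the same stored context every time, which is the consistency Hasbe is describing.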

The use of vectors for context is not new

Context is a challenge that data scientists have been grappling with for years. Vectors provide a way to make sure that the context of a word is correct. In 2019, Enterprise Times spoke with Mirko Bernardoni, who was, at that time, Head of Data Science at Clifford Chance.

Mirko Bernardoni, Head of Data Science, Clifford Chance (Image Credit: Clifford Chance)

When talking about creating contextual links with a large body of data, Bernardoni said, “You use a number of algorithms. One of these creates a numerical vector for each word. For each word, there can be 300, 700, 1000 points. Each one of these vectors represents the word with each of the variations the machine encountered inside that specific text. In that way, you have the context quality.

“You are able to understand if the word BANK is a Financial institution or a river bank. Vectors allow you to use linear algebra to calculate the distance between the words and say something like ‘If I’m a man and I’m a king and she’s a woman, who is she? And you get the answer, she is a Queen.’”
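Bernardoni’s king/queen example is plain vector arithmetic. The two-dimensional “embeddings” below are invented for illustration (one axis loosely for royalty, one for gender); real word vectors run to hundreds of dimensions, as he notes, but the mechanics are the same.

```python
import math

# Toy 2-dimensional word vectors, invented for illustration only.
words = {
    "man":   [0.0,  1.0],
    "woman": [0.0, -1.0],
    "king":  [1.0,  1.0],
    "queen": [1.0, -1.0],
    "apple": [-1.0, 0.0],
}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) *
                  math.sqrt(sum(x * x for x in b)))

def analogy(a, b, c):
    """Solve 'a is to b as c is to ?' via vector arithmetic: b - a + c."""
    target = [vb - va + vc
              for va, vb, vc in zip(words[a], words[b], words[c])]
    candidates = [w for w in words if w not in (a, b, c)]
    return max(candidates, key=lambda w: cosine(words[w], target))
```

With these toy vectors, `analogy("man", "king", "woman")` lands on “queen”: subtracting the man direction and adding the woman direction moves the king vector to the queen vector.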

Hasbe agrees but sees vectors as much more than a way of resolving the context of words. He sees knowledge graphs using those vectors in multiple ways. One is to convert data into entities and relationships, and then store vectorised information on the context between content. Another is to search on context, filtering before you move the data into a knowledge graph. It provides much richer information for companies.

What about a world of hybrid LLMs?

There is significant concern about the use of public Generative AI and LLMs. Organisations are beginning to ban staff from using them except for specific instances and with limited data. The result is that increasing numbers of organisations are looking at building one or more LLMs in-house and combining that with data from public LLMs. We asked Hasbe how you could use vectors over multiple in-house and public LLMs.

Interestingly, while Hasbe does see that happening, he questions whether you need multiple LLMs, given the complexity of building, tuning and maintaining them. He sees organisations as more likely to take a model built by someone else, deploy it in-house and then add their own data. However, there are challenges with fine-tuning the LLM with that data.

Hasbe describes the technology of LLMs as being “the English [creative writing] class at school. It was not supposed to be factually correct. It was not the math class, it was the English, and so it’s a creative process.”

The challenge with this, according to Hasbe, is that “it will give you more relevant answers in your context, but it will hallucinate. It is not creating an answer, it’s generating a new answer for you every time.”

This is where he sees knowledge graphs as bringing considerable value. They will make it possible to validate the answer against the information that you already have.

Will we have to rebuild our existing knowledge graphs to use vectors?

One of the known challenges in the early days of ML models was how to correct false data without having to unload, reload and relink it. That has now largely gone away. But when vectors and knowledge graphs span multiple data stores, how easy is it to remove bad data while continuing to add new? This is the problem of aggregating data across those stores, especially when they sit on different platforms.

Hasbe says the key challenge here is the time and cost required to keep fine-tuning the LLM. Real-time data changes all the time, and he doesn’t see the technology yet at a stage where it is possible to keep an LLM up to date with those changes, at least in the short term.

Instead, Hasbe believes this is where knowledge graphs and existing database solutions work better. Users can go to a knowledge graph to ask a question and then refine that search as more information comes from the customer. You could also use natural language interaction and convert it to vectors to do the search. Using vectors and knowledge graphs with existing technology is more likely to enable enterprise use cases than using LLMs alone.

How far are we from replicating human intuition?

As humans, we have sudden sideways shifts in our heads that make no sense when we try to explain them. We call it a gut feeling. Can we start to get to that when we apply vectors over our databases and our LLMs? At the moment, we are still just moving systematically in linear lines.

Hasbe replied, “Yes. In the creative process, you can see that large language models are able to go ahead and generate information that seems relatively new and relatively creative. But in a problem-solving space, I don’t think we have gone that far and solved that like the gut or the intuition or whatever humans have, that human nature of problem-solving. I think we may be a little farther away to get to that point.

“Writing a new poem is a very different process than solving a PC problem and trying to figure out why it behaves very differently. How can I go solve something? And why do I care? There is a lot more to problem-solving than just creating new content. At some point, we may get there, but I don’t think we’re anywhere close to that.”

Enterprise Times: What does this mean?

Anything that improves the quality of LLMs is to be welcomed. After all, it is just over a month since Google’s UK boss Debbie Weinstein told the BBC’s Today programme that Bard was “not really the place that you go to search for specific information.” She also went on to say, “We’re encouraging people to actually use Google as the search engine to actually reference information they found.” It’s a damning statement about the current quality and state of public LLMs.

Neo4j is not going to solve problems caused by underlying bad data, but it is offering something different. The ability to find other patterns in data through vectors will open up new ways to find answers from its products. More importantly, for those building LLM applications with Neo4j’s tools, this is a significant step forward that opens up new use cases. The company has already provided ROI figures for some of those, and they look impressive.

It will be interesting to see what the uptake is and how many customers adopt it. The first opportunity to hear more will come at the upcoming virtual conference Neo4j Nodes in October. While virtual conferences are not everyone’s choice, it should provide an early chance to hear more about the use of vectors and, hopefully, from a customer.

