The last 18 months have seen the rise of LLMs and generative AI. For many organisations, LLMs bring risks and concerns over the potential leakage of data. Appian is removing that concern so that customers can focus on their business. The key is its data fabric, which helps build LLM-powered applications while ensuring that data is never exposed.
To understand more, Enterprise Times editor Ian Murphy sat down with Michael Beckley, CTO and co-founder of Appian, at Appian World 2024 in Washington. Beckley said, “Once you have an LLM, can you keep it private and keep the data you share with that LLM private?”
For those using public LLMs, that is a significant challenge. Beckley continued, “Appian’s version of private AI uses models which may be familiar to you. But the real advantage of it is, thanks to our data fabric technology, we’re able to keep all of your data and the LLM within a security perimeter.” That perimeter, according to Beckley, means customer data will never be used to train a public LLM, so “data could never be spilled or shared inappropriately.”
Appian is also making AI and LLMs easier to use by moving prompt engineering into the platform. It examines the questions that users ask to see how efficient they are. According to Beckley, “if a user is asking a question in a way which is going to bring back too much information, it’s the burden of the platform to flag it and tell you there’s a better way to ask this.”
To hear more of what Beckley had to say, listen to the podcast.
Where can I get it?
You can listen to the podcast by clicking on the player below. Alternatively, click on any of the podcast services listed to go to the Enterprise Times podcast page.