AIM Security has closed a US$18 million Series A funding round as it seeks to secure generative AI enterprise adoption. The funding round was led by Canaan Partners. It also saw seed investor YL Ventures also contribute. It brings the company’s funding to $28 million.

Joydeep Bhattacharyya, General Partner at Canaan, commented, “Generative AI is transforming industries from finance to healthcare. However, this power also comes with significant risks and the need for robust security solutions developed specifically for generative AI has never been more urgent.

“Aim Security has quickly moved to the forefront of this industry through its rare combination of expertise, customer-centric ethos and impressive revenue growth. I’m confident Aim Security is uniquely positioned to enable enterprises to harness the transformative power of AI, securely and at scale.” 

What is the problem AIM Security is seeking to address?

AI is flooding into the enterprise. At one level, AI features are being added to existing applications. At another level, AI-driven applications such as chatbots, AI assistants, and generative AI are changing how users work. Enterprise software teams are also under increasing pressure to add AI features to the software they are writing.

The problem with all of this, says AIM Security, is that “security leaders find themselves caught in a perpetual cycle of “playing catchup” in addressing the unique data, privacy and security challenges AI technology introduces. The result is either blocking its use entirely, negatively affecting business and efficiency goals, or placing the organization at risk.”

Those security challenges have been thrown into stark relief by several recent announcements. One of the biggest of these was OpenAI admitting that it cannot correct bad data inside ChatGPT and that data cannot be removed from it. For enterprises, this poses a significant problem as users adopt public gen AI solutions. Many are unaware that all the data they use with those solutions is captured and added to the data corpus the AI holds.

Additionally, AI solutions are prone to a host of issues, such as hallucination and the creation of fake data. A recent X (formerly Twitter) thread lists the responses by Google’s latest AI solution to a series of questions. The answers are not just wrong but, in some cases, are shocking in their nature.

How does AIM Security solve this problem?

AIM Security claims that its security platform can secure all forms of AI and provide governance. It says that it is on top. It says its platform is “specifically tailored for AI’s unique threats, including sensitive data exposure, supply chain vulnerabilities, harmful or manipulated outputs and the emergence of attack methods such as jailbreaks and prompt injection.” 

AIM says it does this using a four-step process: Discover, Detect, Enforce, Protect.

Discover

AIM searches for apps that are AI-enabled. How is does this is not clear from the website. Does it have a list of commercial apps that it looks for? How does it keep that up to date? How will it recognise internal AI-enabled apps?

Once it has identified an app, it produces a range of statistics, such as risk scores, based on the data that the app uses and acquires. The latter is important because few people realise what is taken by apps when you utilise their AI features. This generates a serious security risk.

App owners will point out that their apps have a privacy policy that says what data they collect. However, in a world dominated by SaaS products, few people, even inside security teams, bother to read the privacy policies.

Detect

Data used by AI-enabled apps is fully audited. It is not just a case of the volume of data and where it is stored. AIM captures the prompts used with the apps. It allows organisations to review all prompts and see if they comply with corporate policies.

Of particular interest is the claim that it identifies malicious prompts to identify gen AI attacks. It then aligns those prompts with the type of data that it detects being used to provide a detailed picture of risk. For many organisations, this is potentially the first sign they have of any problems.

Enforce

Auditing is a key part of enforcing policy and compliance. AIM captures all the data used by the AI and in the prompts. It then aligns that data with a range of compliance laws, including GDPR, the EU AI Act, the US AI Executive Order, the NYC AI Bias Law and others.

It will be interesting to see how well that captured data aligns with the various pieces of legislation and how it interprets that data. For example, how does it detect bias? This is an observability issue and is not just about the AI but the underlying data set. The website gives no indication of how to identify bias in core data. Without that, it is questionable as to how effective it will be in identifying bias introduced by the AI.

Protect

This is the least well-explained part of the AIM solution. It talks about securely connecting data assets to your organisational LLM. That implies that it is aimed at internal projects. If it is monitoring external app AI-enablement tools, there is no guarantee that there are organisational LLMs in place.

Of more interest is the claim that it will help organisations establish gen AI security best practices with the models you use and build. Once again, this seems aimed at your own LLMs and not how to establish security controls on third-party applications.

Enterprise Times: What does this mean?

Solving the challenge of gen AI in the enterprise is something that a number of vendors are trying to address. Some vendors focus on the quality and security of the underlying data. Others are helping customers build internal gen AI solutions based solely on their own data. Where those solutions need to be enhanced, they provide the means to get data from outside, but mark it as being external data.

AIM Security is approaching this in the same way as any traditional security company. It has built a security platform and is using that to manage data security, privacy and risk. It wants to be the trusted AI security ally that security teams will rely on. However, as already noted, there are a number of unanswered questions at the moment.

Despite those questions, AIM Security is going to find a ready market, especially as customers go through a POC and discover how much AI-enablement already exists in their organisation.

LEAVE A REPLY

Please enter your comment!
Please enter your name here