Starting an AI project requires more than just an idea; it demands precise planning and a crystal-clear vision of what the business wants to achieve. Defining goals from the outset, from the business case to user expectations, sets the course for the entire project. This clarity guides everything from the data you need to how you protect it, while also giving developers the structure needed for building front-end applications.
To understand more about what is involved, Enterprise Times talked with Nitesh Bansal, CEO and MD of R Systems. Bansal drew on his decades of industry experience helping customers design and build such systems.
Building an AI-based solution requires large amounts of data. Most of that data is held across multiple repositories inside the enterprise. Accessing that data presents challenges not just for creating the AI but also for utilising it.
Traditional data security has revolved around securing data based on the roles and groups a user belongs to. The logic of who gets access is often driven by granting the minimum necessary to get the job done. It is one of the main reasons why cross-functional intelligence gets missed, and that is by design.
When we give an AI unfettered access to data repositories, we are making a fundamental change to that long-held security practice. Without teaching the AI about the specific security models applicable within the organisation, it does not necessarily know what should be included in a response when users query the application. How should we be approaching this?
Begin with a clear definition of what the AI model will deliver
Before doing anything with the data, Bansal says customers need to think about what they want from an AI project. He says, “when customers come and talk about an AI project, usually what they have is a partially baked idea or just a strategic push from the top to do something on AI.”
Part of refining the expectations of an AI project is understanding which value chain and which actors it will impact. Gaining that understanding is essential before preparing the data the AI will access.
The commonly accepted view is that data has to be very clean, with organisations aiming for 95% accuracy. “Unfortunately, it rarely happens despite all the deduping technology and algorithms. In the beginning, you will get accuracy in the upper 70s or lower 80s. However, model accuracy will improve over time,” says Bansal.
When it comes to what data the AI will need access to, Bansal takes a different view from many experts. Most say to give the AI access to all your data. Bansal disagrees. He says that 50-70% of an organisation’s data can be filtered out. It does not need to be consumed by the AI model because it holds no value and makes the AI too costly to operate.
Bansal commented, “we need to understand the level of data we need. For example, if a telecom company wants to improve the quality of service in a particular zip code, then knowing where the customers reside is definitely going to be important. But knowing their name may not be.”
He compares that to the level of data a call centre application would require. In that instance, it would need location, name of customer, account details and other data. Much of that is likely to be Personally Identifiable Information (PII). This is where Bansal turns the focus back to the question of secure access to the data.
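Bansal’s point about filtering out fields a use case does not need can be sketched in a few lines. This is an illustrative sketch only; the field names, records, and the quality-of-service use case are assumptions drawn from his telecom example, not from any real R Systems implementation.

```python
# Keep only the fields a given use case is allowed to consume,
# so PII that is irrelevant to the use case never reaches the model.
# All field names below are hypothetical.

QOS_USE_CASE_FIELDS = {"zip_code", "signal_strength", "dropped_calls"}

def filter_for_use_case(record: dict, allowed_fields: set) -> dict:
    """Drop every field not explicitly allowed for this use case."""
    return {k: v for k, v in record.items() if k in allowed_fields}

customer_record = {
    "name": "Jane Doe",       # PII - irrelevant to QoS analysis
    "account_id": "A-1001",   # PII - irrelevant to QoS analysis
    "zip_code": "94107",
    "signal_strength": -87,
    "dropped_calls": 3,
}

print(filter_for_use_case(customer_record, QOS_USE_CASE_FIELDS))
# {'zip_code': '94107', 'signal_strength': -87, 'dropped_calls': 3}
```

A call-centre use case would simply use a different, broader allow-list that includes name and account details, which is where the access-control question Bansal raises comes in.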
IT wants to lock data up but there is an alternative
When IT worries about security, it reacts by locking the data up. Effectively, it either anonymises everything or will not let the AI consume any PII that is not anonymised. While that reduces risk, especially from a regulatory and compliance perspective, it is a far from perfect solution. It limits the ability of the AI to interpret data and draw inferences between the different pieces of data.
Bansal sees an alternative solution. It means treating the AI as any other data store. While it might not have a security model built-in, he sees a solution where the same data access models can be applied. He says, “People only access it via the approved interfaces or APIs that you’ve developed for authorised users.
“It consumes the data, but what people can consume from it gets a layer of security access control mechanism. The AI isn’t prevented from working with all the data, nor is there a massive amount of cleanup and anonymisation.
“We limit the inflow by removing anything unnecessary, especially PII that is not relevant to the use cases being delivered. We regulate the outflow by securing the data model and creating the right kind of APIs or conversational interfaces, which will not return responses that are security- or access-sensitive.”
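The outflow control Bansal describes can be sketched as a thin API layer that redacts fields the caller’s role is not entitled to see, while the AI itself works with the full dataset. This is a minimal sketch under assumed roles and field names; a real system would hook into the organisation’s existing identity and access management.

```python
# Hypothetical role-based filter applied to raw model output before it
# is returned through the API. Roles and fields are illustrative.

ROLE_VISIBLE_FIELDS = {
    "analyst":    {"zip_code", "churn_risk"},
    "call_agent": {"zip_code", "churn_risk", "name", "account_id"},
}

def secure_response(ai_output: dict, role: str) -> dict:
    """Redact any field the caller's role may not see; unknown roles get nothing."""
    allowed = ROLE_VISIBLE_FIELDS.get(role, set())
    return {k: v for k, v in ai_output.items() if k in allowed}

raw = {"name": "Jane Doe", "account_id": "A-1001",
       "zip_code": "94107", "churn_risk": 0.82}

print(secure_response(raw, "analyst"))     # name and account_id redacted
print(secure_response(raw, "call_agent"))  # full record for an authorised role
```

The design choice here mirrors Bansal’s framing: nothing is anonymised or withheld from the model itself; the security sits entirely at the interface through which authorised users consume its output.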
An elegant solution
There is an elegance to what Bansal describes on many levels:
- As Bansal has described, this resolves the issue of securing the data. Importantly, it does so without making the organisation reinvent its security model. It can use existing APIs to control data access as it has always done.
- Once developed, the API-based security can be applied to multiple AI models. Importantly, it also allows partners to use the data without compromising data access and security.
- Modern IT systems are increasingly API enabled. It not only speeds up development but allows greater access by users. The use of low-code/no-code solutions that empower users and business analysts to build applications is becoming commonplace. It takes pressure off software developers and allows business units to develop new solutions to stay competitive.
This approach also delivers another benefit. Bansal points out that the data being used by the AI has been in the organisation for some time. While the AI might expose new connections between the data, it is still the same data used across many other applications in the organisation. He also believes this answers any questions about the AI having access to PII.
Conclusion
Like many technologies before it, AI is being rushed into use. There is often little thought given to the goals, the benefits and the consequences. Bansal believes that organisations must make a clear business case for its use. They then need to properly define projects to ensure they deliver on expectations without getting lost in the hype.
He also believes that the tussle between locking the data up for security and the more-the-merrier approach to training the model can be solved by treating the AI as another data store. IT does not need to create a new security model. Applying security at the API layer delivers business benefits without compromising data security.
R Systems is a leading digital product engineering company that designs and develops chip-to-cloud software products, platforms, and digital experiences that empower its clients to achieve higher revenues and operational efficiency. Its product mindset and engineering capabilities in Cloud, Data, AI, and CX enable it to serve key players in the high-tech industry, including ISVs, SaaS, and Internet companies, as well as product companies in the telecom, media, finance, manufacturing, health and public services verticals.
R Systems uses its expertise in automation and integration, including RPA and no-code/low-code platforms, to help enterprises across these verticals achieve their OKRs.