AI and security: It is complicated but doesn’t need to be

AI is growing in popularity, and this trend is only set to continue. Gartner predicts that approximately 80% of enterprises will have used generative artificial intelligence (GenAI) application programming interfaces (APIs) or models by 2026. However, AI is a broad, ubiquitous term that covers a range of technologies.

Nevertheless, AI represents a breakthrough in how machines process logic, and it is attracting attention from businesses and consumers who are experimenting with its various forms today. The technology is attracting similar attention from threat actors, who are realising that it could expose weaknesses in a company’s security. It could also be a tool that helps companies identify and address those weaknesses.

Security challenges of AI

One way companies are using AI is to review large datasets, identify patterns and sequence the data accordingly, typically by organising it into tabular datasets containing row upon row of records. While this brings significant benefits, from improving efficiency to surfacing patterns and insights, it also increases security risk: should a breach occur, the data is already organised in a way that makes it easy for threat actors to use.

Further threats arise when using Large Language Model (LLM) technologies. These can remove security barriers, as data submitted to them is effectively placed in the public domain for anyone who uses the technology to stumble upon and use. An LLM is, in effect, a bot that doesn’t understand the details it is given; it simply produces the most likely response, based on probability, from the information it has at hand.

This is leading many companies to prevent employees from putting any company data into tools like ChatGPT, keeping that data secure within the confines of the company.

Security benefits of AI

While AI may present a potential risk for companies, it could also be part of the solution. AI processes information differently from humans and can look at problems from new angles, coming up with breakthrough solutions; it has, for example, produced improved algorithms and made progress on mathematical problems humans have struggled with for many years. When it comes to information security, algorithms are king, and AI, machine learning (ML) or a similar cognitive computing technology could develop new ways to secure data.

This is a real benefit of AI: it can sort massive amounts of information and identify patterns, allowing organisations to see things they have never noticed before. That brings a whole new element to information security.

Threat actors will use AI as a tool to hack into systems more effectively. However, ethical hackers will also use it to work out how security can be improved, to the benefit of businesses.

The challenge of employees and security

Employees are seeing the benefits of AI in their personal lives and are using tools like ChatGPT to perform their job functions better. At the same time, these employees are adding to the complexity of data security: companies must be aware of what information employees are putting onto these platforms and the threats that come with it.

Because these solutions bring benefits to the workplace, companies may consider putting only non-sensitive data into them, limiting the exposure of internal datasets while driving efficiency across the organisation. However, organisations must realise that they can’t have it both ways.

Data they put into such systems will not remain private. For this reason, companies will need to review their information security policies and identify how to safeguard sensitive data while ensuring employees have access to critical data.

Useful but not sensitive data

Companies are aware of the value AI can bring, even as it adds a security risk into the mix. They are exploring ways to use anonymised data so they can gain value from the technology while keeping personal data private. One method is pseudonymisation, which replaces identifiable information with a pseudonym or token so that an individual can no longer be directly identified.
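As a minimal sketch of how pseudonymisation might look in practice (assuming a Python environment and a hypothetical secret key that would, in reality, live in a key management system), each identifier is replaced with a keyed hash:

```python
import hmac
import hashlib

# Hypothetical key: in production this would come from a key management system.
SECRET_KEY = b"replace-with-a-managed-secret"

def pseudonymise(value: str) -> str:
    """Replace an identifier with a deterministic, non-reversible pseudonym."""
    digest = hmac.new(SECRET_KEY, value.encode("utf-8"), hashlib.sha256)
    return digest.hexdigest()[:16]

customers = [
    {"name": "Alice Smith", "email": "alice@example.com", "spend": 120.50},
    {"name": "Bob Jones", "email": "bob@example.com", "spend": 87.25},
]

# Identifying columns are pseudonymised; analytical columns are left intact.
safe_records = [
    {**row, "name": pseudonymise(row["name"]), "email": pseudonymise(row["email"])}
    for row in customers
]
print(safe_records)
```

Because the same input always maps to the same pseudonym, records stay joinable across tables for analysis, while anyone without the key cannot recover the original values.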

Another way companies can protect data is to use generative AI to produce synthetic data. For example, if a company has a customer dataset it needs to share with a third party for analysis and insights, it can point a synthetic data generation model at that dataset.

The model learns the dataset’s statistical patterns and then produces a new dataset populated with fictional individuals who don’t represent anyone in the real data, while still allowing the recipient to analyse the whole dataset and draw accurate conclusions.
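As an illustrative sketch (assuming Python and a deliberately simplified per-column model; real synthetic data generators learn far richer structure), the idea looks something like this:

```python
import random
import statistics

# A toy "real" customer dataset.
real_data = [
    {"age": 34, "city": "London", "spend": 120.50},
    {"age": 29, "city": "Leeds", "spend": 80.00},
    {"age": 41, "city": "London", "spend": 200.00},
    {"age": 38, "city": "York", "spend": 150.75},
]

def fit_column(values):
    """Learn a simple model per column: a Gaussian for numbers, observed values for categories."""
    if all(isinstance(v, (int, float)) for v in values):
        return ("numeric", statistics.mean(values), statistics.stdev(values))
    return ("categorical", values)

def sample_column(model):
    """Draw a fictional value from the learned column model."""
    if model[0] == "numeric":
        return round(random.gauss(model[1], model[2]), 2)
    return random.choice(model[1])

# "Point the model at the dataset": fit every column, then sample fictional rows.
columns = {key: fit_column([row[key] for row in real_data]) for key in real_data[0]}
synthetic = [
    {key: sample_column(model) for key, model in columns.items()}
    for _ in range(3)
]
print(synthetic)  # fictional individuals that mimic the columns' statistics
```

This per-column sketch ignores relationships between columns; production-grade generators (for example, GAN- or copula-based models) also learn those correlations, so the synthetic dataset supports the same analyses as the original.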

This means companies can share fake but statistically accurate information without exposing sensitive or private data. The approach allows massive amounts of information to be used by machine learning models for analytics and, in some cases, as test data for development.

With several data protection methods available to companies today, the value of AI technologies can be leveraged with peace of mind that personal data remains safe and secure. That matters for businesses as they experience the true benefits data brings to improving efficiency, decision-making and the overall customer experience.

Discover how Protegrity can help your business adapt to AI integrations and the digitalisation of our global data environments.

Visit us at www.protegrity.com to learn more about our data protection solutions today.


We are a passionate group of cybersecurity experts building modern data protection solutions that safeguard the world’s most sensitive data. Protegrity’s platform solutions protect the privacy of more than one billion individuals worldwide and empower businesses to innovate and thrive. Our platform frees businesses from the constraints typically associated with accessing, and applying fine-grained protection to, sensitive data. Data knows no boundaries, nor should data protection.

Nathan Vega and Clyde Williamson
Nathan Vega has spent his career defining, building and delivering cybersecurity products to market. He is passionate about collaboration that builds and engages communities of practice inside and outside of InfoSec. Nathan brings deep experience and expertise in data security and analytics, regularly providing thought leadership on data privacy, precision data protection, data sovereignty, compliance and other critical industry issues. Before Protegrity, Nathan worked at IBM, where he brought Watson to market as a toolset of cloud APIs. Nathan holds a Bachelor of Science in Computer Science and a Master’s in Business Administration.

Worldwide data security requires every participating organisation to implement the best data security it can. In the modern connected world, one network’s failure can have an impact across multiple groups and potentially millions of users. Clyde Williamson draws on his experience and knowledge of data security to help organisations improve their security posture, policies and practices. His experience has also equipped him to educate users, developers and administrators on security awareness, policy training and secure coding and administration practices.
