Don’t tell GenAI all Your Secrets: Leverage GenAI without Compromising Security

Generative Artificial Intelligence (AI) became the latest phenomenon in November 2022 when the artificial intelligence lab OpenAI released a generative AI-powered chatbot called ChatGPT. According to a Reuters report, ChatGPT reached an estimated 100 million monthly active users just two months after launch.

Today, many enterprises are deploying generative AI solutions like ChatGPT. They use it to automate responses to common questions, generate code for a variety of applications, and handle tasks such as writing emails and creating content.

A recent survey by VentureBeat revealed that more than half (54.6%) of organisations are experimenting with generative AI, and 18.2% are already implementing it in their operations. This enterprise adoption is fuelling growth in the generative AI software market, which S&P Global projects will reach $3.7 billion in 2023 and $36 billion by 2028.

GenAI – a promise and a risk

Generative AI technology holds much promise for enterprises. However, there are risks associated with integrating it into the enterprise stack. Organisations are concerned about shadow IT as different departments experiment with and deploy the technology without appropriate governance and controls.

There are also concerns over the potential of generative AI to displace or atrophy human intelligence, enable plagiarism, and fuel misinformation. Indeed, in November 2023, UK Prime Minister Rishi Sunak hosted the UK’s first AI Safety Summit at Bletchley Park, which discussed some of these challenges and the need for government testing of AI models.

The use of any new technology carries some degree of risk. To get the most out of generative AI with the least amount of risk, organisations must take a secure approach to its implementation.

As the rapid adoption of AI in the enterprise continues, questions about the technology’s accuracy, together with concerns about cybersecurity, data privacy, and intellectual property risk, are why organisations like Apple, Samsung, Verizon, and some Wall Street banks are limiting or banning employee use of generative AI tools such as ChatGPT.

Cybercriminals are using AI

A Salesforce.com survey of more than 500 senior IT leaders reveals that 67% are prioritising generative AI technology for their organisations during the next 18 months. However, 71% of those leaders believe this technology will likely introduce new security risks to their data.

Cybercriminals are honing their skills in using this technology to bolster their cyberattacks. In a call with journalists reported by PC Magazine, the FBI discussed how generative AI programs are fuelling cybercrime. Cybercriminals are tapping into open-source generative AI programs to write malware and ransomware code. They are also using these tools to execute sophisticated phishing attacks and to generate convincing fabricated content.

The ability of generative AI tools like ChatGPT to produce phishing messages free of the spelling, grammatical, and verb-tense mistakes that once gave scams away makes it easier to dupe people into believing a communication is legitimate.

A 2023 report by Perception Point found that advanced phishing attacks grew by 356% in 2022. The report noted that “malicious actors continue to gain widespread access to new tools and advances in AI and Machine Learning (ML) which simplify and automate the process of generating attacks.”

Exposing PII

The growing popularity of generative AI is also raising data privacy concerns. Enterprises must be careful about what information they feed into generative AI tools to avoid exposing sensitive or personally identifiable information (PII). Many generative AI tools share user inputs with third parties and use them to train their models, which means the technology has the potential to violate privacy laws.
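One practical safeguard is to screen prompts for obvious PII before they leave the enterprise. The following Python sketch is illustrative only, not any vendor’s API: the regex patterns and the redact_pii helper are assumptions, and a production deployment would rely on a dedicated data loss prevention (DLP) service with far more robust detection.

```python
import re

# Illustrative patterns only; a production DLP tool would use far more
# robust detection (named-entity recognition, checksums, context rules).
PII_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact_pii(prompt: str) -> str:
    """Replace anything matching a PII pattern with a placeholder tag."""
    for label, pattern in PII_PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED-{label}]", prompt)
    return prompt

# Example: scrub a prompt before sending it to an external GenAI API.
raw = "Summarise this complaint from jane.doe@example.com, SSN 123-45-6789."
print(redact_pii(raw))
# -> "Summarise this complaint from [REDACTED-EMAIL], SSN [REDACTED-SSN]."
```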

According to Reuters, the US Federal Trade Commission (FTC) launched an investigation into ChatGPT’s creator, OpenAI. The investigation focuses on the company’s handling of personal data, the chatbot’s potential to give users inaccurate information, and its “risks of harm to consumers, including reputational harm.”

Intellectual property infringement

Information entered into a generative AI tool may become part of its training set, which can put users of the tool at risk of intellectual property (IP) infringement. Gartner highlights that tools like ChatGPT, which are trained on a large amount of internet data, likely include copyrighted material. The analyst firm warned that their outputs could violate copyright or IP protections.

There is no question that generative AI holds much promise for enterprises. But to reap these benefits safely and securely, organisations must take steps to minimise the risks.

Mitigating the risk of generative AI in the enterprise

IT leaders must examine this technology to understand how accurate and useful generative AI is to their enterprise. A lack of transparency about what is happening on the back end can make it difficult to determine whether the technology is really useful for the organisation and to establish its best use cases.

When using any external tool, reviewing each solution provider’s terms of service, data protection, and security policies is essential. It is also important to conduct due diligence to determine whether the tool encrypts data and whether data is anonymised. Just as important is whether the tool complies with regulations such as the EU’s General Data Protection Regulation (GDPR), the California Consumer Privacy Act (CCPA), and numerous other privacy regulations. Capturing these questions in a repeatable checklist, as sketched below, helps keep reviews consistent across vendors.
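The sketch below is a minimal Python illustration of such a checklist; the criteria, the vendor name, and the answers are all hypothetical, not an official assessment framework.

```python
# Illustrative due diligence checklist for an external GenAI tool.
# The criteria and example answers are assumptions for demonstration.
CHECKLIST = [
    "Encrypts data in transit and at rest",
    "Anonymises or pseudonymises user inputs",
    "Does not train on customer data by default",
    "Documents GDPR and CCPA compliance",
    "Allows data deletion on request",
]

def review_vendor(name: str, answers: dict[str, bool]) -> None:
    """Print a pass/fail line per criterion and an overall verdict."""
    failures = [c for c in CHECKLIST if not answers.get(c, False)]
    for criterion in CHECKLIST:
        status = "FAIL" if criterion in failures else "PASS"
        print(f"{status}: {criterion}")
    verdict = "needs follow-up" if failures else "approved for pilot"
    print(f"{name}: {verdict}")

# Example run with hypothetical answers for a hypothetical vendor.
review_vendor("ExampleGenAI", {
    "Encrypts data in transit and at rest": True,
    "Anonymises or pseudonymises user inputs": True,
    "Does not train on customer data by default": False,
    "Documents GDPR and CCPA compliance": True,
    "Allows data deletion on request": True,
})
```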

Organisations should develop and implement policies governing the use of AI in the workplace. These policies should spell out which tools employees can use and what information employees can feed into them.
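Where the infrastructure allows, parts of such a policy can also be enforced in code at the point where requests leave the network. Here is a hedged sketch assuming a hypothetical internal gateway; the tool allowlist and data classifications are invented for illustration.

```python
# Hypothetical policy: which GenAI tools are approved, and which data
# classifications may never be sent to them. All names are illustrative.
APPROVED_TOOLS = {"enterprise-chatgpt", "internal-copilot"}
BLOCKED_CLASSIFICATIONS = {"customer-pii", "source-code", "financials"}

def is_request_allowed(tool: str, data_classification: str) -> bool:
    """Return True only if the tool is approved and the data class is permitted."""
    if tool not in APPROVED_TOOLS:
        return False
    return data_classification not in BLOCKED_CLASSIFICATIONS

# Example checks a gateway might perform before forwarding a prompt.
print(is_request_allowed("enterprise-chatgpt", "marketing-copy"))   # True
print(is_request_allowed("enterprise-chatgpt", "customer-pii"))     # False
print(is_request_allowed("random-free-chatbot", "marketing-copy"))  # False
```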

Enterprises should also equip their IT teams with tools to identify what an AI like ChatGPT generates versus what is human-generated. This is especially important for incoming “cold” emails.
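In practice, this usually means routing inbound mail through a detection service. The sketch below assumes a hypothetical score_ai_likelihood function and an arbitrary 0.8 threshold; real detectors are probabilistic and produce false positives, so scores should inform human triage rather than trigger automatic deletion.

```python
# Hypothetical example of triaging inbound "cold" email with an AI-text
# detector. score_ai_likelihood is a stand-in for whatever detection
# service the organisation deploys; it is not a real library API.
AI_SCORE_THRESHOLD = 0.8  # arbitrary illustrative cut-off

def score_ai_likelihood(text: str) -> float:
    """Stub: a real detector would estimate P(text is AI-generated)."""
    return 0.0  # placeholder; replace with a call to a real detection service

def triage_email(sender: str, body: str) -> str:
    """Flag high-scoring mail for human review rather than auto-deleting it,
    because AI-text detectors are probabilistic and misfire on real mail."""
    if score_ai_likelihood(body) >= AI_SCORE_THRESHOLD:
        return "flag-for-review"  # route to the security team
    return "deliver"

# Example usage with the stub detector in place.
print(triage_email("unknown@example.com", "Dear valued customer, ..."))  # deliver
```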

To further mitigate organisational risk, enterprises should make it a priority to routinely train and re-train employees on the latest cybersecurity threats associated with generative AI, with a specific emphasis on AI-generated phishing scams. This training should also include cyber risk prevention measures and guidance on appropriate uses of AI in the workplace.

Using AI for competitive advantage

AI offers exciting opportunities for organisations. However, as with any new technology, there are uncertainties and risks. By understanding these risks and taking steps to mitigate them, enterprises can more safely and securely deploy this technology to gain a competitive edge.

