Big Brother Watch Monitor (Image by Micha from Pixabay)

CultureAI has launched a new risk solution that monitors employee usage of generative AI tools and flags potential privacy, compliance, or corporate intellectual property breaches, helping protect organisations against privacy and data security incidents.

The tool is not designed to prevent usage. Instead, it is there to educate and alert. It will be interesting to see whether, in some cases, customers ask CultureAI whether it is possible to block the use of certain words on generative AI or even other platforms. CultureAI believes the new solution helps organisations ensure employees are empowered to innovate while being immediately made aware of any compliance breaches.

James Moore, Founder and CEO at CultureAI (image credit: LinkedIn)

James Moore, Founder and CEO at CultureAI, commented, “GenAI tools like ChatGPT and Bard are extremely popular and offer significant growth opportunities for companies, however, unchecked usage poses significant risks for organisations. Without visibility of how employees are using AI tools, organisations cannot implement the real-time coaching required to help employees harness the power of these tools safely and effectively.” 

How does this work?

The new feature is enabled from the Human Risks Dashboard within the CultureAI platform. The first risk option is “Posting sensitive data into generative AI Apps”. When a usage risk is flagged in this dashboard, the system reveals the application used, the text that constitutes the potential breach, who posted the information into the public domain, and where and when they posted it.
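To make that concrete, here is a minimal sketch of the kind of record such a dashboard entry might contain. The class and field names are assumptions drawn from the fields described above, not CultureAI’s actual schema.

```python
from dataclasses import dataclass
from datetime import datetime

# Illustrative only: field names are assumptions based on the fields the
# dashboard is described as revealing, not CultureAI's actual schema.
@dataclass
class GenAIRiskEvent:
    application: str      # which GenAI app was used, e.g. "ChatGPT"
    matched_text: str     # the text flagged as a potential breach
    employee: str         # who posted the information
    source: str           # where it was posted from
    posted_at: datetime   # when it was posted

event = GenAIRiskEvent(
    application="ChatGPT",
    matched_text="NI number QQ123456C",
    employee="alex@example.com",
    source="Chrome extension",
    posted_at=datetime.now(),
)
print(event)
```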

CultureAI supports the monitoring of several different data sources, including Chrome, Edge, Teams, Slack, Google Drive and others. Once generative AI monitoring is turned on, the platform will detect several standard patterns, such as tax codes, National Insurance (NI) numbers, passport numbers, dates of birth and several other forms of PII. The administrator can then create new phrases or patterns to detect. For example, they may wish to check for specific project names, or even a word such as “Salary”. When setting these alerts, the administrator can set the priority level. They can also obfuscate which term is being looked for by changing the name of the alert, as the sketch below illustrates.
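A minimal Python sketch of this kind of pattern matching follows. It is purely illustrative: the regular expressions, rule names and priorities are assumptions for the example, a simplified stand-in for whatever detection engine CultureAI actually uses.

```python
import re

# Illustrative only: simplified built-in PII patterns of the kind the
# article describes, not CultureAI's implementation.
BUILT_IN_PATTERNS = {
    "UK National Insurance number": re.compile(r"\b[A-Z]{2}\d{6}[A-D]\b"),
    "Date of birth": re.compile(r"\b\d{2}/\d{2}/\d{4}\b"),
}

# Administrator-defined rules: a custom phrase, a priority level, and an
# obfuscated alert name so reports do not reveal the monitored term.
CUSTOM_RULES = [
    {"alert_name": "Rule-17", "pattern": re.compile(r"\bsalary\b", re.I), "priority": "high"},
    {"alert_name": "Rule-18", "pattern": re.compile(r"\bProject Falcon\b"), "priority": "medium"},
]

def scan_prompt(text: str) -> list[dict]:
    """Return detections for a prompt before it reaches a GenAI app."""
    hits = [{"alert_name": name, "priority": "high"}
            for name, pattern in BUILT_IN_PATTERNS.items() if pattern.search(text)]
    hits += [{"alert_name": r["alert_name"], "priority": r["priority"]}
             for r in CUSTOM_RULES if r["pattern"].search(text)]
    return hits

print(scan_prompt("Draft a note on Project Falcon salary bands for QQ123456C"))
```

The real product presumably applies far richer validation, but the principle of combining built-in PII patterns with administrator-defined rules is the same.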

Educating employees and alerting internal teams

Once the alert is set, the administrator sets up an action that will happen when an employee enters such a term into a generative AI platform. They can assign training, assign a policy, or send the employee a notification message educating them that they should not use the term. The alerts can be sent by email or by the messaging platform employees use day to day, which CultureAI sees as the most effective channel. The message can be personalised around the individual and the action they took. At the same time, the alert can be forked and sent to other internal security or HR teams.
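Continuing the illustrative sketches above, the response flow might look something like the following. The helper functions, team names and message wording are all assumptions for the example; CultureAI’s actual workflow engine will differ.

```python
# Illustrative only: a sketch of the alerting flow described above. The
# function names, teams and message text are assumptions for the example.
def notify(recipient: str, message: str) -> None:
    # Stand-in for delivery via email or a day-to-day chat platform
    # such as Slack or Microsoft Teams.
    print(f"[to {recipient}] {message}")

def handle_detection(employee: str, detection: dict) -> None:
    # Personalised coaching message, sent on the employee's usual channel.
    notify(employee, f"Your recent GenAI prompt triggered '{detection['alert_name']}'. "
                     "Please review the data handling policy before resubmitting.")
    if detection["priority"] == "high":
        # Assign follow-up training for higher-priority breaches.
        notify(employee, "You have been assigned the 'Safe GenAI usage' training module.")
        # Fork the alert to internal security and HR teams.
        for team in ("security-team", "hr-team"):
            notify(team, f"{employee} triggered '{detection['alert_name']}' (priority: high)")

handle_detection("alex@example.com", {"alert_name": "Rule-17", "priority": "high"})
```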

When the user then uses a flagged term in a prompt to generative AI, CultureAI will trigger the actions, even if ChatGPT et al. recognise the privacy concerns around the prompt and decline to respond. The response is immediate and should help educate employees about the potential breach.

Benefits

There are several benefits that CultureAI believes this innovative solution provides. They include:

  • Real-time employee education: The alerts are targeted and delivered in near real-time to the channel that employees use on a daily basis. They provide an immediate reminder about the word usage and make it less likely that the employee will repeat the breach. Regardless of whether the breach is serious, there is also an audit trail that security teams and HR can check and follow up on if further training is required.
  • Risk reduction: By introducing an immediate response and identifying training requirements for any employee using generative AI poorly, coaching and training become targeted and more effective. This should reduce the likelihood of breaches in general, presumably not just in generative AI usage but elsewhere as well.
  • Comprehensive reporting: The collected data is comprehensive, and the system can produce reports to identify behavioural trends amongst employees. It could highlight a wider cultural issue within a department or the need for further education.
  • Compliance: The solution aids in upholding compliance with data protection regulations and standards in the workplace.
  • Monitoring Generative AI usage: Generative AI is a game changer for many organisations. However, there are huge concerns around its usage. This tool gives business leaders some assurance that their employees can innovate while retaining a level of oversight of that usage.

The solution checks the usage of general GenAI applications such as ChatGPT, Bard, and Bing through Microsoft Edge and/or Google Chrome extensions. There are, however, far more generative AI tools freely available. It will be interesting to see how CultureAI extends its product. CultureAI is hosting a live demo in a webinar (registration required), which takes place on the 7th of February 2024 at 15:00 (GMT).

Enterprise Times: What does this mean

This seems a very useful tool that will almost certainly mitigate risk for many organisations. It will be interesting to see how organisations use it and whether the administrative overhead outweighs the benefits. There is also a risk that employees feel watched.

Research by ExpressVPN showed that employees who know they are being watched report feeling more anxiety and pressure. While that research concerned VPN usage, overuse of this solution may create a similar issue. The mitigating factor here is that generative AI is unlikely to be the primary element of most employees’ work. Even so, administrators will need to be careful about what they monitor, and employee terms and conditions may need reviewing.
