Should AI be more regulated? - Photo by julien Tromeur on UnsplashFrom a security and data protection perspective, there is a lot of scary stuff coming out of the Artificial Intelligence (AI) field. There are deepfakes, voice cloning and copyright infringement to amplification of disinformation campaigns, bot infestation of social media, AI-assisted plagiarism, advanced and targeted phishing/vishing/smishing to killer robots and superhuman AI apocalypse. It’s no wonder AI doomers are in full swing. Eliezer Yudkowsky, one of the most respected names in Artificial General Intelligence (AGI), has called to “shut it all down” globally and indefinitely.

Yudkowsky’s op-ed in Time lists reasonable steps to restrict large-scale AI deployments to prevent accidental or malicious runaways with catastrophic consequences. Yet, his warning is mostly misinterpreted as a call to shut down all AI development efforts. This interpretation is understandable, considering the dramatic narrative and specific use of “all” in “shut it all down.”

The first point to make here is that you can’t stop the development of new technologies. From alchemists of old to genetic engineers of modern times, history shows that what needs developing will be developed, be it in the name of science, national security or corporate gain.

How should AI regulation be approached?

AI is a technology, a tool that can be applied for the good of humanity or abused with disastrous consequences. Regulation of its applications is a task for world governments, and the urgency is much higher than the need to censor the next version of ChatGPT. The EU AI Act is a well-put, concise foundation of a legislative framework aimed at preventing misuse without stifling innovation.

In the US, the Biden Administration is taking a proactive stance in regulating AI technologies. The new executive order includes measures ensuring AI systems are safe, secure and trustworthy before companies make them public. The US AI Bill of Rights is much less specific and seems to focus more on political correctness than on technological safeguards.

Chinese regulation is very specific, at least for generative AI, but heavily politicised. As far as international regulation, it will likely go the way of the nuclear non-proliferation treaty. As soon as several leading nations achieve a dangerous enough level of AI advancement, they will save the rest of the world from having access to it.

What other measures can mitigate the risk of an AI-driven extinction event?

The answer is the same set of controls employed in other fields that have a potential for weaponization. Namely, these are:

Transparency of research. Open-source AI development drives innovation and democratises access to a powerful enabling technology. It also has many safety benefits, from spotting security flaws and dangerous lines of development to creating defences against potential abuses of the technology. So far, big tech supports open-source efforts, if somewhat reluctantly. But that might quickly change as open-source software bites into their bottom line, and competition intensifies. There may be a need for legislative measures to retain open-source community access to large data models and datasets mostly monopolised by the tech giants. They were created from public data, after all.

Containment of experimentation. All experiments with sufficiently advanced AI need to be sandboxed with physical resource starvation outside of the containment area. Safety and security procedures must be strictly enforced. Whilst these aren’t foolproof measures, they might make the difference between local disturbance and global catastrophe.

Kill switches. Like antidotes and vaccines, countermeasures against runaway or destructive AI variants need to be an integral part of the development process. Even some ransomware creators build in a kill switch.

Personally, I do not believe that AI can achieve singularity on current technological foundation, no matter how many GPUs, megawatts, or smart brains are thrown at it. Before AI can acquire consciousness to decide what’s good for it and what’s not, as Yudkowsky warns, it needs a critical component: Artificial Intuition. The ability to instantaneously connect logically distant data points, to develop a subconscious. Advanced AI system can only imitate some of it, but it’s too difficult and expensive on the existing binary platforms. Quantum computing has a potential to change all of this, and that’s when the real fun with AI will begin.

What about the ethical considerations?

We’re currently hearing too much talk about “bad AI,” and specifically Large Language Models’ (LLMs’) wrongdoing – whether that’s exposing sensitive information, spreading disinformation, spewing hate speech, or manifesting any other biases. Spot-patching individual prompt exploits to minimise abuse is a strategy that has limited effect. Like blacklisting malware, there will always be a new culprit. Contrary to what LLM evangelists want you to believe, AI doesn’t really create new content. It simply mashes together existing content.

To enforce ethical behaviour, AI models should be trained on ethical data, not on the wholesale collection of content on the web. As any data scientist knows, producing a well-balanced, unbiased, clean dataset to train the model is a difficult, tedious, and unglamorous task.

Until AI companies – and the VCs that back them – accept this approach as the only way to deliver respectable content, we have only a simple reflection of the information behind the LLM. This information will be biased if the foundational information is biased.

Censoring the output of such a prolific verbal generator is extremely inefficient and introduces another level of bias. Only diligent work on creating reliable datasets can make the resulting product trustworthy.

Eventually, I think that general-purpose, “all-knowing” LLMs will be ousted in favour of specialised, niche LLMs trained on data curated by subject matter experts in the field.


SemperisFor security teams charged with defending hybrid and multi-cloud environments, Semperis ensures the integrity and availability of critical enterprise directory services at every step in the cyber kill chain and cuts recovery time by 90%. Purpose-built for securing hybrid Active Directory environments, Semperis’ patented technology protects over 50 million identities from cyberattacks, data breaches and operational errors. The world’s leading organisations trust Semperis to spot directory vulnerabilities, intercept cyberattacks in progress and quickly recover from ransomware and other data integrity emergencies. Semperis is headquartered in Hoboken, New Jersey, and operates internationally, with its research and development team distributed throughout the United States, Canada and Israel.

LEAVE A REPLY

Please enter your comment!
Please enter your name here