AI is a tool that offers many benefits to organisations. But, like all tools, it can and will be misused. Andrew Patel, a researcher at F-Secure, has worked on several reports looking at the threats AI can bring. Enterprise Times talked with him about what he has found.
One of the reports Patel has been involved with is Creatively Malicious Prompt Engineering. It paints a sobering picture of how AI can be used to attack individuals, destroy reputations and enhance cyberattacks. We asked Patel when we would start to see AI used in phishing attacks.
Patel replied, “The fact that phishing works as well as it does when handwritten means that there isn’t an impetus to start using AI to generate these things.” However, he went on to say, “If they start using AI, it certainly won’t be badly written anymore. In fact, we may get to the point where well-written is the new suspicious.
“If we wanted to use AI to create a phishing email, we’d do it because that same prompt, when you run it over and over, generates a different email. It’s of the same spirit, but it’s worded differently.”
When we talked about the risk from spear phishing, Patel saw AI as being even more effective there. Feeding an AI an individual’s social media posts would allow phishing emails to impersonate that person convincingly, making for a far more effective attack.
For businesses, reputational damage from AI is a serious risk. Patel points out that AI can be used not only to create a damaging article but also to churn out multiple variations of it quickly.
To hear what else Patel had to say, listen to the podcast.
Where can I get it?
You can listen to the podcast by clicking on the player below. Alternatively, click on any of the podcast services below and go to the Enterprise Times podcast page.