DeepFakes are a technique that alters a video or image by superimposing the face of one individual onto the head and body of another. The term was coined in 2017, when the technique was first used to create fake pornographic videos. Since then the technology has evolved to encompass both audio and video and has been used in several cybersecurity campaigns. It has also been used to create fake campaign adverts in the recent elections in India.
But is this the start of a major problem? Are we going to see DeepFakes used during the upcoming 2020 US election? Is the technology maturing to the point of DeepFakes as a Service? After all, other forms of malware have already been turned into services.
To answer some of these questions, researchers at Nisos decided to take a closer look at DeepFakes. Rob Volkert, VP of Information Operations at Nisos, wrote: “Nisos undertook research into deep fake technology (superimposing video footage of a face onto a source head and body) to determine if we could find the existence of a deep fake illicit digital underground economy or actors offering these services. Our research for this white paper focused specifically on the commoditization of deep fakes: whether deep fake videos or technology is being sold for illicit purposes on the surface, deep, or dark web.”
What are DeepFakes?
The most common form of DeepFakes takes a face and imposes it on the head and body of someone else. If done properly, it makes it very hard to tell that the video is a fake. To be effective, however, it requires many images of the face taken from multiple angles. The most common approach is to take a well-known celebrity, for whom there are lots of images, and feed those images to a DeepFakes engine. The engine extracts all the markers required to map the face to the target head.
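The “markers” referred to above are facial landmarks (eye corners, nose tip, mouth outline and so on). One core geometric step in any face-mapping pipeline is aligning the source face’s landmarks onto the target’s. As a rough illustration only (the function name and sample coordinates below are our own, not taken from any specific DeepFakes tool), that alignment can be sketched as a least-squares similarity transform between two sets of landmark points:

```python
import numpy as np

def estimate_similarity_transform(src, dst):
    """Estimate scale, rotation and translation mapping src landmarks onto dst.

    src, dst: (N, 2) arrays of corresponding facial landmark coordinates.
    Returns a 2x3 affine matrix M such that dst ~= src @ M[:, :2].T + M[:, 2].
    Uses the closed-form Procrustes (Umeyama-style) solution.
    """
    src = np.asarray(src, dtype=float)
    dst = np.asarray(dst, dtype=float)
    src_mean, dst_mean = src.mean(axis=0), dst.mean(axis=0)
    src_c, dst_c = src - src_mean, dst - dst_mean

    # Optimal rotation comes from the SVD of the cross-covariance matrix.
    U, S, Vt = np.linalg.svd(dst_c.T @ src_c)
    d = np.ones(2)
    if np.linalg.det(U @ Vt) < 0:  # guard against mirror-image solutions
        d[-1] = -1.0
    R = U @ np.diag(d) @ Vt
    scale = (S * d).sum() / (src_c ** 2).sum()
    t = dst_mean - scale * (R @ src_mean)
    return np.hstack([scale * R, t.reshape(2, 1)])

# Illustrative use: recover a known transform from matched landmark sets.
src = np.array([[0, 0], [1, 0], [1, 1], [0, 1], [0.5, 0.5]])
theta = np.pi / 6
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
dst = 2.0 * src @ R_true.T + np.array([5.0, -3.0])

M = estimate_similarity_transform(src, dst)
warped = src @ M[:, :2].T + M[:, 2]
```

Real tools add many further stages (landmark detection, warping the image pixels, colour blending and a trained generative model), but this alignment step is what makes the mapped face sit correctly on the target head.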
The need for so many images is one of the reasons why recent attacks against facial recognition databases are of such concern. They contain high-quality images with markers built in, making the images highly usable. Access to those images would speed up the creation of realistic DeepFakes.
With the exception of the fake ad in the Indian election, the most common usage is for pornography. However, there is growing concern among multiple security vendors that DeepFakes could be used in disinformation campaigns.
One US company, Preswerx, wants to use the technology to create training videos. It believes that it can speed up the creation of training videos by having multiple people record them and overlay a well-known and trusted face on top. A good use? Maybe.
What did the researchers at Nisos discover?
According to Volkert: “Our research found that indeed deep fakes are being commoditized but primarily on the surface web and in the open. Most deep fake commoditization appears to be focused on research and basic cloud automation for satire or parody purposes.”
He continued: “While we expected to find a community, we did not find evidence of a marketplace (selling as a service) for e-crime or disinformation purposes. We assess the lack of an underground economy for uses other than satire, parody, or pornography is due to the resource and technological barrier to entry, lack of convincing quality in the videos. In short, the dark market is not yet lucrative enough.”
The findings are interesting. They match the experience and problems faced by Preswerx. When it advertised for people with experience of DeepFakes, it received just two responses. Unfortunately, neither candidate was able to demonstrate their experience of creating DeepFakes.
Taken together, the two sets of findings show an immature market. However, once there is evidence that there is significant money to be made, that may change. Rather than deploy ransomware, what if cybercriminals sent the CEO of a listed company a DeepFake video showing him trashing the company, along with a demand for payment not to release it? It’s a nightmare scenario. Even if the company warned about the fake, the impact on the share price could be substantial.
Where is the DeepFakes Economy?
Underground markets exist when there is demand. What the research appears to show is that there is insufficient demand to create a viable marketplace. That, however, is no reason to be blasé. The researchers see the technology as being in the first of two phases; the second phase includes the emergence of an underground economy.
What is interesting is that Nisos identified easy-to-find sellers offering to create deep fake videos for a fee. The process is simple: the user uploads a video and a photo, and the seller maps that face onto the video. What is not clear is the level of quality achievable from a single image.
Nisos also says that the whole process is cloud-based and that the sellers are taking advantage of open source repositories. Many of those repositories are hosted on GitHub, which makes them easy to find. Using the processing power of the cloud makes sense. The challenge for any cloud provider will be in identifying users who are creating DeepFakes.
An underground community by the end of 2020
Volkert writes: “The next phase is likely to see the illicit community go largely underground and deep fakes sold as a service for e-crime and nation-state level activities.” That phase is likely to start as early as late 2020. The research paper calls out five things Nisos expects to see:
- Technology Refined: No longer just celebrities / high-profile figures; synthetic audio mimicking technology spreads
- Seller Community Expands: Popularity grows and sellers emerge on the deep and dark webs; smartphone applications
- Illicit Profits and Uses Grow: E-Crime (fraud, blackmail, social engineering); Disinformation (mixed with real videos for confusion, cast additional doubt on truth / fact); Pornography
- Methodology: Refined synthetic audio; Better pairing with video; Body movement authenticity (not just face mimicking)
- Actors: Criminals; Cyber Community; Nation-State Actors
Can the social media platforms play a role in stopping DeepFakes?
What will concern many is the role of nation-state actors. Interference in elections is becoming commonplace. This has led to a concerted effort to identify fake news. All of the major social media platforms are trying to identify and remove it from their platforms. All have also committed to banning DeepFakes.
However, there are two issues here. The first is that the platforms have a poor record at identifying banned content. If the quality of DeepFakes improves, how will the platforms decide what is real and what is not? The second issue is one of timing. Even when advised of objectionable, illegal or fake content, none of the major platforms is good at acting quickly. That has to change.
The issue is not lost on the report’s authors. Volkert writes: “As deep fakes become easier and quicker to create, social media platforms (currently the primary vehicle for deep fake dissemination) and users will simply not be able to keep up with the volume of synthetic manipulated videos or images. Instead, this will require a partnership between on-platform (social and traditional media), off-platform (cyber security and watch dog groups), and public sector entities (governments and private firms).”
Enterprise Times: What does this mean?
This is a well-written and thought-provoking report. It raises almost as many questions as it answers. What it shows is that there is a very narrow window in which to create a response to the problem. Will a response be found? It is unlikely but not impossible. The report identifies three areas where the partnership described above can have maximum impact.
While elections will grab the headlines and fake porn will grab the moral outrage, it is the potential impact on businesses that is of more concern. It is not unknown for corporations to commission guerrilla marketing campaigns that trash a competitor. With DeepFakes that takes on a new level of damage. Criminals are now businesses themselves. We’ve seen major crime families and even terror groups move into stocks and shares to hide the movement of money. The ability to manipulate the stock market to make money will be too tempting for some.
DeepFakes might not be mainstream yet, but they are coming. This report provides a good grounding in the current state of the threat.