Trend Micro looks at the issue of Fake News

The rise of “fake news” as a label for dismissing anything that presidents, politicians and companies disagree with has been interesting. The reality is that fake news has been around for a long time. Governments have used it for propaganda, the military has used it to mislead the enemy and companies have used it to drive false advertising campaigns. Trend Micro has just released a report looking at the problem of fake news.

In the last few years, however, we have seen the rise of false news being used by cybercriminals. Every time there is a major incident, cybercriminals are quick to capitalise. It might be a page to mourn a celebrity or to see naked images. It could be a page to donate to charities after disasters such as the Manchester bombing or the London attacks. These are general-purpose boilerplate campaigns that are reskinned to take advantage of public sentiment.

They also build very large social media presences to drive content using clickbait headlines. Examples include “I couldn’t believe this father did this” or “she went to the prom in this dress but you’ll never guess what happened next.” These are designed to appeal to specific demographics and get them to click through to pages hosting drive-by malware.

How do you define fake news?

As social media companies and Internet sites struggle to contain the spread of fake news they need a definition they can use to assess an article. In its report, Trend Micro put forward the following definition:

“Fake news is the promotion and propagation of news articles via social media. These articles are promoted in such a way that they appear to be spread by other users, as opposed to being paid-for advertising. The news stories distributed are designed to influence or manipulate users’ opinions on a certain topic towards certain objectives.”

The problem with this definition is that it is overbroad. Read literally, it suggests that any news promoted on social media is fake. After talking with Trend Micro, it is clear the company accepts that the definition needs to be narrowed down.

Why has fake news become such an issue?

Money. It’s a simple answer and one that shows just how easy it is to get fake news out there. Paying people to post fake news, make up stories or promote goods without admitting it is really advertising is big business. The US and EU have cracked down on bloggers over the past few years, requiring them to acknowledge when they are paid to promote goods. However, the current money route is far more obscure.

For some writers it is all about the advertising revenue from clickbait headlines. They sign up for adverts from Google and other sources, put them on their site and then drive traffic to the site with fake news and salacious headlines. Nothing new there. It’s a route that publications have used since pay-per-click arrived. The difference is revenue. Some of these writers admit to making thousands per month.

There is also the money from governments who seek to influence elections. The US is still reeling from its last election where there were claims of people being paid by Russia to post fake news. It is not just foreign governments who stand accused. Supporters of candidates in elections are also accused of these tactics.

The US, France, the UK and Germany have all been dealing with this issue over the last year. It is now commonplace for any negative political story to be dismissed as fake news. This makes it very hard for any voter to understand what is true or false.

A ready market for tools to create fake news

Trend Micro also identifies the growth in tools to help create and spread fake news. This is unsurprising given the amount of money now involved. Most of these tools and services are sold through the Dark Web. However, the researchers also found them readily available from commercial companies.

There is also a significant link through the tools to state-sponsored campaigns. At present the focus is on political campaigns rather than commercial interests. However, there is nothing to stop a commercial company using the same tools to attack its competitors. The risks here are higher, as being caught risks a business-threatening backlash.

As with other cybercriminal activities, those involved in fake news are using all the right technologies. Cloud computing and analytics are high on their agenda. Trend Micro researchers highlight that some sites even offer “public opinion monitoring systems”. This type of activity helps focus fake news campaigns to create a rolling narrative rather than one-off attacks.

It is also likely that some are beginning to use cloud-based machine learning. This is already an area cybercriminals are exploiting when building cyberattacks. Stolen data and past attack successes and failures are fed into machine learning systems to work out how to improve future campaigns. It has certainly improved the effectiveness of some phishing attacks.

Using fake news to create celebrities or destroy careers

Turning someone into a social media celebrity with over 300,000 followers is surprisingly cheap. Researchers discovered that for around $2,600 someone in China can turn themselves into a celebrity. While the initial followers are paid-for fakes, such accounts quickly gather pace as comments published on services such as Twitter and Weibo move up the most-viewed lists. This brings in real followers and a star is born.

As with anything that can build a reputation, the same techniques can be used to tear one down. One case study in the report shows that there are active campaigns to destroy the reputation of a journalist for $55,000. These campaigns are run over a period of time and aim to defame the journalist. Multiple stories are spread on social media over a period of four weeks. The net effect is that the “no smoke without fire” view starts to take hold. Real people then retweet or talk about the stories, forming the opinion that they are real, not fake.

The story behind Pizzagate is an example of how fake news has real consequences. During the US election there were claims that a Washington pizza restaurant was involved in paedophilia. It was fake news. There was no truth in the rumours, which were spread online and aimed at Hillary Clinton’s campaign. The impact of the fake news was that mainstream news sources in other countries ran the story as real. It even led to one individual firing shots inside the restaurant.

Russia and China are major players

Unsurprisingly, the report calls out the involvement of state actors in both Russia and China. It also takes a close look at the way the criminal underclass is making money from fake news. This includes offering promotional services to get wider coverage of stories. The use of crowdsourcing in Russia to promote content is of particular concern. The report focuses on one service, VTope, saying:

“VTope—a multiparty, online collaborative system with a throng of over 2,000,000 mostly real users and support for platforms such as VKontakte (VK), Ok.com, YouTube, Twitter, Ask.fm, Facebook, and Instagram. Its workflow comprises implementing tasks (liking or following a profile or a post, joining a group, etc.) that incentivizes users with points, which they can resell or use for self-promotion.

VTope’s service is initially free of charge, and participants can earn points by completing tasks. Points can also be purchased as coupons that can be bought on-site, but they are also widely available in underground marketplaces where they’re often cheaper than on VTope. For instance, a coupon worth 10,000 points is sold for RUB 1,190 ($21) on VTope, and RUB 500 ($8) in the underground. A coupon worth 50,000 points costs RUB 3,490 ($62).”

VTope is just one of several such services operating in Russia.

A Middle East underground is emerging

It should not come as any surprise that Trend Micro found an emerging fake news market in the Middle East. It is a region where unpicking the truth has become as difficult as in some parts of Africa. A lot of the services seem to be related to traditional cybersecurity attacks, such as embedding malware in content forbidden by governments.

There are other services that help promote social media profiles and are aimed especially at a younger market.

Can social media sites stop fake news?

How social media sites can deal with fake news has been a big topic for a while. The problem, as evidenced by the Trend Micro definition above, is finding a viable definition of fake news. For social media sites the problem is more than just spotting stories; it is doing so quickly enough and then closing down the sources that spread them.

Twitter, Facebook, LinkedIn, Google, Weibo, WeChat and other social media sites have long had a problem with fake accounts. Some accounts reasonably use pseudonyms to protect the user from abuse. Others are openly used to spread fake content or are used for hate campaigns. There is now increasingly concerted pressure from governments to tighten up algorithms and do more to detect the problem.

The report stops short of suggesting that some of the technologies used to detect cyberattacks could be applied to fake news. It should be possible to detect a fake headline or story and then use the same alerting mechanism as is used for spam. This could be done via an internal sharing system between the social media sites, allowing them to pool intelligence.
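To illustrate the point, here is a minimal sketch of what such detection might look like. It borrows the rule-based scoring approach long used in spam filters; the phrase list, weights and alert threshold are invented for this example and are not taken from the Trend Micro report or any production system.

```python
# Illustrative sketch only: a naive, rule-based scorer for clickbait-style
# headlines, modelled on the keyword scoring spam filters have used for years.
# All phrases, weights and the alert threshold are invented for this example.
import re

CLICKBAIT_PHRASES = [
    "you'll never guess",
    "you won't believe",
    "what happened next",
    "couldn't believe",
    "this one trick",
]

def clickbait_score(headline: str) -> float:
    """Return a rough 0-1 score for how clickbait-like a headline looks."""
    text = headline.lower()
    score = 0.0
    # Known bait phrases carry the most weight.
    score += 0.4 * sum(phrase in text for phrase in CLICKBAIT_PHRASES)
    # Withholding the subject ("this father", "she", "what") is a weaker signal.
    if re.match(r"^(this|she|he|they|what)\b", text):
        score += 0.2
    # Headlines ending in a question or exclamation mark add a little more.
    if text.rstrip().endswith(("?", "!")):
        score += 0.2
    return min(score, 1.0)

def should_alert(headline: str, threshold: float = 0.5) -> bool:
    """Flag a headline for review, much as a spam filter flags a message."""
    return clickbait_score(headline) >= threshold

if __name__ == "__main__":
    for h in [
        "She went to the prom in this dress but you'll never guess what happened next",
        "Central bank raises interest rates by 0.25 percentage points",
    ]:
        print(f"{should_alert(h)!s:<5} {clickbait_score(h):.2f}  {h}")
```

Flagged headlines and their sources could then feed the kind of shared intelligence pool described above, with each site contributing what it has already identified.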

Readers must take their own responsibility

What makes fake news more effective is the willingness of people to share it without doing any checking. One example of how hard it is to stop fake news is the way people share stories. On Facebook, for example, a shared story is easy to take down: remove the source of the story and all the links to it can be removed. As a result, people are asked to copy and paste fake news rather than share it, which means every instance has to be identified separately. Stopping readers doing this is difficult, especially as many believe the fake news and feel they must do everything to help spread what they think is a story that big business, governments and the media are trying to hide.

Social media sites need to have a page where people can check what they see to ensure it is not fake news. Trend Micro believes readers should learn to identify clickbait headlines and suspicious web domains. Such suggestions are meaningless given the way people use social media and the Internet. Ironically, clickbait headlines and suspicious domains are also a major way that spam and malware are spread, so maybe this says more about the failure of security software than about the user.
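For what it is worth, the kind of check Trend Micro has in mind might look something like the lookalike-domain test sketched below, using nothing more than string similarity against a list of well-known news domains. The domain list and similarity cutoff are invented for this example, and it is the sort of check better suited to a browser plugin or security product than to a reader doing it by hand.

```python
# Illustrative sketch only: flag domains that closely resemble, but do not
# match, well-known news domains. The domain list and 0.8 cutoff are invented.
from difflib import SequenceMatcher
from typing import Optional

KNOWN_DOMAINS = ["bbc.co.uk", "nytimes.com", "theguardian.com", "reuters.com"]

def lookalike_of(domain: str, cutoff: float = 0.8) -> Optional[str]:
    """Return the known domain this one suspiciously resembles, if any."""
    domain = domain.lower().strip()
    for known in KNOWN_DOMAINS:
        if domain == known:
            return None  # exact match: the genuine site
        if SequenceMatcher(None, domain, known).ratio() >= cutoff:
            return known  # close but not identical: possible impersonation
    return None

print(lookalike_of("nytimes.co"))       # flags nytimes.com
print(lookalike_of("theguardian.com"))  # None: exact match
```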

Sites such as Snopes are a good independent place to start. The industry, both social media and the traditional news media, should cooperate more. They could fund alternative sites to debunk myths and fund researchers to validate stories. That would still require users to check, but it would go some way towards solving the problem.

What does this mean?

Unfortunately, it means that fake news is not going away any time soon. It has already grown into a major criminal enterprise and that means it is here to stay. What is needed, and the report avoids this issue, is for security vendors to start treating fake news as a security issue.

For enterprises and individuals this would be an important step. At that point, it would become possible to reduce the amount of fake news circulating in the office. As fake news is also being used to spread cyberattacks, there is a real benefit to this.

It is also important that governments themselves crack down on fake news. The last few general elections around the world have seen political parties benefit from fake news. Some of this was spread by their own supporters. They need to raise the bar themselves or risk having their own credibility as politicians further damaged.
