Clearview AI now gets Apple ban (Image Credit: Gerd Altmann from Pixabay)

Last week, Clearview AI suffered a breach. Since then, the company has struggled to do any reputational damage control at all. It has chosen to limit its contact with the media by not responding to emails. It has also chosen not to publish a statement on its website or issue a formal press release.

The company scrapes images from the Internet, primarily social media and search engines such as Google Images, and adds faces to its facial recognition database. It is already the subject of cease and desist letters from Twitter, Facebook and Google for misusing their services.

Now Apple has decided to add to that list of bans. It accuses Clearview AI of violating the terms of its Developer Enterprise Program. The program allows companies to build and distribute apps to their own staff. It does not allow a company to distribute those apps to customers or other outside parties. Clearview AI has 14 days to respond or face the deletion of its account.

At the heart of the Apple ban is how Clearview AI gives law enforcement and others access to its database. The company claims that customers can only do so with a valid user account. The issue for Apple is that those users are Clearview AI's customers, not its employees, putting the app outside the scope of the program. Therefore, the company needs to change what it is doing.

Coming on top of the data breach, this piles pressure on the CEO and founder, Hoan Ton-That.

What was the Clearview AI breach?

While there were concerns about Clearview AI previously, everything has taken on a different perspective since last week's breach. In that breach, attackers were able to get a complete copy of the company's customer database, including information on customer accounts and the number of searches each customer has made. (source: Daily Beast)

It is important to note that the only evidence available concerns the customer list, despite claims from some news sources that the image database was also stolen.

The company's response to the breach has been limited. While it told customers that data had been lost, it has provided no public statement on its website about the breach. It has also spoken to a very limited number of news sources and issued its main statement through its lawyer, Tor Eklund.

That statement reads: “Security is Clearview’s top priority. Unfortunately, data breaches are part of life in the 21st century. Our servers were never accessed. We patched the flaw, and continue to work to strengthen our security.”

According to cybersecurity expert Brian Roy: “The problem I have with Clearview AI this time is the lackadaisical mentality they [have] about being breached.” He goes on to address the statement issued through Eklund, saying: “This was made by the firm’s attorney. It is obvious Clearview AI is another example of a firm that does not take cyber security and data privacy seriously.”

What does Clearview AI do?

Clearview AI scrapes the Internet to gather facial images that it then stores in its database. The company maintains that the images are in the public domain as is the data it stores with the images. It believes that this gives it the right to scrape and store the data which it then sells access to.

With no public access to the data, users have no idea whether their details are stored. The accuracy of the data cannot be verified, and users cannot have their details redacted. The absence of consent also raises questions over the use of the app in Europe, Singapore, Australia and other countries with stringent privacy laws.

The company claims that the images are “only for law enforcement agencies and select security professionals to use as an investigative tool.” However, an article on BuzzFeed News shows that the company also has a much wider user base. Using a copy of the Clearview AI customer list, journalists identified schools, universities, supermarkets, finance organisations, mobile carriers and more as users. Most of these seem to be using the 30-day trial app, although some appear to be extremely active users (read the article for more details).

Enterprise Times: What does this mean?

There are a number of issues with this data breach. The company’s approach to being breached was a shrug and an ‘it happens’ mentality. As Roy pointed out, it shows the company “does not take cyber security and data privacy seriously.”

The claim that only law enforcement and security investigators can access the data is also questionable. The number of educational, retail and other customers using the app shows that its use is far wider than the company admits. While there is evidence from the BuzzFeed News story that many are using trial accounts, you’d expect oversight even at that level.

Matching images is also an imprecise science. There is little information on what quality checks or verification Clearview AI applies to image searches and matching. Current research on facial recognition shows that bias is evident and misidentification rates are high. Someone misidentified through the app and its database would have no recourse to sue the company for the impact of that mistake.

The ban by Apple shows that tech companies other than social media firms are concerned about how Clearview AI behaves. Clearview AI will now have to reconsider how it provides access on Apple devices. However, the company has been linked with a number of other tech companies that want to work with it. The result could simply be that it drops support for Apple devices.

Perhaps the biggest concern is the growing impact of such applications on privacy. Ton-That claims that US First Amendment rights allow the company to collect and sell access to the data it harvests. Outside of the US, such rights do not apply. As the company increases its sales in Europe, will it find itself under pressure from the EU to be more open about the data?
