In a move that has been a long time coming, Google has announced it is to close the consumer version of Google+. It is ostensibly blaming the decision on low consumer usage of the platform. However, usage numbers have been low for years, ever since Google failed to make the platform a realistic challenger to Facebook.
The more likely reason for closing it is that Google has just had to admit to concealing a major security breach. The breach was caused by a poorly written, poorly designed and poorly maintained API. It began in March 2015 and was finally closed in March 2018, when Google discovered it as part of a company-wide code review. Despite identifying the breach, Google chose to hide the issue from the up to 500,000 users it says were potentially affected.
The company maintains that it can find no evidence that anyone took advantage of the vulnerability in its APIs. However, up to 438 apps could have exploited it had they chosen to. Given the problems Google has had with malicious Android apps, it seems to be hoping for the best.
Regulators may not be so laid back. Google is already fighting on several fronts. The EU has fined it $4bn. The US government is grilling it on email security, especially after it was revealed that third parties were reading users' emails without their knowledge. This latest issue is therefore just going to add to the company's troubles.
Google announces new security tools
The irony of all of this is how Google chose to spin the situation. It made the announcement in a blog post from Ben Smith, Google Fellow and Vice President of Engineering. The post is titled Project Strobe: Protecting your data, improving our third-party APIs, and sunsetting consumer Google+.
In his blog post, Smith lauds the company's efforts to continually strengthen its controls and policies around data privacy and security. He describes “Project Strobe—a root-and-branch review of third-party developer access to Google account and Android device data and of our philosophy around apps’ data access.
“This project looked at the operation of our privacy controls, platforms where users were not engaging with our APIs because of concerns around data privacy, areas where developers may have been granted overly broad access, and other areas in which our policies should be tightened.”
Smith went on to give more details about the breach, what could be accessed and what Google was doing about it. He finished by talking about what Google is doing to limit what apps can access.
What is this all about?
In March 2015, Google issued an update for Google+. It gave developers the opportunity to ask users for access to their profile information. It is now known that the update gave developers access to much more data than that. It allowed them to obtain the non-public profile data of the user. In addition, and this is where it all went wrong, developers were also able to access friends’ non-public profile data.
According to Smith, the bug in the Google+ People API meant:
- Users can grant access to their Profile data, and the public Profile information of their friends, to Google+ apps, via the API.
- The bug meant that apps also had access to Profile fields that were shared with the user, but not marked as public.
- This data is limited to static, optional Google+ Profile fields including name, email address, occupation, gender and age. (See the full list on our developer site.) It does not include any other data you may have posted or connected to Google+ or any other service, like Google+ posts, messages, Google account data, phone numbers or G Suite content.
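The distinction matters: the API was supposed to expose only a friend's public profile fields, but the bug also exposed fields the friend had shared privately with the signed-in user. As a purely hypothetical sketch of that class of bug, the Python below models a visibility filter over profile fields; all names and structures are invented for illustration and bear no relation to Google's actual code.

```python
# Hypothetical model of the visibility bug described above.
# Each profile field carries a (value, visibility) pair, where
# visibility is "public" (anyone) or "shared" (friends only).
# A correct API returns only "public" fields to a third-party app.

def visible_fields(profile: dict) -> dict:
    """Return the subset of a friend's profile fields an app may read."""
    return {
        field: value
        for field, (value, visibility) in profile.items()
        if visibility == "public"
        # The buggy behaviour amounted to the filter also passing fields
        # merely shared with the viewer, i.e. an extra
        #   or visibility == "shared"
        # clause leaking non-public data to the app.
    }

friend_profile = {
    "name": ("Alice", "public"),
    "email": ("alice@example.com", "shared"),  # shared with friends, not public
    "occupation": ("Engineer", "public"),
}

print(visible_fields(friend_profile))
# → {'name': 'Alice', 'occupation': 'Engineer'}
```

With the correct filter, the shared email address never reaches the app; under the bug, it would have.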
Smith went on to describe how the bug was discovered and patched. Interestingly, he also says: “We made Google+ with privacy in mind and therefore keep this API’s log data for only two weeks. That means we cannot confirm which users were impacted by this bug. However, we ran a detailed analysis over the two weeks prior to patching the bug, and from that analysis, the Profiles of up to 500,000 Google+ accounts were potentially affected. Our analysis showed that up to 438 applications may have used this API.”
Despite Smith’s claim that there is no evidence the bug was ever used, the fact that Google cannot say who was impacted also means that it cannot say, categorically, that the bug was not used. All of this data could be used by a cybercriminal to confirm details about an individual and to flesh out hacking profiles. It increases the risk of a user falling for a scam or phishing email. It also raises questions over Google’s claim that the data was never accessed.
At the heart of the company’s denials appears to be its worry that this could be its Facebook and Cambridge Analytica moment. That information was revealed to staff in an internal memo that both the Wall Street Journal (paywall) and TechCrunch have cited.
Is regulation the problem?
Unsurprisingly, within hours of the revelation of this issue, security vendors were getting their comments out the door. Most castigated Google for failing to inform users and for covering the issue up.
An alternative voice came from Etienne Greeff, CTO and co-founder, SecureData. “The news today that Google covered up a significant data breach, affecting up to 500,000 Google+ users, is unfortunately unsurprising. It’s a textbook example of the unintended consequences of regulation – in forcing companies to comply with tough new security rules, businesses hide breaches and hacks out of fear of being the one company caught in the spotlight.
“Google didn’t come clean on the compromise, because they were worried about regulatory consequences. While the tech giant went beyond its “legal requirement in determining whether to provide notice,” it appears that regulation like GDPR is not enough of a deterrent for companies to take the safety of customer data seriously. And so this type of event keeps on happening. While Google has since laid out what it intends to do about the breach in support of affected users, this doesn’t negate the fact that the breach – which happened in March – was ultimately covered up.”
Greeff continued: “…we will undoubtedly see even more of this ‘head-in-the-sand’ practice in the future, especially given GDPR is now in force from larger tech firms. It ultimately gives hackers another way of monetising compromises – just like we saw in the case of Uber. This is dangerous practice, and changes need to be made across the technology industry to make it a safer place for all. Currently, business seems to care far more about covering its own back than the compromise of customer data. It’s a fine line to walk.”
What does this mean?
Regulators will want to know more. They will be particularly concerned by the claim that Google could not identify affected users. In addition, the claim that the bug was never exploited will need to be tested and proven. It comes at a bad time for the company. Regulators and governments are painting a target on its back, and not just over security: Google is also under scrutiny for allegedly abusing its position in the market, refusing to testify in front of the US Senate and the low level of tax it pays.
To now be lumped in with Facebook and Cambridge Analytica has to be the company’s worst nightmare. How it deals with this will be interesting. Closing the consumer version of Google+ is something the market has expected for well over a year. The move, however, won’t make this go away. It is highly likely that Google is going to have to pay the price for potentially losing customer data, even if it cannot identify those customers.
There is a bigger issue here. Organisations and ISVs are becoming heavily dependent upon APIs. They are used to integrate software and systems and are the foundation of what is termed digital transformation. Security experts have long warned about the risk of rushing out APIs and the need for better security controls.
If anything good is to come out of this Google+ debacle, it should be the recognition that APIs are as much a curse as a blessing. If you don’t deal with them properly, they will bite you. The question for Google is whether this will be a love bite or something far more painful.