Analysis: Facebook has cut off independent reviewers of political ads that run on its platform, citing security concerns. That’s a claim the reviewers have rejected.
The social media giant has added code to its system that prevents third parties from automatically grabbing information on ads that appear on its platform, a change aimed squarely at ad transparency tools. Non-profit ProPublica has been keeping an archive of ads, collected via its now-defunct browser plugin, showing why users were targeted for a given Facebook ad.
Facebook’s ad product director Rob Leathern claimed on Twitter that the change “isn’t about stopping publications from holding us accountable or making ads less transparent [but] about preventing people’s data from being misused – our top priority. Plugins that scrape ads can expose people’s info if misused.”
That argument was met with some skepticism from those affected, however, with one organization digging into the changes and noting pointedly that Facebook doesn’t “elaborate” on how its claimed security breaches would actually occur.
The third-party tools are intended to give an insight into what ads are being run by political organizations and who they are targeting. Due to Facebook’s extraordinary depth of knowledge on its users, the social network is able to give advertisers extremely precise ways of targeting specific groups based on age, sex, location, political beliefs and much else.
That system has been widely abused by political operatives across the globe who have used Facebook to spread divisive and inaccurate information as a way of driving voters toward a specific action. But thanks to Facebook’s closed system, those messages are not visible beyond the target audience and until recently no archive was kept.
Following hearings and heavy criticism from lawmakers across the globe, including the United States, Facebook vowed to improve the situation and produced a service that allows people to review political ads on its platform.
But that service is woefully inadequate, say researchers, who have identified political ads that were not included in the system simply because the advertiser itself chose not to identify them as such.
In addition, the beta of an invite-only service offered to researchers to dig into the issue is far too restrictive, according to some users: it requires them to search for ads by keyword rather than review a stream of ads. That new service also doesn’t allow researchers to see who was specifically targeted by an ad, a critical component behind misinformation campaigns.
Facebook’s changes remove the ability of third parties to generate their own databases of political ads by deeming computer-generated clicks illegitimate and requiring physical mouse clicks before the details behind an ad are made available. That change removes the automated nature of the tools and one of those organizations affected, ProPublica, has given a detailed rundown of the impact.
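The mechanism described above relies on a standard browser feature: events dispatched by script carry an `isTrusted` flag of false, while events generated by real user input carry true. A minimal TypeScript sketch of how such a check could work in principle (illustrative only, not Facebook’s actual code):

```typescript
// Minimal model of the isTrusted check: in real browsers, DOM events
// created by script (e.g. element.click()) have isTrusted === false,
// while events from physical mouse input have isTrusted === true.
interface ClickLike {
  isTrusted: boolean;
}

// A page could gate ad-detail disclosure on this flag, which is what
// defeats automated scrapers that fire synthetic click events.
function shouldRevealAdDetails(ev: ClickLike): boolean {
  return ev.isTrusted;
}

// A physical click passes; a scripted click is rejected.
console.log(shouldRevealAdDetails({ isTrusted: true }));  // true
console.log(shouldRevealAdDetails({ isTrusted: false })); // false
```

Because `isTrusted` is read-only and set by the browser itself, a plugin cannot simply forge it, which is why the change breaks automated collection rather than merely inconveniencing it.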
Facebook’s Leathern has also infuriated some by tweeting: “We know we have more to do on the transparency front – but we also want to make sure that providing more transparency doesn’t come at the cost of exposing people’s private information.” Online users have pointed out the seemingly endless times that Facebook – a multi-billion-dollar company – has promised to “do better” when it comes to transparency and privacy.
However Facebook’s ex-chief information security officer Alex Stamos, who has been an occasional critic of his former company, sees a degree of hypocrisy in the complaints. “This is another example of the press saying ‘FB needs to stop this bad activity’ and later ‘we didn’t mean by us!'” he tweeted.
He argued that there was “a balancing act that the tech platforms have to walk between data protection (data monopoly, if you prefer) and creating some risk by opening APIs.”
The truth however is that there is an enormous well of distrust between Facebook and researchers/media organizations – caused in large part by the fact that Facebook has repeatedly and aggressively lied about its actions for years and become extremely defensive when challenged.
While Facebook can claim, legitimately, to be concerned about user safety, there is a catalog of things it could have done, and not done, if it were genuine about transparency and granting independent third parties access to its systems.
It did not discuss the changes with those it knew it was impacting, for example, before imposing them. It also made no effort to look at other ways to achieve the same goal.
All of Facebook’s actions point to its persistent control-freakery: it has set up its own system for storing political ads which it can control and is going out of its way to ensure that no other system works.
It even added code to enable it to track what other services are doing, presumably to figure out what workarounds they may introduce in future so it can shut them down again. That is a targeted effort to prevent independent transparency and no amount of tweets claiming otherwise will change it.
Not that Facebook isn’t making changes; it is. It has limited the ability of apps to grab information they don’t need and has cut off inactive apps from its API program. It has also added a few additional steps to its partnership program to reduce the likelihood of rogue applications designed only to seize people’s data.
But make no mistake, Facebook remains entirely hostile toward any independent organization that wishes to look behind the scenes. Its broken work culture remains the same.