Why All the Facebook Fire? Because It Is the Most Cynical
In a recent column, Kara Swisher observes that sudden congressional action on antitrust and Big Tech may be too late and orthogonal to what is really needed, which is regulation. We have come to a moment of such urgency because tech has operated almost entirely on a handshake promise that it would self-regulate. The area of self-regulation that has failed most spectacularly is transparency, and the company that has failed most blatantly is Facebook. None of the major digital platforms provides enough transparency into its advertising and data practices; Facebook’s efforts have been so inadequate as to be cynical.
Faux Ad Transparency
In the wake of the 2016 Russian election meddling and the 2018 Cambridge Analytica scandal, the social media company promised repeatedly to combat disinformation and preserve election integrity worldwide. In the lead-up to the 2018 U.S. midterms, and again in May 2019 just prior to the European Parliament elections, Mark Zuckerberg championed self-imposed disclosure standards and a new ad archive application programming interface (API) that would ostensibly inform the public as to how, why, and by whom it was being targeted via online ads. Facebook’s ad library, first unveiled in May 2018, was strictly limited to political ads and remained largely unusable to researchers absent a workable API. Facebook said these new efforts would be so effective that independent ad databases and transparency tools would prove superfluous.
None of this has turned out as promised. During the 2018 U.S. midterm elections, ad purchasers were still able to retain anonymity. In October 2018, Vice News reported being able to game the system, easily sneaking false “paid for by” disclosures through in the names of all 100 senators. Then in January 2019, ad transparency tools created by ProPublica, Mozilla, and WhoTargetsMe inexplicably stopped working. Facebook had covertly inserted code into its platform that blocked the tools, and it tersely informed ProPublica after the fact that its browser plugin had violated the terms of service.
Blockading Researcher Access
The following month, a Mozilla-led ad hoc coalition penned an open letter urging Facebook, with the upcoming European Parliament elections in view, to fulfill its promise of combating disinformation. The letter charged that Facebook was directly and notoriously undermining transparency, contrary to its own purported goals. In conjunction with its call for reform, Mozilla published an outline of its vision of a “successful” ad archive, including five expert standards, only two of which, by Mozilla’s estimation, Facebook had previously fulfilled.
Shortly thereafter, Facebook announced an ad archive API of its own, which it launched in March. Facebook director of product Rob Leathern assured critics of the company’s commitment to “a new level of transparency for ads.” The new tool, the company claimed, would allow all users to search and examine “all active and inactive ads related to politics or issues of importance.”
By the end of April, however, Mozilla and other outside observers had concluded that the new tool was radically inadequate for its purported use. “The fact is,” Mozilla argued, “the API doesn’t provide necessary data. And it is designed in ways that hinder the important work of researchers, who inform the public and policymakers about the nature and consequences of misinformation.”
First, the API is searchable only by keyword and lacks filters that would enable searches according to specific criteria. Second, the API provides no information on targeting criteria or engagement data (e.g., likes and shares). Researchers cannot see what types of users an advertiser is trying to influence, or whether those attempts were successful.
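To make the constraint concrete, here is a minimal sketch of a query against the ad archive API. It assumes the endpoint and parameter names as documented around the March 2019 launch (the ads_archive edge with search_terms, ad_type, and ad_reached_countries, queried with an access token from an identity-verified developer account); versions and field names may have changed since. The point is structural: the only handle the API offers is a free-text keyword, and nothing in the request or response touches targeting or engagement.

```python
import requests

ACCESS_TOKEN = "YOUR_ACCESS_TOKEN"  # requires an identity-verified developer account

# Endpoint and parameters as documented around the March 2019 launch (assumption).
URL = "https://graph.facebook.com/v3.2/ads_archive"

params = {
    "access_token": ACCESS_TOKEN,
    "ad_type": "POLITICAL_AND_ISSUE_ADS",
    "ad_reached_countries": "['US']",
    # The only way in: a free-text keyword. There is no parameter for
    # targeting criteria, audience demographics, or engagement metrics.
    "search_terms": "immigration",
    "fields": "page_name,ad_creative_body,ad_delivery_start_time,spend",
}

resp = requests.get(URL, params=params)
resp.raise_for_status()
for ad in resp.json().get("data", []):
    # Each record describes the ad and its buyer, but says nothing about
    # who was targeted or how users engaged with it.
    print(ad.get("page_name"), "|", ad.get("ad_creative_body", "")[:80])
```

Even a successful query, in other words, answers only what an ad said and who paid for it, not who saw it or whether it worked.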
Mozilla’s foremost complaint is that Facebook’s database does not provide all ad data, which means that “since you cannot download data in bulk and ads in the API are not given a unique identifier, Facebook makes it impossible to get a complete picture of all of the ads running on their platform (which is exactly the opposite of what they claim to be doing).” Those with access to the beta version noted early on that the API was limited to text content and that its functionality was anything but smooth.
Mozilla’s observations have prompted a series of speculations and criticisms. Experts have noted that Facebook has actually made the ad archive harder to use over time. It has cracked down on the use of “scraping” tools, forcing researchers to search the sprawling database manually and making it nearly impossible to obtain a comprehensive assessment of the ever-growing data sets. What is more, it has remained difficult to keep up with the flood of new ads, since Facebook limits the number of searches a research account can make in its archive.
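The practical effect of those limits is easy to illustrate. The sketch below is a hypothetical crawl of one keyword’s results, assuming the Graph API’s standard cursor-based paging (a `paging.next` URL in each response) and treating any non-200 response as the archive’s rate limiting; the request cap is purely illustrative. With per-account limits, no bulk export, and, per Mozilla, no unique ad identifier, a crawl like this can neither keep pace with new ads nor reliably deduplicate what it has already collected.

```python
import time
import requests

def crawl_ads(url, params, max_requests=450):
    """Hypothetical crawl of one keyword's results via cursor paging."""
    collected = []
    made = 0
    while url and made < max_requests:
        resp = requests.get(url, params=params)
        made += 1
        if resp.status_code != 200:
            # Typical symptom of rate limiting: back off and retry,
            # burning wall-clock time instead of the request budget.
            time.sleep(60)
            continue
        payload = resp.json()
        # With no stable unique identifier exposed per ad, results from
        # an earlier crawl cannot be reliably deduplicated against these.
        collected.extend(payload.get("data", []))
        url = payload.get("paging", {}).get("next")
        params = None  # the `next` URL already carries the query string
    return collected
```

Multiply one keyword’s crawl by every keyword a research team would need, divide by the search quota, and the “complete picture” Mozilla asks for recedes out of reach.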
Despite a flurry of activity, Facebook appears to have made little actual progress on transparency. While it often pledges increased ad transparency, its transparency products (the ad archive and API) are designed to frustrate effective inquiry. It is difficult to call this transparency shadow play anything but obstructionist.
Obstinate Failure to Cooperate
Just over a year ago, Mark Zuckerberg appeared before Congress to address the 2016-2017 “tornado of scandal.” He promised improved transparency measures regarding online ads and user data. At the hearings, transparency was treated as a middle ground, a modest palliative to the power of Big Tech. At the time, Zuckerberg seemed agreeable, even enthusiastic, especially if it meant staving off overbearing regulation (for example, removal of Section 230 protection). It was clear even then, however, that we had not entered a new era of government-platform cooperation. The U.K. Digital, Culture, Media and Sport Committee reported repeated resistance to its inquiries on the part of Facebook and other Big Tech companies.
Not only have Facebook’s transparency products been essentially defective, but its openness with governments is at best selective. Sweeping apologies are not matched by effective cooperation. In May 2018, after apologizing to the European Parliament for past ills, Zuckerberg managed to dodge answering any serious questions.
Facebook executives have hailed the recently proposed French framework as a model for EU policy, and Zuckerberg apparently established a cordial relationship with President Emmanuel Macron at the May 2019 Big Tech meeting in Paris. But in the same month, Zuckerberg and Sheryl Sandberg ignored a subpoena from the Canadian parliament to appear for questioning, which could result in the executives being held in contempt of parliament. Canadian lawmakers pointed out the contradiction between Zuckerberg’s earlier op-ed, in which he was “looking forward to discussing [online issues] with lawmakers around the world,” and his treatment of parliament.

The emerging consensus is that Facebook has squandered its chance to lead the world in transparency policy and self-regulation. As John Wonderlich of the Sunlight Foundation told Vice News, “Overall, political advertising is becoming less transparent.” This is due, in part, to a combination of the exponential growth in online advertising over the past three years and Facebook’s lackluster efforts to meaningfully improve ad transparency. With the 2020 U.S. elections fast approaching, experts are increasingly worried that social media companies, along with Google and YouTube, are unprepared for the onslaught of disinformation that some predict will make the preceding four years look like a mere “dress rehearsal.”
These criticisms are echoed in the European Commission’s responses to the progress reports submitted under the self-regulatory Code of Practice on Disinformation by its signatories (Google, Facebook, Twitter, and Mozilla). Facebook has received the least favorable feedback.
In both the January and April reports, the story was the same. While the European Commission acknowledged that Facebook and the other parties had taken steps toward implementing their commitments, it criticized the company for so stringently limiting the access of outside researchers. It was also apparent to the European Commission that the impact of Facebook’s myriad disclosure and election security policies is difficult to appraise absent a “more detailed breakdown” of hard data, which Facebook had not provided.
Impenetrable Content Regulation
Current transparency concerns are not limited to political advertising. Facebook’s internal processes related to content regulation remain impenetrable.
Last year, Facebook established the Data Transparency Advisory Group (DTAG), an independent group of scholars tasked with evaluating whether the metrics shared in Facebook’s Community Standards Enforcement Reports provide “accurate and meaningful measures of Facebook’s content moderation challenges.”
DTAG’s initial findings were published in November 2018, and its first full report was released in May 2019. There, the group concluded that more transparency around content moderation is urgently needed. Among its suggestions, DTAG advised that Facebook be more open about how it applies its Community Standards and seek opportunities to involve users in enforcement efforts. It recommended releasing metrics pertinent to content regulation decisions and, as the Facebook “Supreme Court” comes into full swing by the end of the year, providing similar data on appeals decisions. The report also gave examples of the more detailed explanations that could be shown to users whose content has been removed. And for the sake of greater user involvement, DTAG recommended establishing randomly selected user juries to assist in deciding cases, along with an elected parliament of users empowered to define and amend the Community Standards.
The recent French government-commissioned framework for social network regulation draws attention to the intentional opacity of social media firms’ content moderation procedures. This obscurity is partly an inherent feature of the “extreme asymmetry of information” flow, and partly purposeful design by the firms to avoid unwanted scrutiny and accountability.
Though the French proposal is plainly critical of the current state of affairs, it takes pains not to bash social media companies and presents much that is in line with DTAG’s observations and recommendations. Indeed, it is no secret that Facebook has a better relationship with France than with the United Kingdom, whose parliamentary report famously concluded that companies like Facebook must not be allowed to behave like “digital gangsters.”
The French proposal rejects the “punitive approach” of the recent German and Australian laws, which penalize both those who post unlawful content and the platforms that host it. Instead, the proposal argues that a “preventative” or “compliance” approach is optimal because it does not incentivize over-censorship and encourages platforms to improve their pre-existing systems. As in the financial sector, platforms would not be penalized for the presence of unlawful content per se, but only for failing to comply sufficiently with preventative regulations designed to mitigate such content. The central idea is to incentivize content-moderation mechanisms through normative regulatory standards.
Even the relatively nuanced and moderate French regulatory proposal depends on increased transparency about ads and content moderation. With competition pressure now mounting, from congressional hearings to Federal Trade Commission scrutiny of Google, we must not forget the parallel regulatory pressure for transparency. Even if the most aggressive competition policies are put into place and we get smaller, more competitive platforms, they will not remain so for long without more transparent data and advertising practices.