Transatlantic Take

Five Steps to Combat the Infodemic

On March 24, GMF’s Digital Innovation and Democracy Initiative released a Policy Roadmap to safeguard the information ecosystem.

Access to reliable information is crucial for a democracy to function, but in the middle of a pandemic its importance is especially evident. The World Health Organization is warning of an “infodemic” of false information on the coronavirus.

We found that eight of the top ten outlets that repeatedly share false content are pushing misleading or outright false articles about the coronavirus, with headlines such as “STUDY: 26 Chinese Herbs Have a ‘High Probability’ of Preventing Coronavirus Infection” and “Why coronavirus is a punishment from God that should lead to repentance.”

We offer a five-point plan for how policymakers and platforms should address the coronavirus “infodemic” right now.

  1. Create a fund for local journalism, a public good that is essential in a crisis, to be financed by platform ad revenue.

  • States that are shutting down non-essential services have recognized the local press as essential: newsgathering must go on even while everything else shutters. Local news has been decimated by the loss of ad revenue to Internet platforms. The platforms should return a small portion of their gains via a fee that supports the essential service of local reporting.

  • Funds raised would be directed to an independent clearinghouse (like the Corporation for Public Broadcasting) which, working with local partners, would disburse funds to eligible outlets. Eligibility could be limited to outlets that follow journalistic codes of practice (for example, transparency, fact-checking, and corrections), possibly relying on organizations such as the Trust Project, the Credibility Coalition, or NewsGuard.

  • Funds could also be used to highlight and make available local news and critical information, media literacy, civic and voter information, publicly funded scientific research, and government data.

  2. Insist that the dominant platforms (Facebook/Instagram, Google/YouTube, Twitter) join with public watchdogs to create a code of conduct to replace today’s confusing, company-specific rules, which have failed to stem disinformation. Platforms should also release compliance data and submit to audits. The uniform, transparent code should focus on practices, not content.

  • The code should commit platforms to using only accredited fact-checkers and to relying on scientific and public-health bodies (such as the CDC and the WHO) for the promulgation of sound science. It should set practices for steering users toward verified information and away from conspiracy theories, including rules for which content is penalized and how (e.g., reduced amplification, takedowns).

  • With enhanced enforcement must come a robust, transparent appeals process and the release of data on enforcement actions, in a privacy-protected fashion, for monitoring by researchers, civil society, or a government agency (a minimal sketch of one privacy-protecting release convention follows).
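To illustrate what a privacy-protected data release might look like in practice, here is a minimal sketch in Python. It applies one common convention, suppressing counts too small to publish safely, to a hypothetical enforcement-action log; the field names and the minimum cell size are assumptions for this piece, not any platform’s actual reporting format.

```python
# Illustrative only: one privacy-protecting convention (suppressing small
# counts) applied to a hypothetical enforcement-action log.

from collections import Counter

MIN_CELL_SIZE = 10  # counts smaller than this are withheld before release

def aggregate_enforcement_actions(actions: list[dict]) -> dict:
    """Aggregate raw enforcement actions into releasable counts, suppressing
    categories too small to publish without re-identification risk."""
    counts = Counter(a["action_type"] for a in actions)
    return {action: n for action, n in counts.items() if n >= MIN_CELL_SIZE}

# Example: takedowns and amplification reductions are releasable; a category
# with fewer than MIN_CELL_SIZE actions is suppressed from the release.
log = [{"action_type": "takedown"}] * 40 + \
      [{"action_type": "amplification_reduction"}] * 25 + \
      [{"action_type": "account_suspension"}] * 3
print(aggregate_enforcement_actions(log))
```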

  3. Platforms should employ user design that highlights credible scientific information and news organizations that follow journalistic standards, and that heightens the cognitive autonomy of users, so they no longer need to operate in digital darkness, unaware when they are viewing and sharing false content. User interfaces should empower users, offering transparency, information, and options that are accessible and intuitive.

  • News: To be identified as news, linked content should disclose sources of funding and editorial control and comply with journalistic standards of fact-checking and transparency. Platforms should agree to highlight independent, public-interest news outlets that follow such journalistic standards.

  • Algorithms: Platforms should provide greater disclosure of the way algorithms operate, including about how content is curated, ranked, targeted, and recommended.

  • Transparent Design: Users should also receive easy-to-understand information about whether a video or audio clip has been altered, and user verification should be routine, making it easier to identify, label, remove, and archive bots and fake accounts.

  • Friction: Salubrious friction should be introduced to slow the circulation of harmful information, for example by defaulting users out of personal data sharing, defaulting out of endless scroll, and down-ranking non-credible sources (see the sketch below).
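To make the down-ranking form of friction concrete, here is a minimal sketch in Python. The credibility score, the credibility floor, and the dampening factor are illustrative assumptions for this piece, not any platform’s published ranking logic.

```python
# Illustrative sketch only: the fields and constants below are assumptions,
# not any platform's actual ranking system.

from dataclasses import dataclass

@dataclass
class Post:
    engagement_score: float    # the platform's base ranking signal
    source_credibility: float  # 0.0-1.0, e.g., a NewsGuard-style rating

CREDIBILITY_FLOOR = 0.6  # below this, the source is treated as non-credible
DAMPENING_FACTOR = 0.5   # how strongly non-credible sources are down-ranked

def ranked_score(post: Post) -> float:
    """Return the feed score, dampened when the source is non-credible."""
    if post.source_credibility < CREDIBILITY_FLOOR:
        return post.engagement_score * DAMPENING_FACTOR
    return post.engagement_score

# A low-credibility source circulates more slowly than its raw engagement
# would otherwise dictate.
print(ranked_score(Post(engagement_score=100.0, source_credibility=0.3)))
```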

  4. Hold platforms responsible for harmful viral misinformation. Section 230 immunity should be modified so that the largest platforms are exposed to liability for harmful content that has garnered a certain level of engagement: content that has been hugely amplified and that the platforms have presumably had an opportunity to take down (the sketch below illustrates such a trigger).
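As a rough illustration of the proposed trigger, the sketch below flags content once its engagement passes a fixed threshold, the point at which a dominant platform could no longer claim it lacked an opportunity to review. The threshold value and fields are hypothetical, not statutory language.

```python
# Hypothetical illustration of the engagement-threshold trigger described
# above; the threshold and fields are assumptions, not legislative text.

ENGAGEMENT_THRESHOLD = 100_000  # shares + reactions + comments (illustrative)

def liability_exposure_applies(engagement_count: int,
                               platform_is_dominant: bool) -> bool:
    """True when, under this proposal, Section 230 immunity would no longer
    shield the platform for this piece of content."""
    return platform_is_dominant and engagement_count >= ENGAGEMENT_THRESHOLD
```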

  5. The public health emergency highlights the danger of dark-money-funded messages targeting small audiences. Platforms must finally be required to adopt greater ad transparency (including a standardized, searchable ad database and know-your-customer procedures to bring dark-money funding into the light), to fact-check ads, and to limit ad targeting.

  • Congress should finally pass the bipartisan Honest Ads Act to subject online political ads to the same transparency rules as ads on broadcast media, and to require that platforms archive this information in an easily searchable and sortable database exposed through an application programming interface (a sketch of what one archive record might contain follows this list). Dark-money groups should be required to verify the names of their funders when placing ads.

  • Ads should also be subject to fact-checking and removed from Section 230 platform immunity, forcing platforms to take responsibility for any harms that ensue from ads they run.

  • Platforms should limit targeting of political ads to only one level below the geography of the candidate’s constituency (while also ensuring that all qualified candidates are entitled to equal access to mitigate the risk that any limit on microtargeting would harm smaller campaigns).
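To suggest what a standardized, searchable archive entry might contain, here is a minimal sketch. The field names are assumptions drawn from the goals above (funder verification, targeting disclosure, sortable records), not from the text of the Honest Ads Act or any existing platform API.

```python
# A minimal, hypothetical schema for one entry in a searchable political-ad
# archive; field names are illustrative, not drawn from any existing API.

from dataclasses import dataclass, asdict
import json

@dataclass
class PoliticalAdRecord:
    ad_id: str
    sponsor_name: str            # verified via know-your-customer checks
    verified_funders: list[str]  # named funders behind dark-money groups
    spend_usd: float
    targeting_criteria: dict     # disclosed audience targeting
    impressions: int
    first_shown: str             # ISO 8601 date
    last_shown: str

record = PoliticalAdRecord(
    ad_id="ad-0001",
    sponsor_name="Example PAC",
    verified_funders=["Jane Doe"],
    spend_usd=2500.0,
    targeting_criteria={"geography": "statewide"},
    impressions=120000,
    first_shown="2020-03-01",
    last_shown="2020-03-15",
)

# The archive would expose records like this as sortable JSON via an API.
print(json.dumps(asdict(record), indent=2))
```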

Download the PDF »