Addressing Information Pollution with a “Superfund for the Internet”
This is the fourth installment of a new white paper series on alternative regulatory responses to misinformation. The articles, which were produced with the editing and support of the WIII Initiative, will be presented at a series of events on March 8, 10, and 12. You can find more information, including registration details for our launch events, here.
- Introduction
Current policy approaches to combat the spread of misinformation in the United States are largely focused on the content moderation practices of digital platforms. But our contemporary news and information ecosystem is complex and interconnected, with quality information and misinformation flowing across both legacy and digital media. Often, misinformation originates on fringe websites, gets amplified by partisan media, and is then spun back out online.[1] Platforms are not the source of the problem, but thanks to their advertising-driven business model, which rewards emotional engagement and time-on-device, they benefit from the systems that create and amplify misinformation.[2] Policy strategies focused exclusively on controlling speech on platforms will also inevitably encounter First Amendment hurdles while doing little to actively promote a healthy and robust civic discourse.
This paper offers a creative, evidence-based policy proposal that addresses the twin problems in our information ecosystem: the virulent spread of misinformation on digital platforms, and the crisis in local journalism. It considers the dominance of the major digital platforms in political and social discourse and mandates that they adopt a content moderation approach that serves the public interest. Since toxic content on digital platforms is sometimes compared to toxic chemicals dumped by industrial companies into fresh water, our proposal is modeled on the Environmental Protection Agency’s 1980 Superfund program to clean up toxic waste sites: a “superfund for the internet.” It would create demand for news analysis and fact-checking services among the major information distribution platforms, and reward the supply of such services by reputable news organizations, creating a new revenue stream for the local news organizations that support healthy civic information and discourse.
This paper calls for the platforms, accountable to an independent expert body established through legal mandate, to master the process of identifying, minimizing, and helping the public navigate misinformation, without interfering with constitutionally protected speech rights. In doing so, it is possible to provide an essential new revenue stream to the local journalism organizations that also help protect our public and democratic institutions.
- The Essential Role of Local Journalism in Mitigating Misinformation
Local journalism has traditionally been the primary source of helpful, truthful information for communities. A Poynter Media Trust Survey in 2018 found that 76 percent of Americans across the political spectrum have “a great deal” or “a fair amount” of trust in their local television news (compared to 55 percent trust in national network news), and 73 percent have confidence in local newspapers (compared to 59 percent in national newspapers).[3] A Gallup survey in 2019 found that 74 percent of Americans trust the accuracy of the news and information they get from local news stations (compared to 54 percent for nightly network news programs), and 67 percent trust their local newspapers (compared to 49 percent for national newspapers).[4] A 2019 study from the Knight Foundation’s Trust, Media, and Democracy Initiative with Gallup found that 45 percent of Americans trust reporting by local news organizations “a great deal” or “quite a lot,” while 15 percent have “very little” or no trust at all. In the same survey, the public’s view of national news organizations was more negative than positive: only 31 percent expressed “a great deal” or “quite a lot” of trust, and 38 percent “very little” or no trust, in national news.[5] In aggregate, the data suggests that the viability and availability of local news are important components of a trustworthy information ecosystem, the vital link between informed citizens and a healthy democracy.[6]
Perhaps because of how much they rely on local reporting, 71 percent of Americans think their local news organizations are “doing either somewhat or very well financially.”[7] In reality, after years of losing circulation and advertising revenue to digital platforms (compounded by their own slowness to adapt to the changes in readership brought about by digital technology, and by consolidation and cost-cutting driven by financially motivated owners), local journalism now faces what has been described as an existential threat.[8] Over the past 15 years, the United States has lost more than 2,100 local newspapers, about one-fourth of the total.[9] About 1,800 of the communities that have lost a paper since 2004 now lack easy access to any local news source, such as a local online news site or a local radio station.[10] The policy solution presented here serves to create a potential new funding source for local journalism.
- The Regulatory Context for Content Moderation
The “superfund for the internet” sits within a broader legal and regulatory framework for content moderation. There is precedent in the United States for mandating that industries take particular actions in the public interest, either to guarantee the availability of certain goods and services, or to address externalities that unregulated firms do not take into consideration when making their decisions. For example, since the passage of the Communications Act of 1934, it has been recognized that, because of the essential role of radio and television licensees in public discourse, the FCC has a duty to use its licensing authority to promote “the public interest, convenience and necessity.”[11] This requires that each station licensee identify the needs and problems of its community of license and air programming (news, public affairs, etc.) that is responsive to them. This obligation reflects, in part, the position of telecommunications as an industry essential to democratic self-governance.
The FCC has very limited jurisdiction over digital platforms, and this paper does not recommend its authority be broadened, or that the FCC should oversee the proposal. But the dominance of the platforms in our political and social discourse qualifies them for this same public interest mandate. In other words, the dominant information distribution platforms should be mandated to adopt an approach in their content moderation that serves the public interest. Critically, this must be accomplished in a way that passes First Amendment scrutiny, especially if any government body is involved in the creation or administration of their content moderation approach.
Adoption of a public interest standard for content moderation, whether voluntary or mandated, may represent a significant shift from the platforms’ historical emphasis on free expression. But a new framework is already required: as the platforms have become more active in content moderation and exercise more control over what content their users see, they are migrating from frameworks primarily concerned with facilitating the unfettered exchange of ideas toward frameworks that more carefully manage the content they host. This paper proposes the goal be “to create an online information environment that serves the public interest” and that the companies “welcome a political and public debate as to how the public interest is defined.”[12]
Additionally, the optimal content moderation scheme transparently defines community standards that govern content moderation; focuses on conduct relative to the community standards, not content per se, as the basis of decision-making; describes methods of enforcing the standards; and includes an accessible process for appeal. It should also address the highest-priority sources and harms of misinformation, and the fact that some groups are more likely to be the target of, and suffer harm from, misinformation.
- A Proposal for an Internet Superfund
Toxic content on digital platforms has been compared to toxic chemicals dumped by industrial companies into fresh water, and “paranoia and disinformation” dumped in the body politic has been described as “the toxic byproduct of [Facebook’s] relentless drive for profit.”[13] This proposal is modeled on the Environmental Protection Agency’s 1980 Superfund to clean up toxic waste sites: a “superfund for the internet” (hereinafter “Internet Superfund”). The proposal is to use public policy tools to create a market mechanism to clean up information pollution. This market mechanism would consist of demand and payment for news analysis services from the major information distribution platforms, and creation and compensation of the supply of these services among qualified news organizations. The payments would be collected from the dominant platforms and distributed to qualified news organizations through an independent “trust fund” established by the government. The independent government body administering the trust fund would have no role in the actual identification, review, analysis, or action on content.
The Internet Superfund reflects our interconnected information ecosystem as well as a growing body of research on what actually works, in practice, to counter the spread of misinformation. One important concept from the research is that countering misinformation — and more importantly, countering its devastating effects on individual well-being, societal cohesion, and trust in institutions — is as much about producing and elevating accurate and authoritative information as it is about detecting, countering, or removing misinformation.[14] This “information curation” (which encompasses actively finding what is helpful, contextual, and truthful as well as controlling and restraining what is false and harmful) must be accomplished without violating the United States’ constitutional standards.[15] In the best case, it fosters and promotes speech, including from traditionally marginalized groups and voices.
In addition to research on strategies for countering misinformation, this proposal is supported by information gathered from extensive tracking and reporting on the platforms’ efforts to address misinformation related to the COVID-19 pandemic.[16] Even the platforms that had been most resistant to moderation of political content took a harder line on misinformation about the pandemic, on the belief that health information would be objective and politically neutral. But that’s not how it played out: information about the novel coronavirus pandemic became every bit as politicized as what would normally be considered highly partisan topics, subject to the same patterns of creation and distribution as other content designed to sow division and undermine democratic institutions. That makes the pandemic an appropriate model for how platforms can manage other types of misinformation, including overtly political misinformation. Recent research also indicates that citizens concerned about contracting COVID-19 are more likely to support fact-checking, including of politicians.[17]
Research from Public Knowledge showed that, in a situation in which the potential for harm is high and a finite number of sources for authoritative information are in place, platforms can and will set new standards and develop new solutions to problems previously positioned as insoluble. Many of these standards and solutions were replicated or expanded to mitigate misinformation about the 2020 U.S. presidential election. One common strategy enabled their most effective approaches: partnering with authoritative sources of information, news analysis, and fact-checking. These partnerships allowed the platforms to evaluate sources, verify the accuracy of claims, up- and down-rank content, label posts, direct people who have encountered misinformation to debunking sites, and understand what kinds of misinformation may create the greatest harm. They also began to demonstrate that the power of “fact-checking” lies not in the fact checks themselves, but in how they are used to change the user experience in ways that deter the spread of misinformation.
But COVID-19 also highlighted the limitations of platforms’ efforts. At best, those efforts can be considered a model for a new baseline duty of care.[18] Most notably, the platforms’ efforts to counter misinformation about COVID-19 and the presidential election were highly discretionary: the platforms apply these disinformation mitigation practices only to topics for which they feel the political consequences of doing nothing are greater than the financial losses that would result from doing something. Overall, the existing incentives of digital platforms (including the desire to avoid bad publicity and the desire of advertisers to avoid association with harmful content) are insufficient to address the individual, social, and political harms associated with misinformation. Aligning platforms’ incentives with those of the public interest requires policy mechanisms that lower the cost of good behavior and/or raise the cost of bad behavior while not mandating censorship of permissible speech.[19]
The proposal for an Internet Superfund calls for the platforms themselves, accountable to an independent body established through legal mandate, to master the process of identifying, minimizing, and helping the public navigate misinformation in general, without interfering with constitutionally protected speech rights. The proposal accomplishes this by ensuring that the independent body has no role in the selection of content for review, in the news analysis itself, or in the action taken by platforms on content. In doing so, the proposal provides an essential new revenue stream to local journalism organizations, in the form of a new product or service offering that is consistent with their existing journalism and fact-checking skill set. This, in turn, helps promote a healthy and robust civic discourse, which protects trust in, and the effectiveness of, our public and democratic institutions.
- An Evidence-Based Approach to Content Moderation
Fact-checking (in the context of information pollution) is the process of evaluating the truthfulness and accuracy of published information by comparing an explicit claim against trusted sources of facts.[20] The most prominent body in the field is the non-partisan International Fact-Checking Network (IFCN), which certifies organizations that successfully apply to be signatories to its Code of Principles. The principles are a series of commitments organizations abide by to promote excellence in fact-checking. They comprise what is essentially a good journalistic process, encompassing principles related to fairness, sourcing, transparency, methodology, and corrections.[21] Most importantly, fact-checking is the foundational enabler of the platforms’ ability to apply labels, make changes in user experience to reduce sharing, demonetize, and, when appropriate, “cleanse” or remove false or harmful content or accounts. It is an appropriate role for policymakers and regulators to encourage the development of such services, provide opportunities for platforms and service providers to share information necessary to develop these services, and ensure a competitive market in their development.[22]
Research has shown that flagging false news may reduce the sharing of deceptive information on social media.[23] A recent study describing “the state of the art” in measuring the impact of fact-checking demonstrated that fact-checking has a positive impact in reducing misinformation about specific claims related to COVID-19, and that there are ways to extend and optimize fact-checking’s impact.[24] A “Debunking Handbook” written by 22 prominent scholars of misinformation summarized what they believe “represents the current consensus on the science of debunking,” and described the essential role of fact-checking in debunking and unsticking misinformation.[25] Importantly, some pitfalls reported in earlier research, like the so-called “backfire effect,” in which corrections actually strengthened misperceptions, or the “repetition effect,” in which false information is reinforced when it is repeated within the correction, have been shown not to be robust, measurable empirical phenomena.[26]
The dominant information distribution platforms, including Facebook, Instagram, Google and YouTube, already use fact-checking services.[27] Twitter uses internal systems to monitor content and relies on “trusted partners” to identify content that may cause offline harm.[28] But today, the user experience for fact checks varies widely by platform. Researchers say it is impossible to know how successful or comprehensive the companies have been in removing bogus content because the platforms often put conditions on access to their data. Even the platforms’ own access tools, like CrowdTangle, do not allow filtering for labeled or fact-checked posts.[29] The platforms control virtually every aspect of their interaction with fact-checking organizations, and those organizations have complained that their suggestions for improvements to the process and requests for more information on results go unheeded.[30]
There are strong signs that the platforms’ efforts to mitigate misinformation through the use of fact-checking could be made more effective through an independent oversight process. The authors of this paper have been unable to find any academic or social science study that exactly replicates what actually occurs when the results of fact-checking are used to label content, downrank it, create friction in the sharing experience, notify users of designations after exposure, and enact other strategies that are embedded in the actual user experience.[31] However, there are indications of a critical multiplier effect. Facebook’s website notes that a news story that has simply been labeled false sees its future impressions on the platform drop by 80%.[32] Facebook has also claimed that warnings on COVID-19 misinformation deterred users from viewing flagged content 95% of the time.[33] Twitter reported a 29% decrease in “quote-tweeting” of 2020 election information that had been labeled as refuted by fact-checkers.[34] The proposal for an Internet Superfund would require the qualifying digital platforms to be more transparent in their practices as well as the results associated with them, to share best practices, to share appropriately privacy-protected data with researchers, and to try alternatives from researchers and civil society groups to improve results.[35],[36]
Because of their association with their certifying bodies, as well as their own journalistic brands, fact-checking organizations have collectively “proven themselves as professionally legitimate, trustworthy, and, to some extent, beyond the charges of interest and bias that are often leveled at Facebook.”[37] Despite that, some partisan groups have claimed fact-checking is arbitrary, or an extension of the liberal-leaning editorial bias of the organization doing the checking.[38] This is demonstrably untrue. The fact-checkers themselves come from a range of backgrounds, including journalism but also political science, economics, law, and public policy.[39] In fact, some of the organizations certified by the IFCN lean right, such as Check Your Fact, which is part of the conservative publication The Daily Caller, and The Dispatch, which says on its website it is “informed by conservative principles.”[40],[41] Fact-checking is a fast-growing and diverse industry with organizations taking different approaches to fighting misinformation.[42] It is inevitable that some of the mistrust in the media as a public institution has migrated to the fact-checkers.
There may be ways to enhance or supplement the role of fact-checking, such as the addition of media literacy tools that help consumers evaluate the news for themselves.[43] This proposal calls for fact-checking organizations to be independently certified, then allows the platforms to select those that are compatible with their own content moderation standards and their audiences.
- First Amendment Considerations for the Internet Superfund
The First Amendment does not prevent any and every legislative effort to protect individuals, or society as a whole, from harassing or fraudulent content or content that seeks to undermine democracy and civic discourse.[44] However, both the First Amendment and general concerns for freedom of expression require exercising a good deal of care in crafting legislative solutions that have the potential to impact speech.[45] This area of law requires the balancing of many competing policies, and often requires detailed, fact-specific analysis. First Amendment scholars may come to different conclusions when assessing theoretical proposals such as this one.
As a general rule, the First Amendment applies to government initiatives that control or suppress speech, not to the actions of private companies like social media platforms. Social media platforms are free to establish and enact their own content moderation standards pursuant to their terms of service and community guidelines, and to label or demonetize content as they see fit. Fact-checking itself, as well as resultant actions like warnings, labels, and the addition of interstitial content adjacent to posts, is an example of the platforms exercising their own speech rights.
One potential challenge stemming from the creation of an Internet Superfund is that it could alter this dynamic, creating a nexus between the platforms’ decision-making and the government and therefore changing how the First Amendment applies. One avenue to safeguard against this challenge is to ensure that the governing body for the Internet Superfund has no role in the selection, review, or evaluation of the accuracy of content, or in any other actions on content that the intermediaries might choose to take. This also significantly limits the potential for government abuse of the process. Rather than requiring the adoption of any particular viewpoint, this mechanism simply requires that some fact-checking process be in place at the major information distribution platforms, collects the fees from platforms, and makes the payments to fact-checking organizations.
Another challenge may be in regard to whether the Internet Superfund represents a form of compelled speech, or otherwise places a burden on the platforms’ speech rights. Under this proposal, platforms may partner with qualified news analysis organizations that support their own terms of service and content moderation standards, and they retain discretion about whether or how to act on the outcomes of these fact-checking processes. The requirement that a fact-checking process be in place is content-neutral, and the involvement of the government in establishing this requirement does not suddenly transform the platform into a state actor.[46]
An assessment of the constitutional fit of this proposal would also require considering whether a court might apply an intermediate or a strict standard of scrutiny. Here, again, the fact that the proposal is not content-based, insofar as it applies without regard to the viewpoint or subject matter of the content, argues in favor of an intermediate standard.[47] In defending the proposal against constitutional challenge, the government would need to demonstrate that the statute is narrowly tailored to serve a substantial government interest. Combating misinformation and elevating the availability of quality civic information through local journalism has been shown to be a substantial government interest in and of itself.[48] Depending on the topic, it may also engage government interests in public health (e.g., coronavirus and vaccine misinformation), public safety (e.g., hate speech designed to incite violence against individuals or groups), or national security (e.g., according to the former general counsel of the National Security Agency, misinformation designed to “sow discord...or undermine confidence in our democratic institutions”).[49] Lastly, a court applying intermediate scrutiny would also consider whether there remain “ample alternative channels for communication of information.”[50] This inquiry would focus on “methods of communication.” This proposal impacts only the largest and most dominant platforms, and it calls for independent content moderation policy and decision-making by platforms in accordance with their public rules.
- Administration of the Internet Superfund
One of the key mechanisms for the Internet Superfund is determining which platforms it addresses and how to assign the financial contributions required from each platform. Several past proposals have called for imposing various forms of taxes, usually based on advertising revenue, on digital platforms for the purpose of creating trust funds to support local journalism.[51] For the market mechanism created by the Internet Superfund, a more appropriate tool is a federal user fee: a “fee assessed to users for goods or services provided by the federal government.”[52] User fees generally apply to federal programs or activities that provide special benefits to identifiable recipients above and beyond what is normally available to the public.[53] Examples of federal user fees include tobacco fees collected by the Food and Drug Administration, filing fees collected by the Securities and Exchange Commission, and the motor vehicle and engine compliance program fee collected by the EPA.[54] Although the public as a whole certainly benefits from fact-checking and the flourishing of local news media, it is the platforms that receive special benefits. Thus, the platforms would be the payers of the federal user fee.
The standards for platforms’ obligations under the Internet Superfund should be based on their dominant role in communications and the incentives associated with their business models. This proposal suggests that digital platforms should be required to contribute a user fee based on their total number of monthly active users, provided that they meet all of the following criteria (a minimal sketch of this qualification test appears after the list):
- Are based in the United States;
- Rely predominantly on locating, indexing, linking to, displaying or distributing third-party or user-generated content for their commercial value, in the form of an advertising business model that generates more than 85% of the company's total annual revenue;
- Have advertising-based revenue exceeding $1 billion annually; and
- Have a total global monthly active base of at least 1 billion users.
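To make these criteria concrete, the following is a minimal sketch in Python of the qualification test described above. The thresholds come directly from the list; the data structure, field names, and the Facebook example values (particularly the advertising share of revenue) are illustrative assumptions, not part of any proposed statute.

```python
from dataclasses import dataclass

@dataclass
class Platform:
    name: str
    us_based: bool               # criterion 1: based in the United States
    ad_revenue_share: float      # criterion 2: share of total annual revenue from advertising
    annual_ad_revenue: float     # criterion 3: advertising-based revenue, in dollars
    monthly_active_users: float  # criterion 4: total global monthly active users

def qualifies_for_superfund(p: Platform) -> bool:
    """Apply the four qualification criteria proposed above."""
    return (
        p.us_based
        and p.ad_revenue_share > 0.85      # advertising generates more than 85% of total revenue
        and p.annual_ad_revenue > 1e9      # advertising revenue exceeds $1 billion annually
        and p.monthly_active_users >= 1e9  # at least 1 billion global monthly active users
    )

# Example with assumed, approximate figures for Facebook
facebook = Platform("Facebook", us_based=True, ad_revenue_share=0.98,
                    annual_ad_revenue=84e9, monthly_active_users=2.7e9)
print(qualifies_for_superfund(facebook))  # True
```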
This would mean that Google (based on its estimated 1.0 billion monthly active search users), Facebook (2.7 billion), YouTube (2.0 billion) and Instagram (estimated at 1.1 billion) would currently qualify for the fund.[55] Assuming a purely illustrative fee of $1 annually per monthly active user, the Internet Superfund would start as an annual fund of $6.8 billion for information analysis services to clean up the internet. In that case, the total user fees assigned to each platform would represent just 1.6% (Google, including YouTube) or 4.4% (Facebook, including Instagram) of total global corporate revenue.[56] Even a fee of $0.10 per monthly active user, collected from the leading information distribution platforms, would allow over half a billion dollars for information cleanup.
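The fee arithmetic above can be reproduced in a few lines. The monthly active user counts and the illustrative $1 fee are the figures cited in this paper; the full-year 2020 revenue figures for Alphabet (roughly $182.5 billion, including Google and YouTube) and Facebook (roughly $86 billion, including Instagram) are drawn from the earnings releases cited in note 56.

```python
# Monthly active users (billions), per the estimates cited in this paper
mau = {"Google": 1.0, "Facebook": 2.7, "YouTube": 2.0, "Instagram": 1.1}

fee_per_user = 1.00  # purely illustrative annual fee, in dollars

total_fund = sum(mau.values()) * 1e9 * fee_per_user
print(f"Annual fund at $1 per user: ${total_fund / 1e9:.1f} billion")  # $6.8 billion

# Fees as a share of full-year 2020 corporate revenue
alphabet_revenue = 182.5e9  # Alphabet, which includes Google and YouTube
facebook_revenue = 86.0e9   # Facebook, which includes Instagram

google_fee = (mau["Google"] + mau["YouTube"]) * 1e9 * fee_per_user
facebook_fee = (mau["Facebook"] + mau["Instagram"]) * 1e9 * fee_per_user
print(f"Google fee as share of revenue: {google_fee / alphabet_revenue:.1%}")      # 1.6%
print(f"Facebook fee as share of revenue: {facebook_fee / facebook_revenue:.1%}")  # 4.4%

# Even a $0.10 fee per monthly active user yields well over half a billion dollars
print(f"Fund at $0.10 per user: ${sum(mau.values()) * 1e9 * 0.10 / 1e6:.0f} million")  # $680 million
```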
A calculation method based on the number of monthly active users avoids the need to know in advance the quantity of information that will need to be fact-checked, or what proportion of it is “good” or “bad”: it can be assumed that the quantity of misinformation, and the potential for harm associated with it, increase with the number of users of each platform. This approach also avoids the complexity that a so-called “bit tax” would introduce in accounting for the different quantities of data represented by video, images, and text. Monthly active users is a nonfinancial performance indicator often used to assess online user growth and engagement among the platforms.[57] The fees could be assessed on a monthly or quarterly basis, accounting for any fluctuations in the number of active users. Use of this metric would provide a strong incentive to standardize its calculation and to clean up fraudulent, unidentified, and corrupting accounts on each platform.[58] Lastly, using a fee instead of a tax on advertising avoids having to account for fluctuations in spending by advertisers, which may have no correlation with the amount of misinformation flowing across platforms.
The proposed framework for the collection, allocation, and distribution of payments under the Internet Superfund is modeled after the system put in place for the collection of regulatory user fees by the EPA and the allocation and distribution of payments from the EPA Superfund.[59] Payments from platforms would be housed in a federal trust fund, administered by an independent body, and allocated among qualified news organizations certified to cleanse toxic information. Each platform subject to the fees would have a member account tied to an Internet Superfund website used to calculate and assess costs. This account would be connected to the federal agency designated to house the Internet Superfund (again, ideally one with other regulatory authority over platforms).[60]
In order to avoid politicization of the allocations to fact-checking organizations, the funds would be disbursed as fees per hour, amount of content, or number of fact checks completed for the platforms. In other words, a general rate and basis of payment would be established by the federal agency (potentially derived from the payment schedules already in place between platforms and fact-checking organizations), but the actual amount each news organization receives would depend on the amount of service exchanged between the platforms and that news organization. Regular audits would be conducted by an independent committee of stakeholders to prevent organizations from taking advantage of the payment system and inflating their fees. In addition to funding news organizations, money from the Internet Superfund would go toward administration of the fund itself.
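As a sketch of this disbursement logic under stated assumptions: the rate schedule, service units, and organization names below are hypothetical. The paper specifies only that a general rate and basis of payment would be set by the agency and that actual payments track the amount of service exchanged.

```python
# Hypothetical rate schedule set by the designated federal agency (dollars per unit)
RATES = {"fact_check": 350.0, "analysis_hour": 90.0}

# Hypothetical service records reported by platforms and news organizations
service_log = [
    {"org": "Local Gazette", "unit": "fact_check", "quantity": 120},
    {"org": "Local Gazette", "unit": "analysis_hour", "quantity": 300},
    {"org": "Metro Ledger", "unit": "fact_check", "quantity": 75},
]

def disbursements(log, rates):
    """Allocate trust fund payments as the agency rate times units of service delivered."""
    totals = {}
    for entry in log:
        amount = rates[entry["unit"]] * entry["quantity"]
        totals[entry["org"]] = totals.get(entry["org"], 0.0) + amount
    return totals

for org, amount in disbursements(service_log, RATES).items():
    print(f"{org}: ${amount:,.2f}")  # amounts would be subject to independent audit
```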
Fact-checking is a fast-growing and diverse industry with organizations taking different approaches to fighting misinformation, so it is necessary that platforms be given some discretion to partner with the organizations best suited to their individual content moderation policies.[61] Diversity of platform content moderation standards and practices better ensures that all kinds of misinformation can be dealt with effectively and efficiently.[62] However, this does not mean the platforms will be completely without guidance. The designated federal agency will provide the platforms with assistance in choosing qualified fact-checking organizations to partner with, along with guidelines designed to ensure transparency, autonomy, and equity among them.
- Conclusion
The 2020 presidential election and its aftermath have provided more vivid examples of the volume, velocity, and potential for harm associated with misinformation on digital platforms, and of the potential power of strategies rooted in fact-checking, such as labeling, the addition of interstitial content, deamplification, friction, and changes in user experience.[63] Given the scale and complexity of the problem, it is important to acknowledge that the Internet Superfund would be only one element of a system of solutions to create content moderation that supports the public interest. Other proposals may engage competition policy, address reforms to Section 230 of the Communications Act of 1934, seek to regulate the platforms’ internal mechanisms, like algorithmic amplification, or require improved training and working conditions for human content moderators.
Lisa H. Macpherson is Senior Policy Fellow at Public Knowledge. The author wishes to thank Will Jennings, Jonathan Walter, Montana Williams and Alex Petros for their invaluable partnership in preparing this paper.
[1] Yochai Benkler, et al., Partisanship, Propaganda, and Disinformation: Online Media and the 2016 U.S. Presidential Election (Berkman Klein Ctr. for Internet & Soc’y, 2017), https://cyber.harvard.edu/publications/2017/08/mediacloud.
[2] Nathalie Maréchal et al., Getting to the Source of Infodemics: It’s the Business Model, Open Tech. Inst. and New America (May 27, 2020), https://www.newamerica.org/oti/reports/getting-to-the-source-of-infodemics-its-the-business-model/.
[3] Andrew Guess, et al., All Media Trust is Local? Findings from the 2018 Poynter Media Trust Survey (2018).
[4] Megan Brenan, In U.S., 40% Trust Internet News Accuracy, Up 15 Points, Gallup (Aug. 22, 2019), https://news.gallup.com/poll/260492/trust-internet-news-accuracy-points.aspx.
[5] The John S. & James L. Knight Found., State of Public Trust in Local News 12 (2019), https://kf-site-production.s3.amazonaws.com/media_elements/files/000/000/440/original/State_of_Public_Trust_in_Local_Media_final_.pdf.
[6] Steven Waldman, Federal Communications Commission, The Information Needs of Communities: The Changing Media Landscape in a Broadband Age (July 2011), https://www.fcc.gov/sites/default/files/the-information-needs-of-communities-report-july-2011.pdf.
[7] Pew Rsch. Ctr., For Local News, Americans Embrace Digital but Still Want Strong Community Connection (Mar. 26, 2019), https://www.journalism.org/2019/03/26/for-local-news-americans-embrace-digital-but-still-want-strong-community-connection/.
[8] Pen America, Losing the News: The Decimation of Local Journalism and the Search for Solutions 24 (2019), https://pen.org/wp-content/uploads/2019/12/Losing-the-News-The-Decimation-of-Local-Journalism-and-the-Search-for-Solutions-Report.pdf.
[9] Penelope Muse Abernathy, The Center for Innovation and Sustainability in Local Media, Hussman School of Journalism and Media, University of North Carolina at Chapel Hill, News Deserts and Ghost Newspapers: Will Local News Survive? 11 (2020) https://www.usnewsdeserts.com/wp-content/uploads/2020/06/2020_News_Deserts_and_Ghost_Newspapers.pdf.
[10] Id.
[11] Communications Act of 1934, Pub. L. No. 73-416, 48 Stat. 1064 (1934) (codified as amended at 47 U.S.C. §§ 151-646).
[12] Joel Simon, For Big Tech, What Follows ‘Free Expression’?, Colum. Journalism Rev. (Nov. 5, 2020), https://www.cjr.org/opinion/big-tech-senate-hearing-free-expression.php.
[13] Roger McNamee, Facebook Cannot Fix Itself. But Trump’s Effort to Reform Section 230 is Wrong, Time (June 4, 2020), https://time.com/5847963/trump-section-230-executive-order/; Jamelle Bouie, Facebook Has Been a Disaster for the World, N.Y. Times (Sept. 18, 2020), https://www.nytimes.com/2020/09/18/opinion/facebook-democracy.html.
[14] Stephan Lewandowsky & John Cook, The Conspiracy Theory Handbook (2020), https://www.climatechangecommunication.org/wp-content/uploads/2020/03/ConspiracyTheoryHandbook.pdf; Kathleen Hall Jamieson & Dolores Albarracin, The Relation Between Media Consumption and Misinformation at the Outset of the SARS-CoV-2 Pandemic in the US, Harv. Kennedy School (HKS) Misinformation Rev. (2020), https://misinforeview.hks.harvard.edu/article/the-relation-between-media-consumption-and-misinformation-at-the-outset-of-the-sars-cov-2-pandemic-in-the-us/.
[15] Joan Donovan, You Purged Racists From Your Website? Great, Now Get to Work, Wired (July 1, 2020), https://www.wired.com/story/you-purged-racists-from-your-website-great-now-get-to-work/.
[16] Lisa Macpherson & Kathleen Burke, Public Knowledge, How Are Platforms Responding to This Pandemic? What Platforms Are Doing to Tackle Rampant Misinformation (2020), https://misinfotrackingreport.com/wp-content/uploads/2020/06/FinalMisinfoReport.pdf.
[17] Timothy S. Rich, et. al, Research Note: Does the Public Support Fact-Checking on Social Media? It Depends on Who and How You Ask, Harv. Kennedy School (HKS) Misinformation Rev. (2020), https://misinforeview.hks.harvard.edu/article/research-note-does-the-public-support-fact-checking-social-media-it-depends-who-and-how-you-ask/.
[18] In a privacy context, “duty of care” has been proposed to refer to the prohibition of covered entities from causing foreseeable injuries to individuals. See Cameron F. Kerry et al., Brookings, Bridging the Gaps: A Path Forward to Federal Privacy Legislation (2020), https://www.brookings.edu/wp-content/uploads/2020/06/Bridging-the-gaps_a-path-forward-to-federal-privacy-legislation.pdf. See also Data Care Act of 2019, S. 2961, 116th Cong. (2019) (proposed Senate bill establishing “duties for online service providers with respect to end user data that such providers collect and use”). Here, the idea of a “duty of care” refers to the obligation of covered digital platforms to apply a standard of reasonable care in regard to content that could foreseeably harm others. The goal is similar to that of a fiduciary model; that is, to change how digital platforms think about their end users and their obligations to their end users.
[19] Harold Feld, The Case for the Digital Platform Act: Market Structure and Regulation of Digital Platforms 135 (2019).
[20] Alexios Mantzarlis, Will Verification Kill Fact-Checking?, Poynter (Oct. 21, 2015), https://www.poynter.org/fact-checking/2015/will-verification-kill-fact-checking/.
[21] Poynter, IFCN Code of Principles, https://www.ifcncodeofprinciples.poynter.org/know-more/the-commitments-of-the-code-of-principles (last visited Sept. 21, 2020).
[22] Supra note 19, at 168.
[23] Paul Mena, Cleaning Up Social Media: The Effect of Warning Labels on Likelihood of Sharing False News on Facebook, 12 Policy & Internet 165 (2019), https://onlinelibrary.wiley.com/doi/abs/10.1002/poi3.214.
[24] Tracie Farrell et al., Assessment of the Online Spread of Coronavirus Information, The HERoS Project (Sept. 2020), https://www.heros-project.eu/wp-content/uploads/Assessment-of-the-online-spread-of-coronavirus-misinformation.pdf (last visited Dec. 18, 2020).
[25] Stephan Lewandowsky, et al., The Debunking Handbook 2020 (2020), https://sks.to/db2020.
[26] Briony Swire-Thompson, et al., Searching for the Backfire Effect: Measurement and Design Considerations, 9 J. Applied Rsch. Memory & Cognition 286 (2020), https://doi.org/10.1016/j.jarmac.2020.06.006.
[27] Fact-Checking on Facebook, Facebook, https://www.facebook.com/business/help/2593586717571940?id=673052479947730 (last visited Sept. 21, 2020); Why is a Post on Instagram Marked as False Information?, Instagram, https://www.facebook.com/business/help/2593586717571940?id=673052479947730 (last visited Sept. 21, 2020); Find Fact Checks in Search Results, Google, https://support.google.com/websearch/answer/7315336 (last visited Sept. 21, 2020); See Fact Checks in YouTube Search Results, YouTube, https://support.google.com/youtube/answer/9229632 (last visited Sept. 21, 2020).
[28] Yoel Roth & Nick Pickles, Updating Our Approach to Misleading Information, Twitter (May 11, 2020), https://blog.twitter.com/en_us/topics/product/2020/updating-our-approach-to-misleading-information.html.
[29] Tommy Shane, Searching for the Misinformation ‘Twilight Zone’, Medium (Nov. 24, 2020), https://medium.com/1st-draft/searching-for-the-misinformation-twilight-zone-63aea9b61cce.
[30] Daniel Funke & Alexios Mantzarlis, We Asked 19 Fact-Checkers What They Think of Their Partnership with Facebook. Here’s What They Told Us, Poynter (Dec. 14, 2018), https://www.poynter.org/fact-checking/2018/we-asked-19-fact-checkers-what-they-think-of-their-partnership-with-facebook-heres-what-they-told-us/.
[31] Emily Saltz, et al., It Matters How Platforms Label Manipulated Media. Here are 12 Principles Designers Should Follow, Medium (2020). https://medium.com/swlh/it-matters-how-platforms-label-manipulated-media-here-are-12-principles-designers-should-follow-438b76546078.
[32] Craig Silverman, Facebook Says Its Fact Checking Program Helps Reduce the Spread of a Fake News Story by 80%, Buzzfeed News (Oct. 11, 2017), https://www.buzzfeednews.com/article/craigsilverman/facebook-just-shared-the-first-data-about-how-effective-its.
[33] Elizabeth Culliford, TikTok Faces Another Test: Its First U.S. Presidential Election, Reuters (Sept. 17, 2020), https://uk.reuters.com/article/uk-usa-election-tiktok-misinformation-in-idUKKBN2682PP.
[34] Vijaya Gadde and Kayvon Beykpour, An Update on Our Work Around the 2020 US Election, Twitter (Nov. 12, 2020), https://blog.twitter.com/en_us/topics/company/2020/2020-election-update.html.
[35] Irene V. Pasquetto, et al., Tackling misinformation: What researchers could do with social media data, Harv. Kennedy School (HKS) Misinformation Rev. (2020), https://misinforeview.hks.harvard.edu/article/tackling-misinformation-what-researchers-could-do-with-social-media-data/
[36] Nick Statt, Major Tech Platforms Say They’re ‘Jointly Combating Fraud and Misinformation’ About COVID-19, The Verge (Mar. 16, 2020), https://www.theverge.com/2020/3/16/21182726/coronavirus-covid-19-facebook-google-twitter-youtube-joint-effort-misinformation-fraud.
[37] Mike Ananny, The Partnership Press: Lessons for Platform-Publisher Collaborations as Facebook and News Outlets Team to Fight Misinformation, Colum. Journalism Rev. (Apr. 4, 2018), https://www.cjr.org/tow_center_reports/partnership-press-facebook-news-outlets-team-fight-misinformation.php.
[38] Craig Timberg & Andrew Ba Tran, Facebook’s Fact-Checkers Have Ruled Claims in Trump Ads Are False – But No One is Telling Facebook’s Users, Wash. Post (Aug. 5, 2020), https://www.washingtonpost.com/technology/2020/08/05/trump-facebook-ads-false/.
[39] Lucas Graves & Federica Cherubini, Digital News Project, The Rise of Fact-Checking Sites in Europe (2016), https://reutersinstitute.politics.ox.ac.uk/sites/default/files/research/files/The%2520Rise%2520of%2520Fact-Checking%2520Sites%2520in%2520Europe.pdf.
[40] Poynter Accepts The Daily Caller’s ‘Check Your Fact’ Site To International Fact-Checking Network, The Daily Caller.
[41] About, The Dispatch, https://thedispatch.com/about.
[42] Emily Bell, The Fact-Check Industry (2019), https://www.cjr.org/special_report/fact-check-industry-twitter.php.
[43] Harold Feld & Jane Lee, Part V: We Need to Fix the News Media, Not Just Social Media — Part 3, Public Knowledge (Oct. 20, 2018).
[44] Illustratively, the existence of the “Public Forum Doctrine” and specific cases arising from the regulation of electronic media by the FCC make clear that viewpoint-neutral federal regulation designed to promote access to competing sources of news and a diversity of views and opinions is permitted by the Constitution. See Turner Broadcasting System, Inc. v. FCC (Turner I), 512 U.S. 622 (1994) (finding support for local news was "content neutral" because the government did not favor particular content, but simply continued a long federal policy of promoting access to news and important community perspectives). See also Daniels Cablevision v. FCC, 835 F. Supp. 1, 21 (D.D.C. 1993) (holding the PEG requirement content neutral and thus survives intermediate scrutiny), aff’d on other grounds sub nom. Time Warner Entertainment Co., L.P. v. FCC, 93 F.3d 957 (D.C. Cir. 1996).
[45] Supra note 19, at 6. To be clear, the fact that certain speech is protected by the First Amendment does not eliminate the possibility of legal liability for the speech under various alternative theories. For example, speakers may still be held liable for libel.
[46] See Manhattan Community Access Corp. v. Halleck, 139 S. Ct. 1921 (2019) (holding private operators of public access cable channels are not state actors).
[47] See Ward v. Rock Against Racism, 491 U.S. 781, 791-794 (1989).
[48] See Turner Broadcasting System, Inc. v. FCC, 512 U.S. 622 (1994) (finding “promoting the widespread dissemination of information from multiple sources” to be an important government interest).
[49] Tackling Disinformation is National Security Issue Says Former NSA General Counsel, CBS News (Dec. 16, 2020) https://www.cbsnews.com/news/tackling-disinformation-is-national-security-issue-says-former-nsa-general-counsel.
[50] See Ward, 491 U.S. at 791; see also Turner Broadcasting System, Inc. v. FCC, 512 U.S. 622 (1994) (finding the FCC’s must-carry provisions to be content-neutral and the “editorial discretion” of the broadcasters to be a speech interest that permits intermediate scrutiny).
[51] See Victor Pickard, Journalism’s Market Failure is a Crisis for Democracy, Harvard Business Review (Mar. 12, 2020), https://hbr.org/2020/03/journalisms-market-failure-is-a-crisis-for-democracy (proposing taxation of platforms like Facebook and Google to fund public media); Frank Blethen, Save the Free Press Initiative Seeks Solutions for Local Journalism, Seattle Times (May 1, 2020), https://www.seattletimes.com/opinion/save-the-free-press-initiative-seeks-solutions-for-local-journalism/ (proposing a “Free Press” trust or superfund funded by a fee on the ad revenue of major internet platforms); Karen Kornbluh & Ellen P. Goodman, Five Steps to Combat the Infodemic, The German Marshall Fund of the U.S.: Transatlantic Takes (Mar. 26, 2020), https://www.gmfus.org/sites/default/files/Kornbluh%20Goodman%20_5%20steps%20to%20combat%20the%20infodemic.pdf (proposing a fund for local journalism); Ethan Zuckerman, The Case for Digital Public Infrastructure, Knight First Amendment Institute (Jan. 17, 2020), https://knightcolumbia.org/content/the-case-for-digital-public-infrastructure (proposing a 1 percent tax on “highly surveillant advertising” to create an endowment used as a subsidy for independent journalism); Paul Romer, A Tax That Could Fix Big Tech, N.Y. Times (May 6, 2019), https://www.nytimes.com/2019/05/06/opinion/tax-facebook-google.html (proposing a tax on revenue from sales of targeted digital ads to “encourage platform companies to shift toward a healthier, more traditional model”); Berggruen Institute, Renewing Democracy in the Digital Age (Mar. 2020), https://www.berggruen.org/activity/renewing-democracy-in-the-digital-age/ (proposing a tax based on profit from advertising revenue to expand newsroom revenue streams); Russell Brandom, Bernie Sanders Endorses a Targeted Advertising Tax to Fund Local Journalism, The Verge (Aug. 27, 2019), https://www.theverge.com/2019/8/27/20835018/bernie-sanders-targeted-advertising-tax-google-facebook-journalism (proposing a targeted ad tax used to fund “civic-minded media”).
[52] D. Andrew Austin, Cong. Rsch. Serv., Economics of Federal User Fees (2019) [available at https://fas.org/sgp/crs/misc/R45463.pdf].
[53] Id.
[54] U.S. Gov’t Accountability Off., Federal User Fees: Key Considerations for Designing and Implementing Regulatory Fees 4 (2015) [available at https://www.gao.gov/assets/680/672572.pdf].
[55] Christo Petrov, The Stupendous World of Google Search Statistics, TechJury (July 27, 2020), https://techjury.net/blog/google-search-statistics/#gref; Facebook Reports Second Quarter 2020 Results, Facebook (July 30, 2020), https://investor.fb.com/investor-news/press-release-details/2020/Facebook-Reports-Second-Quarter-2020-Results/default.aspx; YouTube for Press, YouTube, https://www.youtube.com/about/press/ (last visited Sept. 10, 2020); Jenn Chen, Important Instagram Stats You Need to Know for 2020, Sprout Social (Aug. 5, 2020), https://sproutsocial.com/insights/instagram-stats/.
[56] Calculated using the monthly active users noted for each platform, the illustrative fee of $1 per monthly active user, and full-year 2020 revenue for Google (which includes YouTube) and Facebook (which includes Instagram). See Facebook Reports Fourth Quarter and Full Year 2020 Results, Facebook (Jan. 27, 2021), https://investor.fb.com/investor-news/press-release-details/2021/Facebook-Reports-Fourth-Quarter-and-Full-Year-2020-Results/default.aspx; Alphabet Announces Fourth Quarter and Fiscal Year 2020 Results, Google (Feb. 2, 2021), https://abc.xyz/investor/static/pdf/2020Q4_alphabet_earnings_release.pdf.
[57] The measure of monthly active users remains in wide use despite concerns that it is an unaudited measure, does not consistently correlate with future financial performance, and is calculated differently among platforms. See Theresa F. Henry, David A. Rosenthal & Rob R. Weitz, Socially awkward: social media companies’ nonfinancial metrics can send a mixed message, 218(3) J.A. 52 (2014), https://link.gale.com/apps/doc/A381838689/LT.
[58] Illustratively, Facebook reports it disabled 1.7 billion fake accounts in the first quarter of 2020, and estimates that as much as 5% of its worldwide monthly active users during Q4 2019 and Q1 2020 were fake. See Community Standards Enforcement Report, Facebook: Transparency (Aug. 2020), https://transparency.facebook.com/community-standards-enforcement#fake-accounts.
[59] Bryan Anderson, Taxpayer Dollars Fund Most Oversight and Cleanup Costs at Superfund Sites, Wash. Post (Sept. 20, 2017), https://www.washingtonpost.com/national/taxpayer-dollars-fund-most-oversight-and-cleanup-costs-at-superfund-sites/2017/09/20/aedcd426-8209-11e7-902a-2a9f2d808496_story.html.
[60] See Tom Wheeler, Phil Verveer & Gene Kimmelman, New Digital Realities; New Oversight Solutions in the U.S.: The Case for a Digital Platform Agency and a New Approach to Regulatory Oversight, The Shorenstein Center on Media, Politics, and Public Policy, Harvard Kennedy School 6 (Aug. 2020), https://shorensteincenter.org/wp-content/uploads/2020/08/New-Digital-Realities_August-2020.pdf (discussing the need for a Digital Platform Agency to conduct oversight of digital platform market activity).
[61] Bell, supra note 42 (discussing the increase in fact-checking organizations between 2016 and 2019).
[62] Evelyn Douek, The Rise of Content Cartels, Knight First Amendment Institute (Feb. 11, 2020), https://knightcolumbia.org/content/the-rise-of-content-cartels (examining some of the costs associated with collaboration and standardized content moderation practices).
[63] Kurt Wagner, Sarah Frier & Mark Bergen, Social Media’s Election Report Card, Bloomberg (Nov. 5, 2020), https://www.bloomberg.com/news/newsletters/2020-11-05/social-media-s-election-misinformation-report-card.