
Introduction

As the pandemic death toll continued to rise in the spring of 2020, myriad myths, conspiracy theories, and dangerous falsehoods about COVID-19 circulated on Twitter and other social media: mobile 5G technology helps spread the virus; zinc, colloidal silver, miracle mineral solution, and garlic can cure it; hydroxychloroquine has a 100 percent success rate in treating it; ingesting bleach may help; and vaccines are being developed with microchip tracking technology funded by Bill Gates.1

Misinformation can be both polarizing and deadly. It is disturbing that social media companies failed to remove posts promoting the falsehoods described above, despite the companies' commitments to remove COVID-19 misinformation. With so little known about the novel coronavirus and how to stop its spread, conspiracy theorists, hucksters, political opportunists, and would-be authoritarians worldwide have adapted tactics honed in recent political campaigns to further exploit social media's algorithmic toolset, using platforms' targeted advertising systems to amplify unfounded theories about virus transmission and remedies.

Policymakers have proposed holding companies liable for speech rather than for their targeted advertising business models, the real source of the problem.

Social media platforms are part of a broader information ecosystem that also includes news organizations and other influential actors, such as political leaders and celebrities. The growing impact of social media platforms on our information environment is fueled by advertising in general, and targeted advertising in particular. Fifty-five percent of American adults say they get their news from social media.2 That includes information about political candidates and policy issues from ads on Facebook and Google. In 2018, those two companies combined received almost 60 percent of all digital advertising dollars in the United States.3 They are projected to earn 77 percent of the more than $1.34 billion expected to be spent on U.S. digital political advertising in the 2019–2020 election cycle.4 Before deciding to ban political advertising last fall, Twitter earned $3 million from political ads during the 2018 midterm elections.5 Social media platforms draw attention and advertising dollars away from news outlets whose journalism provides critical facts without which citizens cannot make informed choices about how to live their lives, or how to vote.

But digital platforms' increasing share of news and advertising is only the beginning of the story. Since the 2016 U.S. presidential election, companies like Facebook, Google, and Twitter have struggled to curtail the circulation of both organic and paid false information on their platforms. The content moderation and takedown systems that social media companies have put in place to enforce their content rules are failing society as well as individual users: harmful misinformation continues to flow, while content moderation rules are enforced in inconsistent and often arbitrary ways, resulting in the unintended deletion of content produced by activists and journalists and thereby restricting freedom of expression.6 Company systems intended to block coronavirus misinformation from ad platforms have proven ineffective.

In parallel, policymakers have begun to address the problem of dangerous online content, particularly by formulating proposals to change the extent to which companies are legally liable for user-generated speech on their platforms.7 But such proposals do not address what makes these platforms different, and so potentially dangerous: their targeted advertising business models and the algorithmic systems that drive them.

Targeted advertising business models require extensive data collection and algorithmic content shaping in order to maximize targeted advertising revenue. Designed to prioritize sensational, eye-catching, and controversial content, these systems disproportionately amplify organic and paid speech with great potential to corrupt the quality of information we need to sustain not just healthy democracies and fair elections but also, as we've seen in the COVID-19 pandemic, public health. At the same time, targeted advertising enables paying customers to target different types of content, including ads, at specific audiences based on people's demographics and declared interests, as well as on algorithmically inferred assumptions about other affinities and traits that may not even be correct.

Content moderation is necessary, but far from sufficient.

Against the backdrop of both a global pandemic and a U.S. presidential election campaign, it is now incontrovertible that strengthening platform accountability, and thus the integrity and resilience of our information ecosystem, is critical to the future of democracy. This report, which offers concrete U.S. policy recommendations, is the second in a two-part series examining how targeted advertising business models can drive the spread of misinformation8 and dangerous speech,9 and what U.S. lawmakers and regulators can do to hold companies accountable for these systems without infringing on human rights. We make the case for why and how policymakers and advocates should prioritize holding platforms accountable for the mechanisms, policies, and practices that enable the amplification and targeting of user-generated misinformation and dangerous speech (without which that speech would have less reach and, thus, fewer negative effects), instead of holding the companies liable for the speech itself.

Some policymakers are focused on breaking up these huge companies as the key to limiting the social harms they can cause or contribute to. Antitrust is not a focus of this report, though encouraging competition, ensuring interoperability, and potentially breaking up monopolies are important parts of the policy toolkit that could alleviate some of the problems exacerbated by the major platforms' enormous scale. But these regulatory interventions alone will not be sufficient. Checking the power of Big Tech will require a two-pronged approach, outside the context of antitrust and competition law more broadly, that aims to mitigate the harms posed by the underlying ad- and attention-driven business model that drives platform revenue. That approach involves: 1) a comprehensive and enforceable federal data privacy regulation, and 2) holding companies directly accountable by shining sunlight on the harms created by the business model and promoting policies that increase the fairness, accountability, and overall transparency of those practices.

Having outlined in part one the challenges that targeted advertising and algorithmic systems present to both human rights and civil liberties, this second report focuses on what social media companies and, in particular, policymakers can do to address them. We borrow a metaphor from the oil industry to clarify our contention: We cannot clean up downstream pollutants like misinformation or dangerous speech without tackling the upstream processes (targeted advertising and algorithmic systems) that make this speech so damaging to our information environment in the first place. Focusing on the downstream effects of the infodemic,10 as has been the approach thus far, does nothing to address its upstream structural causes.

We cannot clean up downstream pollutants like misinformation or dangerous speech without tackling upstream processes like targeted advertising and algorithmic systems.

In the first report, It's Not Just the Content, It's the Business Model, we examined two overarching types of algorithms: 1) content-shaping algorithms that determine what individual users see when they use a company's online services (including those that target ads), and 2) content moderation algorithms that help human reviewers identify (and sometimes remove) content that violates the company's rules.11 We then gave examples of how these technologies are used both to propagate and to prohibit different forms of online speech (including targeted ads), and showed how they can cause or catalyze social harm, particularly in the context of the 2020 U.S. election. We described how users are profiled, segmented, and targeted in ways that allow advertisers to continually reinforce a specific message to a particular type of audience. We illustrated how, when this capability is combined for political gain with mis- or disinformation, such as incorrect voting information or outright lies about a candidate, the results can be disastrous for democracy.

We also highlighted what we don't know about these systems. We called on companies to be much more transparent about how their content-shaping algorithms and content moderation systems work, and to give users more control over how content is prioritized and promoted to them, or targeted at them. We explained why a regulatory focus on holding companies liable for content shared by users will not, on its own, succeed in stemming the spread of problematic content, and will likely result in the violation of users' free expression rights. We agreed that content moderation is necessary, but far from sufficient, and we asserted that the first step in addressing the problem is to require much greater transparency and accountability around a business model that relies on algorithmic curation and the exploitation of user data.

Identifying and removing misinformation and disinformation, and otherwise working to mitigate their impact by flagging them as false, are essential short-term measures. But as we pointed out in the previous report, it is important not to force companies to censor higher volumes of content across a broad range of topics, languages, and cultural contexts when their moderation systems lack the accuracy, consistency, and nuance needed to avoid violating users' rights to freedom of expression and information. But infodemics will keep plaguing us, and may get worse, unless Congress acts and also empowers other stakeholders, including institutional investors, to hold companies accountable.

Building on five years of research for the Ranking Digital Rights Corporate Accountability Index (RDR Index), which evaluates how transparent companies are about the policies and practices that affect online freedom of expression and privacy, this report reinforces the case for adopting a human rights framework for platform accountability and proposes two concrete areas where the U.S. Congress needs to act to mitigate the harms of misinformation and other dangerous speech without compromising free expression: federal privacy law and corporate governance reform.

In August 2019, the Business Roundtable published a statement signed by 181 CEOs of major U.S. corporations, announcing their commitment to the idea that the purpose of business is no longer only to serve shareholders, but also to "create value for all our stakeholders," including employees, customers, and communities.12 It is no longer debatable whether businesses in any sector should be held accountable for their social impact.

The proliferation of misinformation during the COVID-19 pandemic has shown just how high the human cost, and ultimately the economic cost, can be when companies prioritize shareholder returns over all else, and when the government fails to hold companies accountable to the public interest.13 Society is now paying the price for failing to require that companies make credible efforts to understand and track their social impact, and take responsibility for preventing and mitigating the social harms their business may cause or contribute to. It is time to adjust course and design a resilient and equitable information environment that protects human rights and civil liberties, especially in times of crisis and change, through increased transparency; responsive, evidence-based regulation; and persistent stakeholder engagement.

Citations
  1. NewsGuard. 2020. "Tracking Twitter's COVID-19 Misinformation 'Super-Spreaders.'" (May 15, 2020).
  2. Khalid, Amrita. 2019. "Americans Can't Stop Relying on Social Media for Their News." Quartz. (May 15, 2020).
  3. Sterling, Greg. 2019. "Almost 70% of Digital Ad Spending Going to Google, Facebook, Amazon, Says Analyst Firm." Marketing Land. (May 17, 2020).
  4. Gibson, Kate. 2020. "Spending on U.S. Digital Political Ads to Top $1 Billion for First Time." CBS News. (May 15, 2020).
  5. Conger, Kate. 2019. "Twitter Will Ban All Political Ads, C.E.O. Jack Dorsey Says." The New York Times. (May 15, 2020).
  6. Gary, Jeff, and Ashkan Soltani. 2019. "First Things First: Online Advertising Practices and Their Effects on Platform Speech." Knight First Amendment Institute. (February 13, 2020).
  7. Maréchal, Nathalie, and Ellery Roberts Biddle. 2020. It's Not Just the Content, It's the Business Model: Democracy's Online Speech Challenge. A Report from Ranking Digital Rights. Washington, D.C.: New America. (May 7, 2020).
  8. Wardle, Claire, and Hossein Derakhshan. 2017. Information Disorder: Toward an Interdisciplinary Framework for Research and Policy Making. Strasbourg: Council of Europe.
  9. Dangerous Speech Project. 2016. "What Is Dangerous Speech?" Dangerous Speech Project. (May 15, 2020).
  10. The World Health Organization (WHO) defines an infodemic as "an over-abundance of information – some accurate and some not – that makes it hard for people to find trustworthy sources and reliable guidance when they need it." See World Health Organization. 2020. Novel Coronavirus (2019-nCoV) Situation Report – 13. Geneva: World Health Organization.
  11. Maréchal, Nathalie, and Ellery Roberts Biddle. 2020. It's Not Just the Content, It's the Business Model: Democracy's Online Speech Challenge. A Report from Ranking Digital Rights. Washington, D.C.: New America. (May 7, 2020).
  12. Business Roundtable. 2019. "Business Roundtable Redefines the Purpose of a Corporation to Promote 'An Economy That Serves All Americans.'" (May 15, 2020).
  13. Goodman, Peter S. 2020. "Big Business Pledged Gentler Capitalism. It's Not Happening in a Pandemic." The New York Times. (May 15, 2020).
