Getting to the Source of Infodemics: It's the Business Model
Table of Contents
- Executive Summary
- Introduction
- Targeted Advertising and COVID-19 Misinformation: A Toxic Combination
- Human Rights: Our Best Toolbox for Platform Accountability
- Making All Ads "Honest" Through Transparency, Limited Targeting, and Enforcement
- By Protecting Data, Federal Privacy Law Can Reduce Algorithmic Targeting and the Spread of Disinformation
- Good Content Governance Requires Good Corporate Governance
- Without Civil Society, Platform Accountability is a Pipe Dream
- Key Recommendations for Policymakers
- Conclusion
Acknowledgments
This report was written by Ranking Digital Rights (RDR) Senior Policy and Partnerships Manager Nathalie Maréchal, Founding Director Rebecca MacKinnon, and Director Jessica Dheere.
Research Director Amy Brouillette, Company Engagement Lead Jan Rydzak, Communications Manager Kate Krauss, Communications Associate Hailey Choi, and Editorial Consultant Nonna Gorilovskaya also contributed to this report.
We wish to thank all the reviewers who provided their feedback on the report. Their inclusion here does not mean that they support or agree with any of the report's statements, proposals, recommendations, or conclusions:
- Dunstan Allison-Hope, Vice-President, BSR
- Sharon Bradford Franklin, Policy Director, Open Technology Institute
- Melissa Brown, Partner, Daobridge Capital Limited & Advisor, Ranking Digital Rights
- Bennett Freeman, Co-Founder, Global Network Initiative & Board Secretary, 2010-2020
- David Kaye, University of California Irvine & UN Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression
- Gaurav Laroia, Senior Policy Counsel, Free Press
- Eric Null, U.S. Policy Manager and Global Policy Counsel, Access Now
- Courtney Radsch, Advocacy Director, Committee to Protect Journalists
- Spandana Singh, Policy Analyst, Open Technology Institute
- Amie Stepanovich, Executive Director, Silicon Flatirons Center for Law, Technology, and Entrepreneurship at the University of Colorado Law School
Original art by Paweł Kuczyński.
We would also like to thank Craig Newmark Philanthropies for making this report possible.
To contact us about this report, please email info [AT] rankingdigitalrights.org or comms [AT] rankingdigitalrights.org
For a full list of current and former project funders and partners, please see: .
For more about RDR's vision, impact, and strategy, see:
For more about New America, please visit .
For more about the Open Technology Institute, please visit .
This work is licensed under the Creative Commons Attribution 4.0 International License. To view a copy of this license, please visit .
Executive Summary
Before the COVID-19 pandemic hit, democracies were already struggling to address disinformation, hate, extremism, and other dangerous online content while also protecting free speech and privacy. Now, Facebook, Twitter, and Google's YouTube are awash with disinformation and misinformation that can be deadly. Despite the companies' commitment to take unprecedented steps to control the problem, they are failing.
This report argues that Facebook, Twitter, and Google's targeted advertising business models, and the opaque algorithmic systems that support them, are the root cause of their failure to staunch the flow of misinformation.
The second in a two-part series aimed at U.S. policymakers and anyone concerned with how internet platforms should be regulated, this report reinforces the need to adopt a human rights framework for platform accountability. We propose concrete areas where Congress needs to act to mitigate the harms of misinformation and other dangerous speech without compromising free expression and privacy: transparency and accountability for online advertising, starting with political ads; federal privacy law; and corporate governance reform.
First, we point to concerning examples from the pandemic that highlight the connection between targeted advertising and misinformation about the coronavirus and its purported remedies. We explain how international human rights standards provide a framework for holding social media platforms accountable that complements existing U.S. law and can help lawmakers determine how best to regulate these companies without curtailing users' rights.
Drawing on our five years of research for the RDR Index, we then point to concrete ways that the three social media giants have failed to respect users' human rights as they deploy targeted advertising business models and algorithmic systems. We describe how the absence of data protection rules enables the unrestricted use of algorithms to make assumptions about users that determine what content they see and what advertising is targeted to them. We note that such targeting can result in discriminatory practices as well as the amplification of misinformation and harmful speech.
Next, we explain why expanding the transparency requirements that currently apply to print and broadcast political ads to all types of online advertising is a prerequisite for oversight and accountability, and how a robust federal privacy law can help fight misinformation and dangerous speech, while acknowledging enforcement challenges.
The final section makes a case for corporate governance reform. We explain how trends in environmental, social, and governance (ESG) investing are prompting companies to adopt due diligence and impact assessment standards, and can strengthen corporate governance over the long term. In light of investors' growing interest in holding companies accountable for their social impact, we describe the role Congress can play in mandating corporate disclosure of information pertaining to the social and human rights impact of targeted advertising and algorithmic systems. We also recommend actions the U.S. Securities and Exchange Commission (SEC) could take to empower shareholders. We again highlight enforcement challenges, while noting strides made by European companies; American companies that do not follow suit risk losing global market share.
To conclude, we offer some thoughts about how civil society stakeholders, including researchers, journalists, and advocacy and grassroots organizations, are critical to addressing accountability gaps, especially in the absence of effective regulation and oversight. We also explain why companies must proactively engage with civil society as a part of their efforts to mitigate the negative social impacts of their business models.
Key Recommendations for U.S. Policymakers
Enact federal privacy law that protects people from the harmful impact of targeted advertising. Specifically, this law should:
- Ensure effective enforcement by designating an existing federal agency, or creating a new one, to enforce privacy and transparency requirements applicable to digital platforms.
- Include strong data-minimization and purpose-limitation provisions: Users should not be able to opt in to discriminatory advertising or to the collection of data that would enable it.
- Give users clear control over the collection and sharing of their information.
- Restrict how companies are able to target users. The law should prohibit the use of third-party data to target specific individuals, as well as discriminatory advertising that violates users' civil rights.
Require that platforms maintain a public ad database to ensure compliance with all privacy and civil rights laws when engaging in ad targeting: Pass the Honest Ads Act, expand the public ad database to include all advertisements, and allow regulators and researchers to audit it.
Require relevant disclosure and due diligence around the social and human rights impact of targeted advertising and algorithmic systems.
- Mandate disclosure of targeted advertising revenue along with disclosure of environmental, social, and governance (ESG) information, including information relevant to the social impact of targeted advertising and algorithmic systems.
- Require due diligence: Companies should be required to conduct assessments of their social impact and risks, including human rights risks associated with targeted advertising and algorithmic systems.
Strengthen corporate governance and oversight. U.S. Securities and Exchange Commission (SEC) rules should empower shareholders to hold company leadership accountable for social impact.
- Phase out dual-class shares: Companies should not be able to maintain a dual class system of shares that effectively empowers the CEO to vote down shareholder resolutions in perpetuity.
- Do not make it harder to file shareholder resolutions: The SEC should scrap proposed rule changes that will make it more difficult for shareholders to file proposals.
Introduction
As the pandemic death toll continued to rise in the spring of 2020, myriad myths and conspiracy theories about COVID-19, including some dangerous falsehoods, circulated on Twitter and other social media: mobile 5G technology helps spread the virus; zinc, colloidal silver, miracle mineral solution, and garlic can cure the virus; hydroxychloroquine has a 100 percent success rate in treating the virus—ingesting bleach may help; and vaccines are being developed with microchip tracking technology funded by Bill Gates.1
Misinformation can be both polarizing and deadly. It is disturbing that social media companies failed to remove posts promoting the falsehoods described above, despite the companies' commitments to remove COVID-19 misinformation. With so little known about the novel coronavirus and how to stop its spread, conspiracy theorists, hucksters, political opportunists, and would-be authoritarians worldwide have adopted tactics honed in recent political campaigns to further exploit social media's algorithmic toolset, using platforms' targeted advertising systems to amplify unfounded virus transmission theories and remedies.
Policymakers have proposed holding companies liable for speech rather than for their targeted advertising business models, the real source of the problem.
Social media platforms are part of a broader information ecosystem that also includes news organizations and other influential actors, including political leaders and celebrities. The growing impact of social media platforms on our information environment is fueled by advertising in general, and targeted advertising in particular. Fifty-five percent of American adults say they get their news from social media.2 That includes information about political candidates and policy issues from ads on Facebook and Google. In 2018, those two companies combined received almost 60 percent of all digital advertising dollars in the United States.3 They are projected to earn 77 percent of the more than $1.34 billion expected to be spent on U.S. digital political advertising in the 2019–2020 election cycle.4 Before deciding to ban political advertising last fall, Twitter earned $3 million from political ads during the 2018 midterm elections.5 Social media platforms draw attention and advertising dollars away from news outlets whose journalism also provides critical facts without which citizens cannot make informed choices about how to live their lives—or how to vote.
But digital platforms' increasing share of news and advertising is only the beginning of the story. Since the 2016 U.S. presidential election, companies like Facebook, Google, and Twitter have struggled to curtail the circulation of both organic and paid false information on their platforms. The content moderation and takedown systems that social media companies have put in place to enforce their content rules are failing society as well as individual users: harmful misinformation continues to flow, while content moderation rules are enforced in inconsistent and often arbitrary ways, resulting in the unintended deletion of content produced by activists and journalists and thereby restricting freedom of expression.6 Company systems intended to block coronavirus misinformation from ad platforms have proven ineffective.
In parallel, policymakers have begun to address the problem of dangerous online content, particularly by formulating proposals to amend the extent to which companies are legally liable for user-generated speech on their platforms.7 But such proposals do not address what makes these platforms different—and so potentially dangerous: their targeted advertising business models and the algorithmic systems that drive them.
Targeted advertising business models require extensive data collection and algorithmic content shaping in order to maximize targeted advertising revenue. Designed to prioritize sensational, eye-catching, and controversial content, these systems disproportionately amplify organic and paid speech with great potential to corrupt the quality of information we need to sustain not just healthy democracies and fair elections but also, as we've seen in the COVID-19 pandemic, public health. At the same time, targeted advertising enables paying customers to aim different types of content, including ads, at specific audiences based on people's demographics and declared interests, as well as on algorithmically inferred assumptions about other affinities and traits that may not even be correct.
Content moderation is necessary, but far from sufficient.
Against the backdrop of both a global pandemic and a U.S. presidential election campaign, it is now incontrovertible that strengthening platform accountability—and, thus, the integrity and resilience of our information ecosystem—is critical to the future of democracy. This report, which offers concrete U.S. policy recommendations, is the second in a two-part series examining how targeted advertising business models can drive the spread of misinformation8 and dangerous speech,9 and what U.S. lawmakers and regulators can do to hold companies accountable for these systems without infringing on human rights. We make the case for why and how policymakers and advocates should prioritize holding platforms accountable for the mechanisms, policies, and practices that enable the amplification and targeting of user-generated misinformation and dangerous speech—without which the speech would have less reach and, thus, fewer negative effects—instead of holding the companies liable for the speech itself.
Some policymakers are focused on breaking up these huge companies as the key to limiting the social harms they can cause or contribute to. Antitrust is not a focus of this report, though encouraging competition, ensuring interoperability, and potentially breaking up monopolies are important parts of the policy toolkit that have the potential to alleviate some of the problems exacerbated by the major platforms' enormous scale. But these regulatory interventions alone will not be sufficient. Checking the power of Big Tech will require a two-pronged approach outside the context of antitrust, and competition law more broadly, that aims to mitigate the harms posed by the underlying ad- and attention-driven business model that drives platform revenue. That approach involves: 1) comprehensive and enforceable federal data privacy regulation, and 2) holding companies directly accountable by shining sunlight on the harms created by the business model and promoting policies that increase the fairness, accountability, and overall transparency of those practices.
Having outlined in part one the challenges that targeted advertising and algorithmic systems present to both human rights and civil liberties, this second report focuses on what social media companies and, in particular, policymakers can do to address them. We borrow a metaphor from the oil industry to clarify our contention: We cannot clean up downstream pollutants like misinformation or dangerous speech without tackling the upstream processes—targeted advertising and algorithmic systems—that make this speech so damaging to our information environment in the first place. Focusing on the downstream effects of the infodemic,10 as has been the approach thus far, does nothing to address its upstream structural causes.
We cannot clean up downstream pollutants like misinformation or dangerous speech without tackling upstream processes like targeted advertising and algorithmic systems.
In the first report, It's Not Just the Content, It's the Business Model, we examined two overarching types of algorithms: 1) content-shaping algorithms that determine what individual users see when they use a company's online services (including those that target ads), and 2) content moderation algorithms that help human reviewers identify (and sometimes remove) content that violates the company's rules.11 We then gave examples of how these technologies are used both to propagate and prohibit different forms of online speech (including targeted ads), and showed how they can cause or catalyze social harm, particularly in the context of the 2020 U.S. election. We described how users are profiled, segmented, and targeted in ways that allow advertisers to continually reinforce a specific message to a particular type of audience. We illustrated how, when this capability is combined with mis- or disinformation deployed for political gain, such as incorrect voting information or outright lies about a candidate, the results can be disastrous for democracy.
We also highlighted what we don't know about these systems. We called on companies to be much more transparent about how their content-shaping algorithms and content moderation systems work, and to give users more control over how content is prioritized and promoted to them, or targeted at them. We explained why a regulatory focus on holding companies liable for content shared by users will not, on its own, stem the spread of problematic content, and will likely result in the violation of users' free expression rights. We agreed that content moderation is necessary, but far from sufficient, and we asserted that the first step in addressing the problem is to require much greater transparency and accountability around a business model that relies on algorithmic curation and the exploitation of user data.
Identifying and removing misinformation and disinformation, and otherwise working to mitigate its impact by flagging it as false, is an essential short-term measure. But as we pointed out in the previous report, it is important not to force companies to censor higher volumes of content across a broad range of topics, languages, and cultural contexts when their moderation systems lack the accuracy, consistency, and nuance to avoid violating users' right to freedom of expression and information. Yet infodemics will keep plaguing us—and may get worse—unless Congress acts, and also empowers other stakeholders, including institutional investors, to hold companies accountable.
Building on five years of research for the Ranking Digital Rights Corporate Accountability Index (RDR Index), which evaluates how transparent companies are about policies and practices that affect online freedom of expression and privacy, this report reinforces the case for adopting a human rights framework for platform accountability and proposes two concrete areas where the U.S. Congress needs to act to mitigate the harms of misinformation and other dangerous speech without compromising free expression: federal privacy law and corporate governance reform.
In August 2019, the Business Roundtable published a statement signed by 181 CEOs of major U.S. corporations, announcing their commitment to the idea that the purpose of business is no longer only to serve shareholders but also to "create value for all our stakeholders," including employees, customers, and communities.12 It is no longer debatable whether businesses in any sector should be held accountable for their social impact.
The proliferation of misinformation during the COVID-19 pandemic has shown just how high the human cost—and ultimately the economic cost—can be when companies prioritize shareholder returns over all else, and when the government fails to hold companies accountable to the public interest.13 Society is now paying the price for failing to require that companies make credible efforts to understand and track their social impact, and take responsibility for preventing and mitigating social harms that their business may cause or contribute to. It is time to adjust course and design a resilient and equitable information environment—through increased transparency; responsive, evidence-based regulation; and persistent stakeholder engagement—that protects human rights and civil liberties, especially in times of crisis and change.
Citations
- Newsguard. 2020. Tracking Twitter's COVID-19 Misinformation "Super-Spreaders." (May 15, 2020).
- Khalid, Amrita. 2019. "Americans Can't Stop Relying on Social Media for Their News." Quartz. (May 15, 2020).
- Sterling, Greg. 2019. "Almost 70% of Digital Ad Spending Going to Google, Facebook, Amazon, Says Analyst Firm." Marketing Land. (May 17, 2020).
- Gibson, Kate. 2020. "Spending on U.S. Digital Political Ads to Top $1 Billion for First Time." CBS News. (May 15, 2020).
- Conger, Kate. 2019. "Twitter Will Ban All Political Ads, C.E.O. Jack Dorsey Says." The New York Times. (May 15, 2020).
- Gary, Jeff, and Ashkan Soltani. 2019. "First Things First: Online Advertising Practices and Their Effects on Platform Speech." Knight First Amendment Institute. (February 13, 2020).
- Maréchal, Nathalie, and Ellery Roberts Biddle. 2020. It's Not Just the Content, It's the Business Model: Democracy's Online Speech Challenge – A Report from Ranking Digital Rights. Washington, D.C.: New America. (May 7, 2020).
- Wardle, Claire, and Hossein Derakhshan. 2017. Information Disorder: Toward an Interdisciplinary Framework for Research and Policy Making. Strasbourg: Council of Europe.
- Dangerous Speech Project. 2016. "What Is Dangerous Speech?" Dangerous Speech Project. (May 15, 2020).
- The World Health Organization (WHO) defines an infodemic as "an over-abundance of information—some accurate and some not—that makes it hard for people to find trustworthy sources and reliable guidance when they need it." See World Health Organization. 2020. Novel Coronavirus (2019-NCoV) Situation Report – 13. Geneva: World Health Organization.
- Maréchal, Nathalie, and Ellery Roberts Biddle. 2020. It's Not Just the Content, It's the Business Model: Democracy's Online Speech Challenge – A Report from Ranking Digital Rights. Washington, D.C.: New America. (May 7, 2020).
- Business Roundtable. 2019. "Business Roundtable Redefines the Purpose of a Corporation to Promote 'An Economy That Serves All Americans.'" (May 15, 2020).
- Goodman, Peter S. 2020. "Big Business Pledged Gentler Capitalism. It's Not Happening in a Pandemic." The New York Times. (May 15, 2020).
Targeted Advertising and COVID-19 Misinformation: A Toxic Combination
In March, Facebook, Twitter, Google, YouTube, and others committed to join forces and work closely with networks of fact-checkers and researchers to combat the spread of pandemic-related misinformation.1 In April, as the U.S. economy slowed to a crawl and many state and local governments ordered people to stay home, Facebook took the further step of removing some posts that openly call on people to violate government orders, and committed to display warnings to people who had interacted with misinformation about COVID-19.2
Yet the torrent of misinformation continued, despite the best efforts of a growing number of organizations dedicated to external monitoring, flagging, and fact-checking. The platforms' content moderation systems could not keep up, and also made mistakes.3 The University of Oxford's Reuters Institute, examining a sample of 225 pieces of misinformation posted between January and the end of March, found that while more than half of the content was either deleted or labeled with warnings, significant amounts of such content remained in circulation.4 Fifty-nine percent of posts flagged as false by fact-checkers remained on Twitter without any warning labels. Among the tweets that did get deleted was one by Rudy Giuliani, Trump's personal attorney and former mayor of New York City, quoting a well-known conservative activist making the false claim that "hydroxychloroquine has been shown to have a 100% effective rate treating COVID-19."5
YouTube, for its part, kept up 27 percent of content flagged as false, while 24 percent of posts containing content that fact-checkers had identified as false remained on Facebook without warnings of any kind.6 But what is the actual prevalence of pandemic-related misinformation across the platforms? Facebook disclosed that in March it had displayed fact-checking labels on 40 million dangerous posts, based on 4,000 articles that its third-party reviewers had rated as false.7 Given the findings from the Oxford sample, we can conclude that millions of pieces of COVID-19 misinformation remained in circulation.
Also by March, fact-checking organizations reported being overwhelmed as the volume of potential misinformation related to the pandemic exploded.8 And when governments around the world started issuing mandatory stay-at-home orders to mitigate contagion, the platforms announced that they would need to rely more heavily on content moderation algorithms to determine what should be deleted: their human content moderators could no longer work in the office, and most could not work remotely because they lacked home internet access or because of privacy concerns. The platforms warned that this would likely result in more mistakes in determining what content does or does not violate their rules.9
The platforms have also failed to stem the flow of paid misinformation as well as deliberate disinformation. In mid-March, Consumer Reports journalist Kaveh Waddell decided to test the effectiveness of Facebook's commitment to police its ad platform more closely. He created a page for a fake organization, then submitted several advertisements to run on the platform, calling the coronavirus a hoax and urging people to get out of the house and mingle:
"Don't give in to the propaganda—just live your life like you always have," stated one of the ads. Another instructed people to "stay healthy with SMALL daily doses" of bleach. All were approved by Facebook for publication. Waddell responsibly removed them before they went live.10
Facebook discloses that it has a detailed system for reviewing all ads, before they are published, for compliance with its policies. Its ad policy states: "We'll check your ad's images, text, targeting, and positioning, in addition to the content on your ad's landing page."11 But the company discloses no further details about the processes or technologies used to review these ads, including whether and how automation is used or whether there is any human involvement. Whatever those processes might be, Waddell's experiment exposed just how ineffective Facebook's ad content policing mechanisms actually are in life-and-death crisis situations—such as a global pandemic, when people are desperate to understand what is happening and face an overwhelming avalanche of often contradictory information.
Content moderation is a downstream effort by platforms to clean up the mess caused upstream by their own systems designed for automated amplification and audience targeting.
To try to mitigate the devastating effects of targeted misinformation, social media platforms are engaged in an endless circular fight, in which they must detect and delete content whose social, political, and even medical impact is magnified by the opaque and unaccountable mechanisms of their own business models. Content moderation is a downstream effort by platforms to clean up the mess caused upstream by their own systems designed for automated amplification and audience targeting. These downstream efforts—while necessary—do not fundamentally change the upstream systems that exacerbate the problem, and do not create greater transparency with the public about exactly how targeting systems are shaping what people can share, see, and know.
Take, for example, the mechanisms Facebook uses to build profiles on users and target them with specific content. In April, just a week after Facebook CEO Mark Zuckerberg pledged to fight misinformation about COVID-19, Aaron Sankin, a journalist with The Markup, noticed in his Facebook news feed an advertisement for "a hat that would supposedly protect my head from cellphone radiation."12
One of the many conspiracy theories raging across social media links the spread of the coronavirus to signals emanating from 5G wireless cell towers.13 After clicking the "Why am I seeing this ad?" tab on the ad, Sankin learned that it was targeting people interested in "pseudoscience." We do not know how Facebook determined that Sankin was interested in pseudoscience, what types of advertisers used the audience category, or how often they did so. We do know, from information provided to advertisers, that the pseudoscience audience category contained 78 million users. With a few more clicks, Sankin was able to use that same category to "boost" Markup posts on Instagram (paying to increase how often the posts were shown to Instagram users in the target audience category). That he was able to do so confirmed that Facebook had not disabled the category despite its clear connection to misinformation. The audience category was removed only after Sankin brought it to the company's attention.14
This was not the first time journalists have exposed dangerous audience categories, prompting a company apology and removal. In 2017, ProPublica found that racist audience categories were being offered to advertisers.15 Like those disturbing categories, the pseudoscience audience category was likely generated by an algorithmic system drawing on a host of data points, such as users' Facebook and Instagram posts, comments, and interactions with other posts, as well as off-platform data about their personal characteristics and behaviors. On that basis, Facebook determined that these 78 million people were interested in pseudoscience and enabled advertisers to reach them. What's more, in many cases the automated ad targeting systems also determined which paid content was most likely to appeal to pseudoscience enthusiasts. This appears to have been the case for the anti-cell-phone-radiation hat: the CEO of the company behind the ad told The Markup that his team hadn't chosen the pseudoscience category.16
Sankin's experiment suggests that in addition to classifying individual users as "interested in pseudoscience," Facebook's algorithmic systems are also able to determine that specific ads contain pseudoscience. Rather than flagging such ads for further scrutiny by a human reviewer, the company's algorithmic systems make it easier for such ads to reach users who are vulnerable to their messages. A hat that purports to block cell phone radiation may not cause much harm in itself, but seeing an ad for one reinforces the worldview that creates demand for such products. The same ad-targeting technology also spreads dangerous hoaxes, such as the idea that consuming bleach protects against the novel coronavirus or that childhood immunizations cause autism, as well as posts that incite hate or violence.
The problem with targeted advertising is that you can't put the cat back in the bag.
The problem with targeted advertising is that you can't put the cat back in the bag. A clear example can be seen in how misinformation radiated from Facebook pages protesting the pandemic lockdown meant to slow the spread of the coronavirus.
In mid-April protesters in state capitols across the country wielded signs with slogans like 鈥#endtheshutdown鈥 and 鈥済ive me liberty or give me COVID-19!,鈥 calling for governors to lift their stay-at-home orders. The protesters were driven by different motives, from economic anxiety at a time of unprecedented unemployment to long-standing fears about government overreach.
Anti-quarantine protest participants may have seen themselves as part of a spontaneous grassroots movement. But the coordinated marketing and messaging around the Facebook pages used to organize the protests was supported by organizations with long histories and deep pockets. Thanks to dogged reporting by several news organizations, it became clear that the emergence and expansion of these protests were due in no small part to the support of politically influential, well-funded backers, including the National Rifle Association.17
According to research by First Draft, an organization that tracks the spread and analyzes the causes of disinformation, 100 state-specific Facebook pages were created in April to protest state governments鈥 stay-at-home orders. NBC reported that as of April 20, these protest pages were used to organize at least 49 different events. The groups, many with names like 鈥淲isconsinites Against Excessive Quarantine鈥 and 鈥淩eopen Minnesota鈥 repeated across Facebook with different state names, had by April 20 attracted more than 900,000 members.18 These groups and their members were also reported to be active spreaders of coronavirus misinformation, much of it coordinated. Researchers at Carnegie Mellon University鈥檚 CyLab Security and Privacy Institute tracked nearly identical claims posted across multiple platforms, from the Facebook groups to Twitter and Reddit.19
We cannot rely on a game of whack-a-mole to protect public health鈥攁nd ultimately the health of our democracy.
Frustratingly, it is impossible to document with any precision the scale at which COVID-19 misinformation circulating on social media platforms has been boosted by targeted advertising tools, as the companies keep that information to themselves. In general, the platforms do not disclose nearly enough information about whether the content users see on the platform was boosted, by whom, and whether they are being targeted by tools like Facebook鈥檚 鈥渃ustom audiences,鈥 which enable advertisers to target specific individuals.
But we do know that a large volume of that misinformation is being shared by followers and administrators of pages that can easily afford to pay for targeted advertising, and that many of these pages are run by people with plenty of experience in using such tools.
Disinformation, misinformation, hate speech, and scams of all sorts are powerful precisely because digital platforms' automated content optimization systems aim them at just the people who are most vulnerable to these messages, while hiding them from other users who would otherwise be in a position to flag them and provide corrective counter-speech. False information about COVID-19, and other dangerous content, would be much less effective if it were not algorithmically targeted and amplified. It is clear that even if social media companies' content rules were perfect—which they are not—enforcing them fairly and accurately at a global scale is not possible.
Banning dangerous content itself is both contrary to free expression standards and impossible to achieve at a global scale. But we can stymie its reach by denying targeting and optimization algorithms their power, through fundamental reforms grounded in human rights principles.
Citations
- Douek, Evelyn. 2020. 鈥淐OVID-19 and Social Media Content Moderation.鈥 Lawfare. (May 15, 2020).
- Romm, Tony. 2020. 鈥淔acebook Will Alert People Who Have Interacted with Coronavirus 鈥楳isinformation.鈥欌 Washington Post. (May 15, 2020).
- Heilweil, Rebecca. 2020. 鈥淔acebook Is Flagging Some Coronavirus News Posts as Spam.鈥 Vox. (May 15, 2020).
- Brennen, J. Scott, Felix Simon, Philip N. Howard, and Rasmus Kleis Nielsen. 2020. Types, Sources, and Claims of COVID-19 Misinformation. Oxford: University of Oxford. (May 15, 2020).
- Wong, Queenie. 2020. "Twitter Leaves up Trump 'liberate' Tweets about States with Lockdown Protests." CNET. (May 15, 2020).
- Brennen, J. Scott, Felix Simon, Philip N. Howard, and Rasmus Kleis Nielsen. 2020. Types, Sources, and Claims of COVID-19 Misinformation. Oxford: University of Oxford. (May 15, 2020).
- Rosen, Guy. 2020. "An Update on Our Work to Keep People Informed and Limit Misinformation About COVID-19." Facebook Newsroom. (May 7, 2020); Romm, Tony. 2020. "Facebook Will Alert People Who Have Interacted with Coronavirus 'Misinformation.'" Washington Post. (May 15, 2020).
- Leskin, Paige. 2020. 鈥淥ne of the Internet鈥檚 Oldest Fact-Checking Organizations Is Overwhelmed by Coronavirus Misinformation 鈥 and It Could Have Deadly Consequences.鈥 Business Insider. (May 15, 2020).
- Bergen, Mark, Joshua Brustein, and Sarah Frier. 2020. 鈥淭ech鈥檚 Shadow Workforce Sidelined, Leaving Social Media to the Machines.鈥 Bloomberg. (May 15, 2020).
- Waddell, Kaveh. 2020. 鈥淔acebook Approved Ads With Coronavirus Misinformation.鈥 Consumer Reports. (May 15, 2020).
- Facebook. n.d. 鈥淎dvertising Policies.鈥 (May 15, 2020).
- Sankin, Aaron. 2020. 鈥淲ant to Find a Misinformed Public? Facebook鈥檚 Already Done It.鈥 The Markup. (May 7, 2020).
- Satariano, Adam, and Davey Alba. 2020. 鈥淏urning Cell Towers, Out of Baseless Fear They Spread the Virus.鈥 The New York Times. (May 15, 2020).
- Sankin, Aaron. 2020. 鈥淲ant to Find a Misinformed Public? Facebook鈥檚 Already Done It.鈥 The Markup. (May 7, 2020).
- Angwin, Julia, and Madeleine Varner. 2017. "Facebook Enabled Advertisers to Reach 'Jew Haters.'" ProPublica. (February 12, 2020).
- Sankin, Aaron. 2020. 鈥淲ant to Find a Misinformed Public? Facebook鈥檚 Already Done It.鈥 The Markup. (May 7, 2020).
- Bixby, Scott. 2020. "DeVos Has Deep Ties to Michigan Protest Group, But Is Quiet On Tactics." The Daily Beast. (May 5, 2020); Stanley-Becker, Isaac, and Tony Romm. 2020. "Pro-Gun Activists Using Facebook Groups to Push Anti-Quarantine Protests." Washington Post. (May 15, 2020).
- Zadrozny, Brandy, and Ben Collins. 2020. "Conservative Activist Family behind 'Grassroots' Anti-Quarantine Facebook Events." NBC News. (May 15, 2020).
- Seitz, Amanda. 2020. 鈥淰irus Misinformation Flourishes in Online Protest Groups.鈥 Associated Press. (May 15, 2020).
Human Rights: Our Best Toolbox for Platform Accountability
Viewed through the lens of human rights standards, the COVID-19 infodemic is exacerbated by social media platforms' failure to align the full range of their business operations with a commitment to human rights. Over time, their surveillance-based business models can corrode a society's information environment, poisoning the flow of information on which deliberative democracy depends and creating the conditions for human rights violations, or worse.
As globally dominant social media platforms, Facebook, Google and Twitter have also played a positive role in advancing global human rights, which is important to acknowledge: They enable free expression and the global free flow of information by providing opportunities for a wide range of speech, about politics, health, and practically anything else. Laudably, they have taken significant, if uneven and imperfect, steps to shield users around the world from acts of government censorship and surveillance that violate human rights.1 Google and Facebook are both members of the Global Network Initiative (GNI), a multi-stakeholder organization that works with information and communications technology companies to protect users鈥 rights when they receive government demands to delete content, restrict access to service, and provide access to user information that violate international human rights standards for freedom of expression and privacy.2
Yet free speech clearly has a dark side: misinformation can be deadly. Given this contradiction, there is little wonder that these three companies have found themselves at the vortex of heated debates about online speech and democracy.
International human rights standards, grounded in the Universal Declaration of Human Rights (UDHR) as well as the U.S.-ratified International Covenant on Civil and Political Rights, offer a framework for companies to protect and respect the rights of users and communities in a manner that addresses key gaps in U.S. law. The U.S. Constitution is designed primarily to protect people from government abuse: The First and Fourth Amendments forbid government censorship and unlawful search and seizure. But when it comes to how companies' own business decisions affect individuals and society, U.S. law has largely left them off the hook. The law allows commercial social media platforms to moderate and take down content according to their own self-determined rules.3 In essence, U.S. platforms have the right to restrict free expression and shape users' access to information on their platforms without public accountability: The companies can formulate their private rules through an opaque process, change them frequently, and present them in a way that is hard for many users to understand, let alone abide by.
U.S. law has largely left companies off the hook when it comes to how their business decisions affect individuals and society.
Congress has over the past century passed many laws that forbid a vast range of abusive, exploitative, or discriminatory corporate behavior. But the question of how to regulate social media has been both difficult and contentious, given the technological, political, and constitutional complexities around anything having to do with speech—or framed as such. In our first report in this two-part series, we argued that changing the law to hold companies liable for content shared by users is not the answer, and that, due to technical realities, content moderation ultimately clashes with freedom of expression. The problems of private content moderation are compounded by amplification and targeting systems that shape the flow of information to users based on their personal traits or political beliefs. Yet at the same time, the three social media giants have failed to fully respect and protect users' expression and information rights as articulated by international human rights standards. That failure has implications for other rights, including the rights to privacy, nondiscrimination, assembly and association, and economic, social, and cultural rights.
Buttressed by emerging accountability frameworks driven by institutional investors and civil society advocates who are pushing governments to adopt them, international human rights standards apply to companies as well as governments. They offer a corporate accountability toolbox that can be used by policymakers, institutional investors, and other affected stakeholders in any country. Yet, this toolbox has so far been largely overlooked by policymakers seeking to hold companies accountable for their social impact in the United States.
The international human rights toolbox has so far been largely overlooked by U.S. policymakers seeking to hold companies accountable for their social impact.
For social media platforms, the fundamental rights to free expression (UDHR Article 19) and privacy (UDHR Article 12) must be protected and respected so that people can use technology effectively to exercise and defend other political, religious, economic, and social rights. The pandemic further underscores how violations of free expression and information rights can cause or contribute to the violation of other rights, such as the right to life, liberty, and security of person (UDHR Article 3). Similarly, violation of the right to privacy can set off a chain reaction of violations of other rights, including the human right to non-discrimination (UDHR Article 7, Article 23); freedom of thought (UDHR Article 18); freedom of association (UDHR Article 20); and the right to take part in the government of one's country, directly or through freely chosen representatives (UDHR Article 21).4
The UN Guiding Principles on Business and Human Rights, approved in 2011 and co-sponsored by the U.S. government, have become the gold standard of "guidelines for States and companies to prevent, address and remedy human rights abuses committed in business operations."5 For the tech sector, applying the principles means respecting users' privacy and freedom of expression, and all other human rights that their business operations may have an impact on—both online and offline. Respecting users' freedom of expression does not preclude a private company establishing rules and moderation processes, or even editorial guidelines related to the purpose and scope of the service (for example, if it is meant to serve a specific community or purpose, such as a platform for members of a given profession). But to be effective, human rights standards must be "implemented transparently and consistently with meaningful user and civil society input," and must be accompanied by industry-wide oversight and accountability mechanisms.6
Ranking Digital Rights (RDR) offers a dynamic and regularly updated instruction manual for the steps companies can take today—even in the absence of clear regulation—to improve their respect for human rights. Since 2015, the RDR Index has evaluated the world's most powerful digital platforms and telecommunications companies according to their disclosed commitments to respect users' human rights.7 Grounded in the UDHR and the UN Guiding Principles, the RDR Index methodology comprises more than three dozen indicators in three categories: governance, freedom of expression, and privacy. For 2020 we have upgraded the RDR Index methodology, placing greater focus on rights like non-discrimination and the right to life, liberty, and security of person.8
The Impact of Targeted Advertising and Algorithmic Systems on Human Rights
In our previous report, we described the growing use of algorithms to moderate content even before the coronavirus outbreak, and how these algorithms make frequent errors that lead to the deletion of journalism, activism, and other speech that does not actually violate platform rules. We also pointed to data from the past four iterations of the RDR Corporate Accountability Index. Only since 2018 have companies started to disclose any data about the volume and nature of content being removed for violating their rules, known variously as terms of service or community guidelines.9 But their disclosures about content moderation continue to fall far short of what free expression advocates and academic researchers believe is the minimum baseline standard for transparency and accountability as private arbiters of global online speech.10
In late 2019 and early 2020, our research team examined how transparent Facebook, Twitter, and YouTube are about how their automated algorithmic systems determine what users can post and share, what gets removed or blocked for violating the rules, and what information is shown to them most prominently through news feeds and recommendations. As we described in our last report, while Facebook, Google, and Twitter do not hide the fact that they use algorithms to shape content, they disclose little about how the algorithms actually work, what factors influence them, and how users can customize their own experiences.11
Pressure for change is growing with the public health crisis. Concerned that this opacity about what happens to content on platforms could be especially dangerous amidst heated public debates about how best to fight the pandemic without destroying people's livelihoods, 75 organizations and researchers released an open letter calling on platforms to preserve information "about what their systems are automatically blocking and taking down." Without such information, according to the nonprofit Center for Democracy and Technology, which helped draft the letter, "it will be hard to assess the efficacy of efforts to share vital public health information while combating the spread of coronavirus scams and pandemic profiteering."12
To ensure that social media platforms do not contribute to or enable the violation of users鈥 rights, laws and regulations must strictly limit how users can be targeted.
Yet greater transparency alone will not address the fundamental problems of the targeted advertising business model. Companies have experimented with controlling, or being more transparent about, advertisements placed by or in support of political candidates in elections. Facebook now publishes a library of political ads—with a serious flaw: the archive depends on political advertisers complying with labeling requirements, and on the platform detecting those that do not. "Currently, dishonest companies can spend an unlimited amount of undeclared money in favor of a political agenda through the Facebook ads platform," argue the authors of one recent academic paper. Nor does the Facebook Ad Library disclose information about how these advertisers may have deployed targeted advertising tools, or what types of audience characteristics were targeted.13
Algorithmic targeting systems only work because the companies that profit from them have access to unfathomable amounts of user data. Google (which owns YouTube), Facebook, Twitter, and other companies whose business models rely on targeted advertising have every incentive to hoover up every crumb of data they can access, and to create more opportunities to surveil our daily behavior. Thanks to complex automated processes that frequently aren鈥檛 fully understood even by their own creators, digital platforms can identify the individuals who are most likely to make a purchase, be convinced by a political message, or be susceptible to various types of misinformation. Such manipulation is a form of discrimination, in addition to being a clear violation of freedom of opinion and of information, particularly in the context of elections.14
In our first report in this series, we outlined the dangers to democracy when targeted advertising is manipulated for political gain during elections. Election experts have raised concerns that the same technologies and tactics will also be used to spread disinformation about the voting process.15 Both user-declared and algorithmically-deduced political leanings can be exploited to target voters with paid, deliberate disinformation about polling places and times or equally damaging but unverifiable claims about an opponent鈥檚 character or the effects of their policy proposals. This practice, which relies on the processing of vast amounts of user information without explicit consent, violates the right to non-discrimination by determining what information a user sees based on disclosed or assumed protected traits, such as race, ethnicity, age, gender identity and expression, sexual orientation, health, disability, etc.16
But the violation of individual users鈥 rights is not the only harm done. Company practices, incentivized by targeted advertising business models, can also contribute to the violation of the rights of entire communities or categories of people. For example, catering to advertisers鈥 desire to reach potential job applicants who are demographically similar to their current workforce leads digital platforms to enable their advertiser clients to illegally target job ads by gender,17 race,18 ethnicity,19 and other protected attributes.20
The human rights risks associated with targeted advertising are clear. To ensure that social media platforms do not contribute to or enable the violation of users鈥 rights, laws and regulations must strictly limit how users can be targeted. They must also require much greater transparency about the nature and design of platforms鈥 algorithmic systems, processes, and business models.
Citations
- Crocker, Andrew et al. 2019. Who Has Your Back? Censorship Edition 2019. Electronic Frontier Foundation. (May 16, 2020).
- Global Network Initiative. 2020. 鈥淕lobal Network Initiative.鈥 Global Network Initiative. (May 16, 2020).
- Specifically by Section 230 of the 1996 Communications Decency Act. For more, see Maréchal, Nathalie, and Ellery Roberts Biddle. 2020. It's Not Just the Content, It's the Business Model: Democracy's Online Speech Challenge – A Report from Ranking Digital Rights. Washington, D.C.: New America. (May 7, 2020).
- Ranking Digital Rights. 2019. Human Rights Risk Scenarios: Targeted Advertising, Consultation Draft. Washington, D.C.: New America.
- OHCHR. 2011. Guiding Principles on Business and Human Rights: Implementing the United Nations 鈥淩espect, Protect and Remedy鈥 Framework. Geneva: United Nations Office of the High Commissioner on Human Rights.
- U.N. Human Rights Council. 2018. Report of the Special Rapporteur on the Promotion and Protection of the Right to Freedom of Opinion and Expression. A/HRC/38/35, 14; Kaye, David. 2019. Speech Police: The Global Struggle to Govern the Internet. New York: Columbia Global Reports.
- Ranking Digital Rights. 2019. Corporate Accountability Index. Washington, D.C.: New America.
- Ranking Digital Rights. 2020. 2020 Ranking Digital Rights Corporate Accountability Index Draft Indicators. Washington, D.C.: New America.
- Frenkel, Sheera. 2018. 鈥淔acebook Says It Deleted 865 Million Posts, Mostly Spam.鈥 The New York Times. (May 16, 2020).
- Singh, Spandana. 2019. Assessing YouTube, Facebook and Twitter's Content Takedown Policies. Washington, D.C.: New America. (May 16, 2020).
- Ranking Digital Rights. 2020. The RDR Corporate Accountability Index: Transparency and Accountability Standards for Targeted Advertising and Algorithmic Systems – Pilot Study and Lessons Learned. Washington, D.C.: New America. www.rankingdigitalrights.org/pilot-report-2020
- Llansó, Emma J. 2020. "Understanding Automation and the Coronavirus Infodemic: What Data Is Missing?" Center for Democracy and Technology. (May 16, 2020); Radsch, Courtney J. 2020. "CPJ, Partners Call on Social Media and Content Sharing Platforms to Preserve Data." (May 16, 2020).
- Silva, M谩rcio et al. 2020. 鈥淔acebook Ads Monitor: An Independent Auditing System for Political Ads on Facebook.鈥 In Proceedings of The Web Conference 2020, Taipei, Taiwan: ACM, 224鈥34. (May 16, 2020).
- Information collected for targeted advertising purposes enables companies and advertisers to segment audiences in a very granular manner, tailoring messages to very specific attributes, including preferences, habits, or traits. As this data is shared across the targeted advertising ecosystem, it in turn enables discrimination against internet users on the basis of protected traits and even the targeting of specific individuals. For more on the relationship between targeted advertising business models and discrimination, see Ranking Digital Rights. 2019. Human Rights Risk Scenarios: Targeted Advertising, Consultation Draft. Washington, D.C.: New America.
- Hasen, Richard L. 2020. 鈥淲hat Happens in November If One Side Doesn鈥檛 Accept the Election Results?鈥 Slate. (May 16, 2020).
- U.N. General Assembly. (1948). Universal Declaration of Human Rights (217 [III] A), Article 2. Paris.
- Tobin, Ariana, and Jeremy B. Merrill. 2018. 鈥淔acebook Is Letting Job Advertisers Target Only Men.鈥 ProPublica. (May 11, 2020).
- Angwin, Julia, and Terry Parris Jr. 2016. 鈥淔acebook Lets Advertisers Exclude Users by Race.鈥 ProPublica. (May 11, 2020).
- Tobin, Ariana. 2018. 鈥淔acebook Promises to Bar Advertisers From Targeting Ads by Race Or鈥.鈥 ProPublica. (May 11, 2020).
- Ranking Digital Rights. 2019. Human Rights Risk Scenarios: Targeted Advertising, Consultation Draft. Washington, D.C.: New America.
Making All Ads 鈥淗onest鈥 Through Transparency, Limited Targeting, and Enforcement
The targeted advertising debate in the United States has focused on the issue of online political advertising, particularly platforms鈥 responsibility to police the veracity of politicians鈥 claims, but this narrow focus is misguided, as are the proposed solutions. Aside from the inevitable First Amendment challenges that would arise from any government attempt to force platforms to fact-check political speech, it is doubtful that they would be able to do so accurately and fairly, especially at the scale at which they operate.1
Other rules that govern broadcast and print advertising by political candidates and campaigns do not currently apply online, leaving platforms in the position of setting and enforcing the norms for a growing portion of American electoral discourse. The proposed Honest Ads Act, first introduced in Congress in 2017, would require online advertising platforms to maintain a "public file" database with detailed information about the political ads they serve鈥攕imilar to existing requirements for broadcast media鈥攊ncluding who paid for each ad and how much it cost. Critically, the Honest Ads Act goes beyond platforms鈥 current voluntary disclosures by including targeting information.2 The bill is unlikely to advance before the next election. For the time being, online political advertising is entirely unregulated, leaving platforms to set their own rules.
While Twitter and Google rolled out bans on targeting users by political affiliation in late 2019, Facebook in early January 2020 clarified that it would not follow suit, instead announcing changes that would make its political ad library easier to navigate and granting users somewhat more control over how many political ads they see鈥攁lbeit on an opt-out basis.3 As noted above, these voluntary ad libraries do not include targeting information, a key requirement of the proposed Honest Ads Act.4
Thus, as of May 2020, it remains possible to target users with political advertising on Facebook (but not on Twitter or any of Google鈥檚 platforms, including YouTube) in three powerful ways. First, individuals can be targeted if advertisers upload custom lists from off-platform data, including voter registration rolls, donor lists to parties and candidates, lists of subscribers to emails from parties and candidates, etc.5 Second, platforms can algorithmically generate new audiences that are similar to an uploaded custom list in terms of demographics, interests, and other data points. And third, advertisers can select which audience categories they want to reach, based on audience categories and profiles that Facebook has created from people鈥檚 online and offline activities, such as the content they post, the accounts and pages they follow, the content they like or otherwise engage with, their credit card purchases, and the known and/or inferred political affiliation of the other users they are connected to on the platform.
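For technically minded readers, the three targeting mechanisms described above can be caricatured in a few lines of code. This is a deliberately naive sketch: every function name, data field, and threshold here is hypothetical, and real platform systems rely on proprietary machine-learning models operating at a vastly larger scale.

```python
import hashlib

# 1. Custom audiences: an advertiser uploads hashed identifiers (e.g.,
#    emails from a donor list); the platform matches them to accounts.
def match_custom_audience(uploaded_hashes, users):
    return [u for u in users
            if hashlib.sha256(u["email"].encode()).hexdigest() in uploaded_hashes]

# 2. "Lookalike" audiences: the platform algorithmically expands a seed
#    list to similar users -- modeled here as crude overlap of interest labels.
def lookalike_audience(seed, users, min_shared=2):
    seed_interests = set()
    for u in seed:
        seed_interests |= u["interests"]
    return [u for u in users if u not in seed
            and len(seed_interests & u["interests"]) >= min_shared]

# 3. Interest categories: advertisers select from labels the platform
#    has already inferred about each user.
def audience_for_category(users, category):
    return [u for u in users if category in u["interests"]]
```

The point of the sketch is that each mechanism starts from data users never explicitly offered for this purpose: an off-platform list, an inferred similarity, or an inferred interest label.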
All three tactics allow political advertisers to send different messages, and even contradictory ones, to different groups of users. In both 2016 and 2018, African American internet users in swing states (a group that votes overwhelmingly Democratic)6 were targeted with ads designed to suppress voter turnout, whether by providing false information about when, where, and how to vote or by flooding their newsfeeds with negative ads about the advertiser's opponent.7 The platforms now prohibit ads that mislead voters about the voting process, but it is unclear how effectively this policy is enforced.
While enacting the Honest Ads Act and limiting targeting for political ads are important and urgent steps, Congress must go beyond the narrow definition of "electioneering communications" to safeguard democracy from algorithmic manipulation by both foreign and domestic actors.8 Currently, political advertisers self-identify as such, and there is no way of knowing how many neglect to do so and consequently are not included in the existing ad libraries. Any ad database required by the Honest Ads Act would run into the same enforcement problem. Moreover, drawing a clear line around political and issue ads is a vexing conundrum, as the controversy surrounding Twitter's October 2019 decision to ban all political ads demonstrates. CEO Jack Dorsey's initial announcement9 indicated that issue ads, including those from public-interest organizations, would be covered by the ban, but the final policy issued a few weeks later was narrower.10 No matter where the line is drawn, any database limited to political ads, as the Honest Ads Act would require, would miss a significant number of politically relevant paid messages, just as the current voluntary ad libraries do. This alone is reason enough to expand the advertising transparency requirement to all online ads.
In addition to these definitional and enforcement issues, the current infodemic has made it clear that ads need not be related to elections to raise serious concerns. Online advertising transparency requirements must go beyond the scope of the Honest Ads Act and include non-political ads as well. This will allow regulators as well as independent researchers to verify that platforms, and their advertisers, are complying with applicable laws and the platforms鈥 own rules. Such transparency would further enable credible, empirical research on the state of online advertising and its impact on political life, public health, and other important issues. Moreover, it would provide crucial oversight over platforms鈥 respect for the law (notably the Civil Rights Act, Fair Housing Act, and various public accommodation laws) and enforcement of their own rules.
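To picture what a universal transparency requirement could mean in practice, imagine that every ad served generates a disclosure record combining the Honest Ads Act's "public file" fields (sponsor and spend) with the targeting information today's voluntary libraries omit. The schema below is purely illustrative; no statute or platform specifies these field names.

```python
from dataclasses import dataclass, field

@dataclass
class AdDisclosureRecord:
    ad_id: str
    sponsor: str                 # who paid for the ad
    spend_usd: float             # how much it cost
    impressions: int             # how widely it was shown
    # Targeting criteria the advertiser selected (custom list, lookalike,
    # interest categories) -- the key gap in today's voluntary libraries.
    targeting: dict = field(default_factory=dict)
    # Self-declared flag; as noted above, self-identification is exactly
    # where enforcement of political-ad rules breaks down.
    declared_political: bool = False
```

Because the record covers all ads rather than only those flagged as political, researchers and regulators would not need to trust advertisers' self-identification to audit them.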
As problematic as the rules themselves are, companies' uneven enforcement of their own rules exacerbates the extent to which people are being targeted and manipulated in ways that clearly violate users' information and non-discrimination rights. Google, Facebook, and Twitter do not include data about their ad policy enforcement in the transparency reports that disclose data related to the moderation of user-generated content. Recent reporting, including the aforementioned Consumer Reports experiment with ads containing clear coronavirus misinformation, demonstrates the inadequacy of Facebook's ad review process. While investigative journalists have not exposed similar failures in Google's or Twitter's ad review systems, this should not be taken as evidence that those systems function well. More oversight is sorely needed, and a universal database for all online advertising would be a key step in that direction.
Citations
- For a discussion of this point, see Maréchal, Nathalie, and Ellery Roberts Biddle. 2020. It's Not Just the Content, It's the Business Model: Democracy's Online Speech Challenge – A Report from Ranking Digital Rights. Washington, D.C.: New America. (May 7, 2020).
- Honest Ads Act, S. 1989, 115th Congress, 2017.
- Associated Press. 2020. 鈥淔acebook Refuses to Restrict Untruthful Political Ads and Micro-Targeting.鈥 The Guardian. (May 16, 2020).
- However, Reddit鈥檚 recently introduced Political Ads Transparency Community does include targeting information. See Singh, Spandana. 2020. 鈥淩eddit鈥檚 Intriguing Approach to Political Advertising Transparency.鈥 Slate. (May 18, 2020).
- The same data is used for direct mail, get-out-the-vote door-knocking, phone canvassing, and email blasts. See National Conference of State Legislatures. 2019. 鈥淎ccess To and Use Of Voter Registration Lists.鈥 (May 17, 2020).
- Laird, Chryl, and Ismael White. 2020. 鈥淲hy So Many Black Voters Are Democrats, Even When They Aren鈥檛 Liberal.鈥 FiveThirtyEight. (May 16, 2020).
- Romm, Tony. 2018. 鈥淗ow Facebook and Twitter Are Rushing to Stop Voter Suppression Online for the Midterm Elections.鈥 Washington Post. (May 17, 2020).
- The Federal Election Commission notes that "persons, groups of persons or organizations, including corporations and labor organizations, may make electioneering communications." See Federal Election Commission. n.d. "Electioneering Communications." FEC.gov. (May 16, 2020).
- Jack Dorsey (@jack). 2019. 鈥溾榃e鈥檝e Made the Decision to Stop All Political Advertising on Twitter Globally. We Believe Political Message Reach Should Be Earned, Not Bought. Why? A Few Reasons鈥︹ / Twitter.鈥 (May 16, 2020).
- Twitter. n.d. 鈥淧olitical Content.鈥 (May 16, 2020).
By Protecting Data, Federal Privacy Law Can Reduce Algorithmic Targeting and the Spread of Disinformation
A strong data privacy law, effectively enforced, can protect internet users from the discrimination inherent to automated content optimization and limit the viral spread of harmful messages. The way to achieve that is by strictly limiting data collection and retention to the absolute minimum that is required to deliver the service to the end-user, irrespective of the company鈥檚 business model or financial interests. As Alex Campbell wrote in Just Security: 鈥淎bsent the detailed data on users鈥 political beliefs, age, location, and gender that currently guide ads and suggested content, disinformation has a higher chance of being lost in the noise.鈥1
The United States lacks a comprehensive federal privacy law governing the collection, processing, and retention of personal information, though there are sector-specific laws that apply to education, healthcare, and other sectors.2 A strong federal privacy law, backed up by robust enforcement mechanisms, is perhaps the strongest tool at Congress' disposal to stem the tide of online misinformation and dangerous speech by disrupting the algorithmic systems that amplify such content. This approach would also have the benefit of side-stepping the thornier issues related to free speech and the First Amendment.
But as things stand, targeted advertising companies are free to collect virtually any information they want, and to use it however it benefits their bottom line. Facebook, Google, and Twitter hoover up massive amounts of data about internet users, both on their platforms and off. Not only do platforms track what users do while using their services, they also follow them around the internet and purchase data about users' offline behavior from credit card companies and data brokers.3 The collected data becomes the core ingredient for building powerful digital profiles that advertisers and political operatives can then use to target groups and individuals, as in the pseudoscience example described previously. What's worse, the tech giants do not clearly disclose exactly what they are doing with users' data. In such conditions, the notion of user consent is meaningless.
RDR's evaluation of the three American social media giants' policies and disclosed practices is useful in clarifying just what needs to change, and how the companies should be regulated. Data from the 2019 RDR Index highlights the opacity of the major U.S. digital platforms when it comes to the collection, processing, and sharing of user information. Even when these companies do disclose what they collect, the scope of what is collected is staggering.
Scope and Purpose of Data Collection, Use, and Sharing
In evaluating companies for the RDR Index, we have examined if companies clearly disclose why and how they collect user information, by which we mean any data that is connected to an identifiable person, or may be connected to such a person by combining datasets or utilizing data-mining techniques. We look for a commitment to limit collection of user information to what is directly relevant and necessary to accomplish the purpose of the service from the end user's perspective.4 In our evaluation, we do not consider targeted advertising as the purpose of the service: While the revenue it generates enables the company to provide the service, users would get the same benefit if the company made money differently. User information should not be collected for targeted advertising by default, though people can be given a choice to opt in.
Google, Facebook, and Twitter all track users across the internet: Using "cookies" and other tracking technologies embedded in many websites, they collect data about which web pages people visit and what they do there (items purchased, videos watched, etc.).5 All three companies disclose and explain what types of user information they collect but are less clear about how, and none commit to collecting only the data that is necessary to provide the service.6 Facebook, for instance, states that it collects "things you and others do and provide," including the "content, communications and other information" users provide when using Facebook products (for example, when signing up for an account, posting text, videos, or images, and using messaging functions). This can include the content of user posts, metadata, and more. Facebook also discloses that it collects device information, including location information. In short: If Facebook can capture a piece of information about you, it does.7
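The mechanics of cross-site tracking can be made concrete with a toy model. In the sketch below (all names invented; this is not any company's actual code), a tracking script embedded on many unrelated sites receives the same cookie ID with every page visit, letting the tracker assemble a browsing profile that no single site could see on its own:

```python
# Toy model of third-party cookie tracking (illustrative only).
from collections import defaultdict

class Tracker:
    """Simulates a tracking script embedded across many unrelated sites."""
    def __init__(self):
        # cookie_id -> list of (site, page) visits
        self.profiles = defaultdict(list)

    def on_page_view(self, cookie_id, site, page):
        # The same cookie ID arrives with every request to the tracker,
        # regardless of which site embeds the tracking script.
        self.profiles[cookie_id].append((site, page))

tracker = Tracker()
tracker.on_page_view("abc123", "news.example", "/election-coverage")
tracker.on_page_view("abc123", "shop.example", "/running-shoes")
tracker.on_page_view("abc123", "health.example", "/vaccine-faq")

# One cookie, three unrelated sites: a cross-site browsing profile.
print(tracker.profiles["abc123"])
```

The profile grows with every site the tracker is embedded on, which is why a handful of large ad networks can observe a large fraction of a person's browsing.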
All three companies disclose some information about the types of entities they share user data with, but none identifies the specific third parties. Yet in the context of targeted advertising, it is especially important for companies to disclose to users exactly who their information is shared with, for any purpose that isn't subject to legitimate law enforcement or national security limitations in accordance with human rights standards.8
While no company discloses enough about how it handles user information, Twitter deserves credit for having disclosed more than the other U.S. platforms (Google, Facebook, Microsoft, and Apple) about its data handling policies, across all the user information indicators in the 2019 RDR Index.9
Federal privacy legislation should include strong data minimization and purpose limitation provisions: Users should not be able to opt in to discriminatory advertising or to the collection of data that would enable it.10 Beyond that baseline:
- Any data processing that remains should be opt in, and using the service should not be contingent on giving up more data than is necessary to accomplish the purpose of the service.
- Crucially, the delivery of targeted advertising should not be considered the purpose of the service unless the company clearly describes the service's primary purpose as such to the general public in its marketing and public communications.
- Companies should disclose to users and to the relevant regulatory agency what user information they collect, share, retain, and infer; for what purpose; and for how long it is retained.
- Companies should only collect user information from third parties, or share user information with third parties, if the two companies have a vendor-contractor relationship and the sharing of this user information is directly relevant and necessary for the purpose of delivering a service to the user.
- Companies should allow users to obtain all of their user information (collected and inferred, broadly defined) that the company holds, in a structured data format, and should delete all user information after a user terminates their account or at the user's request.
- Finally, the accuracy of disclosures and compliance with the above requirements should be independently audited.
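A purpose-limitation rule of this kind is straightforward to express mechanically. The sketch below is illustrative only, with invented field names: it drops any collected field that is not on an allow-list tied to the service's declared purpose, and admits targeting-related fields only on explicit opt-in:

```python
# Illustrative purpose-limitation filter (hypothetical field names;
# not a description of any company's actual data pipeline).

# Fields directly necessary to deliver, say, a messaging service.
NECESSARY_FIELDS = {"account_id", "contact_list", "message_queue"}

# Fields useful only for ad targeting; collectable solely via opt-in.
TARGETING_FIELDS = {"location_history", "browsing_history", "inferred_interests"}

def minimize(collected: dict, ad_targeting_opt_in: bool = False) -> dict:
    """Keep only fields permitted under the service's declared purpose."""
    allowed = NECESSARY_FIELDS | (TARGETING_FIELDS if ad_targeting_opt_in else set())
    return {k: v for k, v in collected.items() if k in allowed}

raw = {
    "account_id": 42,
    "contact_list": ["alice", "bob"],
    "location_history": ["40.7,-74.0", "38.9,-77.0"],
    "inferred_interests": ["pseudoscience"],
}

print(minimize(raw))                            # only the necessary fields survive
print(minimize(raw, ad_targeting_opt_in=True))  # opt-in restores targeting fields
```

The point of the sketch is that the default (`ad_targeting_opt_in=False`) discards targeting data, mirroring the legislative principle that targeting must be opt-in rather than opt-out.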
Inferred Information and Targeting
In future versions of the RDR Index we will look for the same level of disclosure and commitment about what companies infer about users. Inference is a key way that user profiles are built. Companies perform big data analytics on their troves of collected user data in order to make predictions about the behaviors, preferences, and private lives of their users, and to then sort users into audience categories on that basis.11 Take, for example, the case of Aaron Sankin at The Markup. He doesn't know exactly why his account was included in the pseudoscience audience category, but he speculates it was likely because he had conducted research about medical misinformation on Facebook, causing the company's algorithms to assume he had an interest in pseudoscience.12
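The basic logic of interest inference can be illustrated with a toy example (categories, keywords, and threshold all invented; real systems are vastly more complex). A user is assigned to an audience category whenever enough of their activity touches associated signals, which is also how a researcher who merely studies misinformation can end up classified as being interested in it:

```python
# Toy interest-inference model (invented categories; not Facebook's algorithm).
CATEGORY_KEYWORDS = {
    "pseudoscience": {"astrology", "detox", "miracle cure"},
    "cooking": {"recipe", "sourdough", "knife skills"},
}

def infer_categories(activity_log, threshold=2):
    """Assign a category when enough of the user's activity touches it."""
    hits = {cat: 0 for cat in CATEGORY_KEYWORDS}
    for item in activity_log:
        for cat, words in CATEGORY_KEYWORDS.items():
            if any(w in item.lower() for w in words):
                hits[cat] += 1
    return {cat for cat, n in hits.items() if n >= threshold}

# A researcher reading ABOUT medical misinformation generates the same signals
# as a believer in it: the model cannot tell intent from behavior.
log = [
    "searched: miracle cure claims debunked",
    "visited page: astrology ad targeting study",
    "liked: sourdough recipe",
]
print(infer_categories(log))  # {'pseudoscience'}
```

Because the inference operates on raw behavioral signals, with no notion of why the user engaged, misclassifications like Sankin's are a structural feature rather than a bug.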
Targeted advertising should be allowed only if the default is that users are not targeted upon joining a service. Companies would be wise to avoid relying on targeted advertising as their sole source of revenue and consider contextual advertising and subscription models, among others. Users must be able to actively opt in to being targeted and able to fully control what information can be used to target them, if any.13 Users might choose to be targeted based on certain types of data but not others. Companies should disclose sufficient information to users and to regulators so that people can understand exactly how, why, and by whom they have been targeted, and regulators can track broader patterns to identify abusive practices.
In the first report in this series, we called on companies to publicly explain the content-shaping algorithms that determine what user-generated content users see, and the ad-targeting systems that determine who can pay to influence them. We also called on them to disclose their rules for user content, advertising content, and ad targeting; to explain how they enforce those rules; and to publish regular transparency reports containing data about the actions they take to enforce these rules. We further call on the U.S. Congress to enact legislation to require companies to provide this information to policymakers and the public as a first step toward greater accountability.14 These transparency measures are necessary for a more nuanced understanding of a complex and dysfunctional ecosystem but insufficient to the task of making our online ecosystem work for democracy and the public interest. For that, much more active regulatory intervention is needed, starting with the barely regulated targeted advertising industry.
Policymakers should not be convinced by tech giants' claim that targeted advertising benefits internet users by showing them the ads most relevant to them. Targeted advertising is on by default on Facebook, Google, and Twitter; if users truly find it as useful as the companies say they do, many will choose to opt in. While users can customize the types of ads they want to see, they can't opt out of receiving tailored ads altogether. In the past year, Facebook has improved its disclosure of the options users have to remove categories of interests and pages they've visited, which constitute some but not all of the information used to customize the ads they see. Even when customization is an option, however, the settings can be hard to find.
A federal privacy law should also restrict the targeting options that platforms are allowed to offer to advertisers. Based on our research team's examination of company policy disclosures across Facebook, Google, and Twitter, Facebook appears to have the fewest restrictions on ad targeting and the least transparency. It disclosed that users will be targeted with ads, but not which ad-targeting parameters are prohibited. It disclosed that advertisers can tailor ads to custom audiences—lists of individuals that advertisers can upload to the platform—but are prohibited from using these targeting options "to discriminate against, harass, provoke, or disparage users or to engage in predatory advertising practices."15 None of these terms are defined, and the company does not clarify how it would detect breaches of the policy or what the penalty for a breach might be.
In addition to uploading their own custom audience lists, advertisers can also select from among Facebook's algorithmically determined audience categories, which are based on profiles that Facebook has created from people's online and offline activities. These profiles can include the content a user posts, the accounts and pages they follow, the content they like or otherwise engage with, and the known and/or inferred interests of the other users they are connected to on the platform. These targeting options, however, are only visible when logged into the platform and going through the process of placing ads. As a result, only Facebook account holders who take the time to investigate the platform's audience categories can know what they are. This is how researchers and investigative journalists have discovered the existence of racist16 and otherwise problematic audience categories,17 alerting public opinion to the dark side of targeted advertising and leading companies to remove the offending categories. Despite calls from civil society (including RDR) to publish the list of available categories, companies have thus far declined to do so, making any kind of systematic oversight impossible.
The Challenge of Effective Enforcement
Privacy law is key to preventing targeted advertising systems from profiling and targeting people in dangerous and harmful ways. But a law is only as good as its enforcement. Here, the EU's challenges in enforcing the General Data Protection Regulation (GDPR) offer important lessons. National-level data protection agencies are underfunded and facing massive case backlogs, and critics worry that fines are not high enough, nor other penalties sufficiently punitive, to force a change in industry practices.18 Enforcement needs a bigger stick to protect privacy rights.
For example, since the GDPR went into force in 2018, only one major tech giant has been fined for a violation: In early 2019, Google was docked 50 million euros (about $57 million, which the New York Times estimated is about one-tenth of Google's daily sales) for failing to adequately disclose how data is collected across its services for use in targeted advertising.19 A complaint filed against Facebook in May 2018 by privacy advocate Max Schrems argues that in order for users to even sign on to the company's services (Facebook, Instagram, and WhatsApp), they are forced to agree to having their personal information harvested for targeted advertising. Such "forced consent" is illegal, the complaint argues, if the core purpose of the service is social networking—as the company states—and not targeted advertising.20 In contrast to some major multinational Asian and European internet, mobile, and telecommunications companies that disclose limiting collection of user information to what is directly relevant and necessary to accomplish the purpose of the service, none of the U.S.-based companies evaluated in the 2019 RDR Index (including Facebook) were found to do so.21 Digital marketers once fretted that the GDPR would render targeted advertising a shadow of its former self, but that will only happen if courts interpret the law strictly and regulators enforce it.22
Whether Congress opts to confer increased authority and funding on the Federal Trade Commission (FTC) to enforce a strong privacy law, or sets up a new data protection agency, the key to success will certainly be strong enforcement authority—backed by adequate funding for the enforcement process.23 This is a non-trivial challenge, but should be considered part of the price of protecting democracy.
The Public Interest Principles for Privacy Legislation, published by 34 civil rights, consumer, and privacy organizations in late 2018, set forth baseline objectives that need to be met in order to ensure that a privacy law truly protects the public interest:
- Privacy protections must be strong, meaningful, and comprehensive.
- Data practices must protect civil rights, prevent unlawful discrimination, and advance equal opportunity.
- Governments at all levels should play a role in protecting and enforcing privacy rights.
- Legislation should provide redress for privacy violations.24
The Public Interest Principles underscore the importance for individuals to have access to a wide range of redress mechanisms, including the right of private individuals to sue companies for privacy violations. The California Consumer Privacy Act, which grants individuals a private right of action against data breaches, is now being tested by lawsuits against the videoconferencing platforms Zoom and Houseparty for sharing user data with third parties without consent.25 A national privacy law that adheres to the Public Interest Principles would also contain a private right of action, and should meet all other relevant civil rights standards.
Citations
- Campbell, Alex. 2019. "How Data Privacy Laws Can Fight Fake News." Just Security. (May 16, 2020).
- The California Consumer Privacy Act (CCPA), which took effect on January 1 and will be enforced starting in July, may de facto apply to much of the country, as many companies may find it more expedient to extend the rights that the CCPA confers on California residents to those in other states.
- Madrigal, Alexis C. 2012. "I'm Being Followed: How Google—and 104 Other Companies—Are Tracking Me on the Web." The Atlantic. (May 17, 2020).
- Organisation for Economic Cooperation and Development. 2013. The OECD Privacy Framework. (May 17, 2020).
- Ranking Digital Rights. 2019. Corporate Accountability Index. Indicator P9: Collection of user information from third parties. Washington, D.C.: 国产视频.
- Ranking Digital Rights. 2019. Corporate Accountability Index. Indicator P3: Collection of user information. Washington, D.C.: 国产视频.
- Facebook. 2018. "Data Policy." (May 16, 2020).
- Necessary and Proportionate: International Principles on the Application of Human Rights to Communications Surveillance. 2014.
- Ranking Digital Rights. 2019. Corporate Accountability Index. Washington, D.C.: 国产视频.
- Laroia, Gaurav, and David Brody. 2019. "Privacy Rights Are Civil Rights. We Need to Protect Them." Free Press. (May 18, 2020).
- Wachter, Sandra. 2019. Affinity Profiling and Discrimination by Association in Online Behavioural Advertising. Rochester, NY: Social Science Research Network. SSRN Scholarly Paper. (February 11, 2020).
- Sankin, Aaron. 2020. "Want to Find a Misinformed Public? Facebook's Already Done It." The Markup. (May 7, 2020).
- Regulating so-called "dark patterns" would help ensure that consent is freely given. See Darlington, Alexander. "Dark Patterns." (May 17, 2020).
- Maréchal, Nathalie, and Ellery Roberts Biddle. 2020. It's Not Just the Content, It's the Business Model: Democracy's Online Speech Challenge – A Report from Ranking Digital Rights. Washington, D.C.: 国产视频. (May 7, 2020).
- Facebook. n.d. "Advertising Policies." (May 15, 2020). In contrast, see National Fair Housing Alliance. 2019. "Facebook Settlement." (May 17, 2020).
- Angwin, Julia, and Madeleine Varner. 2017. "Facebook Enabled Advertisers to Reach 'Jew Haters.'" ProPublica. (March 16, 2020).
- Sankin, Aaron. 2020. "Want to Find a Misinformed Public? Facebook's Already Done It." The Markup. (May 7, 2020).
- Satariano, Adam. 2020. "Europe's Privacy Law Hasn't Shown Its Teeth, Frustrating Advocates." The New York Times. (May 16, 2020).
- Satariano, Adam. 2019. "Google Is Fined $57 Million Under Europe's Data Privacy Law." The New York Times. (May 16, 2020).
- NOYB. n.d. "Forced Consent (DPAs in Austria, Belgium, France, Germany and Ireland)." noyb.eu. (May 16, 2020). Lomas, Natasha. 2018. "Facebook, Google Face First GDPR Complaints over 'Forced Consent.'" TechCrunch. (May 16, 2020).
- Ranking Digital Rights. 2019. Corporate Accountability Index. Indicator P3: Collection of user information. Washington, D.C.: 国产视频. See section 5.3, Privacy Gaps.
- Naidu, Prash. 2019. "Why Advertisers Are Staring down the Barrel of Big GDPR Fines." Marketing Week. (May 16, 2020).
- Rich, Jessica. 2019. "Opinion | Give the F.T.C. Some Teeth to Guard Our Privacy." The New York Times. (May 16, 2020).
- U.S. PIRG. 2018. "Public Interest Privacy Principles." (May 16, 2020).
- JD Supra. 2020. "Privacy Suits Against Zoom and Houseparty Test the CCPA's Private Right of Action." Data Security Law Blog. (May 16, 2020).
Good Content Governance Requires Good Corporate Governance
In our previous report, we called for greater transparency about the processes and technologies that digital platforms use to govern content shared by users, including advertisers. In this report, we have outlined the privacy and data protection measures necessary to lessen content-related harms that are rooted in the algorithmically curated personalization of users' online experiences. We've also pointed out ways that advertising law needs to be upgraded. There is another vector for legislative and regulatory action: To rein in the tech giants' power and hold them accountable to the public interest, lawmakers and policy advocates must prioritize corporate governance and shareholder accountability.
Good corporate governance is a prerequisite for good content governance by social media platforms. In order to identify, prevent, and mitigate social harms caused by the amplification and targeting of dangerous content, Facebook, Google, and Twitter should be subject to strong institutional oversight. Shareholders, a growing number of whom are concerned with companies' social impact and governance, need to be empowered to hold companies responsible for addressing their social and human rights impacts. Shareholder empowerment can in turn strengthen the connection between shareholders and the ecosystem of non-governmental actors—NGOs, unions, consumer advocates, academic researchers, and journalists, among others—whose expertise and perspective are essential for companies to be able to fully understand and address their social impact.
Empower Shareholders to Address Environmental, Social, and Governance Issues by Phasing Out Dual-class Shares
The case for shareholder empowerment was on display at Facebook's 2019 annual shareholder meeting. Sixty-eight percent of the company's shareholders who hold normal "Class A" shares (one share, one vote) voted for CEO Mark Zuckerberg to relinquish his position as board chairman to an "independent member of the Board." The supporting statement for the resolution, filed by Trillium Asset Management and backed by a number of well-known mutual fund companies as well as several major state pension funds, called out Facebook management for "missing, or mishandling, a number of severe controversies, increasing risk exposure and costs to shareholders," which board oversight led by an independent board chair might have helped prevent.1
Yet even with a solid majority, the shareholder resolution failed. Why? Because Zuckerberg and other members of his inner circle hold "Class B" shares weighted at 10 votes per share, giving him effective control over 60 percent of shareholder votes at that meeting.2 As a result, only 20 percent of the total vote supported separation of CEO and board chair, a major governance reform that would have strengthened the board of directors' power to oversee and potentially overrule the CEO's decisions.3
The failed resolution was just one of several proposals spurred by investor concerns that Facebook's management had failed to prevent harms to users and mitigate serious social risks, including privacy breaches and Russian meddling in the 2016 U.S. presidential election.4 Ironically, one of the other failed resolutions sought to end Facebook's dual-class share structure altogether—filed to call attention to the problem itself, since it could only pass if Zuckerberg agreed to it.5
In 2019, shareholders also filed unsuccessful proposals with Alphabet (Google's parent company), Facebook, and Twitter that would have required the companies to report to shareholders on various aspects of their content governance policies and enforcement. For example, the resolution filed with Alphabet sought a report "reviewing the efficacy of its enforcement of Google's terms of service related to content policies and assessing the risks posed by content management controversies related to election interference, freedom of expression, and the spread of hate speech, to the company's finances, operations, and reputation." Google responded that its public reporting on these matters was already significant.6 Like Facebook, Alphabet also has a dual-class share structure. Twitter does not.
The shareholder resolutions described above reflect the fact that a growing number of mutual funds and institutional investors pursue environmental, social, and governance (ESG) investment strategies that favor companies that are environmentally sustainable, socially responsible, and well governed. Global sustainable investments jumped by 34 percent, to nearly $40 trillion, between 2016 and 2018.7 In 2019, nearly half of U.S. institutional investors had either included ESG factors in their investment decisions or were considering doing so.8 According to the Sustainable Investments Institute, the share of shareholder resolutions addressing ESG issues grew by just 12 percent over the past 10 years, but average support for these resolutions grew by a whopping 40 percent.9 Most important for people concerned with platform accountability, the number of resolutions filed with companies covered by the RDR Index that focused on issues affecting internet users' rights jumped from just two in 2015 to 12 in 2019.10
Shareholder resolutions are a tool for investors to push companies in a more sustainable, socially responsible direction and to improve their governance.11 Even when they do not pass, the attention they bring to specific issues often prods companies to engage with investors and explain to the public how they are addressing the problems raised. But in cases like the resolution that would have removed Zuckerberg as board chairman, resolutions cannot ultimately curb a CEO's unrestrained power if he is shielded by a dual-class share structure. In 2019, then-Commissioner Robert Jackson of the Securities and Exchange Commission (SEC) called for reforms, warning that a CEO who cannot be fired becomes, in effect, the monarch of a private kingdom. The SEC, he proposed, should require companies to phase out dual-class shares so that shareholders are actually in a position to hold management accountable for failing to identify and mitigate risks.12 The SEC has not only disregarded his recommendation but moved in the opposite direction, proposing a rule change that would make it harder for shareholders to file proposals.13 Congress could compel the SEC to implement dual-class share reform and other measures that would strengthen corporate governance and oversight.
Investors Need More and Better Disclosure
Investors concerned with ESG risks are also pushing for companies to disclose more information about how they identify, track, and mitigate these risks. Such disclosures inform portfolio construction, and some investors use them to inform their engagement with companies about how to improve.14 In 2018, a group of investors representing $5 trillion in assets under management petitioned the SEC to require corporate disclosure of ESG risks, to no avail.15
Proponents argue that ESG disclosure requirements would bring the United States into step with broader global trends.16 Since 2018, large companies in the EU have been required by law to disclose non-financial information about "the way they operate and manage social and environmental challenges," including human rights-related policies and risks.17 Meanwhile, several organizations have been working to improve the quality of disclosures—whether mandatory or voluntary—by developing clear and consistent standards.18 Most relevant in the U.S. investment and regulatory context is the San Francisco-based Sustainability Accounting Standards Board (SASB). Its industry-specific reporting standards include guidelines for digital platforms to disclose "data standards, advertising, and freedom of expression," including a "description of policies and practices relating to behavioral advertising and user privacy."19 SASB is now in the early stages of developing disclosure standards related to content moderation.20
While U.S. companies are not required by law to disclose ESG information, 120 U.S.-listed companies now use SASB reporting standards, including a few in the tech and communications sector, such as Adobe, Netflix, and Salesforce.21 A handful of companies, including the crafting platform Etsy, are leading the way by furnishing audited SASB-based disclosures along with their official SEC filing documents, even though they are not required to do so. (They also included disclosures using standards developed by the Global Reporting Initiative, which target a broader set of stakeholders beyond investors and are widely used in Europe and internationally.)22
These companies are responding to market demand: According to the data provider Morningstar, a record $20.6 billion flowed into U.S. sustainable investment funds in 2019, almost quadrupling net inflows from 2018.23 Those investment funds need better ESG data in order to make sure that the companies they invest in actually meet basic ESG standards. In January 2020, Larry Fink, the CEO of the global investment management firm BlackRock, called on all companies in which BlackRock invests to follow SASB's standards to disclose relevant ESG information, or to disclose similar data in a way that is relevant to their business.24 If Congress were to compel the SEC to require ESG disclosure, investors would be empowered to push companies to address many of their non-financial risks, including the social impact of targeted advertising and algorithmic systems.
Despite the growing investor demand, Facebook, Google, and Twitter do not systematically include ESG information in their disclosures to investors. Unlike many European telecommunications companies covered by the RDR Index, their annual reports do not cover the full range of social and human rights information that investors, through shareholder resolutions and recent engagements, have signaled they consider potentially material. By contrast, Telefónica (headquartered in Spain)—the top-ranking telecommunications company in the 2019 RDR Index—publishes an annual Consolidated Management Report covering the full range of ways that the company is working to achieve "value for all our stakeholders."25 The company's Responsible Business portal offers a clear catalogue of information about how it tracks and assesses impacts, including transparency reports and impact assessment data.26
The three U.S.-headquartered social media giants that are the focus of this report do disclose a great deal of information related to their social and human rights impacts in a range of policy documents, transparency reports, blog posts, and other materials published online. But the material is scattered across their websites and platforms, testing the research capacity of investors seeking to understand how these companies manage their human rights risks, or how they understand and track their social impact. The RDR Index tracks such disclosures and grades companies according to the comprehensiveness and quality of their disclosures about policies and practices affecting the human rights of internet users. A high score in the RDR Index indicates strong disclosure at least in some areas. But our most recent research found very poor disclosure by these three companies related to the social and human rights impact of their targeted advertising and algorithmic systems.27
A growing number of investors use the RDR Index and its methodology to evaluate company disclosures, and cite RDR data in shareholder resolutions.28 From such resolutions and SASB's standards-development process, it is clear that the investor community already has behavioral advertising on its radar and is considering the material risk associated with digital platforms' content moderation practices. The 2020 RDR Index, forthcoming in February 2021, will report on whether these companies have made any progress in their policies and disclosures by then.29
Targeted Advertising Revenue Disclosure
The human rights issues associated with targeted advertising have prompted a growing number of commentators to call for a ban on targeted advertising as the best way to protect privacy and stem the spread of dangerous content.30 At the very least, the burden should be on companies that rely on targeted advertising to prove that their business model can be modified to protect human rights and still remain tenable. So far they have failed to do so.
An important step in that direction would be for companies to disclose the extent to which they rely on targeted advertising as a percentage of revenue—as well as whether that percentage is trending up or down. Doing so would help business leaders and investors quantify the driving force behind maintaining these systems, as well as the extent of content-related social impact and risk that the business is taking on. Only with this cost-benefit analysis can appropriate steps be taken to identify and mitigate the negative impact of targeted advertising before the problems spiral out of control.
Google and YouTube's parent company, Alphabet, as well as Facebook and Twitter, all disclose the percentage of their total revenue earned from advertising in general: In 2019, Facebook disclosed that it earned nearly all of its revenue from advertising (98.5 percent),31 while Twitter earned 86 percent from advertising,32 and Google's advertising revenue (including YouTube) was 83 percent of its total earnings for the year.33 None of these companies, however, specifies how much of its revenue was earned from targeted advertising, let alone from different levels of targeting.
For example, Google鈥檚 AdSense platform offers advertisers several different options, including contextual advertising and behavioral targeting. In contextual advertising, advertisements are linked to the specific keywords or keyphrases in the content alongside which the ads will be shown. For example, a cooking tutorial might be paired with ads for a grocery delivery service or gourmet cookware, regardless of the viewer鈥檚 demographics or personal behavior.
In behavioral targeting, ads are targeted to specific user characteristics based on profiles that the platform has built about interests, demographics, etc., that users have either declared or that algorithms have inferred after processing user data.34 But Alphabet doesn't disclose the revenue breakdown for different types of advertising, or whether YouTube depends more heavily on behavioral advertising than other Google-operated platforms such as Search.35 Facebook's Advertising for Business page reflects the company's primary focus on behavioral targeting.36
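The difference between the two targeting modes can be sketched in a few lines of code. This is purely illustrative: the function names, profile fields, and ad inventory below are invented for the example and do not correspond to any real ad platform's API.

```python
# Hypothetical sketch of the two targeting modes described above.
# None of these names correspond to a real ad platform's interface.

def contextual_ads(page_keywords, ad_inventory):
    """Contextual: match ads to the content being viewed, ignoring the viewer."""
    return [ad for ad in ad_inventory
            if set(ad["keywords"]) & set(page_keywords)]

def behavioral_ads(user_profile, ad_inventory):
    """Behavioral: match ads to declared or inferred traits of the viewer."""
    return [ad for ad in ad_inventory
            if set(ad["target_interests"]) & set(user_profile["interests"])]

inventory = [
    {"name": "grocery delivery", "keywords": ["cooking", "recipe"],
     "target_interests": ["food"]},
    {"name": "running shoes", "keywords": ["marathon", "training"],
     "target_interests": ["fitness"]},
]

# A cooking tutorial pairs with the grocery ad regardless of who is watching:
print([a["name"] for a in contextual_ads(["cooking", "tutorial"], inventory)])
# A user profiled as interested in "fitness" sees the shoe ad on any page:
print([a["name"] for a in behavioral_ads({"interests": ["fitness"]}, inventory)])
```

The sketch makes the policy distinction concrete: contextual matching needs only the page being viewed, while behavioral matching depends on a stored profile of the person, which is where the data-collection and human rights concerns discussed in this report arise.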
Microsoft and Apple, which rely heavily on other types of revenue like service fees, software license purchases, and hardware sales, do not disclose their advertising earnings as a distinct revenue percentage, implying that advertising is quite small as a percentage of total revenue.37 However, a small share of revenue for such massive companies amounts to serious money: In 2019, Microsoft's Bing search engine generated more than $7.5 billion in advertising revenue, more than LinkedIn or the Surface hardware line, and three times more than Twitter's total earnings.38 As for Apple, it is the only digital platform company in the RDR Index that does not track users across the internet.39 While the company's restrained approach to monetizing user data hampers its competitiveness against Google and Facebook, App Store ad revenue still amounted to 13 percent of total revenue for the 2017 fiscal year.40
The bottom line is that advertising, much of it behavioral, is a growing revenue stream for both Apple and Microsoft, creating pressure to collect and monetize more and more user data.41 That trend in turn points to growing social and human rights risks that investors, policymakers, and other civil society stakeholders need to be aware of—and why all companies with targeted advertising revenue should disclose relevant information about it.
Due Diligence: Understanding the Full Scope of Platforms' Social Impact
A key piece of information that investors expect from companies is whether they conduct meaningful and comprehensive due diligence about their ESG risks. Due diligence and impact assessment enable digital platforms to understand the full social impact of their targeted advertising business models, algorithmic systems, and all other processes and activities. Results of impact assessments inform how their policies, systems, and processes need to change in order to mitigate—and ideally prevent—harms to individuals and societies that are either caused or exacerbated by their business operations.
The UN Guiding Principles on Business and Human Rights set forth specific elements of due diligence that should guide companies' approach to addressing and mitigating human rights risks. Specifically, Operational Principle 16 states that companies should start by adopting formal policies publicly expressing their commitment to international human rights principles and standards.42 A credible public commitment, published in official company documents (not just stated in ad hoc interviews or speeches by executives), confirms that the company, from the board of directors down, is aware of how its actions may affect users and their communities, and is committed to address, minimize, and ideally prevent any negative impacts. A public statement on its own may not amount to much, of course, unless there is verifiable evidence that it has been implemented. But it is the first step toward responsible practice and accountability: It provides a hook for employees, responsible investors, civil society advocates, and others to urge the company's management to implement policies and practices in line with their commitments.
Facebook, Google, and Twitter make official public commitments to users' freedom of expression and privacy, particularly in the face of efforts by governments around the world to censor content or obtain access to user data and communications.43 The charter of Facebook's new Oversight Board extends a general human rights commitment to content moderation in relation to its own private rules.44 Yet as of May 2020, Facebook and Twitter have made no explicit, formal commitment to protect human rights as they develop and use algorithmic systems.45 While Google does commit to avoid developing "technologies whose purpose contravenes widely accepted principles of international law and human rights," it is not clear whether the company's commitment extends to the human rights implications of algorithmic systems already developed and deployed on YouTube and other platforms whose purpose is to moderate, amplify, shape, and target content.46
Once a company makes a commitment to respect human rights across all of its business activities, its decision makers can only implement that commitment effectively if they actually understand whose human rights might be affected, and how. Due diligence processes enable the company to gain such an understanding. As the UN Guiding Principles' Operational Principle 17 stipulates: "The process should include assessing actual and potential human rights impacts, integrating and acting upon the findings, tracking responses, and communicating how impacts are addressed."47
Responsible companies that handle user information routinely carry out due diligence and impact assessment to identify security flaws and privacy weaknesses. Security audits are commonplace across the tech sector and legally mandated in many jurisdictions. Privacy impact assessments (PIAs) are now required by law in the EU and a growing number of countries.48 In the United States, Facebook, Google, and Twitter have been compelled by FTC consent orders to conduct PIAs in the wake of lapses that provoked complaints by privacy advocates, though as we have discussed in earlier parts of this report, such assessments are by no means adequate to address the negative social impact of the platforms' exploitation of user information. Members of the GNI (which include Google and Facebook) carry out regular due diligence to identify and mitigate threats to users' freedom of expression and privacy caused by censorship and surveillance demands that the platforms receive from governments all over the world.49 But those assessments do not seek evidence that companies carry out due diligence on human rights threats caused by company actions that are unrelated to government demands or requirements.
Given companies' lack of transparency around targeted advertisements and their significant financial incentives to deploy the technology quickly, digital platforms' human rights commitments lack full credibility if they fail to conduct regular and systematic due diligence on their algorithmic systems and targeted advertising business models. Due diligence itself is only credible when accompanied by evidence that companies have a process to revise policies or implement changes based on their findings.
Responding to longstanding demands from civil rights advocates, in 2018, Facebook initiated an audit of the platform's impact on marginalized groups and people of color in the United States. The two published updates so far detail the company's shortcomings with respect to voting rights50 and to hate speech, discriminatory ad targeting, the 2020 Census, and civil rights oversight within the company.51 The process, led by respected civil rights advocate Laura Murphy, also provided updates on Facebook's efforts and recommended additional changes. Still, civil rights groups have pointed out that the mere existence of the audit process falls short of what is needed to ensure that Facebook's platform is not used to undermine civil rights by facilitating discriminatory targeting and the spread of harmful misinformation that disproportionately harms marginalized groups.52
The RDR Index evaluates whether companies conduct regular, comprehensive, and credible due diligence, such as human rights impact assessments, to identify how all aspects of their business affect freedom of expression and privacy and to mitigate any risks posed by those impacts. This includes due diligence pertaining to governments and regulations, to the company's own policy enforcement processes, and to targeted advertising and algorithmic decision-making systems.
As of May 2020, none of the three social media giants—Google, Facebook, and Twitter—that are the focus of this report series disclosed any evidence that they conduct systematic human rights due diligence or impact assessments around their use of algorithmic systems. Nor did they disclose any impact assessment process to understand how their targeted advertising policies and practices affect human rights, or the social impact of their terms of service governing content and related enforcement mechanisms.
Only one U.S. company evaluated by the RDR Index—Microsoft—discloses that it conducts impact assessments on its development and use of algorithmic systems.53 Microsoft and Verizon Media are the only U.S. companies in the RDR Index that disclose impact assessments on their terms of service enforcement.54 As we have highlighted throughout this report, the uneven enforcement of platforms' content policies contributes to human rights harms, both when dangerous content is left up and when protected speech is erroneously removed. Companies should consider how the effectiveness and accuracy of their enforcement processes affect human rights, alongside the impact of the rules themselves.
Among the companies covered by the RDR Index, the best examples of strong impact assessment around algorithmic systems and artificial intelligence come from European telecommunications companies. As investors increasingly factor ESG considerations into their decision-making, this raises concerns about U.S. companies' competitiveness in global financial markets. Telefónica, for example, which operates in multiple European markets and across Latin America, clearly discloses that it assesses the freedom of expression, privacy, and discrimination risks associated with these systems, and that it conducts additional evaluations whenever these assessments identify concerns.55
Comprehensive due diligence and risk assessment by European companies prepares them well for future regulatory requirements. In April 2020, a group of 105 international investors representing $5 trillion in assets under management, coordinated by the Investor Alliance for Human Rights, called on governments to require companies to conduct ongoing risk assessment and human rights due diligence. They argued that such a step is not only a moral imperative but also important for improving public trust in both business and government.56
In 2017, a new "duty of vigilance" law went into effect for French multinationals, making strong human rights oversight and risk assessment mandatory.57 Since then, political momentum has been building across the EU—with direct support even from some multinationals seeking an even regulatory playing field.58 With due diligence laws at varying stages of discussion or consideration in 13 European countries, in April 2020, EU Commissioner for Justice Didier Reynders committed to move forward with mandatory environmental and human rights due diligence legislation for EU companies in 2021.59
The U.S. Congress has taken one small step: In July 2019, the House Financial Services Committee held a hearing to discuss a draft bill that would require public companies to assess and report annually on their human rights risk exposure. The bill has not been formally introduced.60 Risks to the rights of users of digital platforms, including privacy, information, and non-discrimination, were not mentioned in the discussion, nor have they been raised in debates over potential EU legislation. Closing this gap would give policymakers an important tool for holding digital platforms accountable for all of their human rights impacts, including the social impact of algorithmic systems and targeted advertising business models.
Citations
- Trillium Asset Management. 2019. "Facebook, Inc. – Independent Board Chairman (2019)." (May 16, 2020). Key supporters listed at:
- Wolverton, Troy. 2018. "At Facebook's Annual Meeting, Mark Zuckerberg Stuck to His Talking Points – and Ignored Some of Shareholders' Biggest Concerns." Business Insider. (May 16, 2020).
- Fitzgerald, Meghan. 2019. "Why 68% of Facebook Investors Voted to Oust Zuckerberg as Chairman." Yahoo! Finance. (May 16, 2020).
- Mayer, Jane. 2018. "How Russia Helped Swing the Election for Trump." The New Yorker. (May 17, 2020).
- Stewart, Emily. 2019. "Facebook Will Never Strip Away Mark Zuckerberg's Power." Vox. (May 16, 2020).
- Alphabet. 2019. Notice of 2019 Annual Meeting of Stockholders and Proxy Statement. "Proposal Number 16: Stockholder Proposal Regarding a Report on Content Governance." U.S. Securities and Exchange Commission.
- Chasan, Emily. 2019. "Global Sustainable Investments Rise 34 Percent to $30.7 Trillion." Bloomberg.com. (May 16, 2020).
- Callan Institute. 2019. 2019 ESG Survey.
- Sustainable Investments Institute. 2020. Proxy Preview 2020 Shows Jump in ESG Shareholder Proposals as SEC Prepares to Restrict Shareholder Rights. Press release. (May 16, 2020).
- MacKinnon, Rebecca, Melissa Brown, and Jasmine Arooni. 2020. Digital Rights 2020 Outlook: Market Realities and Regulation Are Raising the Bar. Washington, D.C.: New America.
- Mooney, Attracta. 2020. "Coronavirus Forces Investor Rethink on Social Issues." Financial Times. (May 16, 2020).
- SEC Commissioner Robert J. Jackson, Jr. 2018. "Perpetual Dual-Class Stock: The Case Against Corporate Royalty." Presented at San Francisco, CA. (May 16, 2020).
- Setty, Ganesh. 2019. "Shareholders Would Have Tougher Time Submitting Resolutions under SEC's Proposed Rule." CNBC. (May 16, 2020).
- World Economic Forum. 2020. Embracing the New Age of Materiality: Harnessing the Pace of Change in ESG. White paper. www3.weforum.org/docs/WEF_Embracing_the_New_Age_of_Materiality_2020.pdf
- Williams, Cynthia A. et al. 2018. Letter to the Secretary of the Securities and Exchange Commission Brent J. Fields. U.S. Securities and Exchange Commission.
- Clancy, Heather. 2019. "Investor Interest Fuels SASB Adoption, Inspires New GRI Tax Disclosure Standard." GreenBiz. (May 16, 2020).
- European Commission. "Non-Financial Reporting." European Commission. (May 16, 2020).
- Rozen, Miriam. 2020. "Ethical Investors Want More Proof of Good Deeds." Financial Times. (May 16, 2020).
- Sustainability Accounting Standards Board. "Standards Overview." (May 16, 2020). Sustainability Accounting Standards Board. 2018. Internet Media & Services Sustainability Accounting Standard. Industry standard.
- Waters, Greg. 2019. "SASB to Research Content Moderation on Internet Platforms." Sustainability Accounting Standards Board. (May 16, 2020).
- Sustainability Accounting Standards Board. "Companies Reporting with SASB Standards." (May 16, 2020).
- Ashwell, Ben. 2019. "More than 100 Companies Using SASB Standards." IR Magazine. Mirchandani, Bhakti. 2019. "Finally a Way to Assure Sustainability and Impact! Vornado, Etsy, and LeapFrog Lead the Charge." Forbes. (May 16, 2020).
- Flood, Chris. 2020. "Record Sums Deployed into Sustainable Investment Funds." Financial Times. (May 16, 2020).
- Fink, Larry. 2020. "A Fundamental Reshaping of Finance." BlackRock. (May 16, 2020).
- Telef贸nica. 2019. Summary Consolidated Management Report 2019.
- Telefónica. "Responsible Business in Telefónica." (May 16, 2020).
- Ranking Digital Rights. 2020. The RDR Corporate Accountability Index: Transparency and Accountability Standards for Targeted Advertising and Algorithmic Systems – Pilot Study and Lessons Learned. Washington, D.C.: New America. www.rankingdigitalrights.org/pilot-report-2020
- MacKinnon, Rebecca. 2019. "Investors Urge Companies to Use the RDR Index to Improve Their Respect for Users' Digital Rights." Ranking Digital Rights. (May 16, 2020).
- Ranking Digital Rights. 2020. 2020 Ranking Digital Rights Corporate Accountability Index Draft Indicators. Washington, D.C.: New America.
- Edelman, Gilad. 2020. "Why Don't We Just Ban Targeted Advertising?" Wired. (May 16, 2020). Lomas, Natasha. 2019. "Twitter's Political Ads Ban Is a Distraction from the Real Problem with Platforms." TechCrunch. (May 16, 2020).
- Facebook. 2020. Facebook Reports Fourth Quarter and Full Year 2019 Results. Press release. (May 16, 2020).
- Twitter. 2020. Fiscal Year 2019 Annual Report. Press release.
- Alphabet. 2020. Alphabet Announces Fourth Quarter and Fiscal Year 2019 Results. Press release.
- Google. n.d. "How Ads Are Targeted to Your Site." AdSense Help. (May 16, 2020).
- Google. n.d. "Targeting for Video Campaigns." YouTube Help. (May 16, 2020).
- Facebook. n.d. "Help Your Ads Find the People Who Will Love Your Business." Facebook for Business. (May 16, 2020).
- Apple. 2019. Apple Reports Fourth Quarter Results. Press release. (May 16, 2020). Microsoft. Earnings Release FY20 Q1. Press release. (May 16, 2020).
- Ovide, Shira. 2019. "Bing's Not the Laughingstock of Technology Anymore." Bloomberg. (May 7, 2020).
- Ranking Digital Rights. 2019. Corporate Accountability Index. Indicator P9: Collection of user information from third parties. Washington, D.C.: New America.
- Mickle, Tripp, and Georgia Wells. 2018. "Apple Looks to Expand Advertising Business with New Network for Apps." Wall Street Journal. (May 7, 2020).
- Slefo, George P. 2019. "Apple's Ad Business Borrows a Page from Facebook." (May 7, 2020). Monllos, Kristina. 2019. "'Reclaim a Seat at the Table': Microsoft Is Diversifying Its Advertising Business." Digiday. (May 7, 2020).
- OHCHR. 2011. Guiding Principles on Business and Human Rights: Implementing the United Nations "Protect, Respect and Remedy" Framework. Geneva: United Nations Office of the High Commissioner on Human Rights.
- Ranking Digital Rights. 2019. Corporate Accountability Index. Indicator G1: Policy commitment. Washington, D.C.: New America.
- Facebook. 2019. Oversight Board Charter.
- Ranking Digital Rights. 2020. The RDR Corporate Accountability Index: Transparency and Accountability Standards for Targeted Advertising and Algorithmic Systems – Pilot Study and Lessons Learned. Washington, D.C.: New America. www.rankingdigitalrights.org/pilot-report-2020
- Ranking Digital Rights. 2020. The RDR Corporate Accountability Index: Transparency and Accountability Standards for Targeted Advertising and Algorithmic Systems – Pilot Study and Lessons Learned. Washington, D.C.: New America. www.rankingdigitalrights.org/pilot-report-2020
- OHCHR. 2011. Guiding Principles on Business and Human Rights: Implementing the United Nations "Protect, Respect and Remedy" Framework. Geneva: United Nations Office of the High Commissioner on Human Rights.
- GDPR.EU. 2018. "Data Protection Impact Assessment (DPIA)." GDPR.eu. (May 16, 2020).
- Global Network Initiative. 2020. The GNI Principles at Work: Public Report on the Third Cycle of Independent Assessments of GNI Company Members 2018/2019.
- Sandberg, Sheryl. 2018. "An Update on Our Civil Rights Audit." Facebook Newsroom. (May 11, 2020).
- Sandberg, Sheryl. 2019. "A Second Update on Our Civil Rights Audit." Facebook Newsroom. (May 11, 2020).
- Scurato, Carmen. 2019. Facebook vs. Hate: An Analysis of Facebook's Work to Disrupt Online Hate and the Path to Fully Protect Users. Free Press.
- Ranking Digital Rights. 2020. The RDR Corporate Accountability Index: Transparency and Accountability Standards for Targeted Advertising and Algorithmic Systems – Pilot Study and Lessons Learned. Washington, D.C.: New America. www.rankingdigitalrights.org/pilot-report-2020
- See Governance section, Ranking Digital Rights. 2019. Corporate Accountability Index. Washington, D.C.: New America.
- Telefónica. 2018. Integrated Management Report 2018.
- Investor Alliance for Human Rights. 2019. The Investor Case for Mandatory Human Rights Due Diligence.
- Altschuller, Sarah A., and Amy Lehr. 2017. "The French Duty of Vigilance Law: What You Need to Know." Corporate Social Responsibility and the Law. (May 16, 2020).
- Business & Human Rights Resource Centre. 2019. "National Movements for Mandatory Human Rights Due Diligence in European Countries." (May 16, 2020). Business & Human Rights Resource Centre. 2020. "Landmark Report on 1,000 European Companies Shows the Need for Human Rights Due Diligence Laws." (May 16, 2020). Business & Human Rights Resource Centre. "MEPs & Companies Call for EU-Level Human Rights Due Diligence Legislation." (May 16, 2020).
- Hautala, Heidi. 2020. "Commission Announcement of Due Diligence Legislation." (May 16, 2020). European Coalition for Corporate Justice. 2020. Model Legislation on Corporate Responsibility to Respect Human Rights and the Environment. Legal brief. (May 16, 2020).
- Kublin, Craig. 2019. "US House Financial Services Committee Holds Landmark Hearing on ESG Reporting." Environment, Land & Resources. (May 16, 2020). Zaidi, Ali. 2019. "INSIGHT: Pending Federal ESG Legislation Could Yield Significant and Step-Wise Change." Bloomberg Law. (May 16, 2020). Corporate Human Rights Risk Assessment, Prevention, and Mitigation Act. 2019. (U.S. House of Representatives).
Without Civil Society, Platform Accountability is a Pipe Dream
One of this report's aims has been to suggest how Congress can most constructively and effectively hold digital platforms accountable for the social impact of their targeted advertising business models and algorithmic systems. Passing laws and boosting the power of regulatory agencies are essential steps for strengthening the governance and accountability of digital platforms. But even if law could keep up with technological change, responsive regulation is not possible without the work of a robust and independent civil society.
Key civil society actors include independent social science researchers and investigative journalists, watchdog and grassroots organizations, shareholders and investor alliances, and even allies and employee activists within the companies themselves. Researchers and journalists document and investigate harms while creating an evidence base that all stakeholders can access and use to push companies to respond and improve their policies and practices. Neither this report series, nor RDR's work more generally, would be possible without the work of these people and organizations, many of whom are cited in our footnotes. The evidence and data that they produce help advocacy and grassroots organizations, policymakers, and investors not only identify what needs to change but also gain the credibility they need to drive the policy formulation, pressure campaigns, and responsible investing standards necessary to change it.
In the absence of effective regulation, independent civil society has an especially vital role to play in developing and deploying accountability mechanisms that fill key gaps where law and regulation have failed to hold companies responsible for their impact on society.
RDR is one of many such efforts, working closely with a network of researchers and civil society advocates and relying heavily on the work of investigative journalists, many of whom have been cited in this report. The RDR Index methodology, used as the framework for ranking global digital platforms and telecommunications companies on their respect for users' human rights, was informed by two decades of academic research and investigative journalism, as well as by documentation from human rights advocates around the world. This work, produced collectively by a broad range of civil society actors over time, has clarified exactly how company policies, processes, and practices can affect users' human rights. Once companies are evaluated by the annual RDR Index, civil society advocates use it to call for specific changes by companies and governments. Shareholders use the data to inform their engagement with companies as well as formal shareholder resolutions. This report itself is an example of how the RDR Index and related research can be used by policymakers to guide legal and regulatory reforms.
For social media platforms serious about respecting users' rights and civil liberties and protecting democratic discourse, complying with laws and engaging with policymakers is not enough. The only way to fully understand how their business operations affect society and users' rights is to consult with a wide range of stakeholders, whether by responding to open letters, participating in dialogue, or joining more formal multi-stakeholder processes. For this reason, the RDR Index methodology evaluates companies on whether they engage with a range of stakeholders, particularly those who face human rights risks in connection with their online activities. Where regulatory gaps exist, or where law does not help companies understand their risks and demonstrate to stakeholders that those risks are being adequately addressed, companies should participate in processes and mechanisms through which stakeholders can hold them accountable.
At a minimum, RDR researchers look for evidence that a company initiates or participates in meetings with stakeholders that represent, advocate on behalf of, or are people whose freedom of expression and privacy are directly affected by the company's business. Companies that score well in the RDR Index do more than merely engage with responsible investors, civil society groups, and academic experts about their social impacts and human rights risks. The best performing companies stretch their stakeholder engagement beyond dialogue to accountability by participating in multi-stakeholder initiatives, such as the Global Network Initiative (GNI). Google and Facebook (but not Twitter) are members of GNI, which brings companies together with NGOs, investors, and academics to address the human rights risks related to government censorship and surveillance around the world.1 As members, they commit to respect and protect users' freedom of expression and privacy in the face of government demands to remove content or hand over user data. As company members, they are also subject to independent assessments, overseen by a multistakeholder board, to determine whether they are satisfactorily upholding their commitments.2
Because Google and Facebook disclose evidence, verified by independent GNI assessors, that they conduct due diligence on how government demands for user data and content removal affect users' human rights, they score well in the governance category of the RDR Index compared to many other companies (including Twitter), despite a serious lack of similar due diligence or disclosures related to algorithmic systems and targeted advertising.3 GNI does not address those aspects of the companies' business, however.4
Companies have yet to work with civil society to build multi-stakeholder accountability mechanisms that would address the social and human rights impact of their business operations not related to government censorship and surveillance demands. Companies do engage in multi-stakeholder fora about content moderation and algorithmic systems, but without independent assessments or related accountability mechanisms. Google and Facebook are both members of the Partnership on AI (PAI), where they work with academics and NGOs to address thorny questions related to the ethics of artificial intelligence.5 While the PAI board includes non-company members, the organization has no mechanisms for accountability; companies are not subject to assessment and they are not held to a clear set of standards, including a commitment to human rights principles, as a condition of membership.
Facebook, Google, and Twitter also engage with a coalition of experts and NGOs who advocate for the Santa Clara Principles on Transparency and Accountability in Content Moderation. All three have endorsed the principles.6 But a recent evaluation by New America's Open Technology Institute found that all three fall far short of actually implementing them, disclosing inadequate information about how humans and algorithms moderate content on their platforms.7 Companies also reach out to experts and NGOs for advice and input on policies, products, and programs. For example, Twitter has created a Trust and Safety Council comprising a long list of experts and organizations ranging from the Anti-Defamation League to the Center for Democracy and Technology to the National Center for Missing and Exploited Children.8 Such engagement, however, takes place at the companies' invitation and on their terms.
To date, the most talked-about attempt at stakeholder engagement has been the launch of the Facebook Oversight Board. Funded and staffed by an independent trust, the Oversight Board brings together an international group of experts who are tasked with adjudicating Facebook's most difficult and controversial content moderation cases.9 Given the extraordinary complexity and sensitivity of the issues it will consider and the platform's global reach, Facebook's willingness to open up its content moderation processes and decisions to world-class external expertise deserves praise. Still, the Oversight Board oversees only decisions related to content removal. It does not have a say in the company's business decisions, not least Facebook's targeted advertising business model.10 While decisions about individual content cases will be considered binding, Zuckerberg is under no obligation to heed broader concerns its members may raise about targeted advertising and algorithmic systems. Thus the launch of the Oversight Board does nothing to change the need to hold the company accountable through strong privacy law, anti-trust enforcement, due diligence and transparency requirements, and corporate governance reforms. Rather, it serves as another reminder that effective corporate accountability mechanisms must involve multiple actors applying pressure at various points and, most important, cannot be self-designed or self-enforced.
Holding companies accountable to the public interest is the responsibility of lawmakers, in consultation with civil society. In the next section, we outline our recommendations for concrete actions Congress should take to address the current corporate accountability gap.
Citations
- Global Network Initiative. 2020. 鈥淕lobal Network Initiative.鈥 Global Network Initiative. (May 16, 2020).
- Global Network Initiative. 2020. The GNI Principles at Work: Public Report on the Third Cycle of Independent Assessments of GNI Company Members 2018/2019. .
- See Governance section, Ranking Digital Rights. 2019. Corporate Accountability Index. Washington, DC: 国产视频.
- The scope of GNI鈥檚 work does not include commercial privacy issues that are not directly related to government surveillance actions or user data demands. Nor does GNI address content moderation carried out to enforce companies鈥 own rules if the restrictions are not made in response to government demands or direct requirements. Any expansion of GNI鈥檚 scope would need the support of its board of directors. At present, GNI鈥檚 mandate does not cover the human rights risks caused by commercial business practices and mechanisms including content moderation, algorithmic systems for content recommendation and prioritization, commercial data collection and sharing, or human rights concerns related to targeted advertising business models.
- "The Partnership on AI." (May 16, 2020).
- Crocker, Andrew et al. 2019. Who Has Your Back? Censorship Edition 2019. Electronic Frontier Foundation. (May 16, 2020).
- Singh, Spandana. 2019. Assessing YouTube, Facebook and Twitter's Content Takedown Policies. Washington, DC: 国产视频. (May 16, 2020).
- Twitter. "Twitter Safety Partners." (May 16, 2020).
- Hern, Alex. 2020. "Facebook Judges, Journalists and Politicians on Free Speech Panel." The Guardian. (May 16, 2020). Botero-Marino, Catalina, Jamal Greene, Michael W. McConnell, and Helle Thorning-Schmidt. 2020. "Opinion | We Are a New Board Overseeing Facebook. Here's What We'll Decide." The New York Times. (May 16, 2020).
- Wong, Julia Carrie. 2020. "Will Facebook's New Oversight Board Be a Radical Shift or a Reputational Shield?" The Guardian. (May 16, 2020).
Key Recommendations for Policymakers
The recommendations below call on the U.S. Congress to take legislative action to pass a federal privacy law, update advertising regulations, mandate corporate disclosure and due diligence requirements, and institute governance reform. They build on research conducted for the RDR Corporate Accountability Index, as well as our experience working with advocacy groups and investors seeking to hold social media platforms accountable for their social impact. They are in addition to the recommendations for corporate transparency around content shaping and moderation published at the end of the first report in this two-part series.1
Enact federal privacy law that protects people from the harmful impact of targeted advertising. A comprehensive federal privacy law should encompass much more than the following recommendations, which focus on necessary rules to limit the reach of misinformation and dangerous content by limiting the power and precision of content-shaping and ad-targeting algorithms.
1. Designate an existing federal agency, or create a new agency, to enforce privacy and transparency requirements applicable to digital platforms. The agency should be given the necessary authority and funding to accomplish its mission.
2. Enact strong data-minimization and purpose-limitation provisions. Users should not be able to opt in to discriminatory advertising or to the collection of data that would enable it.
3. Give users clear control over the collection and sharing of user information, including inferred information, that is not otherwise prohibited and is not necessary to deliver and operate the service.
- Companies should disclose to users and to the relevant regulatory agency what user information they collect, share, and infer; for what purpose; and how long it is retained. This information should be independently audited. "User information" is any data that is connected to an identifiable person, or that may be connected to such a person by combining datasets or applying algorithmic data-processing techniques.
- Companies should not collect user information from third parties, or share user information with third parties, unless the companies in question have a vendor/contractor relationship and the sharing of this user information is disclosed and directly relevant and necessary for the purpose of delivering a service to the user. The "purpose of the service" does not include targeted advertising unless the service's primary purpose is in fact clearly described as such by the company to the general public in its marketing and public communications.
- Companies should allow users to obtain all of their user information (collected and inferred) that the company holds, in a structured data format.
- Companies should delete all user information within a reasonable timeframe after users terminate their account or at the user's request. This should be independently audited.
4. Restrict how companies are able to target users.
- Companies should not enable advertisers to use their services to target specific individuals by using personally identifying information.
- Companies should not enable advertisers to target users on the basis of any audience category or profile attribute without the user's active consent.
- Any ad targeting should only take place on the basis of information that is voluntarily disclosed by the user directly within the platform itself, otherwise known as first-party data. (For example, users might specify the language(s) in which they prefer to see ads, their broad geographic region, which sports teams they support, or self-select into broad audience categories, including by subscribing to an advertiser's updates.)
- Companies should ensure that the combination of the advertising content and the targeting category does not amount to discrimination on the basis of one or more protected classes recognized under civil rights law.
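To make the logic of these targeting restrictions concrete, the following is a minimal, purely illustrative sketch of a policy check that permits only consented, first-party, non-identifying attributes. Every function and attribute name here is a hypothetical assumption of ours, not anything the platforms or any proposed bill actually specify.

```python
# Hypothetical enforcement sketch: an ad request may target a user only on
# attributes the user volunteered on the platform (first-party data) and
# actively consented to have used for advertising.
FIRST_PARTY_ATTRS = {"language", "region", "favorite_team"}  # user-declared
PII_ATTRS = {"email", "phone", "device_id"}                  # never allowed

def targeting_allowed(requested: set, user_consented: set) -> bool:
    """Return True only if every requested attribute is first-party,
    non-identifying, and covered by the user's active opt-in consent."""
    if requested & PII_ATTRS:               # no targeting of specific individuals
        return False
    if not requested <= FIRST_PARTY_ATTRS:  # only self-declared attributes
        return False
    return requested <= user_consented      # active consent per attribute

# Example: a user who consented only to language- and region-based ads.
consented = {"language", "region"}
assert targeting_allowed({"language"}, consented)
assert not targeting_allowed({"email"}, consented)          # PII: blocked
assert not targeting_allowed({"favorite_team"}, consented)  # no consent
```

The point of the sketch is that each of the rules above maps to a simple, auditable condition, which is what would make compliance verifiable by a regulator.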
Require that platforms maintain a public ad database to ensure compliance with all privacy and civil rights laws when engaging in ad targeting. Transparency is a prerequisite for accountability. Congress should pass the Honest Ads Act, expand its scope to cover all types of online advertisements (thereby mandating a universal database of advertisements), and enable regulators and researchers to audit that database.
1. Pass the Honest Ads Act: Platforms should be required to maintain a "public file" database with detailed information about the political ads they serve, similar to existing requirements for broadcast media, including a digital copy of the advertisement, a description of the audience the advertisement targets, the number of views generated, the dates and times of publication, the rates charged, and the contact information of the purchaser.
2. Expand the online advertising database to include all types of ads: Platforms should be required to expand the database that would be required by the Honest Ads Act to all online ads.
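As an illustration only, the "public file" fields enumerated above could be represented by a record like the following sketch. Every field name and the example values are hypothetical choices of ours, not language from the Honest Ads Act or any platform's actual schema.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class PublicAdRecord:
    """Hypothetical schema for one entry in a public ad database,
    mirroring the disclosure fields listed above."""
    ad_copy_url: str          # digital copy of the advertisement
    target_description: str   # description of the targeted audience
    views: int                # number of views generated
    first_shown: date         # dates of publication
    last_shown: date
    rate_charged_usd: float   # rates charged to the purchaser
    purchaser_contact: str    # contact information of the purchaser
    is_political: bool = False  # Honest Ads Act covers political ads; the
                                # expanded database would cover all ads

# A regulator or researcher could then filter the database, for example:
records = [
    PublicAdRecord("https://example.org/ad1.png", "voters in Ohio, age 18+",
                   120000, date(2020, 9, 1), date(2020, 9, 15),
                   5400.0, "ads@example-pac.org", is_political=True),
]
political = [r for r in records if r.is_political]
```

A structured, machine-readable format of this kind is what would let auditors query ads at scale rather than inspect them one by one.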
Require relevant disclosure and due diligence. Companies should be required to disclose information that demonstrates they are tracking the social impact of their targeted advertising and algorithmic systems, taking necessary steps to mitigate risk and prevent social harm.
1. Require targeted advertising revenue disclosure: All companies offering digital services and advertising should be required to disclose the percentage of revenue generated by targeted advertising.
2. Require ESG disclosures: Companies should be required to disclose non-financial information about their environmental, social, and governance (ESG) impacts, including information about the social impact of targeted advertising and algorithmic systems. Reporting should be formal, systematic, and comparable. (See Part 1 for more detailed disclosure recommendations related to algorithmic systems.)
3. Require due diligence: Companies should be required to conduct assessments of their social impact and risks, including human rights risks associated with targeted advertising and algorithmic systems.
Strengthen corporate governance and oversight. In line with the ESG and due diligence disclosures recommended above, Congress should require that Securities and Exchange Commission (SEC) rules empower shareholders to hold company leadership accountable for social impact.
1. Require companies to phase out dual-class share structures: Once a reasonable but not excessive period of time has passed after a company's IPO, dual-class shares should be phased out.
2. Do not make it harder to file shareholder resolutions: The SEC should scrap proposed rule changes that would make it more difficult for shareholders to file proposals and to get them on proxy ballots.
The best way for companies to prepare for future regulation, and more importantly, to demonstrate maximum respect for users' human rights, is to align their policies, practices, and disclosures with the indicators outlined in the RDR Corporate Accountability Index methodology.2
Citations
- See the recommendations in Maréchal, Nathalie, and Ellery Roberts Biddle. 2020. It's Not Just the Content, It's the Business Model: Democracy's Online Speech Challenge – A Report from Ranking Digital Rights. Washington, DC: 国产视频.
- The 2020 RDR Index methodology has been revised and the version we will use to evaluate companies in 2020 will be published in June. For the consultation draft of the new indicators see: . The indicators used for the 2019 RDR Index can be found at:
Conclusion
With a global pandemic unfolding in the run-up to a U.S. presidential election, it is now undeniable that strengthening platform accountability is critical to the future of democracy.
Despite intensifying efforts to remove potentially deadly misinformation and other dangerous speech, social media companies are still failing to effectively moderate content鈥攑aid and user-generated alike鈥攊n ways that are consistent with their human rights obligations and the protection of civil liberties. At the same time, policymakers have proposed holding these companies liable for their users' speech rather than for their targeted advertising business models, the fundamental source of the problem.
The threats have been identified. Now the question is: How are social media platforms preparing to counter them?
In the first part of this two-part series, we warned against recent proposals to revoke or dramatically revise Section 230 of the 1996 Communications Decency Act (CDA), which protects companies from liability for content posted by users on their platforms. Such a step would be counterproductive. Policymakers should instead focus on reining in social media companies' targeted advertising business model and the algorithmic systems that drive it. These upstream processes, which require the collection and sharing of vast amounts of user data to target users without their clear knowledge and consent, are the driving force in the spread and discriminatory targeting of downstream disinformation, hate speech, and other content that can endanger both public health and democratic discourse.
At the end of the first report, we made specific recommendations for corporate transparency about social media platforms' advertising content rules and ad-targeting systems, as well as their rules and processes for governing user-generated content. We called for greater corporate transparency about what happens under the hood, to enable an informed public debate about whether to regulate the algorithms themselves, and if so, how.1 If the companies will not make such disclosures voluntarily, Congress should mandate them.
In this second report, we have described how companies have failed to stop the downstream torrent of COVID-19 misinformation despite throwing extra resources at the problem and strengthening cooperation with fact-checkers and independent security researchers. Infodemics will continue to plague society, and may well get worse, unless and until companies change the upstream systems that play a major role in driving them. Had companies been willing to make these changes voluntarily, we would already have seen stricter and more responsible data use policies and practices, greater transparency and due diligence, and much stronger corporate oversight.
Strong privacy law is urgently needed to curb the negative social impact of targeted advertising business models. Regulation of political advertising must be upgraded for the era of online campaigning. Social media companies should be required to show evidence that they conduct tangible and credible due diligence around their social impacts and risks, and to disclose the key categories of information that external stakeholders need in order to evaluate how those risks are being addressed. Corporate governance and oversight requirements must be strengthened, and shareholders must be empowered to hold companies accountable for their social impact. In turn, shareholders need to be sufficiently informed and engaged to understand the risks.
Time is running out for Congress to act before social media's next big test: the November 2020 general election. Beyond the potential harms to democratic discourse, misinformation can also undermine the democratic process itself. As states shift to widespread use of absentee ballots in an effort to protect voters during COVID-19, experts warn of disinformation about how and when to vote.2 They also warn that reports of any mistakes or incompetence by local election officials will be taken out of context, conflated and combined with misinformation, and used in organized attempts by bad actors to destroy the legitimacy of the presidential election result, potentially throwing the country into political uncertainty and conflict.3
Commitments by the companies to prioritize the removal of election-related disinformation and misinformation, similar to the commitments they have made to address the COVID-19 infodemic, would certainly be a start. But such steps will be insufficient, for all the reasons this report series has documented, unless more fundamental changes are made to the platforms' algorithmic systems and targeted advertising mechanisms.
What if Facebook, Google, and Twitter committed to limit targeted advertising to the same demographic categories, such as geographic area, to which print and broadcast advertisers have access?
What if they stopped allowing any advertisers to target individuals until after the November election?
What if they all agreed to stop all targeted advertising for three months prior to the election, offering contextual advertising only?
Such measures would dramatically reduce the flow and impact of election-related disinformation and misinformation on social media.
"What if Facebook, Google, and Twitter stopped allowing any advertisers to target individuals until after the November election?"
If social media companies will not commit to a full moratorium on targeted advertising for a few months, they should nonetheless commit to take strong action that prioritizes the health of democracy over their 2020 financial returns. Facebook, Google, and Twitter should commission rapid impact assessments, in collaboration with independent experts on elections and social media, to identify the greatest threats posed by disinformation and misinformation to the 2020 presidential election. The assessments should then recommend concrete changes to their targeted advertising, data collection and use policies, and algorithmic systems that can be made in time to mitigate those threats.4 The results and recommendations should be made public, with enough time to implement a remedy.
The companies should then announce a plan for how they will change specific policies, advertising features, mechanisms, and privacy defaults in order to minimize the amplification, targeting, and overall impact of disinformation, misinformation, and other dangerous content in these final critical months leading up to the election.
While that may not solve the problem for the long term, it would be an important start to a broader national discussion about how social media platforms can best protect users' rights and how our elected representatives and regulators can set effective rules for these companies to help safeguard our democracy.
Citations
- See the conclusion in Maréchal, Nathalie, and Ellery Roberts Biddle. 2020. It's Not Just the Content, It's the Business Model: Democracy's Online Speech Challenge – A Report from Ranking Digital Rights. Washington, DC: 国产视频.
- Feldman, Max. 2020. Dirty Tricks: Eight Falsehoods That Could Undermine the 2020 Election. Brennan Center for Justice. (May 16, 2020).
- Hasen, Richard L. 2020. "What Happens in November If One Side Doesn't Accept the Election Results?" Slate. (May 16, 2020).
- For an example of a rapid due diligence tool, see Allison-Hope, Dunstan, and Jenny Vaughan. 2020. "COVID-19: A Rapid Human Rights Due Diligence Tool for Companies." BSR. (May 16, 2020).