Lilian Coral
Vice President, Technology & Democracy, New America
The spotlight on AI-manipulated media just got a whole lot bigger. While an AI-generated robocall imitating President Biden discouraged voters from turning out across New Hampshire, sexually explicit deepfakes of Taylor Swift circulated on X, formerly known as Twitter. These events not only highlight why it's urgent for Congress to tackle AI's use in disinformation; they also reveal creative and effective ways to combat the proliferation of deepfakes in the meantime.
AI's potential impact on our access to trustworthy information, and the way it amplifies the reproduction of non-consensual sexual images at unprecedented scale, are well documented. Even before doctored videos of Nancy Pelosi emerged in 2019, there was already concern about the wide availability of tools that could manipulate the media of public figures. As far back as 2013, scholars like Safiya Noble showed how women of color specifically have been exploited through algorithms that reinforce sexualized notions and images in search results. This sexual exploitation can now be bolstered by generative AI, and the technology is only getting better.
The key to countering the disinformation that spread this past week was that, when the public, and in particular the well-known Swifties, was confronted with this reality, Taylor Swift's supporters sprang into action. They did so in ways that can teach us a lot about digital empowerment; the importance of the “crowd” in surfacing, combatting, and eliminating this content; and the power of mass mobilization in pushing tech companies and lawmakers to act.
Perhaps it's the humans, not the machines, that will save us from a future in which misinformation or disinformation victimizes us and erodes all sense of trust and safety.
Much of last year's conversation on AI centered around warnings about the technology's societal risks. What ensued was a year of convenings by global leaders on how best to address these risks and, in theory, regulate the development of AI to prevent them. In the end, a mix of regulatory and voluntary efforts has left us in limbo.
Both the EU, through its AI Act, and the United States, through an executive order on AI, have taken some measured steps to begin regulating and harnessing the power of AI. Tech companies have agreed to voluntary safety commitments through efforts from the Biden-Harris administration. But the truth is that none of these things could prevent what happened this week to Joe Biden and Taylor Swift. And so, where do we go from here? What can we take forward from all of this?
While the internet took note of the Joe Biden deepfake, it was abuzz with the creation of digitally faked sexually explicit images of Taylor Swift. Undoubtedly, the nature of the content drove much of the attention, but the pop star's powerful fanbase, the Swifties, also helped. Known for their devotion, high engagement levels, and creativity, Swifties went on the offensive to counter the images. Swift's career shows that her knowledge of her fans, personal interactions, and ongoing digital engagement through coded messages, or Easter eggs, have enabled her to cultivate and grow her online following. This form of digital community building shaped the response to the fake pornographic images of her that emerged.
The internet, and AI with it, is driven by the crowd. And this crowd did what was right for this situation. In addition to finding and identifying the source of the content, Swifties rallied to flood X with #ProtectTaylorSwift tweets that essentially drowned the images out. At the same time, this raised the visibility needed to force companies like X and Microsoft to take action. X then took the step of blocking searches for Swift's name in order to stem the spread of the images.
The reality is that this type of concerted action can't be expected for every instance of non-consensual sexual images. And we can't expect that X, or other sites, will cut off searches as swiftly in the future. It's also unfortunate that it took the likes of Taylor Swift being victimized in this way to elevate the issue and even force the White House to make a statement on the matter. Lawmakers will also need to act to curb the use of AI in creating and disseminating non-consensual sexual images. This wouldn't be the first time that mass mobilization has pushed lawmakers to act. But in the meantime, we can develop strategies and tools to do as the Swifties do: find the source, drown out the content, and raise visibility to drive action from large social media platforms that can't or won't take other steps to curb this content voluntarily.
This moment shows that more can be done, and that perhaps it's the humans, not the machines, that will save us from a future in which misinformation or disinformation victimizes us and erodes all sense of trust and safety. We must not only broaden engagement around AI; we must also invest in technology solutions that strengthen crowd-sourced identification of, and quicker responses to, unsolicited sexual content. Ultimately, this story is a reminder that, even with the imperfect online tools available to us, mass mobilization that protects people, pressures companies, and shores up trust and safety is still possible. And we need to see more of it in service of all, especially the less powerful among us.