
Ethics Alone Can’t Fix Big Tech


This article is part of Future Tense, a collaboration among Arizona State University, New America, and Slate.

The New York Times has confirmed what some have long suspected: The Chinese government is using a “vast, secret system” of artificial intelligence and facial recognition technology to identify and track Uighurs, a Muslim minority, 1 million of whom are being held in detention camps in China’s northwest Xinjiang province. This technology allows the government to extend its control of the Uighur population across the country.

It may seem difficult to imagine a similar scenario in the U.S., but related technologies, built by Amazon, are already being used by U.S. law enforcement agencies to identify suspects in photos and video. And echoes of China’s system can be heard in plans to deploy these technologies at the U.S. border.

A.I. systems also decide what information we see on social media and set prices for goods and services. They screen financial transactions for fraud, filter job applications, and assess creditworthiness. A.I.-driven recommendations help determine how doctors treat patients and how judges make bail and sentencing decisions.

As our lives intertwine with A.I., researchers, policymakers, and activists are trying to figure out how to ensure that these systems reflect and respect important human values, like privacy, autonomy, and fairness. Such questions are at the heart of what is often called “A.I. ethics” (or sometimes “data ethics” or “tech ethics”). Experts have been discussing these issues for years, but recently, following high-profile scandals such as Cambridge Analytica and controversies over facial recognition, they have burst into the public sphere. The European Commission released draft “Ethics Guidelines for Trustworthy AI.” Technology companies are rushing to prove their ethics bona fides: Google announced “AI Principles” to guide internal research and development, Salesforce hired a “chief ethical and humane use officer,” and Google rolled out (and then, facing intense criticism, disbanded) an ethics advisory board. In academia, computer and information science departments are starting to require that their majors take ethics courses, and research centers like Stanford’s new Institute for Human-Centered Artificial Intelligence and public-private initiatives like the Partnership on AI are sprouting up to coordinate and fund research into the social and ethical implications of emerging A.I. technologies.

Experts have been trying to draw attention to these issues for a long time, so it’s good to see the message begin to resonate. But many experts also worry that these efforts are largely designed to fail. Lists of “ethical principles” are intentionally too vague to be effective. Ethics education is offered as a substitute for meaningful reform. Company ethics boards offer “advice” rather than meaningful oversight. The result is “ethics washing”, or worse, “ethics theater”: a veneer of concern for the greater good, engineered to pacify critics and divert public attention away from what’s really going on inside the A.I. sausage factories.

As someone working in A.I. ethics, I share these worries. And I agree with many of the suggestions others have put forward for how to address them. Kate Crawford, co-founder of NYU’s AI Now Institute, argues that the fundamental problem with these approaches is their reliance on corporate self-policing, and suggests moving toward external oversight instead. University of Washington professor Anna Lauren Hoffmann agrees but notes that there are plenty of people inside the big tech companies working to pressure their employers to build technology for good. She argues we ought to work to empower them. Others have pointed to the importance of transparency and diversity in ethics-related initiatives, and to the promise of more participatory approaches to technology design.

At a deeper level, these issues highlight problems with the way we’ve been thinking about how to create technology for good. Desperate for anything to rein in otherwise indiscriminate technological development, we have ignored the different roles our theoretical and practical tools are designed to play. With no coherent strategy for coordinating them, none succeed.

Consider ethics. In discussions about emerging technologies, there is a tendency to treat ethics as though it offers the tools to answer all values questions. I suspect this is largely ethicists’ own fault: Historically, philosophy (the larger discipline of which ethics is a part) has mostly neglected technology as an object of investigation, leaving that work for others to do. (Which is not to say there aren’t brilliant philosophers working on these issues; there are. But they are a minority.) The result, as researchers from Delft University of Technology and Leiden University in the Netherlands have found, is that the vast majority of scholarly work addressing issues related to technology ethics is being conducted by academics trained and working in other fields.

This makes it easy to forget that ethics is a specific area of inquiry with a specific purview. And like every other discipline, it offers tools designed to address specific problems. To create a world in which A.I. helps people flourish (rather than just generate profit), we need to understand what flourishing requires, how A.I. can help and hinder it, and what responsibilities individuals and institutions have for creating technologies that improve our lives. These are the kinds of questions ethics is designed to address, and critically important work in A.I. ethics has begun to shed light on them.

At the same time, we also need to understand why attempts at building “good technologies” have failed in the past, what incentives drive individuals and organizations not to build them even when they know they should, and what kinds of collective action can change those dynamics. To answer these questions, we need more than ethics. We need history, sociology, psychology, political science, economics, law, and the lessons of political activism. In other words, to tackle the vast and complex problems emerging technologies are creating, we need to integrate research and teaching around technology with all of the humanities and social sciences.

Moreover, in failing to recognize the proper scope of ethical theory, we lose our grasp of ethical practice. It should come as no surprise that ethics alone hasn’t transformed technology for the good. Ethicists will be the first to tell you that knowing the difference between good and bad is rarely enough, in itself, to incline us to the former. (We learn this whenever we teach ethics courses.) Acting ethically is hard. We face constant countervailing pressures, and there is always the risk we’ll get it wrong. Unless we acknowledge that, we leave room for the tech industry to turn ethics into “ethics theater”: the vague checklists and principles, powerless ethics officers, and toothless advisory boards, designed to save face, avoid change, and evade liability.

Ethics requires more than rote compliance. And it’s important to remember that industry can reduce any strategy to theater. Simply turning to law and policy won’t solve these problems, since they are equally (if not more) susceptible to theater. Many are rightly excited about proposals for state and federal privacy legislation, and for laws constraining facial recognition, but we’re already seeing industry lobbying to strip them of their most meaningful provisions. More importantly, law and policy evolve too slowly to keep up with the latest challenges technology throws at us, as is evident from the fact that most existing federal privacy legislation was written decades ago.

The way forward is to see these strategies as complementary, each offering distinctive and necessary tools for steering new and emerging technologies toward shared ends. The task is fitting them together.

By its very nature ethics is idealistic. The purpose of ethical reflection is to understand how we ought to live: which principles should drive us and which rules should constrain us. However, it is more or less indifferent to the vagaries of market forces and political winds. To oversimplify: Ethics can provide blueprints for good tech, but it can’t implement them. In contrast, law and policy are creatures of the here and now. They aim to shape the future, but they are subject to the brute realities (social, political, economic, historical) from which they emerge. What they lack in idealism, though, is made up for in effectiveness. Unlike ethics, law and policy are backed by the coercive force of the state.

Taken together, this means we need new laws to place hard constraints on how A.I. is used and policy to drive more flexible external oversight. Ethics research should be a lodestar for these efforts, articulating clear goals to strive for and rigorous standards against which to judge our progress. Simultaneously, ethics education should work from the inside, guiding technologists as they imagine future tools and bring them into the world.

And what of ethics boards? The purpose of ethics boards (as well as chief ethics officers, internal “AI principles,” and so on) should be to raise awareness and drive self-criticism. They don’t need power; that’s the law’s instrument. What they need is respect and influence. So far they’ve lacked that, but they can earn it if their own organizations follow their advice, and if they’re staffed with people whose independence and expertise command respect. If that happens, ethics boards can be more than moral cover. They can serve as a conscience for the tech industry, steering it toward the good (or at least, away from evil) from within.

About the Author

Daniel Susser