

Artificial Intelligence Owes You an Explanation

Photo: Flickr Creative Commons / NDB Photos

My family has grown very attached to our Amazon Echo, particularly for music. We can access Prime Music by asking Alexa for an artist, song, or station. Even my young kids can navigate the verbal interface to request “Can’t Fight the Feeling” or a favorite movie soundtrack.

As part of the smart speaker’s artificial intelligence, the program picks up on our tastes and preferences, so when we simply say “Alexa, play,” the device will queue up suggested tracks. In theory, what it picks should have some obvious relationship to music we chose ourselves. And the songs it selects usually do. Usually.

But recently, Alexa considered our diet of kids’ music, show tunes, the Beatles, the Rat Pack, and Pink Martini, and decided to cue up … Sir Mix-a-Lot.

After we stopped laughing, I wanted desperately to ask, “Alexa, why?” What was Alexa thinking when it determined that we needed to listen to a one-hit-wonder hip-hop artist from the 1990s?

Sadly, Alexa currently isn’t built to answer such pressing questions about its judgment. Scores of current and potential autonomous devices and A.I.-powered programs, including personal A.I. assistants like Alexa, self-driving cars, chatbots, and smart appliances that learn our preferences, provide little to no transparency for decisions made with no direct human control or input. Whether through user error, poor design, profit-driven manufacturer choices, or any number of other factors, technology can make suspect decisions. Remember the reports of Google Home confidently reciting false answers? And as A.I. programs and autonomous devices continue to expand into decisions with far more serious consequences, the stakes become much higher than an out-of-place rap tribute to the backside.

Enter the right to an explanation, a movement to combat the broad drift toward a “black box society”: a culture that largely accepts we have no way to understand how technology makes many basic decisions for us, such as when self-driving cars choose particular routes home or autonomous shopping assistants generate our grocery lists. As you can probably guess, the right to an explanation would require autonomous devices and programs to tell consumers how the A.I. reached a decision: Why did you play that song? Why did you get off the highway?

This emerging right is another form of algorithmic transparency, which seeks to ensure that the algorithms consumers interact with do not enable discrimination, exert hidden political pressure, or engage in other unfair or illegal business practices. Many of those efforts focus on public policy. For example, an office within the Federal Trade Commission conducts independent studies and provides training and technical expertise on algorithmic transparency to FTC consumer protection investigators and attorneys. The right to an explanation, by contrast, is focused on providing consumers with personalized, easy-to-understand algorithmic transparency.

One of the most prominent moves in the direction of the right to an explanation comes from the European Union. In 2016, the European Parliament and the Council of the European Union adopted the General Data Protection Regulation, a new data protection regime that promises to usher in major changes to how companies handle the personal data they gather about EU-based consumers. It’s a dense read: you have to get through 173 nonbinding preambular paragraphs before you even reach the regulation itself. But once you do, you’ll find several new rules directly responding to the question of how artificial intelligence technologies, like Amazon’s Alexa, should be allowed to access and use personal data. Among the most noteworthy: when companies collect personal data about their consumers, they are required to inform individuals whether “automated decision-making, including profiling” is involved in processing that data and to provide them with “meaningful information about the logic involved” in that processing.

In other words, come the May 2018 deadline when the regulation kicks in, if you’re in the EU, A.I. owes you an explanation every time it uses your personal data to choose a particular recommendation or action. Forcing the A.I. to explain its decisions, advocates say, could provide an important check on unintentional or unsavory algorithmic bias. It would put consumers in a better position to evaluate and potentially correct those decisions, and it would deter companies, fearful of embarrassment or legal action, from allowing inappropriate bias into A.I. decisions. It would also give consumers the opportunity to see how their personal data is used to generate results.

Sounds great, right? But it’s not yet clear exactly what would happen after I ask my Echo, “Alexa, why did you just play ‘Baby Got Back’?” The General Data Protection Regulation does not specify any format or content requirements, leaving experts in the field to make educated guesses and recommendations. For example, Bryce Goodman from the Oxford Internet Institute and Seth Flaxman from the University of Oxford argue that “any adequate explanation would, at a minimum, provide an account of how input features relate to predictions, allowing one to answer questions such as: Is the model more or less likely to recommend a loan if the applicant is a minority? Which features play the largest role in prediction?”
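To make that suggestion concrete, here is a minimal sketch of what a feature-level explanation might look like for a simple linear scoring model. Everything here, the feature names, weights, and listening counts, is invented for illustration; real recommendation systems are far more complex and far less transparent.

```python
# Hypothetical sketch: explaining a linear model's prediction by
# showing how much each input feature contributed to the score.
# All feature names and weights are invented for illustration.

def explain_prediction(weights, features):
    """Return each feature's contribution (weight * value), largest first."""
    contributions = {name: weights[name] * value
                     for name, value in features.items()}
    return sorted(contributions.items(), key=lambda kv: -abs(kv[1]))

# Invented model: how strongly to recommend a hip-hop track.
weights = {"plays_of_rap": 0.9, "plays_of_showtunes": -0.4, "plays_of_oldies": -0.2}
listener = {"plays_of_rap": 1, "plays_of_showtunes": 30, "plays_of_oldies": 12}

for feature, contribution in explain_prediction(weights, listener):
    print(f"{feature}: {contribution:+.1f}")
```

Even for an opaque model, the same idea carries over: report which inputs pushed the decision hardest, in plain terms a consumer can check against reality.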

Of course, the usefulness of any such disclosure depends on the ability of the A.I. to appropriately analyze personal data and give the consumer choices based on it. Some people think that cannot be done the way businesses treat data now. John Havens, executive director of an initiative on A.I. ethics, argues that consumers should be able to review their personal data and how A.I. relies on it, functions that are anathema to how businesses buy and sell data today. He also believes you should be able to give guidance to the A.I., correcting mistaken data and telling it which pieces of personal data matter more than others.

So with a self-driving car, the right to an explanation might look something like this: Upon purchase, the car would ask for basic user information (home location, age, sex, etc.) and offer the user a menu of options to prioritize: speed, scenery, preference for certain types of driving (urban, highway), avoiding traffic, and so on. Based on those responses, the car would have a better idea of which personal data to rely on when making decisions and could use the preset priorities to answer questions like, “Why did you get off the highway an exit early?” Over the lifetime of the car, the user could review the personal data it collects and adjust the preset priorities.
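As a toy sketch of how preset priorities could drive such an answer, consider the following. The priority weights and route options are hypothetical, invented purely to illustrate the idea; actual routing logic is far more involved.

```python
# Hypothetical sketch of a preference-driven route explanation.
# Priorities and route scores are invented for illustration.

PRIORITIES = {"avoid_traffic": 3, "speed": 2, "scenery": 1}  # user's preset ranking

routes = [
    {"name": "stay on highway", "avoid_traffic": 0.2, "speed": 0.9, "scenery": 0.3},
    {"name": "exit one stop early", "avoid_traffic": 0.9, "speed": 0.6, "scenery": 0.5},
]

def score(route):
    """Weight each route attribute by the user's preset priority."""
    return sum(PRIORITIES[p] * route[p] for p in PRIORITIES)

def explain_choice(routes):
    """Pick the best route and name the priority that drove the choice."""
    best = max(routes, key=score)
    top = max(PRIORITIES, key=lambda p: PRIORITIES[p] * best[p])
    return f"Chose '{best['name']}' mainly because you prioritize {top}."

print(explain_choice(routes))
```

The point is not the scoring math but the traceability: because the decision flows from priorities the user set, the explanation can be phrased in the user’s own terms.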

In this way, the right to an explanation also functions as a data protection tool much more powerful than the rules currently on the books in the United States, such as laws that require individuals to consent before a third party can use or disclose their data, or that require businesses to notify users when their data has been breached. Both ideas affirmatively grant each person greater control over his or her data. But the right to an explanation lets you see how the information you’re handing over is used in context, and it can grant you greater control over what you choose to input and how it’s processed.

The right to an explanation is not without tradeoffs. Thomas Burri, an assistant professor of international and European law at the University of St. Gallen, told me via Skype that although he believes there should be something like these required disclosures, he’s concerned that the requirements could go too far, hindering development and infringing on the rights of developers.

“If the first thing you need to consider when designing a new program is the explanation, does that stifle innovation and development?” he said. “Some decisions are hard to explain, even with full access to the algorithm. Trade secrets and intellectual property could be at stake.” The fear is that broad language creating the right to an explanation could discourage companies, entrepreneurs, and developers from fully exploring the possibilities of A.I. because they don’t want to reveal sensitive information or navigate the complexities of complying with ambiguous requirements.

In the U.S., such broad language and potentially onerous requirements make it unlikely that Congress or state legislatures will adopt a similar right to an explanation anytime soon. But that doesn’t mean we American users shouldn’t lobby for it.

As A.I. and autonomous devices become more adept at interpreting our personal data, and as our reliance on those devices grows, it becomes more important for every user to know what that personal data is and how it’s being used. Policymakers can work with experts in the field to draft balanced regulations that give companies enough room to innovate while giving consumers the kind of meaningful control that ensures companies can’t use our personal data in objectionable, dangerous, or discriminatory ways.

And it would also help explain how Sir Mix-a-Lot was cued up for my family. Amazon, I understand Alexa doesn’t operate under the right to an explanation, but call me if you have one.

This is part of Future Tense, a collaboration among Arizona State University, New America, and Slate. Future Tense explores the ways emerging technologies affect society, policy, and culture.

John Frank Weaver