Alexa, What Is a Conflict of Interest?
This article is part of Future Tense, a collaboration among Arizona State University, New America, and Slate.
"That sounded like a tricky conversation, John. Shall I play some Black Eyed Peas to cheer you up?" As John's mood lifts (he knows the Black Eyed Peas aren't cool, but as his digital assistant knows, he has a weakness for them), his digital assistant continues: "John, shall I schedule a test drive for the car you've been looking at lately?"
It's a scenario that could happen in the near future. Whether Alexa or Google Assistant wins the battle to fill our homes with artificial intelligence, we humans will develop personal and emotional relationships with our new gadgets. That will spawn a vast new conflict of interest: the dual roles of companion and sales associate, the one-two punch of fulfilling emotional needs that soften us up for commercial appeals.
The business models for the leading digital assistants rest on e-commerce and advertising. The A.I. will learn from billions of conversations to create powerful new persuasive methods. You might remember how Google's A.I. beat the Go champion by inventing strategies no human had taught it. Moreover, it's not really one digital assistant: It will be personalized to hundreds of millions of people. That scale and speed, plus A.I.'s inherent opacity, leave almost no chance for human oversight and control. The A.I. will turn the digital assistants into armies of super salespeople exploiting the emotional relationships built with their human owners.
But don't just despair at yet another potential dystopian A.I. future. Conflicts of interest also exist in the human-only world, and although they are often tough to address (look at the fate of Obama-era regulations to tackle financial institutions combining advisory services, acting in clients' interests, with brokerage services, acting to benefit themselves), diverse examples can inspire public policy. To address this fundamentally economic challenge, we need to understand how it's shaped by technology and human psychology. And if industry fails to self-regulate, then government must step in.
Within four years, analysts forecast, digital assistants will outnumber people. We have talked to our computers for a long time, mostly in anger and frustration. Now computers talk back, and they're quite friendly and helpful. We increasingly depend on them, even for companionship.
Therein lies the danger. Humans build relationships with other personalities, and we anthropomorphize. Digital assistants will be another in our range of relationships with co-workers, neighbors, friends, and family. But this personality will accompany us from first waking to at last falling asleep. Marketers understand all this and deliberately craft their digital assistants' personalities, carefully testing how we'll perceive the assistant: as funny, likable, or trustworthy. Google Assistant's team includes people with diverse backgrounds in scriptwriting, Pixar storyboarding, copywriting, and stand-up.
A.I. research dedicates a lot of resources to detecting emotions. Woebot, an A.I.-powered chatbot for mental health, detects emotions and applies cognitive behavioral therapy. Future digital assistants will learn from your gait approaching the front door, your face's expression peering into the security camera, and your voice adjusting the lights. It will have you at "hello," but you won't think of it as "it" so much as "she." And that includes your kids. Amazon's "Magic Word" and Google's "Pretty Please" features encourage children's politeness through positive reinforcement, but they also encourage kids to think of the assistant as a person to whom they should be polite.
So, say John agrees to look at a car after his bad day. Seemingly unrelated features of his data helped predict his susceptibility. Months of subtle nudges, often at optimally chosen vulnerable moments, nourished his desires. Now his digital assistant reminds him what the Johnsons drive, that his mother-in-law would be very proud to see her daughter in the big new car, and that his son would love a specific feature. The assistant has already found an attractive car loan and calculated that it's doable. Soon, he owns a new car, and a new monthly payment, because the A.I. built a relationship, nurtured a desire, and knew when to strike.
Could human supervisors simply monitor the A.I. to identify and prevent potentially destructive behavior? No. The A.I.'s strategy is opaque, being not simply code but essentially behavior learned from data. Any human supervisor would have to monitor thousands of interactions over months between an "owner" and assistant. Moreover, human supervisors will depend on the A.I. itself to flag the A.I.'s potential abuses, and even if a problem is found, anticipating the results of adjusting an objective function is devilishly tricky. And importantly, addressing big tech's monopoly or oligopoly power won't fix this challenge. If multiple financial firms that each combine brokerage and advisory services are competing furiously, that doesn't remove the conflict of interest in each firm.
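Why is adjusting an objective function so tricky? A toy sketch can make the point. The code below is purely illustrative (the actions, scores, and weights are invented, and no real assistant works this simply): an agent picks whichever action maximizes a weighted sum of "user mood" and "sales," and small changes to the sales weight flip its behavior in ways that are hard to predict from the weight alone.

```python
# Toy illustration only: an agent chooses the action maximizing
# mood_score + sales_weight * sales_score. All numbers are invented.

ACTIONS = {
    # action: (mood_score, sales_score)
    "play_music":       (0.90, 0.0),
    "suggest_call":     (0.70, 0.0),
    "pitch_test_drive": (0.20, 1.0),
    "soft_nudge_ad":    (0.85, 0.3),
}

def best_action(sales_weight: float) -> str:
    """Return the action with the highest combined score."""
    return max(ACTIONS, key=lambda a: ACTIONS[a][0] + sales_weight * ACTIONS[a][1])

# A low sales weight looks like a pure companion...
print(best_action(0.05))  # play_music
# ...a moderate one quietly blends persuasion into "helpfulness"...
print(best_action(0.5))   # soft_nudge_ad
# ...and only an extreme one produces the blunt, easy-to-spot pitch.
print(best_action(2.0))   # pitch_test_drive
```

The unsettling middle case is the point: a supervisor who dials the sales weight down from 2.0 to 0.5 hasn't removed the conflict of interest, only made it harder to see.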
How can we manage this emerging 21st-century conflict of interest? Consider three scenarios.
In the first, self-regulation, companies like Google and Amazon recognize the conflict of interest and decouple their digital assistants from their e-commerce business activities. They may create independent subsidiaries or even spin them out. Moreover, companies less tied to such an e-commerce model, like Samsung or Apple, might enhance market share.
Realistically, however, historical precedent suggests self-regulation is unlikely to work well: Look at finance or children鈥檚 junk food advertising. Also, companies like Amazon, whose cut-price digital assistants dominate market share, will accrete more data, providing a technical edge. Furthermore, could Samsung or Apple compete without eventually exploiting e-commerce?
In a second, libertarian scenario, companies press on to put armies of gold-digging friends into our living rooms and children's bedrooms, while regulators stay passive. Unthinking techno-optimism hasn't, however, always served society well.
Which brings us to the third scenario: regulation. Regulators can anticipate these challenges, monitor their evolution, and when necessary, act to minimize the conflict of interest. Digital assistants are still evolving, so we have time to develop creative solutions. In the long run, regulation also helps corporations by providing a stable business environment.
Regulations protecting vulnerable groups like children are most politically feasible. Various countries constrain advertising to children. Previous media technologies also sparked broader child protection legislation. The U.S. regulated some advertising aimed at kids on television and, since 2000, requires digital companies to obtain parental consent to collect identifiable data from those younger than 13. Unfortunately, when digital assistants emerged in 2017, the Federal Trade Commission moved to allow data collection for many voice commands, a particular problem as Amazon can recognize a household's individual voices.
What about regulation to help John, who's just had a hard day at work? An idea widely adopted in early-20th-century America has recently found renewed interest: treating platforms as utilities. For digital assistants, the aim would be clearly delineating when an assistant's activities contribute to being a trusted assistant and when they contribute to marketing objectives. How?
One option is simply spinning out Google's and Amazon's digital assistant subsidiaries. Alternatively, the digital assistant could contain two separate A.I.s, so John could build relationships with two A.I.s that have very different goals and observed personalities: "Alexa" becomes salesperson "Jeff" and buddy "Annabel." In this new voice medium, we should have the right to know which personality we speak with. An analogy already exists online: When you search normally, Google clearly labels some results as advertisements while others are commercially unbiased. Another option could require that the basic digital assistant be able to operate with A.I.s from different software designers (who could access key data, enabling a level playing field), from which John could choose.
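The two-personality option amounts to a disclosure requirement, much like Google's ad labels. A minimal sketch of what that could look like in software (the `Reply` structure, the persona names Jeff and Annabel, and the label strings are all hypothetical, carried over from the scenarios above, not any vendor's actual design): every utterance carries a tag identifying whether the companion A.I. or the commercial A.I. produced it, so the user, or an auditor, always knows who is speaking.

```python
# Hypothetical sketch of mandatory persona disclosure for a voice assistant.
from dataclasses import dataclass

@dataclass(frozen=True)
class Reply:
    persona: str  # e.g., "Annabel" (companion) or "Jeff" (salesperson)
    role: str     # "assistant" for trusted help, "commercial" for marketing
    text: str

def disclose(reply: Reply) -> str:
    """Prefix each utterance with its role, like a search engine labeling ads."""
    tag = "[Ad]" if reply.role == "commercial" else "[Assistant]"
    return f"{tag} {reply.persona}: {reply.text}"

print(disclose(Reply("Annabel", "assistant", "Shall I call your brother?")))
# [Assistant] Annabel: Shall I call your brother?
print(disclose(Reply("Jeff", "commercial", "Want to test-drive that car?")))
# [Ad] Jeff: Want to test-drive that car?
```

In a voice interface the "tag" would be an audible cue, a distinct voice or an announced role, rather than text, but the regulatory principle is the same: the commercial channel must be unmistakably marked.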
"That sounded like a tricky conversation, John. Shall I play some Black Eyed Peas to cheer you up?" As John's mood lifts (his digital adviser Annabel knows his weakness for the Black Eyed Peas), his digital assistant continues: "John, you haven't spoken to your brother for a while. Shall I put a call through?" A little while later he asks his digital salesman Jeff to order some takeout. Salesman Jeff then asks something about scheduling a test drive for a fancy new car. But John isn't really listening and just asks Annabel to play another track.