
Introduction

Tay's first words were "hellooooooo world!!!" It was a friendly start for the Twitter bot designed by Microsoft to engage with people aged 18 to 24. But, in a mere 12 hours, Tay went from friendly Twitter persona to foul-mouthed, racist Holocaust denier who said feminists "should all die and burn in hell" and that the actor "ricky gervais learned totalitarianism from adolf hitler, the inventor of atheism."

Tay, which Microsoft quickly shut down after just 24 hours, was programmed to learn from the behaviors of other Twitter users, and in that regard, was a success. The bot's embrace of humanity's worst attributes is an example of algorithmic bias—when seemingly innocuous programming takes on the prejudices either of its creators or the data it is fed.

[Image: Tay tweet]

The side effects of unintentionally discriminatory algorithms can be dramatic and harmful. Companies and government institutions that use data need to pay attention to the unconscious and institutional biases that seep into their results. It doesn't take active prejudice to produce skewed results in web searches, data-driven home loan decisions, or facial recognition software. It just takes distorted data that no one notices and corrects for.
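The point above can be made concrete with a minimal sketch. The data below is entirely hypothetical: a "model" that simply learns historical approval rates from past loan decisions will faithfully reproduce whatever bias those decisions contain, without any active prejudice in the code itself.

```python
# Hypothetical, skewed historical loan decisions: (group, approved).
historical_loans = [
    ("north", True), ("north", True), ("north", True), ("north", False),
    ("south", False), ("south", False), ("south", False), ("south", True),
]

def train(records):
    """Learn per-group approval rates from past decisions."""
    totals, approvals = {}, {}
    for group, approved in records:
        totals[group] = totals.get(group, 0) + 1
        approvals[group] = approvals.get(group, 0) + int(approved)
    return {g: approvals[g] / totals[g] for g in totals}

def predict(rates, group):
    """Approve an applicant when the learned historical rate exceeds 50%."""
    return rates[group] > 0.5

rates = train(historical_loans)
print(rates)                    # {'north': 0.75, 'south': 0.25}
print(predict(rates, "north"))  # True  -- the historical skew is
print(predict(rates, "south"))  # False -- reproduced, not corrected
```

Nothing in this code mentions prejudice; the disparity comes entirely from the distorted training data, which is exactly why such bias is easy to miss.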

As we begin to create artificial intelligence (AI), we risk inserting racism and other prejudices into the code that will make decisions for years to come.

At Slant: Understanding Algorithmic Bias, in San Francisco, New America CA brought together a curated group of experts in AI, bias, technology, and future thinking to outline the state of AI and ethics from four different perspectives. Our goals were to understand what specific actions companies are taking to address bias in AI and machine learning, and what help they can use from civil society. We're bringing you our most actionable insights.

To learn more about these recommendations and this work at New America, please contact Megan Garcia (garcia@newamerica.org).

Big ideas to make everyone better at heading off bias in algorithms and machine learning

  • A better understanding of how algorithms and AI work would go a long way toward tackling many of the ethics problems we see with both.
  • Inside companies and civil society there is an appetite to make algorithms and AI more ethical, but no consensus about how to do that. Efforts are also disconnected, and we are not always learning from each other.
  • There will have to be a change in the way companies think about bias to ensure that bias in algorithms and AI is minimized. Eventually there will have to be an inclusion mindset (or some company-wide or industry-wide focus on ethics and inclusion) that forces individuals and processes to think about bias and to correct for biases as they emerge.
  • Fields with regulated or unregulated codes of ethics offer lessons that the technology sector might apply.
  • There is a dire need for organizations outside of the technology sector to provide ideas about how to address algorithmic bias. Some organizations have made modest progress in this area by training people to use data ethically, but much remains to be done.
  • Who to talk to for more: New America and New America CA, and the Fairness, Accountability and Transparency in Machine Learning (FAT/ML) community.

What's Your Role?

Select your role from the ones below and see what you can do to address bias in AI and machine learning.
