
In Short

The Automated Administrative State


This is part of The Ethical Machine: Big ideas for designing fairer AI and algorithms, an ongoing series about AI and ethics, curated by Dipayan Ghosh, a former Public Interest Technology fellow. You can see the full series on the .

DANIELLE CITRON
Morton & Sophia Macht Professor of Law at the University of Maryland Francis King Carey School of Law

RYAN CALO
Lane Powell & D. Wayne Gittinger Endowed Professorship and associate professor of law, University of Washington

The administrative state has undergone radical change in recent decades. In the twentieth century, agencies in the United States generally relied on computers to assist human decision-makers. In the twenty-first century, computers are making agency decisions themselves. Automated systems are increasingly taking human beings out of the loop. Computers terminate Medicaid to cancer patients and deny food stamps to individuals. They identify parents believed to owe child support and initiate collection proceedings against them. Computers purge voters from the rolls and deem small businesses ineligible for federal contracts [1].

Automated systems built in the early 2000s eroded procedural safeguards at the heart of the administrative state. When government makes important decisions that affect our lives, liberty, and property, it owes us "due process," understood as notice of, and a chance to object to, those decisions. Automated systems, however, frustrate these guarantees. Some systems, like the "no-fly" list, were designed and deployed in secret; others lacked record-keeping audit trails, making review of the law and facts supporting a system's decisions impossible. Because programmers working at private contractors lacked training in the law, they introduced errors when translating it into code [2].


Some of us in the academy raised these concerns as early as the 1990s, offering an array of mechanisms to ensure the accountability and transparency of the automated administrative state [3]. Yet the same pathologies continue to plague government decision-making systems today. In some cases, these pathologies have deepened and extended. Agencies lean upon algorithms that turn our personal data into predictions, professing to reflect who we are and what we will do. The algorithms themselves increasingly rely upon techniques, such as deep learning, that are even less amenable to scrutiny than purely statistical models. Ideals of what the administrative law theorist Jerry Mashaw has called "bureaucratic justice," in the form of efficiency with a "human face," feel impossibly distant [4].

The trend toward more prevalent and less transparent automation in agency decision-making is deeply concerning. For a start, we have yet to address in any meaningful way the widening gap between the commitments of due process and the realities of contemporary agencies [5]. Nonetheless, agencies rush to automate (surely due to the influence and illusory promises of companies seeking lucrative contracts), trusting algorithms to tell us if criminals should receive probation, if public school teachers should be fired, or if severely disabled individuals should receive less than the maximum of state-funded nursing care [6]. Child welfare agencies conduct intrusive home inspections because some system, which no party to the interaction understands, has rated a poor mother as having a propensity for violence. The challenge of preserving due process in light of algorithmic decision-making is an area of renewed and active attention within academia, civil society, and even the courts [7].


Second, and routinely overlooked, we are applying the new affordances of artificial intelligence in precisely the wrong contexts [8]. Any sufficiently transformative technology is double-edged. On the one hand, it raises legal concerns about its potential to undermine law's formal promises, including due process. On the other, though far less appreciated, transformative technologies invite us to take inventory of the law's aspirational goals that are not being met.

For instance, we are using machine learning to displace the important role of a human arbiter, while leaving on the table some of its greatest advantages. Is it not remarkable that litigants who do not speak English must wait on agencies to find translators when multiple companies offer free real-time translation apps? [9] Could the same algorithms that purport to predict citizen behavior be used to organize an administrative law judge's docket more efficiently? Could they be used to allocate funding for childcare to relieve burdens borne by poor mothers and thus prevent stress that might precipitate child abuse? Could they identify training needed for teachers to shore up their expertise rather than leading to their firing?

Third, in practice, the list of legal values and commitments in jeopardy only expands. Recent critiques surface the extent to which algorithms reinforce inequality. The gains and benefits of artificial intelligence seem unevenly distributed [10]. Uber will benefit from fleets of vehicles; its drivers may not. Law and legal theory need further development to ensure that the vulnerable are equally able to pursue life's crucial opportunities: to work, parent, attend school, and far more.

The history and present of the administrative state鈥檚 addiction to automation suggest a need to make fresh choices regarding the future. We are excited to be a part of the expanding academic, civil society, and industry community dedicated to preserving important legal, ethical, and dignitary safeguards while harnessing the affordances of new technology to promote human flourishing.

Endnotes

  1. Danielle Keats Citron, "Big Data Should Be Regulated by Technological Due Process," The New York Times, July 26, 2016, https://www.nytimes.com/roomfordebate/2014/08/06/is-big-data-spreading-inequality/big-data-should-be-regulated-by-technological-due-process.
  2. Danielle Keats Citron, "Technological Due Process," Washington University Law Review 85, no. 6 (2008), https://openscholarship.wustl.edu/law_lawreview/vol85/iss6/2/.
  3. Ibid.; Danielle Keats Citron, "Open Code Governance," University of Chicago Legal Forum 355 (2008), available at https://digitalcommons.law.umaryland.edu/fac_pubs/511/; Paul M. Schwartz, "Data Processing and Government Administration: The Failure of the American Legal Response to the Computer," Hastings Law Journal 43 (1991): 1322, https://scholarship.law.berkeley.edu/cgi/viewcontent.cgi?article=1987&context=facpubs.
  4. Jerry L. Mashaw, Bureaucratic Justice (New Haven: Yale University Press, 1983), cited in a related context by Paul M. Schwartz, "Data Processing and Government Administration."
  5. Colin Lecher, "What Happens When an Algorithm Cuts Your Medicare," The Verge, March 21, 2018, https://www.theverge.com/2018/3/21/17144260/healthcare-medicaid-algorithm-arkansas-cerebral-palsy.
  6. Virginia Eubanks, Automating Inequality (New York: St. Martin's Press, 2017).
  7. Cathy O'Neil, Weapons of Math Destruction (New York: Broadway Books, 2016); Frank Pasquale, The Black Box Society (Cambridge: Harvard University Press, 2014); Joshua Kroll et al., "Accountable Algorithms," University of Pennsylvania Law Review 165, no. 3 (2017), https://scholarship.law.upenn.edu/penn_law_review/vol165/iss3/3/.
  8. A noted exception is Andrew Ferguson's new book, The Rise of Big Data Policing, which calls for the use of Big Data for pro-social and non-punitive ends.
  9. Justice Cuéllar of the California Supreme Court makes a similar point on the promise of AI-aided translation. See Mariano-Florentino Cuéllar, "A Simpler World? On Pruning Risks and Harvesting Fruits in an Orchard of Whispering Algorithms," UC Davis Law Review 51 (2017): 27. Of course, there are dangers in AI translation, especially in high-stakes contexts. AI translation can reproduce bias, and there have been high-profile mistakes.
  10. Kate Crawford and Ryan Calo, "There Is a Blind Spot in AI Research," Nature 538, no. 7625 (October 20, 2016): 311–318, https://www.nature.com/news/there-is-a-blind-spot-in-ai-research-1.20805.
