New America

Podcast

Episode 12: How Governments Use AI

Examining what AI means for how we’re governed

Where is AI already shaping government decisions, and how might it be used in the future? Lilian Coral of New America and Amen Ra Mashariki of the Bezos Earth Fund explore the past, present, and future of how governments use AI.


Transcript

Lilian Coral: AI is not a human, and we live in a world in which, you know, you want to hold someone accountable for a decision. And so I think there’s still a lack of clarity about who is ultimately accountable for some of the decisions these systems are making.

Shannon Lynch: According to the Pew Research Center, 52 percent of Americans say they’re more concerned than excited about the growing role of AI in daily life. As our government begins to harness this technology, should we be worried? Or hopeful? Or maybe a little bit of both?

Welcome to Democracy Deciphered, the podcast that analyzes the past, present, and future of American democracy. I’m your host, Shannon Lynch. Today, I’m joined by Amen Ra Mashariki and Lilian Coral to explore how AI is shaping the way our government operates.

Lilian Coral is the Vice President of the Technology and Democracy Programs and Head of the Open Technology Institute at New America.

Before joining New America, Coral served as Director of National Strategy and Technology Innovation at the Knight Foundation, where she developed a citizen-centered smart city strategy and managed over $55 million in investments across more than 120 grantees. Previously, she was the chief data officer for Los Angeles Mayor Eric Garcetti. There, she led efforts to expand the city’s open data program to over 1,100 data sets, introduced data science into city operations, and developed digital services.

With nearly 20 years of experience, Coral has worked across sectors, including labor, NGOs, philanthropy, and all levels of government, to transform how public institutions use data and technology to serve communities.

She holds a bachelor’s degree in international studies from the University of California, Irvine, and a master’s degree in public policy from UCLA.

Also with us today is Amen Ra Mashariki. Mashariki is the Director of AI and Data Strategies at the Bezos Earth Fund, where he leads investments and grants that harness AI for impactful climate and environmental solutions.

Previously, he was a senior principal scientist at NVIDIA, advising nations on AI strategies and tech development. He also taught at NYU’s Center for Urban Science and Progress and was a fellow at Harvard Kennedy School’s Ash Center. Mashariki made history as New York City’s first Chief Analytics Officer, using data to improve public services under Mayor Bill de Blasio. Appointed a White House Fellow by President Barack Obama, he later served as Chief Technology Officer at the U.S. Office of Personnel Management.

He holds a PhD in engineering from Morgan State University, a master’s in computer science from Howard University, and a bachelor’s in computer science from Lincoln University.

Amen, Lilian, thank you so much for joining me.

Lilian Coral: Thanks for setting up this discussion.

Amen Ra Mashariki: Thank you for having us.

Shannon Lynch: So, to get us started, how would you explain artificial intelligence to someone with no technical background?

Amen Ra Mashariki: Without going into technical terms, the broad way I would describe artificial intelligence is that it uses data, compute, and algorithms to detect patterns in data, and then to predict things, identify things, or, essentially, simulate things. I think that’s the quick and dirty answer I would give.

The editorial answer is that AI is a capability that gives us the opportunity to innovate. What’s most impressive about AI is what you can do with AI. Think telescope. When telescopes were invented, we didn’t go around and have conferences and conversations about telescopes. We had conversations, and people wrote, about what you could do with them, what you could see, what you could experience. And what changed the world was our ability to look beyond Earth and understand things.

Lilian Coral: Yeah. And I like that. I like the fact that you’re emphasizing the opportunity to innovate, that it gives us the opportunity to innovate. I think the only thing I would add, for general public context, is that we’ve had artificial intelligence for a long time, right? This is a process that’s been innovating and evolving. And at the heart of it, it’s really trying to leverage all this compute, all this data to do tasks that we would normally do, to almost routinize those tasks. And then eventually it got to more sophisticated things: can the computer recognize faces? Can it understand language? Then it’s almost like, can it predict what Lilian is going to do based on all of the times that it’s been learning from me?

Now, what I think has really captured the public imagination is this generative AI component, which is that it is no longer just learning from me. It has essentially learned all about us, quote unquote, in theory, and now it is generating completely new things. So when your phone suggests the word you’re trying to type, when you start typing and it suggests, oh, I think she’s going to type “and,” that’s AI. But when ChatGPT is writing a summary of the meeting we just had, or is drafting an entire email from a prompt that I give it, that’s generative AI. And to Amen’s point about the opportunity, the capability, to really innovate, that’s where many of us can get excited by it. And obviously, there’s a whole ton of risk, which we’ll get to in the discussion. But that’s really, I think, the foundation of it.

Shannon Lynch: Yeah, thank you for that context. That’s really helpful. Going back a little bit, what is the origin of AI? Where did it begin, and how did it grow into what we know it as today?

Amen Ra Mashariki: For where modern AI started, you could talk about Geoffrey Hinton at the University of Toronto and his research on deep learning. He did deep research on deep learning and really crafted this concept of deep learning that we have today.

There were a lot of researchers thinking about deep learning and AI in different ways. One way to think about it, and I’m going to be aging myself here, is the Betamax and VHS days of competition. It was the same thing: different researchers were researching different approaches, and deep learning won out as the most efficient and most thoughtful way to go.

But it still didn’t really hit the mainstream until there was a thing called ImageNet, which was started by a researcher at Stanford, Fei-Fei Li. She and her team created ImageNet, which was basically this big database of images.

So now Geoffrey Hinton and his researchers came together at a conference. And this is where ImageNet and deep learning were brought together, and magic ensued. This is where the deep learning algorithm really connected with, remember I said compute, data, and algorithms, right? So I’m going to put a bow on this trifecta.

So you had the data, which is ImageNet. For the first time, you have this large data set of curated photos and videos. Then you have Geoffrey Hinton and his team building out these algorithms. Now, fast forward, this is where my former company, NVIDIA, came into play with the compute. NVIDIA figured out a way with GPUs to actually do the math that is required in deep learning.

So essentially, you had deep learning, which was decent, running on a lot of data and doing cool things, but then the GPUs came along, fully scaled that capability, and put it on skates. And this is what created and exploded the modern AI revolution.

And this is where we talk about the scaling laws. The scaling law says the more chips you get, the more data centers you can build; and the more data centers you can build, the more powerful your deep learning can be, running on more and more data and producing more and more powerful AI.

Lilian Coral: Yeah, and I think the one point I would make on this is that, as you tell the story of the evolution of AI, it was really higher ed and academic institutions that were at the heart of how artificial intelligence was developed. And I think that’s a really important point. In the current state of affairs, it’s really easy to get stuck on names like OpenAI or even NVIDIA, and obviously the computing power that came from the private sector is what helped, to Amen’s point, really scale this out. But the heart of so much AI development was really in our higher ed academic institutions, in work that was government funded, right? From DARPA to the National Science Foundation.

And so, as we think about the history of AI and where we are today, it’s so important to remember the critical role that higher ed institutions play, so that as we try to, quote unquote, win the race on AI, or whatever the next generation of this technology is going to be, we recognize that it will require continued investment and continued research that is very much government funded and very much within the heart of our higher ed institutions.

Shannon Lynch: I’m gonna switch the conversation up a little bit, because for the rest of the conversation I would love to focus mostly on government use of AI. How is the U.S. government currently using AI, whether it’s in public safety, fraud detection, elections, or something else?

Lilian Coral: Yeah, I mean, you know, obviously the government has access to tons of data, has always had access to tons of data. So what that looks like, in public safety for example, is police departments using predictive analytics to anticipate where crime actually happens. They’ll use tools like ShotSpotter, one of the famous ones, which captures sounds, recognizes the sound of a shot, and lets you deploy resources more quickly, but obviously there have been lots of questions about whether this raises equity or bias concerns. And so I think the public safety use of AI has been more at the forefront and well known.

But there are other ways. You mentioned fraud detection: the IRS and social services, where in theory we’re using AI to flag potentially fraudulent claims or errors. Or on the immigration and border control front, we’ve been using biometric data, facial recognition, and scoring systems to manage travelers, asylum, and immigration cases for a very long time. And that’s just getting more sophisticated.

And then on the military and defense side, governments have been using AI to surveil, to develop autonomous systems, drones as we tend to think of them, and to conduct cyber operations against other states. And not just the U.S., but all governments really are deploying a lot of those technologies: to surveil so that they can defend their territory, to run these sort of cyber war operations, and to use drones instead of humans when fighting more tactical, on-the-ground military operations. So there are a lot of ways in which AI is used.

Shannon Lynch: Yeah, and just a quick follow-up on that, because I'm curious, what are some ways that foreign governments are using AI that we may or may not be seeing here in the U.S.?

Amen Ra Mashariki: I’ve seen some amazing things in countries like China, Singapore, and Japan. These governments with a lot of money and a lot of control are doing a lot of really cool stuff. You go to Singapore, and Singapore is absolutely doing amazing stuff with AI in a number of ways, primarily in safety and law enforcement, but also in housing. A lot of countries are adopting these technologies.

Lilian Coral: Yeah, and I’ll just add, thank you for bringing up Singapore. I love Singapore as a use case in this space. And I think Amen is completely correct. Other countries have been able to be more deliberate in their approach to modernizing, digitizing, and now even integrating AI into their strategy because they have these more centralized governments, a more centralized approach. Singapore is a great example because Singapore has actually been digitizing since the early 2000s. So they have a plan. It’s very methodical. They’ve been moving through it, making investments, right?

So when we think about the outcomes of other countries, I often say, even China. You know, the reason why China is so digitized is because it’s been building the Silk Road for quite a while, investing tons of money into high-speed internet and all of that infrastructure, not without its issues around privacy and surveillance.

But just to say, I think what has been very different about the U.S. approach is that we are a decentralized model. We are still a deliberative democracy, and so we have public interest groups, groups on both sides of the aisle, who have ideas. And in trying to get to consensus, we just have not been able to be very deliberate or centralized in the way we think about integrating AI or modernizing our data and digital infrastructure across our country. And the reason why I like Singapore as a model, you could argue Japan or South Korea too, is that they have flavors of that centralized approach with a little bit of the deliberative democracy. So they’re not just pushing technology on their people, but slowly making the transition.

Shannon Lynch: Yeah, and I think that speed of the transition is what really scares people here in the U.S. So when governments do utilize AI, what risks to privacy and security should we be aware of? And are there any guardrails in place, or are there some that need to be instated?

Lilian Coral: So, at the heart of a lot of those, like the Singapore model as an example, is strong data governance. We don’t really have a whole lot of data governance structures or common data governance across local, state, or federal government. So we are in this very loosely regulated space, where governments and private actors use more AI and have access to more and more data.

We see cases of bias and discrimination. As Amen talked about, these are systems that are training on data, and that data itself is biased; there will always be some level of bias in it. But then, unfortunately, what you also see in some of the use cases are levels of bias and discrimination that could have harmful effects.

Take the case of using AI in the child welfare system, right? That has great potential, in that we wanna keep children safe and we know that social workers across the country are overtaxed, but we also wanna make sure that the system isn’t incorrectly flagging children and removing them from their homes. And unfortunately, you do see some of that, as much as you can also see a lot of potential where you’re identifying cases of abuse and really creating safety around that child. So bias and discrimination: huge potential issue.

Then there’s the question of transparency and accountability. AI is not a human, and we live in a world in which, you know, you want to hold someone accountable for a decision. And so I think there’s still a lot of lack of clarity about who is ultimately accountable for some of the decisions these systems are making. And when there isn’t transparency about how a decision is made, because there isn’t public transparency, say, in the way that DHS uses AI to flag asylum cases, then I think that definitely creates a lot of issues and questions about the merit and the quality of the use.

And then lastly, I’ll just quickly say, obviously, the bigger concerns are around mass surveillance. If you centralize a lot of government data, and this is, I think, a big question we’re gonna be running up against over the next couple of years, because the federal government has always had access to a ton of data, but it’s never centralized it. And now there are reports that it’s moving forward with a plan to do that. So, as you have all of that data much more centralized, you do have the capability to do mass surveillance, and you are really coming up against very critical questions of how we retain any kind of privacy in this digital world.

Shannon Lynch: Amen, was there anything that you wanted to add to that really quickly before we move on to our last question?

Amen Ra Mashariki: I’ll just add really quickly that from my time at NVIDIA, and even now working at the Earth Fund, one of the things that I’ve seen, and a quick Google search of NVIDIA and its work around sovereign AI will show this, is this concept of working with whole countries to develop a sovereign AI framework for that country. That simply means that instead of a piecemeal, patchwork set of AI, where you’re buying this from one company and that from another company, you have a sovereign infrastructure. The same way you would look at a particular building as being on sovereign land, this AI infrastructure that runs all of the AI for your country is also sovereign, is focused on sovereign data, and is a sovereign infrastructure. I think the U.S. has been very slow to adopt and think about sovereign AI in any real way, but foreign countries are doing that in a big way.

Shannon Lynch: Wrapping us up here, looking ahead, what role do you see AI playing in how governments serve and surveil their citizens? Lilian, you touched on this a little bit, but I would love to hear what you think the future might look like.

Lilian Coral: Well, within the U.S. context, I think we’re just going to see more of it. I think we are going to be moving down a path of more centralizing of data, and therefore much greater use of surveillance. And I think a lot of other countries are further ahead in this. There is just much more surveillance going on than ever before.

And the societal question here is, how do we define privacy in the digital age? We don’t really have the civic infrastructure to have those kinds of discussions anymore, but I do think that’s gonna be critical for us in this country, still as a model of a deliberative democracy, and I do think other countries are going to need to do that too. I mean, this is a societal conversation, so it should be a global conversation. Because I think it’s safe to say there are only limited ways in which I have privacy, and I am very sober about the fact that I have lost a ton of it. So what does it really mean to have that fundamental right? And what should it look like?

And I think primarily right now it looks like this: police don’t have a right to just access all of that information and use it against me without some sort of warrant or some sort of reasonable suspicion. But I think there’s a tension there with how we will continue to see the erosion of that privacy even within that ambit. And so I worry, but I also think this is where we need civic infrastructures to really have that dialogue, and then all of us get educated about what AI is, what it isn’t, and how we want to see it effectuated in our country and in our society.

Amen Ra Mashariki: Yeah, so I agree a hundred percent. For where I think it can go, I would use an example here in Baltimore with the Department of Public Works (DPW). Two months ago, someone from the Department of Public Works was killed while he was in West Baltimore cleaning, which spurred a whole set of city council hearings, and many people at DPW went to the city council.

They were at city council saying, look, I have to do overtime just to be able to pay my bills. I can’t even live on the wage that I have. And that affects Baltimoreans, because then the things that need to get taken care of, the services that they expect from DPW, are just not happening, because the pay is not enough. The quality of the work goes down because of the pay; no one’s happy. At work, everybody fears for their lives. They have to deal with drug dealers who are in alleys. They have to go into these alleys to pick up the trash and the junk and everything, but the drug dealers are like, if you come down here, we will shoot you. So they don’t go down the alley, those places don’t get cleaned, and the quality of life of Baltimoreans goes down.

So you can see where AI can be a helpful resource, because a lot of people are having this conversation about the workforce, that AI is going to take everybody’s jobs, and so on and so forth. But that conversation assumes that everybody is already functioning at 100 percent output, that you’re getting maximum output. If you’re getting maximum output from these five employees, then yes, AI can come in and replace three of them and you’re still getting maximum output. But what I would argue is that in many cases with city agencies and government agencies, you are probably not maximizing your output for what the taxpayer is paying.

And so you would look at bringing AI into those spaces as actually allowing for the maximizing of output. You don’t get rid of a single human being; you actually double, triple, quadruple, 5X their impact and their work. You can help them be more targeted, more precise in the places that they go and the hours that they go.

You talk about surveillance: you can fly drones and not just do video, but do LiDAR, light detection and ranging, which sends lasers down to track and understand the contours of a space as the lasers bounce off it. So you can see where the garbage is, and you can use AI to detect, oh, crap, that’s a bunch of mattresses that need to get picked up immediately. And so you can keep safety in mind, and you can make the work more targeted, so they don’t have to do overtime.

So, I think that modern AI has not been used in any real way, especially in city and state government, to drive efficiencies in terms of how agencies and public entities have an impact on the lives of residents.

Shannon Lynch: Amen and Lilian, thank you so much for joining me. It’s been a real pleasure to have you on. Thank you.

Lilian Coral: Thank you.

Amen Ra Mashariki: Thank you for having me.

Heidi Lewis: This was a New America production. Shannon Lynch is our host and executive producer. Our co-producers are Joe Wilkes, David Lanham, and Carly Anderson. Social media by Maika Moulite. Visuals by Alex Birñas, and media outreach by me, Heidi Lewis. Please rate, review, and subscribe to Democracy Deciphered wherever you like to listen.

About the Authors

Shannon Lynch

Studio Manager and Podcast Production Lead
