
In Short

November Digital Matters

11/30 - New action on AI governance, the importance of human-centered design, calls for expanded safety measures, and trusted DPI structures

(Image: Shutterstock)

This month’s Digital Matters, our monthly round-up of news, research, events, and notable uses of tech, navigates a variety of government and international approaches to AI regulation, exploring how the many international fora and national initiatives center (or don’t center) the wellbeing of technology users. There’s a lot to cover: on the heels of the Biden administration’s Executive Order on AI, the United Kingdom hosted the “first ever” global AI Safety Summit, while the G7 released a set of guiding principles for AI governance as part of the multilateral Hiroshima Process. Over at the UN’s New York headquarters, the newly formed High-level Advisory Body on AI began a series of multi-stakeholder convenings aimed at fostering a globally inclusive approach to international governance of AI. Meanwhile, members of Congress continue to introduce legislation to inject transparency and accountability into “high-impact” AI systems. Complicating matters, the is-he-running-it or running-from-it saga of Sam Altman and OpenAI makes the tech-for-humanity space even murkier to navigate (at the time of publishing this piece, it appears he will be running it, with a new board).

The pieces featured in the November Digital Matters take on key questions, such as how AI can be leveraged to improve the relationship between government and citizens, how conversations about tech governance can be made more inclusive, and how civil society and governments can push developers to prioritize human-centered design and built-in protections for tech’s most vulnerable users. We also take a look at two DPI umbrella structures: the Global Digital Public Infrastructure Repository, an initiative of the Indian G20 Presidency, and the 50-in-5 campaign led by the UNDP, which will feature a strong focus on digital ID. Both initiatives merit further examination of how DPI can serve as a tool for inclusive development.

Will the many ongoing efforts to govern AI development and use coalesce into a solidified global approach?

As efforts to regulate AI technology accelerate, the public and private sectors have been grappling with the roles and responsibilities of various stakeholders. While technological advances often move faster than policy, we are hopeful that new momentum on AI governance can help build public-private partnerships that take a proactive approach to the risks posed by these new technologies and incentivize safety by design to protect end users.

Public Interest Technologists React to Executive Order on AI by Public Interest Technology (PIT) at New America (Nov 16, 2023)

The Executive Order on AI contains a great deal of homework for federal agencies and civil society. In a new blog post from the PIT team, four leading public interest technologists react to the order. Afua Bruce of AnB Advisors commends the order’s focus on addressing cybersecurity risks posed by AI, and urges federal agencies to use AI “to protect critical federal systems.” Charlton McIlwain of New York University notes the order is “strikingly clear and straightforward” in its emphasis on using AI to protect civil rights and advance equity. Beth Simone Noveck of Northeastern University argues the order does little to address how AI can be used to streamline and improve public services, which can make it “easier for governments to listen to their citizens.” While the executive order is a major step forward as the U.S. seeks to catch up to the pace of innovation, it is important to emphasize that it is only a starting point: more specific goals will need to be identified to ensure the order fulfills its promise to put citizens first.

AI Safety Summit, hosted by the United Kingdom (Nov 1-2, 2023)

During the first week of November, the United Kingdom hosted the “first global” AI Safety Summit, which brought together representatives from countries leading on AI development, industry experts, and civil society researchers and advocates for a two-day discussion on the safe and responsible development of AI. Key outcomes from the summit included the Bletchley Declaration, which emphasized the need for AI systems to prioritize human-centric design, protect human rights and fundamental freedoms, and help bridge the digital divide. The UK summit is only the starting point for building global consensus on what safe AI development looks like: future summits hosted by South Korea and France will be held over the next year.

Guiding Principles and Code of Conduct on Artificial Intelligence, G7 Hiroshima Process (Oct 30, 2023)

On the same day that the Biden administration released its Executive Order on AI, the Group of Seven (G7) leaders released their Guiding Principles and Code of Conduct on Artificial Intelligence as part of the multilateral Hiroshima Process inaugurated in May. The principles, which aim to build international standards and best practices for the design, development, deployment, and use of advanced AI systems, also include a voluntary Code of Conduct for AI developers. We hope that multinational convenings like the UK and G7 summits can promote the development of global guardrails that cut across national borders to ensure human rights and international norms are protected in the face of rapid AI development.

APEC Economic Leaders’ Meeting, State Department (Nov 17, 2023)

Digital governance efforts this month also went beyond AI. Earlier this month, the U.S. hosted the latest convening of APEC Economic Leaders in San Francisco; high on the agenda for this year’s meeting was building a “Digital Pacific” through the advancement of digital skills and connectivity. One of the key outcomes of the summit was the Digital Pacific Agenda, which commits the U.S. to working with APEC economies to shape the sustainable development of emerging digital technologies. APEC member states also endorsed a set of principles for facilitating access to open government data, an initiative that will promote interoperability and access to public sector data. According to the principles, “when governments choose to make data available to the public, these datasets can enable innovation, foster government transparency and efficiency, and enable citizens to be more informed.”

Will a human-centric tech governance approach include transparency, ethics, and accountability measures?

As new approaches to AI regulation continue to evolve and take shape at national and international levels, Gordon LaForge and Patricia Gruver-Barr remind us that in order for tech governance to be equitable, these conversations must be inclusive, and expand beyond the usual voices and countries. Fei-Fei Li argues that the ultimate driver and ethical underpinning of technological innovation should be improving the human condition, while Arati Prabhakar discusses how the Biden Executive Order on AI seeks to re-center societal risks in its governance approach. As the race to govern AI and other cutting-edge digital technologies only accelerates, these voices remind us to slow down and consider who we may be leaving out at every step of the way.

Remarks by Arati Prabhakar on AI Governance, Carnegie Endowment (Nov 14, 2023)

Arati Prabhakar, the Director of the White House Office of Science and Technology Policy, discussed the Biden administration’s approach at a recent Carnegie Endowment event on AI governance. In her remarks, Prabhakar noted that the recently released Executive Order seeks to strike a balance between innovation and regulation in order to manage risks while seizing the benefits to be gained from AI. Prabhakar also emphasized that too often, we tend to focus on technological capabilities rather than the very human choices that decide what to automate, what to connect, and what data goes into algorithms. We agree that increased transparency and accountability during the development process can help ensure AI cannot be used to “magnify discrimination at scale,” as Prabhakar put it. [Listen to Prabhakar’s 11/14 remarks at Carnegie.]

“Governing the Digital Future” by Gordon LaForge and Patricia Gruver-Barr, Tech Policy Press (Nov 17, 2023)

Gordon LaForge and Patricia Gruver-Barr, of New America’s Planetary Politics Initiative, strike a positive tone on the momentum created by new governance initiatives on AI, which, they note, represent a “refreshing course correction” from previous efforts to set common standards, practices, and regulations around new digital technologies. However, they warn that high-level discussions on AI safety must expand beyond like-minded countries and take a whole-of-society approach to AI risks. Such risks include how these systems may increase global inequality, reinforce systemic injustice, and continue to sideline marginalized populations. There is much that leaders on AI governance can and must do to ensure these systems promote economic and social mobility and inclusive growth rather than widen the digital divide. [Read LaForge and Gruver-Barr’s full report, “Governing the Digital Future,” here.]

New Book: “The Worlds I See” by Fei-Fei Li (Nov 2023)

In “The Worlds I See,” computer science heavyweight Fei-Fei Li chronicles her journey from immigrant in America to one of the leading voices on AI research and governance. Li, who is the co-director of Stanford’s Human-Centered AI Institute, has been a longtime advocate for a human-first approach to technological development that seeks to enhance, rather than replace, human capabilities. “We should put humans in the center of the development, as well as the deployment applications and governance of AI,” she writes. We couldn’t agree more.

Who will protect the safety of users interacting with digital solutions and ensure the most vulnerable users don’t fall through the cracks?

Alongside governmental and international efforts to promote AI safety and security, calls for tech companies to rein in powerful products have only gotten louder. New reports and analysis by civil society actors uncover several disturbing examples of how tech can fall short in protecting its most vulnerable users. Technology on its own isn’t ethical, equitable, or inclusive; we believe it is up to policymakers and industry leaders to take the initiative to implement guardrails that protect the rights of all users.

50-in-5 Campaign, UNDP (Nov 9, 2023)

Collaboration between governments can help ensure that digital transformation occurs in an equitable and inclusive manner. This month, the UNDP announced new momentum around digital public infrastructure as 11 “first-mover” countries committed to design, implement, and scale at least one DPI component by 2028, as part of the UN 50-in-5 campaign. The campaign hopes to “radically shorten” country-level DPI implementation by sharing knowledge, best practices, and digital public goods between countries. We’re excited to see how collaborative efforts between countries can accelerate the adoption of DPI across country income levels, geographies, and different stages of the digital development journey.

Global Digital Public Infrastructure Repository, initiative of the Indian G20 Presidency (Nov 2023)

In a leading example of code-led diplomacy, and as a follow-up to its G20 Presidency, India’s Ministry of Electronics and Information Technology (MeitY) created the Global Digital Public Infrastructure Repository (GDPIR): a collection of code created by governments, made freely available to other nations. The GDPIR is designed to be an easily discoverable resource for key lessons and knowledge from G20 members and guest countries, and aims to address the existing knowledge gap around the right practices to design, build, and deploy population-scale DPI. Each contributing country chooses what information to display, which can help others develop their own DPI. The repository is live, and contributors include Argentina, Australia, Bangladesh, Brazil, the European Union, France, Germany, India, Italy, Japan, Mauritius, Nigeria, Oman, the Republic of Korea, Russia, and Singapore.

New Study on Data Brokers, Duke University (Nov 2023)

In a new study released this month, researchers at the Duke University Sanford School of Public Policy take aim at the multi-billion-dollar data brokerage industry, which comprises companies that profit from gathering, aggregating, and selling data on Americans. The study’s authors uncovered the industry’s willingness to sell private data on current and former U.S. military personnel on the cheap and with minimal vetting. The data obtained by researchers included sensitive details such as individuals’ names, addresses, family members, and health statuses, raising not only privacy issues but also critical national security risks. The study adds to growing calls for government action to manage the data brokerage ecosystem. Currently, no comprehensive federal consumer privacy law exists in the U.S., but that could all change as studies like these continue to expose gaps in the responsible and ethical sharing and use of data.

The Intersection of Federal Privacy Legislation & AI Governance, Event Recording, Open Technology Institute at New America (Nov 15, 2023)

Continuing the conversation on data privacy, earlier this month experts on privacy and AI came together virtually to discuss how the implementation of federal privacy rules would address harms that stem from the misuse of data that powers AI systems. In her opening remarks, keynote speaker Rep. Cathy McMorris Rodgers (R-WA), Chair of the House Energy and Commerce Committee, argued for a national data privacy and security standard to safeguard Americans’ information. Panelists also called for more transparency from big tech on how their algorithms take in, analyze, and use personal data to make predictions. We believe more needs to be done by government actors to create safeguards and promote trusted practices for how tech companies handle personal data.

New Frontiers in Online Safety, Tech Policy Press (Nov 17, 2023)

This month, the Family Online Safety Institute (FOSI) hosted its annual conference amid new global action and legislation on digital rights. This year’s theme, “New Frontiers in Online Safety,” covered topics at the intersection of online safety and parenting, such as content moderation and privacy. Though the U.S. does not currently have comprehensive online safety legislation, the passage of the UK’s Online Safety Act last month and new stipulations to the EU’s Digital Services Act will affect how tech companies and platforms ensure compliance across jurisdictions. Aligning global standards around online safety is critical not only for the tech industry, but also for the consistent protection of tech users, parents and kids alike.

Events

Global Technology Summit, December 4-6

Carnegie India is hosting the Global Technology Summit, which will address the momentum surrounding digital public infrastructure. The Summit will also explore use cases of AI, the evolving regulatory landscape, and issues such as skilling and innovation. The Global Technology Summit brings together industry experts, policymakers, scientists, and other stakeholders to deliberate on the changing nature of technology and geopolitics. This will be a hybrid event.

Conference on Data Governance and AI, December 7-8

GWU’s Digital Trade and Data Governance Hub and the NIST-NSF Trustworthy AI Institute, along with several partners, are hosting a two-day conference (December 7-8) in Washington, DC, to discuss data governance and AI. The global popularity and use of large language models for generative AI have revealed enforcement problems as well as gaps in the governance of data at the national and international levels. This will be a hybrid event.

Please consider sharing this post. If you have ideas or links you think we should know about, you can reach us at DIGI@newamerica.org or @DIGI_NewAmerica.
