In Short

February Digital Matters

2/28: DPI Resiliency, Platform Transparency, Data Protections, and AI Development

In honor of the upcoming second Summit for Democracy, February's Digital Matters, DIGI's monthly round-up of news, notable uses of tech, and research, is highlighting pieces about how technology impacts democracy and societies. The week of March 27-31 will be full of commitments, events, and proclamations hosted not just in Washington, D.C., but also in Costa Rica, the Netherlands, the Republic of Korea, and the Republic of Zambia, in addition to a virtual plenary format (harnessing tech to discuss, among other critical issues, tech!).

Looking back over the past month, recent moves to advance a more equitable post-pandemic recovery by investing in digital public infrastructure (DPI), the rails underpinning our digital economy, represent a positive path forward with the potential to narrow digital divides, modernize and improve the provision of public services, and strengthen and safeguard communities. However, DPI remains an undervalued concept in the democracy and human rights ecosystem, subject to diverse interpretations and definitions.

The related push for greater transparency and accountability across digital solutions, particularly social media platforms, could lead to greater user safety, stronger data protections, and possibly more trust in systems governed publicly or through multistakeholder efforts. But nothing on this front captured our collective imagination or set off alarm bells this past month like the recent developments in artificial intelligence (AI). We expect the magnitude of issues around big data solutions to be reflected in the events surrounding the Summit.

How can DPI build resiliency and a brighter digital future?

Envisioning a positive pathway for tech in society is essential for empowering communities and democracies around the world. When implemented with strong governance initiatives, DPI can help reimagine technology's role by creating universal access to essential digital services.

by the Center for International Private Enterprise (CIPE), the National Democratic Institute (NDI), and the International Republican Institute (IRI) (February 7, 2023)

In response to increasing concern over technology's negative impact, CIPE, NDI, and IRI assembled an anthology on how technology can better enable a positive, more democratic future. Readers can browse a series of thought pieces, case studies, and speculative fiction from global thought leaders, practitioners, and democratic reformers on using digital tools for good and what our world could look like as tech modernizes and strengthens democracy. One of our favorites is Delfina Irazusta's contribution.

Digital Public Infrastructure in Ukraine: Harnessing Technology for the Public Good by Allison Price and Alberto Rodríguez-Alvarez, New America (February 22, 2023)

The urgency and ambition that sparked Ukraine's implementation of digital public infrastructure (DPI) and the modern provision of public services may resonate well beyond the region and this moment in time. Ukraine's investment in digital transformation has become a model of resilience and adaptability in public sector services. Diia, a mobile application launched in 2020 by Ukraine's Ministry of Digital Transformation, connects more than 18 million Ukrainians to over 80 public services, providing continuity even at a time of war. Quickly scaled and adapted following the Russian invasion to include social benefits, education programs, digital passports, and certification of internally displaced persons, Diia helps keep the country running.

by Robyn Huang, Wired (February 8, 2023)

Although not DPI, disaster response efforts demonstrate the need for better replicable public solutions, especially in times of crisis. Over 15,000 global technologists responded to the earthquake that devastated large areas of southeastern Turkey and Syria by building and supporting digital tools to assist in disaster response, including apps that help locate those trapped in the rubble, information portals for those seeking assistance, and ways to mobilize needed aid and support. All of these tools are open source, part of an essential and growing movement in public solutions.

by Chris Larsson, Fui Meng Liew, and Carolin Frankenhauser, World Economic Forum (January 20, 2023)

DPI can enable equitable access to connectivity and essential society-wide functions for young people. For example, protection and health services can be augmented and streamlined through secure data systems and digital identification. DPI can also serve as a critical component of digitizing education, making learning tools and resources more accessible to children everywhere. While DPI solutions are promising, they must be implemented with clear governance and regulatory frameworks to ensure data remains protected and secure.

What does the current push for greater platform transparency and accountability mean for users?

Several states and universities have banned TikTok from student and employee devices over concerns about TikTok's ties to China, as well as data privacy and national security risks. With some lawmakers calling for a nationwide ban of the app, TikTok CEO Shou Zi Chew is set to testify before Congress on March 23 to defend the app's security and privacy practices. However, comprehensive platform regulations may offer a better approach than a national ban of the app, which could set a troubling precedent. TikTok is not the only platform under scrutiny; other big tech companies are facing increasing pressure on the transparency and accountability front.

by Barbara Ortutay, Associated Press (February 13, 2023)

While some social media platforms are moving toward greater accessibility and transparency of their APIs, often in compliance with regulations, Twitter (and possibly others) is developing plans to move away from an open ecosystem model, which could have an outsized impact on the space. This month Twitter announced that it will begin charging users to access the platform's application programming interface (API), a software tool used to access a platform's data. Twitter's API is used by nonprofits, academics, software developers, programmers, journalists, and others to search and analyze the vast amount of data available on the platform. Removing free access to the API makes it more difficult for users to track news, hate speech, and mis/disinformation; build new products; and even support crisis response through real-time data collection and map generation. Twitter has since delayed the rollout of the paid API, which was meant to go into effect on February 13.
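
For readers unfamiliar with what API access looks like in practice, here is a minimal sketch of the kind of programmatic query a researcher might run against Twitter's documented v2 recent-search endpoint. The bearer token below is a placeholder, not a working credential, and real access to this endpoint now requires a paid API tier.

```python
from urllib.parse import urlencode

# Twitter's documented v2 recent-search endpoint; access now requires
# an API key tied to one of the paid tiers.
API_URL = "https://api.twitter.com/2/tweets/search/recent"

def build_search_request(query, max_results=10, bearer_token="PLACEHOLDER_TOKEN"):
    """Return the request URL and auth header for a recent-search query.

    The default token is a placeholder for illustration only.
    """
    params = {"query": query, "max_results": max_results}
    url = f"{API_URL}?{urlencode(params)}"
    headers = {"Authorization": f"Bearer {bearer_token}"}
    return url, headers

# Example: a researcher tracking English-language tweets about misinformation.
url, headers = build_search_request("misinformation lang:en", max_results=25)
```

A real request would send `url` and `headers` with an HTTP client; without a paid credential the endpoint now returns an authorization error, which is precisely the barrier researchers and journalists are concerned about.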

by Anne Stopper and Jen Caltrider, Mozilla (February 23, 2023)

Recent research from Mozilla analyzed 40 of Google Play Store鈥檚 most popular apps and found significant discrepancies between app privacy policies and the information reported on Google鈥檚 Data Safety Form. These discrepancies, in part due to reporting loopholes in the Data Safety Form, allowed misleading information to be presented to consumers about how apps collect, share, and use data. With apps not self-reporting accurately and insufficient fact-checking by Google, Stopper and Caltrider put forth recommendations for more clearly informing users how their data is being used by apps they download.

Do new data protections and security-by-design initiatives signal prioritization of user safety?

President Biden called out extractive and exploitative data practices of Big Tech companies during the State of the Union and promoted greater user protections, especially for young people. New data privacy laws are going into effect across five states this year and could lay the groundwork for a larger federal movement on this front. The Cybersecurity and Infrastructure Security Agency (CISA) is pushing for greater company accountability through security-by-design measures. Together, these efforts signal a large and overdue movement in the U.S. to prioritize user safety online.

by Lily Hay Newman, Wired (February 7, 2023)

As the nation deepens its digital transformation process, safeguarding users is paramount to society-wide implementation and adoption of digital public infrastructure. President Biden highlighted the need for a national approach to data protections in his State of the Union address, urging bipartisan action in Congress. While the future of federal data legislation is uncertain, some states are already implementing data privacy laws that could force greater protections for users and their data nationwide as companies adapt to meet requirements.

by Jen Easterly and Eric Goldstein, Foreign Affairs (February 1, 2023)

In the face of increasing security breaches and ransomware attacks, CISA Director Jen Easterly and Executive Assistant Director for Cybersecurity Eric Goldstein call on technology companies to take greater responsibility for user safety. Instead of placing security burdens solely on users, the authors propose that companies must be accountable for product safety and develop hardware and software products designed to be "secure by default." With government and industry working together, stronger security norms can help protect users and critical institutions.

How will AI technologies impact society and human agency?

The launch of ChatGPT set off a wave of renewed interest in generative AI technology, and tech giants rushed to announce their own AI chatbots. With AI increasingly in the spotlight, the public has growing questions and concerns about how this technology may impact individuals and larger digital transformation efforts, ranging from education and work to content moderation and creation.

by Janna Anderson and Lee Rainie, Pew Research (February 24, 2023)

Experts (and non-experts alike) are split on how much control people will retain over essential decision-making as digital systems and AI spread. In this report from Pew Research, the experts surveyed said the future of these technologies will have both positive and negative consequences for human agency, but many worry these systems will diminish individuals' ability to control their choices.

by Khari Johnson, Wired (February 16, 2023)

As more companies enter the AI race, transparency and accountability in the research and development of the systems behind chatbots are being lost to trade secret and intellectual property claims. Researchers have warned that these chatbots can spread disinformation, use hate speech, generate violent or disturbing content, and demonstrate bias, among other ethical concerns. Now, many users are calling for greater oversight and regulation of AI development.

by Cameron F. Kerry, Brookings (February 15, 2023)

The National Institute of Standards and Technology (NIST) released an AI Risk Management Framework (AI RMF) in January 2023, the first version in an iterative and adaptive process, to guide safer AI development and use in the future. The AI RMF introduces socio-technical dimensions, such as impacts on people and the planet, into its risk management approach. The framework includes steps for evaluating risks and seven key characteristics of trustworthy AI systems. Alongside the AI RMF, NIST launched a companion playbook that offers organizations guidance on how to govern, map, measure, and manage AI risks.

Please let us know what you think and consider sharing this post. You can reach us at DIGI@newamerica.org. The previous edition of Digital Matters is linked here. Make sure to check back next month for a new Digital Matters round-up, or sign up to have DIGI's Digital Matters round-up sent straight to your inbox each month.