
Modelling of emerging vector-borne diseases serves as an important complement to clinical studies of modern zoonoses. This article presents an archaeo‐historic epidemiological modelling study of Rift Valley fever (RVF), using data‐driven neural network technology. RVF affects both human and animal populations, can rapidly decimate herds and cause catastrophic economic hardship, and is identified as a Category A biodefense pathogen by the US Centers for Disease Control and Prevention. Despite the disease's recent origins in the early 1900s, little is known about the circumstances of its inception or about the relationships between the factors that affect its transmission. This evidence could be vital as the disease continues to expand from its epicentre in Kenya to other parts of Africa and the Arabian Peninsula. RVF is a relevant case for archaeological/palaeopathological investigations of disease because it sits at the intersection of numerous human, animal, spatial, temporal, and sociopolitical dimensions. By integrating landscape archaeology, historical evidence, and climatic data with evidence of human behaviour gathered through ethnoarchaeological study, this article presents an applied framework for human–animal palaeopathology. This framework aligns with the One Health approach, which holds disease to be intrinsically tied to ecological and societal factors. We provide a usable alternative way of thinking about disease modelling in the present and the past, ultimately seeking to support efforts to accurately predict future impacts. Tapping into longitudinal evidence from the last 50–300 years offers a powerful way to respond to the threat zoonoses will pose to human populations around the world as the climate warms.

Publication Type: Journal Articles
Journal Publisher: International Journal of Osteoarchaeology
Authors: Desiree LaBeaud, Jochen Kumm, Elysse Grossi‐Soyster, Alfred Anangwe, Michele Barry
-

Abstract:

Do programmatic policies always yield electoral rewards? A growing body of research attributes the adoption of programmatic policies in African states to increased electoral competition. However, these works seldom explore how the specifics of policy implementation condition voters’ electoral responses to programmatic policies over time, or changes in electoral effects throughout policy cycles. We analyze the electoral effects of both the promise and implementation of a programmatic policy designed to increase secondary school enrollment in Tanzania over three election cycles. We find that the incumbent party benefited from a campaign promise to increase access to secondary schooling, but incurred an electoral penalty following implementation of the policy. We do not find any significant electoral effects by the third electoral cycle. Our findings illuminate temporal dynamics of policy feedback, the conditional electoral effects of programmatic policies, and the need for more studies of entire policy cycles over multiple electoral periods.

 

Speaker Bio:

Dr. Ken Opalo is an Assistant Professor at Georgetown University’s School of Foreign Service. His research interests include the political economy of development, legislative politics, and electoral accountability in African states. Ken’s current research projects include studies of political reform in Ethiopia, the politics of education sector reform in Tanzania, and electoral accountability under devolved government in Kenya. His work has been published in Governance, the British Journal of Political Science, the Journal of Democracy, and the Journal of Eastern African Studies. His first book, Legislative Development in Africa: Politics and Post-Colonial Legacies (Cambridge University Press, 2019), explores the historical roots of contemporary variation in legislative institutionalization and strength in Africa. Ken earned his BA from Yale University and his PhD from Stanford University.

 

Ken Opalo, Assistant Professor at Georgetown University’s School of Foreign Service
Seminars
-

The Shorenstein Asia-Pacific Research Center cordially invites its faculty, scholars, staff, affiliates, and their families to join APARC's first International Potluck Day! Join us to celebrate the diversity that makes APARC special through a multicultural smorgasbord of food: bring a dish from your home country or family heritage to share with the APARC community as we take the time to mix and mingle.

Due to current circumstances, we will be postponing this event until further notice. Thank you for your understanding.

-

The research on misinformation generally, and fake news specifically, is vast, as is coverage in media outlets. Two questions run throughout both the academic and public discourse: what explains the spread of fake news online, and what can be done about it? While there is substantial literature on who is likely to be exposed to and share fake news, these behaviors do not necessarily signal belief or persuasive effect. By contrast, there is far less work on who is able to differentiate between true and false stories and, as a result, who is most likely to believe fake news (or, conversely, disbelieve true news), a question that speaks directly to Facebook’s recent “community review” approach to combating the spread of fake news on its platform.

In his talk, Professor Tucker will report on initial findings from a new collaborative project between NYU’s Center for Social Media and Politics and Stanford’s Program on Democracy and the Internet, designed to fill these gaps in the scholarly literature and inform the types of policy decisions being made by Facebook. The project has enlisted both professional fact checkers and random “crowds” of close to 100 people to fact check five “fresh” articles (those that have appeared in the past 24 hours) per day, four days a week, for eight weeks, using an innovative, transparent, and replicable algorithm for selecting the articles for fact checking. He will report on initial observations regarding (a) individual determinants of fact-checking proficiency; (b) the viability of using the “wisdom of the crowds” for fact checking, including the tradeoffs between crafting a more accurate crowd versus a more representative crowd; and (c) results from experiments designed to assess potential policy interventions to improve crowdsourcing accuracy.
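To make the accuracy-versus-representativeness tradeoff concrete, here is a minimal, hypothetical Python simulation of majority-vote crowd fact checking. The crowd sizes and per-rater accuracies below are invented for illustration and do not reflect the project's actual design or data.

```python
import random

def crowd_verdict(true_label, n_raters, accuracy, rng):
    """Simulate one article: each rater votes independently and the majority
    verdict wins; `accuracy` is each rater's chance of voting correctly."""
    correct_votes = sum(1 for _ in range(n_raters) if rng.random() < accuracy)
    majority_correct = correct_votes > n_raters / 2
    return true_label if majority_correct else (not true_label)

rng = random.Random(42)
trials = 10_000
# Odd crowd sizes avoid tied votes.
for label, n, acc in [("small, accurate crowd", 25, 0.75),
                      ("large, representative crowd", 101, 0.60)]:
    hits = sum(crowd_verdict(True, n, acc, rng) is True for _ in range(trials))
    print(f"{label}: majority correct in {hits / trials:.1%} of simulated articles")
```

Because majority voting amplifies individual accuracy as a crowd grows, a larger but individually less accurate crowd can rival a smaller, more expert one; this is the kind of tradeoff the project examines empirically.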

About the Speaker:

Joshua A. Tucker is Professor of Politics, affiliated Professor of Russian and Slavic Studies, and affiliated Professor of Data Science at New York University. He is the Director of NYU’s Jordan Center for Advanced Study of Russia, a co-Director of the NYU Social Media and Political Participation (SMaPP) laboratory, a co-Director of the new NYU Center for Social Media and Politics, and a co-author/editor of the award-winning politics and policy blog The Monkey Cage at The Washington Post. He serves on the advisory boards of the American National Election Study, the Comparative Study of Electoral Systems, and numerous academic journals. Originally a scholar of post-communist politics, he has more recently studied social media and politics. His research in this area has included studies on the effects of network diversity on tolerance, partisan echo chambers, online hate speech, the effects of exposure to social media on political knowledge, online networks and protest, disinformation and fake news, how authoritarian regimes respond to online opposition, and Russian bots and trolls. His research has been funded by over $8 million in grants in the past three years, including a 2019 Knight Foundation “Research on the Future of an Informed Society” grant. His most recent book is the co-authored Communism’s Shadow: Historical Legacies and Contemporary Political Attitudes (Princeton University Press, 2017), and he is the co-editor of the forthcoming edited volume Social Media and Democracy (Cambridge University Press, 2020). 

News Type: Q&As

A Q&A with Professor Stephen Stedman, who serves as the Secretary General of the Kofi Annan Commission on Elections and Democracy in the Digital Age.

Stephen Stedman, a Senior Fellow at the Freeman Spogli Institute for International Studies (FSI) at Stanford, is the director of the Kofi Annan Commission on Elections and Democracy in the Digital Age, an initiative of the Kofi Annan Foundation. The Commission is focused on studying the effects of social media on electoral integrity and the measures needed to safeguard the democratic process.  

At the World Economic Forum in Davos, Switzerland, the Commission, which includes FSI’s Nathaniel Persily, Alex Stamos, and Toomas Ilves, launched a new report, Protecting Electoral Integrity in the Digital Age. The report takes an in-depth look at the challenges faced by democracy today and makes a number of recommendations as to how best to tackle the threats posed by social media to free and fair elections. On Tuesday, February 25, Professors Stedman and Persily will discuss the report’s findings and recommendations during a lunch seminar from 12:00 to 1:15 PM. To learn more and to RSVP, visit the event page.

Q: What are some of the major findings of the report? Are digital technologies a threat to democracy?

Steve Stedman: Our report suggests that social media and the Internet pose an acute threat to democracy, but probably not in the way that most people assume. Many people believe that the problem is a diffuse one based on excess disinformation and a decline in the ability of citizens to agree on facts. We too would like the quality of deliberation in our democracy to improve and we worry about how social media might degrade democratic debate, but if we are talking about existential threats to democracy the problem is that digital technologies can be weaponized to undermine the integrity of elections.

When we started our work, we were struck by how many pathologies of democracy are said to be caused by social media: political polarization; distrust of fellow citizens, government institutions, and traditional media; the decline of political parties; the degradation of democratic deliberation; and on and on. Social media is said to lessen the quality of democracy because it encourages echo chambers and filter bubbles where we only interact with those who share our political beliefs. Some platforms are said to encourage extremism through their algorithms.

What we found, instead, is a much more complex problem. Many of the pathologies that social media is said to create – for instance, polarization, distrust, and political sorting – begin their trendlines before the invention of the Internet, let alone the smartphone. Some of the most prominent claims are unsupported by evidence, or are confounded by conflicting evidence. In fact, we say that some assertions simply cannot be judged without access to data held by the tech platforms.

Instead, we rely on the work of scholars like Yochai Benkler and Edda Humphries to argue that not all democracies are equally vulnerable to network propaganda and disinformation. It is precisely where you have high pre-existing affective polarization, low trust, and hyperpartisan media that digital technologies can intensify and amplify polarization.

Elections and toxic polarization are a volatile mix. Weaponized disinformation and hate speech can wreak havoc on elections, even if they don’t alter the vote tallies. This is because democracies require a system of mutual security. In established democracies political candidates and followers take it for granted that if they lose an election, they will be free to organize and contest future elections. They are confident that the winners will not use their power to eliminate them or disenfranchise them. Winners have the expectation that they hold power temporarily, and accept that they cannot change the rules of competition to stay in power forever. In short, mutual security is a set of beliefs and norms that turn elections from being a one-shot game into a repeated game with a long shadow of the future.

In a situation already marred by toxic polarization, we fear that weaponized disinformation and hate speech can cause parties and followers to believe that the other side doesn’t believe in the rules of mutual security. The stakes become higher. Followers begin to believe that losing an election means losing forever. The temptation to cheat and use violence increases dramatically. 

Q: As far as political advertising, the report encourages platforms to provide more transparency about who is funding that advertising. But it also asks that platforms require candidates to make a pledge that they will avoid deceptive campaign practices when purchasing ads. It also goes as far as to recommend financial penalties for a platform if, for example, a bot spreading information is not labelled as such. Some platforms might argue that this puts an unfair onus on them. How might platforms be encouraged to participate in this effort?

SS: The platforms have a choice: they can contribute to toxic levels of political polarization and the degradation of democratic deliberation, or they can protect electoral integrity and democracy. There are a lot of employees of the platforms who are alarmed at the state of polarization in this country and don’t want their products to be conduits of weaponized disinformation and hate speech. You saw this in the letter signed by Facebook employees objecting to the decision by Mark Zuckerberg that Facebook would treat political advertising as largely exempt from its community standards. If ever there were a moment in this country when we should demand that our political parties and candidates live up to a higher ethical standard, it is now. Instead, Facebook decided to allow political candidates to pay to run ads even if the ads use disinformation, tell bald-faced lies, engage in hate speech, and use doctored video and audio. Its rationale is that this is all part of “the rough and tumble of politics.” In doing so, Facebook is in the contradictory position of having hundreds of employees working to stop disinformation and hate speech in elections in Brazil and India, while allowing politicians and parties in the United States to buy ads that can use disinformation and hate speech.

Our recommendation gives Facebook an option that allows political advertising in a way that need not inflame polarization and destroy mutual security among candidates and followers: (1) require that candidates, groups, or parties who want to pay for political advertising on Facebook sign a pledge of ethical digital practices; (2) then use those standards to determine whether an ad meets the pledge or not. If an ad uses deep fakes, if an ad grotesquely distorts the facts, if an ad out-and-out lies about what an opponent said or did, then Facebook would not accept the ad. Facebook can either help us raise our electoral politics out of the sewer or it can ensure that our politics drowns in it.

It’s worth pointing out that the platforms are only one actor in a many-sided problem. Weaponized disinformation is actively spread by unscrupulous politicians and parties; it is used by foreign countries to undermine electoral integrity; and it is often spread and amplified by irresponsible partisan traditional media. Fox News, for example, ran the crazy conspiracy story about Hillary Clinton running a pedophile ring out of a pizza parlor in DC. Individuals around the president, including the son of the first National Security Adviser, tweeted the story.

Q: While many of the recommendations focus on the role of platforms and governments, the report also proposes that public authorities promote digital and media literacy in schools as well as public interest programming for the general population. What might that look like? And how would that type of literacy help protect democracy? 

SS: Our report recommends digital literacy programs as a means to help build democratic resilience against weaponized disinformation. Having said that, however, the details matter tremendously. Sam Wineburg at Stanford, whom we cite, has extremely insightful ideas for how to teach citizens to evaluate the information they see on the Internet, but even he puts forward warnings: if done poorly, digital literacy could simply increase citizen distrust of all media, good and bad; and digital literacy in a highly polarized context begs the question of who will decide what is good and bad media. We say in passing that in addition to digital literacy we need to train citizens to understand biased assimilation of information. Digital literacy trains citizens to understand who is behind a piece of information and who benefits from it. But we also need to teach citizens to stand back and ask, “why am I predisposed to want to believe this piece of information?”

Q: Obviously access to data is critical for researchers and commissioners to do their work, analysis and reporting. One of the recommendations asks that public authorities compel major internet platforms to share meaningful data with academic institutions. Why is it so important for platforms and academia to share information?

SS: Some of the most important claims about the effects of social media can’t be evaluated without access to the data. One example we cite in the report is the controversy about whether YouTube’s algorithms radicalize individuals and send them down a rabbit hole of racist, nationalist content. This is a common claim and has appeared on the front pages of the New York Times. The research supporting the claim, however, is extremely thin, and other research disputes it. What we say is that we can’t adjudicate this argument unless YouTube were to share its data, so that researchers can see what the algorithm is doing. There are similar debates concerning the effects of Facebook. One of our commissioners, Nate Persily, has been at the forefront of working with Facebook to provide certified researchers with privacy-protected data through Social Science One. Progress has been so slow that the researchers have lost patience. We hope that governments can step in and compel the platforms to share the data.

Q: This is one of the first reports to look at this problem in the Global South. Is the problem more or less critical there?

SS: Kofi Annan was very concerned that the debate about digital technologies and democracy was far too focused on Europe and the United States. Before Cambridge Analytica’s involvement in the United States and Brexit elections of 2016, its predecessor company had manipulated elections in Asia, Africa and the Caribbean. There is now a transnational industry in election manipulation.

What we found does not bode well for democracies in the rest of the world. The factors that make democracies vulnerable to network propaganda and weaponized disinformation are often present in the Global South: pre-existing polarization, low trust, and hyperpartisan traditional media. Many of these democracies already have a repertoire of electoral violence. 

On the other hand, we did find innovative partnerships in Indonesia and Mexico where Election Management Bodies, civil society organizations, and traditional media cooperated to fight disinformation during elections, often with success. An important recommendation of the report is that greater attention and resources are needed for such efforts to protect electoral integrity in the Global South. 

About the Commission on Elections and Democracy in the Digital Age

As one of his last major initiatives, in 2018 Kofi Annan convened the Commission on Elections and Democracy in the Digital Age. The Commission includes members from civil society and government, the technology sector, academia, and media; across 2019 they examined and reviewed the opportunities and challenges for electoral integrity created by technological innovations. Assisted by a small secretariat at Stanford University and the Kofi Annan Foundation, the Commission has undertaken extensive consultations and issued recommendations as to how new technologies, social media platforms, and communication tools can be harnessed to engage, empower, and educate voters, and to strengthen the integrity of elections. Visit the Kofi Annan Foundation and the Commission on Elections and Democracy in the Digital Age for more on their work.

-

Join Stephen Stedman, Nathaniel Persily, the Cyber Policy Center, and the Center on Democracy, Development and the Rule of Law (CDDRL) in an enlightening exploration of the recent report, Protecting Electoral Integrity in the Digital Age, put out by the Kofi Annan Commission on Elections and Democracy in the Digital Age. Moderated by Kelly Born, Executive Director of the Cyber Policy Center.

More on the report:

 

Abstract:

New information and communication technologies (ICTs) pose difficult challenges for electoral integrity. In recent years foreign governments have used social media and the Internet to interfere in elections around the globe. Disinformation has been weaponized to discredit democratic institutions, sow societal distrust, and attack political candidates. Social media has proved a useful tool for extremist groups to send messages of hate and to incite violence. Democratic governments strain to respond to a revolution in political advertising brought about by ICTs. Electoral integrity has been at risk from attacks on the electoral process, and on the quality of democratic deliberation.

The relationship between the Internet, social media, elections, and democracy is complex, systemic, and unfolding. Our ability to assess some of the most important claims about social media is constrained by the unwillingness of the major platforms to share data with researchers. Nonetheless, we are confident about several important findings.

About the Speakers

Stephen Stedman is a senior fellow at the Freeman Spogli Institute for International Studies, professor, by courtesy, of political science, and deputy director of the Center on Democracy, Development and Rule of Law. Professor Stedman currently serves as the Secretary General of the Kofi Annan Commission on Elections and Democracy in the Digital Age, and is the principal drafter of the Commission’s report, “Protecting Electoral Integrity in the Digital Age.”

Professor Stedman served as a special adviser and assistant secretary general of the United Nations, where he helped to create the United Nations Peacebuilding Commission, the UN’s Peacebuilding Support Office, the UN’s Mediation Support Office, the Secretary-General’s Policy Committee, and the UN’s counterterrorism strategy. During 2005 his office successfully negotiated General Assembly approval of the Responsibility to Protect. From 2010 to 2012, he directed the Global Commission on Elections, Democracy, and Security, an international body mandated to promote and protect the integrity of elections worldwide. Professor Stedman served as Chair of the Stanford Faculty Senate in 2018-2019. He and his wife Corinne Thomas are the Resident Fellows in Crothers, Stanford’s academic theme house for Global Citizenship. In 2018, Professor Stedman was awarded the Lloyd B. Dinkelspiel Award for outstanding service to undergraduate education at Stanford.


Nathaniel Persily is the James B. McClatchy Professor of Law at Stanford Law School, with appointments in the departments of Political Science, Communication and FSI.  Prior to joining Stanford, Professor Persily taught at Columbia and the University of Pennsylvania Law School, and as a visiting professor at Harvard, NYU, Princeton, the University of Amsterdam, and the University of Melbourne. Professor Persily’s scholarship and legal practice focus on American election law or what is sometimes called the “law of democracy,” which addresses issues such as voting rights, political parties, campaign finance, redistricting, and election administration. He has served as a special master or court-appointed expert to craft congressional or legislative districting plans for Georgia, Maryland, Connecticut, and New York, and as the Senior Research Director for the Presidential Commission on Election Administration.

Also among the commissioners of the report were FSI's Alex Stamos and Toomas Ilves.
-

Abstract:

China’s cyberspace and technology regime is going through a period of change—but it’s taking a while. The U.S.–China economic and tech competition both influences Chinese government developments and awaits their outcomes, and the 2017 Cybersecurity Law set up a host of still-unresolved questions. Data governance, security standards, market access, compliance, and other questions saw only modest new clarity in 2019. But 2020 promises new laws on personal information protection and data security, and the Stanford-based DigiChina Project in the Program on Geopolitics, Technology, and Governance is devoted to monitoring, translating, and explaining these developments. From AI governance to the nexus of cybersecurity and supply chains, this talk will summarize recent Chinese policymaking and lay out expectations for the year to come.

About the Speaker:

Graham Webster is editor in chief of the Stanford–New America DigiChina Project at the Stanford University Cyber Policy Center and a China digital economy fellow at New America. He was previously a senior fellow and lecturer at Yale Law School, where he was responsible for the Paul Tsai China Center’s U.S.–China Track 2 and Track 1.5 dialogues for five years before leading programming on cyberspace and technology issues. In the past, he wrote a CNET News blog on technology and society from Beijing, worked at the Center for American Progress, and taught East Asian politics at NYU's Center for Global Affairs. Webster holds a master's degree in East Asian studies from Harvard University and a bachelor's degree in journalism from Northwestern University. Webster also writes the independent Transpacifica e-mail newsletter.

Graham Webster, Research Scholar

Graham Webster is a research scholar and editor in chief of the DigiChina Project at the Stanford University Cyber Policy Center and a China digital economy fellow at New America. Based at Stanford, he leads an inter-organization network of specialists to produce analysis and translation on China’s digital policy developments. He researches, publishes, and speaks to diverse audiences on the intersection of U.S.–China relations and advanced technology.

From 2012 to 2017, Webster worked for Yale Law School as a senior fellow and lecturer responsible for the Paul Tsai China Center’s Track 2 dialogues between the United States and China, co-teaching seminars on contemporary China and Chinese law and policy, leading programming on cyberspace in U.S.–China relations, and writing extensively on the South China Sea and the law of the sea. While with Yale, he was a Yale affiliated fellow with the Yale Information Society Project, a visiting scholar at China Foreign Affairs University, and a Transatlantic Digital Debates fellow with the Global Public Policy Institute and New America.

He was previously an adjunct instructor teaching East Asian politics at New York University, a public policy and communications officer at the EastWest Institute, a Beijing-based journalist writing on technology in China for CNET News and other outlets, and an editor at the Center for American Progress. He has worked as a consultant to Privacy International, the National Bureau of Asian Research, the Clinton Global Initiative, and the Natural Resources Defense Council’s China Program.

Webster writes for both specialist and general audiences, including for the MIT Technology Review, Foreign Affairs, Slate, The Washington Post’s Monkey Cage, BBC Chinese, Lawfare, ChinaFile, The Diplomat, Fortune, ArtAsiaPacific, and Logic magazine. He has been quoted by The Wall Street Journal, The Washington Post, Reuters, Bloomberg News, Wired, Caixin, and Quartz; spoken to NPR and BBC World Service radio; and appeared on BBC World News, CBSN, Channel News Asia, and Deutsche Welle television. Webster has testified before the U.S.–China Economic and Security Review Commission and speaks regularly at universities and conferences in North America, East Asia, and Europe.

Webster holds a B.S. in journalism and international studies from Northwestern University and an A.M. in East Asian studies from Harvard University. He took Ph.D. coursework in political science at the University of Washington and language training at Tsinghua University, Peking University, Stanford University, and Kanda University of International Studies.

Editor-in-Chief, DigiChina
-

Multilateral Negotiations on ICTs (information and communications technologies) and International Security: Process and Prospects for the UN Group of Government Experts and the UN Open-Ended Working Group

Abstract: The intent of this seminar is to provide an update on recent events at the UN relevant to international discussions of cybersecurity (and a primer of sorts on current UN processes for addressing this topic).

In 2018, UN Member States decided to establish two concurrent negotiations with nearly identical mandates on the international security dimension of ICTs—a sixth limited-membership UN Group of Governmental Experts (GGE) and an Open-Ended Working Group (OEWG) open to all governments. How did this happen? Are they competing or complementary endeavors? Is it likely that one will be able to bridge the longstanding divides on how international law applies to cyberspace, or to agree by consensus on additional norms of responsible State behavior? What would be a good outcome of each process? And how do these negotiations fit into the wider UN ecosystem, including the follow-up to the Secretary-General’s High Level Panel on Digital Cooperation?

About the Speaker: Kerstin Vignard is an international security policy professional with nearly 25 years’ experience at the United Nations, with a particular interest in the nexus of international security policy and technology. Vignard is Deputy to the Director at UNIDIR, currently on temporary assignment leading UNIDIR’s team supporting the Chairmen of the latest Group of Governmental Experts (GGE) on Cyber Security and the Open-Ended Working Group. She has led UNIDIR’s teams supporting four previous cyber GGEs. From 2013 to 2018, she initiated and led UNIDIR’s work on the weaponization of increasingly autonomous technologies, and she is the co-Principal Investigator of a CIFAR AI & Society grant examining potential regulatory approaches for security and defence applications of AI.


Despite pressure from President Donald Trump and Attorney General William Barr, Apple continues to stand its ground and refuses to re-engineer iPhones so law enforcement can unlock the devices. Apple has maintained that it has done everything required by law and that creating a "backdoor" would undermine cybersecurity and privacy for iPhone users everywhere.

Apple is right to stand firm in its position that building a "backdoor" could put user data at risk.

At its most basic, encryption is the act of converting plaintext (like a credit card number) into unintelligible ciphertext using a very large, random number called a key. Anyone with the key can convert the ciphertext back to plaintext. Anyone without the key cannot, meaning that even if they acquire the ciphertext, it should be computationally infeasible for them to recover the underlying plaintext.
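As a concrete, simplified illustration of that property, here is a short Python sketch using the third-party `cryptography` package (pip install cryptography). This example is ours, not code from the commentary.

```python
# Minimal symmetric-encryption sketch, assuming the `cryptography` package.
from cryptography.fernet import Fernet

key = Fernet.generate_key()          # the large random secret ("the key")
cipher = Fernet(key)

plaintext = b"4111 1111 1111 1111"   # e.g., a credit card number
ciphertext = cipher.encrypt(plaintext)

# Anyone holding `key` can invert the transformation...
assert cipher.decrypt(ciphertext) == plaintext
# ...while the ciphertext alone is unintelligible.
print(ciphertext)
```

A law-enforcement "backdoor" would amount to a second way of recovering the plaintext without this key, which is the kind of weakening the commentary warns would undermine security and privacy for all users.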

Full Text at CNN
Publication Type: Commentary
Author: Andrew Grotto

Tracy Navichoque is the Program Manager at the Global Digital Policy Incubator (GDPi). Before coming to Stanford, Tracy was the Membership and Education Manager at the Los Angeles World Affairs Council. She holds an MA in Public Diplomacy from USC and a BA in History and International Studies from Northwestern University. She was a Fulbright Scholar to Uruguay, where she worked in education and public affairs at the binational center in Montevideo. She serves as a Gilman International Scholarship Alumni Ambassador.

Program Manager at the Global Digital Policy Incubator (GDPi)