George G.C. Parker Professor of Finance and Economics, Stanford Graduate School of Business
Director of the Corporations and Society Initiative, Stanford Graduate School of Business
Director of the Program on Capitalism and Democracy, Center on Democracy, Development and the Rule of Law
Senior Fellow, Stanford Institute for Economic Policy Research
Senior Fellow (by courtesy), Freeman Spogli Institute for International Studies

Anat R. Admati is the George G.C. Parker Professor of Finance and Economics at the Stanford University Graduate School of Business (GSB), a Faculty Director of the GSB Corporations and Society Initiative, and a senior fellow at the Stanford Institute for Economic Policy Research. She has written extensively on information dissemination in financial markets, portfolio management, financial contracting, corporate governance and banking. Admati’s current research, teaching and advocacy address the complex interactions between business, law, and policy, with a particular focus on governance and accountability.

Since 2010, Admati has been active in the policy debate on financial regulations. She is the co-author, with Martin Hellwig, of the award-winning and highly acclaimed book The Bankers’ New Clothes: What’s Wrong with Banking and What to Do about It (Princeton University Press, 2013; bankersnewclothes.com). In 2014, she was named by Time Magazine as one of the 100 most influential people in the world and by Foreign Policy Magazine as among 100 global thinkers.

Admati holds a BSc from the Hebrew University; an MA, MPhil and PhD from Yale University; and an honorary doctorate from the University of Zurich. She is a fellow of the Econometric Society, the recipient of multiple fellowships, research grants, and paper awards, and a past board member of the American Finance Association. She has served on a number of editorial boards and is a member of the FDIC’s Systemic Resolution Advisory Committee, a former member of the CFTC’s Market Risk Advisory Committee, and a former visiting scholar at the International Monetary Fund.


Marietje Schaake

 

  


 

The European Union is often called a ‘super-regulator’, especially when it comes to data-protection and privacy rules. Having seen European lawmaking from up close, in all its complexity, I have often considered this label an exaggeration. Yes, the European Union frequently takes the first steps to ensure that principles remain protected even as digitization disrupts. However, the gap between the speed at which technology evolves and the pace of democratic lawmaking leads to perpetual mismatches.

Even the famous, or infamous, General Data Protection Regulation does not meet many essential regulatory needs of the moment. The mainstreaming of Artificial Intelligence, in particular, poses new challenges to the protection of rights and the sustaining of the rule of law. In its White Paper on Artificial Intelligence, as well as in its Data Strategy, the European Commission references the common good, the public interest and societal needs, rather than emphasizing regulation of the digital market alone. These are welcome steps in acknowledging the depth and scope of technology’s impact and in defining harms not just in economic terms. It remains to be seen how the visions articulated in the White Paper and the Strategy will translate into concrete legislation.

One proposal to make concrete improvements to legal frameworks is outlined by Martin Tisné in The Data Delusion. He highlights the need to update legal privacy standards to better reflect the harms incurred through collective data analysis, as opposed to individual privacy violations. Martin makes a clear case for addressing the discrepancy between the profit models that benefit from grouped data and the ability of any individual to prove the harms caused to his or her rights.

The lack of transparency into the inner workings of algorithmic data processing further hinders the path to much-needed accountability for the powerful technology businesses that operate growing parts of our information architecture and process its data flows.

While the EU takes the lead in setting values-based standards and rules for the digital layer of our societies and economies, a lot of work remains to be done.

Marietje Schaake: Martin, in your paper you address the gap between the benefits that technology companies reap from collective data processing and the harms to society. You point to historical reasons for the individual privacy protections in European laws. Do you consider the European Union best positioned to address these legal shortcomings, especially as you point out that some opportunities to do so were missed in the GDPR?

Martin Tisné: Europe is well positioned, but perhaps not for the reasons we traditionally think of (a strong privacy tradition, empowered regulators). Individual privacy alone is a necessary but not sufficient foundation on which to build the future of AI regulation. And whilst much is made of European regulators, the GDPR has been hobbled by the lack of funding and capacity of data protection commissioners across Europe. What Europe does have, though, is a legal, political and societal tradition of thinking about the public interest and the common good, and how these are balanced against individual interests. This is where we should innovate, taking inspiration from environmental legislation such as the Urgenda Climate Case against the Dutch Government, which established that the government had a legal duty to prevent dangerous climate change in the name of the public interest.

And Europe also has a lot to learn from other political and legal cultures. Part of the future of data regulation may come from the indigenous data rights movement, with its greater emphasis on the societal and group impacts of data, or from the concept of Ubuntu ethics, which assigns community and personhood to all people.

Schaake: What scenario do you foresee in 10 years if collective harms are not addressed in updated laws?

Tisné: I worry we will see two impacts. The first is a continuation of what we are seeing now: negative impacts of digital technologies on discrimination, voting rights, privacy and consumers. As people become increasingly aware of the problem, there will be a corresponding increase in legal challenges. We’re seeing this already, for example, with the Lloyd class action case against Google for collecting iPhone data. But I worry these will fail to stick and have lasting impact because of the obligation to have these cases turn on one person’s, or a class of people’s, individual experiences. It is very hard for individuals to seek remedy for collective harms, as opposed to personal privacy invasions. So unless we solve the issue I raise in the paper – the collective impact of AI and automation – these harms will continue to fuel polarization, discrimination on the basis of age, gender and many other aspects of our lives, and the further strengthening of populist regimes.

I also worry about the ways in which algorithms will optimize on the basis of seemingly random classifications (e.g. “people who wear blue shirts, get up early on Saturday mornings, and were geo-located in a particular area of town at a particular time”). These may be proxies for protected characteristics (age, gender reassignment, disability, race, religion, sex, marriage, pregnancy/maternity, sexual orientation) and provide grounds for redress. They may also not be and sow the seeds of future discrimination and harms. Authoritarian rulers are likely to take advantage of the seeming invisibility of those data-driven harms to further silence their opponents. How can I protect myself if I don’t know the basis on which I am being discriminated against or targeted? 

Schaake: How do you reflect on the difference in speed between technological innovations and democratic lawmaking? Some people imply this will give authoritarian regimes an advantage in setting global standards and rules. What are your thoughts on ensuring democratic governments speed up? 

Tisné: Democracies cannot afford to be outpaced by technological innovation, constantly fighting yesterday’s wars. Our laws have not changed to reflect technology that extracts value from collective data, and they need to catch up. A lot of the problems stem from the fact that in government (as in companies), the people responsible for enforcement are separated from those with the technical understanding. The solution lies in much better translation between technology, policy and the needs of the public.

An innovation- and accountability-led government must involve and empower the public in co-creating policies, above and beyond the existing rules that engage individuals (consent forms, etc.). In the paper I propose a Public Interest Data Bill that addresses this need: the rules of the digital highway negotiated between the public and regulators, and between private data consumers and data generators. Specifically: clear transparency, public participation and realistic sanctions when things go wrong.

This is where democracies should hone their advantage over authoritarian regimes – using such an approach as the basis for setting global standards and best practices (e.g. affected communities providing input into algorithmic impact assessments). 

Schaake: The protection of privacy is what sets democratic societies apart from authoritarian ones. How likely is it that we will see an effort between democracies to set legal standards across borders together? Can we overcome the political tensions across the Atlantic, and strengthen democratic alliances globally?

Tisné: I remain a big supporter of international cooperation. I helped found the Open Government Partnership ten years ago, which remains the main forum for 79 countries to develop innovative open government reforms jointly with the public. Its basic principles hold true: involve global south and global north countries with equal representation, bring civil society in jointly with government from the outset, seek out and empower reformers within government (they exist, regardless of who is in power in the given year), and go local to identify exciting innovations. 

If we heed those principles we can set legal standards by learning from open data and civic technology reforms in Taiwan, experiments with data trusts in India, legislation to hold algorithms accountable in France; and by identifying and working with the individuals driving those innovations, reformers such as Audrey Tang in Taiwan, Katarzyna Szymielewicz in Poland, and Henri Verdier in France. 

These reformers need a home, a base from which to influence policymakers and technologists, and to get the people responsible for enforcement working with those with the technical understanding. The Global Partnership on Artificial Intelligence may be that home, but these are early days; it needs to be agile enough to work with the private sector and civil society as well as with governments and the international system. I remain hopeful.

 

 


Protecting Individual Data Isn’t Enough When the Harm Is Collective: A Q&A with Marietje Schaake and Martin Tisné on his new paper, The Data Delusion.


The Data Delusion: Protecting Individual Data Isn't Enough When The Harm is Collective

Author: Martin Tisné, Managing Director, Luminate

Editor: Marietje Schaake, International Policy Director, Cyber Policy Center

The threat of digital discrimination

On March 17, 2018, questions about data privacy exploded with the scandal surrounding the previously little-known consulting firm Cambridge Analytica. Lawmakers are still grappling with how to update laws to counter the harms of big data and AI.

In the spring of 2020, the Covid-19 pandemic brought questions about sufficient legal protections back to the public debate, with urgent warnings about the privacy implications of contact tracing apps. But the surveillance consequences of the pandemic’s aftermath are much bigger than any app: transport, education, health systems and offices are being turned into vast surveillance networks. If we only consider individual trade-offs between privacy sacrifices and alleged health benefits, we will miss the point. The collective nature of big data means people are more impacted by other people’s data than by data about themselves. Like climate change, the threat is both societal and personal.

In the era of big data and AI, people can suffer because of how the sum of individual data is analysed and sorted into groups by algorithms. Novel forms of collective data-driven harm are appearing as a result: online housing, job and credit ads that discriminate on the basis of race and gender; women disqualified from jobs on the basis of gender; and foreign actors targeting right-leaning groups and pulling them toward the far right. Our public debate, governments and laws are ill-equipped to deal with these collective, as opposed to individual, harms.

Read the full paper >

 
Tech companies are not doing enough to fight hate on their digital social platforms. But what can be done to encourage social platforms to provide more support to people who are targets of racism and hate, and to increase safety for private groups on the platform?

Join host Marietje Schaake, International Policy Director at the Cyber Policy Center, as she brings together experts from the field to discuss what can be done to encourage platforms like Facebook to stop the spread of hate and disinformation.

The event is open to the public, but registration is required.

Marietje Schaake: Marietje Schaake is the international policy director at Stanford University’s Cyber Policy Center and an international policy fellow at Stanford’s Institute for Human-Centered Artificial Intelligence. She was named President of the Cyber Peace Institute. Between 2009 and 2019, Marietje served as a Member of the European Parliament for the Dutch liberal democratic party, where she focused on trade, foreign affairs and technology policy. Marietje is affiliated with a number of non-profits, including the European Council on Foreign Relations and the Observer Research Foundation in India, and writes a monthly column for the Financial Times and a bi-monthly column for the Dutch newspaper NRC.

Jessica Gonzalez: An accomplished attorney and racial-justice advocate, Jessica works closely with the executive team and key stakeholders to develop and execute strategies to advance Free Press’ mission. A former Lifeline recipient, Jessica has helped fend off grave Trump administration cuts to the program, which helps provide phone-and-internet access for low-income people. She was part of the legal team that overturned a Trump FCC decision blessing runaway media consolidation. She also co-founded Change the Terms, a coalition of more than 50 civil- and digital-rights groups that works to disrupt online hate. Previously, Jessica was the executive vice president and general counsel at the National Hispanic Media Coalition, where she led the policy shop and helped coordinate campaigns against racist and xenophobic media programming. Prior to that she was a staff attorney and teaching fellow at Georgetown Law’s Institute for Public Representation. Jessica has testified before Congress on multiple occasions, including during a Net Neutrality hearing in the House while suffering from acute morning sickness, and during a Senate hearing while eight months pregnant to advocate for affordable internet access.

David Sifry: As Vice President of the Center for Technology and Society (CTS), Dave Sifry leads a team of innovative technologists, researchers, and policy experts developing proactive solutions and producing cutting-edge research to protect vulnerable populations. In its efforts to advocate change at all levels of society, CTS serves as a vital resource to legislators, journalists, universities, community organizations, tech platforms and anyone who has been a target of online hate and harassment. Dave joined ADL in 2019 after a storied career as a technology entrepreneur and executive. He founded six companies including Linuxcare and Technorati, and served in executive roles at companies including Lyft and Reddit. In addition to his entrepreneurial work, Dave was selected as a Technology Pioneer at The World Economic Forum, and is an advisor and mentor for a select group of companies and startup founders. As the son of a hidden child of the Holocaust, the core values and mission exemplified by ADL were instilled in him at an early age.

CDDRL Postdoctoral Scholar, 2020-21

An Egyptian-Canadian raised in Saudi Arabia, the UAE, Qatar, and Canada, Salma Mousa received her PhD in Political Science from Stanford University in 2020. A scholar of comparative politics, her research focuses on migration, conflict, and social cohesion.

Salma's dissertation investigates strategies for building trust and tolerance after war. Leveraging field experiments among Iraqis displaced by ISIS, American schoolchildren, and British soccer fans, she shows how intergroup contact can change real-world behaviors — even if underlying prejudice remains unchanged. A secondary research agenda tackles the challenge of integrating refugees in the United States. Combining a meta-analytic review, ethnographic fieldwork, and field experiments with resettlement agencies, this project identifies risk factors and promising policies for new arrivals.

Salma has held fellowships at the U.S. Institute of Peace, Stanford’s Immigration Policy Lab, the Freeman Spogli Institute, the Stanford Center for International Conflict and Negotiation, the McCoy Center for Ethics in Society, and the Stanford Center on Philanthropy and Civil Society. Her work has been supported by the Abdul Latif Jameel Poverty Action Lab (JPAL), the Innovations for Poverty Action Lab (IPA), the King Center on Global Development, the Institute for Research in the Social Sciences (IRiSS), the Program on Governance and Local Development (GLD), and the Abbasi Program in Islamic Studies. Her research has been featured by The Economist, BBC, and Der Spiegel, on the front page of the Times of London, and on PBS NOVA.


Tech and Wellbeing in the Era of Covid-19
Please join the Cyber Policy Center for Tech & Wellbeing in the Era of Covid-19 with Jeff Hancock from Stanford University, Amy Orben from Emmanuel College, and Erica Pelavin, Co-Founder of My Digital TAT2, in conversation with Kelly Born, Executive Director of the Cyber Policy Center. The session will explore the risks and opportunities technologies pose to users’ wellbeing; what we know about the impact of technology on mental health, particularly for teens; how the current pandemic may change our perceptions of technology; and ways in which teens are using apps, influencers and platforms to stay connected under Covid-19.

 

Dr. Amy Orben is a College Research Fellow at Emmanuel College and the MRC Cognition and Brain Sciences Unit. Her work using large-scale datasets to investigate social media use and teenage mental health has been published in a range of leading scientific journals. The results have called into question many long-held assumptions about the potential risks and benefits of ‘screen time’. Alongside her research, Amy campaigns for improved statistical methodology in the behavioural sciences and the adoption of more transparent and open scientific practices, having co-founded the global ReproducibiliTea initiative. Amy also regularly contributes to media and policy debate, having recently given evidence to the UK Commons Science and Technology Select Committee and various governmental investigations.

Jeff Hancock is founding director of the Stanford Social Media Lab and is a Professor in the Department of Communication at Stanford University. Professor Hancock and his group work on understanding psychological and interpersonal processes in social media. The team specializes in using computational linguistics and experiments to understand how the words we use can reveal psychological and social dynamics, such as deception and trust, emotional dynamics, intimacy and relationships, and social support. Recently Professor Hancock has begun work on understanding the mental models people have about algorithms in social media, as well as working on the ethical issues associated with computational social science.

Erica Pelavin is an educator, public speaker, and Co-Founder and Director of Teen Engagement at My Digital TAT2. Working from a strength-based perspective, Erica has expertise in bullying prevention, relational aggression, digital safety, social-emotional learning, and conflict resolution. Dr. Pelavin has a passion for helping young people develop the skills to become their own advocates and cares deeply about helping school communities foster empathy and respect. In her role at My Digital TAT2, Erica leads all programming for high schoolers, including the youth-led podcast Media in the Middle, the teen advisory boards and an annual summer internship program. Her work with teens directly impacts and informs the developmental school-based curriculum. Erica is also a high school counselor at Eastside College Prep in East Palo Alto, CA.

Watch the recorded session


From the Stanford Institute for Human-Centered AI (HAI) blog:

More than 25 governments around the world, including those of the United States and across the European Union, have adopted elaborate national strategies on artificial intelligence — how to spur research; how to target strategic sectors; how to make AI systems reliable and accountable.

Yet a new analysis finds that almost none of these declarations provide more than a polite nod to human rights, even though artificial intelligence has potentially big impacts on privacy, civil liberties, racial discrimination, and equal protection under the law.

That’s a mistake, says Eileen Donahoe, executive director of Stanford’s Global Digital Policy Incubator, which produced the report in conjunction with a leading international digital rights organization called Global Partners Digital.

Read More (at the HAI blog)


In the rush to develop national strategies on artificial intelligence, a new report finds, most governments pay lip service to civil liberties.


Brett McGurk discusses the broad challenges of foreign policymaking in an interview with Rodger Shanahan from The Interpreter.

Watch interview at The Interpreter


US presidents tend to set maximalist objectives without necessarily providing the resourcing or laying the necessary diplomatic foundations to achieve such goals.


Join the Cyber Policy Center on June 17 at 10am Pacific Time for Patterns and Potential Solutions to Disinformation Sharing, Under COVID-19 and Beyond, with Josh Tucker, David Lazer and Evelyn Douek.

The session will explore which types of readers are most susceptible to fake news, whether crowdsourced fact-checking by ordinary citizens works and whether it can reduce the prevalence of false news in the information ecosystem. Speakers will also look at patterns of (mis)information sharing regarding COVID-19: Who is sharing what type of information? How has this varied over time? How much misinformation is circulating, and among whom? Finally, we'll explore how social media platforms are responding to COVID disinformation, how that differs from responses to political disinformation, and what we think they could be doing better.

Evelyn Douek is a doctoral candidate and lecturer on law at Harvard Law School, and Affiliate at the Berkman Klein Center For Internet & Society. Her research focuses on online speech governance, and the various private, national and global proposals for regulating content moderation.

David Lazer is a professor of political science and computer and information science and the co-director of the NULab for Texts, Maps, and Networks. Before joining the Northeastern faculty in fall 2009, he was an associate professor of public policy at Harvard’s John F. Kennedy School of Government and director of its Program on Networked Governance. 

Joshua Tucker is Professor of Politics, Director of the Jordan Center for the Advanced Study of Russia, Co-Director of the NYU Social Media and Political Participation (SMaPP) lab, Affiliated Professor of Russian and Slavic Studies, and Affiliated Professor of Data Science.

The event is open to the public, but registration is required.

Online, via Zoom

May Wong

In combating poverty, like any fight, it’s good to know the locations of your targets.

That’s why Stanford scholars Marshall Burke, David Lobell and Stefano Ermon have spent the past five years leading a team of researchers to home in on an efficient way to find and track impoverished zones across Africa.

The powerful tool they’ve developed combines free, publicly accessible satellite imagery with artificial intelligence to estimate the level of poverty across African villages and changes in their development over time. By analyzing past and current data, the measurement tool could provide helpful information to organizations, government agencies and businesses that deliver services and necessities to the poor.

Details of their undertaking were unveiled in the May 22 issue of Nature Communications.

“Our big motivation is to better develop tools and technologies that allow us to make progress on really important economic issues. And progress is constrained by a lack of ability to measure outcomes,” said Burke, a faculty fellow at the Stanford Institute for Economic Policy Research (SIEPR) and an assistant professor of earth system science in the School of Earth, Energy & Environmental Sciences (Stanford Earth). “Here’s a tool that we think can help.”

Lobell, a senior fellow at SIEPR and a professor of Earth system science at Stanford Earth, says looking back is critical to identifying trends and factors to help people escape from poverty.

“Amazingly, there hasn’t really been any good way to understand how poverty is changing at a local level in Africa,” said Lobell, who is also the director of the Center on Food Security and the Environment and the William Wrigley Fellow at the Stanford Woods Institute for the Environment. “Censuses aren’t frequent enough, and door-to-door surveys rarely return to the same people. If satellites can help us reconstruct a history of poverty, it could open up a lot of room to better understand and alleviate poverty on the continent.”

The measurement tool uses satellite imagery both from the nighttime and daytime. At night, lights are an indicator of development, and during the day, images of human infrastructure such as roads, agriculture, roofing materials, housing structures and waterways, provide characteristics correlated with development.

Then the tool applies the technology of deep learning – computing algorithms that constantly train themselves to detect patterns – to create a model that analyzes the imagery data and forms an index for asset wealth, an economic component commonly used by surveyors to measure household wealth in developing nations.

The researchers tested the measuring tool’s accuracy for about 20,000 African villages that had existing asset wealth data from surveys, dating back to 2009. They found that it performed well in gauging the poverty levels of villages over different periods of time, according to their study.
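The pipeline described above — image-derived features, a model fit against survey-based asset wealth, and validation on surveyed villages — can be sketched in miniature. This is not the authors' code: the real system trains a deep convolutional network on raw day and night satellite imagery, whereas this sketch assumes the imagery has already been reduced to a few hypothetical named features (nighttime light intensity, road density, roofing score) and fits a simple linear model on synthetic data, evaluating it on a held-out set the way the study validates against survey villages.

```python
# Minimal stand-in for the satellite-to-wealth pipeline, on synthetic data.
# Feature names and coefficients are illustrative, not from the study.
import numpy as np

rng = np.random.default_rng(0)
n = 20_000  # roughly the number of surveyed villages mentioned in the study

# Hypothetical image-derived features per village:
# [night-light intensity, road density, roofing-material score]
X = rng.normal(size=(n, 3))

# Synthetic "true" asset wealth: a noisy combination of those features
y = 0.6 * X[:, 0] + 0.3 * X[:, 1] + 0.1 * X[:, 2] + rng.normal(scale=0.3, size=n)

# Split into training villages and held-out validation villages
X_tr, X_va = X[: n // 2], X[n // 2:]
y_tr, y_va = y[: n // 2], y[n // 2:]

# Ordinary least squares fit (a deep model's final regression layer is analogous)
A_tr = np.c_[X_tr, np.ones(len(X_tr))]  # add intercept column
w, *_ = np.linalg.lstsq(A_tr, y_tr, rcond=None)

# Predict asset wealth for held-out villages and score with R^2
pred = np.c_[X_va, np.ones(len(X_va))] @ w
r2 = 1 - np.sum((y_va - pred) ** 2) / np.sum((y_va - y_va.mean()) ** 2)
print(f"held-out R^2: {r2:.2f}")
```

The held-out R^2 step mirrors the study's key claim: the model is judged only on villages it never saw during training, which is what makes the poverty estimates credible at scale.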

Here, Burke – who is also a center fellow at the Stanford Woods Institute for the Environment and the Freeman Spogli Institute for International Studies – discusses the making of the tool and its potential to help improve the well-being of the world’s poor.

 

Why are you excited about this new technological resource?

For the first time, this tool demonstrates that we can measure economic progress and understand poverty interventions at both a local level and a broad scale. It works across Africa, across a lot of different years. It works pretty darn well, and it works in a lot of very different types of countries.

 

Can you give examples of how this new tool would be used?

If we want to understand the effectiveness of an anti-poverty program, or if an NGO wants to target a specific product to specific types of individuals, or if a business wants to understand where a market’s growing – all of those require data on economic outcomes. In many parts of the world, we just don’t have those data. Now we’re using data from across sub-Saharan Africa and training these models to take in all the data to measure for specific outcomes.

 

How does this new study build upon your previous work?

Our initial poverty-mapping work, published in 2016, was on five countries using one year of data. It relied on costly, high-resolution imagery at a much smaller, pilot scale. Now this work covers about two dozen countries – about half of the countries in Africa – using many more years of high-dimensional data. This provided underlying training datasets to develop the measurement models and allowed us to validate whether the models are making good poverty estimates.

We’re confident we can apply this technology and this approach to get reliable estimates for all the countries in Africa.

A key difference compared to the earlier work is now we’re using completely publicly available satellite imagery that goes back in time – and it’s free, which I think democratizes this technology. And we’re doing it at a comprehensive, massive spatial scale.

 

How do you use satellite imagery to get poverty estimates?

We’re building on rapid developments in the field of computer science – of deep learning – that have happened in the last five years and that have really transformed how we extract information from images. We’re not telling the machine what to look for in images; instead, we’re just telling it, “Here’s a rich place. Here is a poor place. Figure it out.”

The computer is clearly picking out urban areas, agricultural areas, roads, waterways – features in the landscape that you might think would have some predictive power in being able to separate rich areas from poor areas. The computer says, “I found this pattern,” and we can then assign semantic meaning to it.

These broader characteristics, examined at the village level, turn out to be highly related to the average wealth of the households in that region.

 

What’s next?

Now that we have these data, we want to use them to try to learn something about economic development. This tool enables us to address questions we were unable to ask a year ago because now we have local-level measurements of key economic outcomes at broad, spatial scale and over time.

We can evaluate why some places are doing better than other places. We can ask: What do patterns of growth in livelihoods look like? Is most of the variation between countries or within countries? If there’s variation within a country, that already tells us something important about the determinants of growth. It’s probably something going on locally.

I’m an economist, so those are the sorts of questions that get me excited. The technological development is not an end in itself. It’s an enabler for the social science that we want to do.

In addition to Burke, Lobell and Ermon, a professor of computer science, the co-authors of the published study are Christopher Yeh and Anthony Perez, both computer science graduate students and research assistants at the Stanford King Center on Global Development; Anne Driscoll, a research data analyst, and George Azzari, an affiliated scholar, both at the Center on Food Security and the Environment at Stanford; and Zhongyi Tang, a former research data analyst at the King Center. This research was supported by the Data for Development initiative at the Stanford King Center on Global Development and the USAID Bureau of Food Security. To read all stories about Stanford science, subscribe to the biweekly Stanford Science Digest.

Media Contacts

Adam Gorlick, Stanford Institute for Economic Policy Research: (650) 724-0614, agorlick@stanford.edu


A new tool combines publicly accessible satellite imagery with AI to track poverty across African villages over time.
