Riana Pfefferkorn

When we’re faced with a video recording of an event—such as an incident of police brutality—we can generally trust that the event happened as shown in the video. But that may soon change, thanks to the advent of so-called “deepfake” videos that use machine learning technology to show a real person saying and doing things they haven’t.

This technology poses a particular threat to marginalized communities. If deepfakes cause society to move away from the current “seeing is believing” paradigm for video footage, that shift may negatively impact individuals whose stories society is already less likely to believe. The proliferation of video recording technology has fueled a reckoning with police violence in the United States, recorded by bystanders and body-cameras. But in a world of pervasive, compelling deepfakes, the burden of proof to verify authenticity of videos may shift onto the videographer, a development that would further undermine attempts to seek justice for police violence. To counter deepfakes, high-tech tools meant to increase trust in videos are in development, but these technologies, though well-intentioned, could end up being used to discredit already marginalized voices. 

(Content Note: Some of the links in this piece lead to graphic videos of incidents of police violence. Those links are denoted in bold.)

Recent police killings of Black Americans caught on camera have inspired massive protests that have filled U.S. streets in the past year. Those protests endured for months in Minneapolis, where former police officer Derek Chauvin was convicted this week in the murder of George Floyd, a Black man. During Chauvin’s trial, another police officer killed Daunte Wright just outside Minneapolis, prompting additional protests as well as the officer’s resignation and arrest on second-degree manslaughter charges. She supposedly mistook her gun for her Taser—the same mistake alleged in the fatal shooting of Oscar Grant in 2009, by an officer whom a jury later found guilty of involuntary manslaughter (but not guilty of a more serious charge). All three of these tragic deaths—George Floyd, Daunte Wright, Oscar Grant—were documented in videos that were later used (or, in Wright’s case, seem likely to be used) as evidence at the trials of the police officers responsible. Both Floyd’s and Wright’s deaths were captured by the respective officers’ body-worn cameras, and multiple bystanders with cell phones recorded the Floyd and Grant incidents. Some commentators credit a 17-year-old Black girl’s video recording of Floyd’s death for making Chauvin’s trial happen at all.

The growth of the movement for Black lives in the years since Grant’s death in 2009 owes much to the rise in the availability, quality, and virality of bystander videos documenting police violence, but this video evidence hasn’t always been enough to secure convictions. From Rodney King’s assailants in 1992 to Philando Castile’s shooter 25 years later, juries have often declined to convict police officers even in cases where wanton police violence or killings are documented on video. Despite their growing prevalence, police bodycams have had mixed results in deterring excessive force or impelling accountability. That said, bodycam videos do sometimes make a difference, helping to convict officers in the killings of Jordan Edwards in Texas and Laquan McDonald in Chicago. Chauvin’s defense team pitted bodycam footage against the bystander videos employed by the prosecution, and lost.

What makes video so powerful? Why does it spur crowds to take to the streets and lawyers to showcase it in trials? It’s because seeing is believing. Shot from angles that differ from the officers’ point of view, bystander footage paints a fuller picture of what happened. Two people (on a jury, say, or watching a viral video online) might interpret a video two different ways. But they’ve generally been able to take for granted that the footage is a true, accurate record of something that really happened.

That might not be the case for much longer. It’s now possible to use artificial intelligence to generate highly realistic “deepfake” videos showing real people saying and doing things they never said or did, such as the recent viral TikTok videos depicting an ersatz Tom Cruise. You can also find realistic headshots of people who don’t exist at all on the creatively-named website thispersondoesnotexist.com. (There’s even a cat version.) 

While using deepfake technology to invent cats or impersonate movie stars might be cute, the technology has more sinister uses as well. In March, the Federal Bureau of Investigation issued a warning that malicious actors are “almost certain” to use “synthetic content” in disinformation campaigns against the American public and in criminal schemes to defraud U.S. businesses. The breakneck pace of deepfake technology’s development has prompted concerns that techniques for detecting such imagery will be unable to keep up. If so, the high-tech cat-and-mouse game between creators and debunkers might end in a stalemate at best. 

If it becomes impossible to reliably prove that a fake video isn’t real, a more feasible alternative might be to focus instead on proving that a real video isn’t fake. So-called “verified at capture” or “controlled-capture” technologies attach additional metadata to imagery at the moment it’s taken, to verify when and where the footage was recorded and reveal any attempt to tamper with the data. The goal of these technologies, which are still in their infancy, is to ensure that an image’s integrity will stand up to scrutiny. 
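
To make the idea concrete, here is a minimal sketch of the general approach such tools take: hashing the footage together with capture metadata and signing the result so that later edits are detectable. It illustrates the concept only, not any vendor's actual implementation; the Python cryptography library's Ed25519 keys stand in for the hardware-backed device keys a real controlled-capture system would use, and the field names are hypothetical.

```python
# Conceptual sketch of "verified at capture": hash the footage plus capture
# metadata, sign it with a device key, and check later that nothing changed.
# Illustrative only; real systems use hardware-backed keys and richer metadata.
import hashlib, json, time
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey, Ed25519PublicKey,
)
from cryptography.exceptions import InvalidSignature

def sign_capture(video_bytes: bytes, key: Ed25519PrivateKey,
                 lat: float, lon: float) -> dict:
    """Bind the footage to when and where it was recorded."""
    record = {
        "video_sha256": hashlib.sha256(video_bytes).hexdigest(),
        "captured_at": time.time(),            # when
        "location": {"lat": lat, "lon": lon},  # where
    }
    payload = json.dumps(record, sort_keys=True).encode()
    return {"record": record, "signature": key.sign(payload).hex()}

def verify_capture(video_bytes: bytes, claim: dict,
                   pub: Ed25519PublicKey) -> bool:
    """Any later edit to the file or its metadata invalidates the claim."""
    if hashlib.sha256(video_bytes).hexdigest() != claim["record"]["video_sha256"]:
        return False
    payload = json.dumps(claim["record"], sort_keys=True).encode()
    try:
        pub.verify(bytes.fromhex(claim["signature"]), payload)
        return True
    except InvalidSignature:
        return False
```

Anyone checking such a claim would only need the device's public key; the question the rest of this piece raises is who can afford the devices that produce these claims in the first place.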

Photo and video verification technology holds promise for confirming what’s real in the age of “fake news.” But it’s also cause for concern. In a society where guilty verdicts for police officers remain elusive despite ample video evidence, is even more technology the answer? Or will it simply reinforce existing inequities? 

The “ambitious goal” of adding verification technology to smartphone chipsets necessarily entails increasing the cost of production. Once such phones start to come onto the market, they will be more expensive than lower-end devices that lack this functionality. And not everyone will be able to afford them. Black Americans and poor Americans have lower rates of smartphone ownership than whites and high earners, and are more likely to own a “dumb” cell phone. (The same pattern holds true with regard to educational attainment and urban versus rural residence.) Unless and until verification technology is baked into even the most affordable phones, it risks replicating existing disparities in digital access. 

That has implications for police accountability, and, by extension, for Black lives. Primed by societal concerns about deepfakes and “fake news,” juries may start expecting high-tech proof that a video is real. That might lead them to doubt the veracity of bystander videos of police brutality if they were captured on lower-end phones that lack verification technology. Extrapolating from current trends in phone ownership, such bystanders are more likely to be members of marginalized racial and socioeconomic groups. Those are the very people who, as witnesses in court, face an uphill battle in being afforded credibility by juries. That bias, which reared its ugly head again in the Chauvin trial, has long outlived the 19th-century rules that explicitly barred Black (and other non-white) people from testifying for or against white people on the grounds that their race rendered them inherently unreliable witnesses. 

In short, skepticism of “unverified” phone videos may compound existing prejudices against the owners of those phones. That may matter less in situations where a diverse group of numerous eyewitnesses record a police brutality incident on a range of devices. But if there is only a single bystander witness to the scene, the kind of phone they own could prove significant.

The advent of mobile devices empowered Black Americans to force a national reckoning with police brutality. Ubiquitous, pocket-sized video recorders allow average bystanders to document the pandemic of police violence. And because seeing is believing, those videos make it harder for others to continue denying the problem exists. Even with the evidence thrust under their noses, juries keep acquitting police officers who kill Black people. Chauvin’s conviction this week represents an exception to recent history: Between 2005 and 2019, of the 104 law enforcement officers charged with murder or manslaughter in connection with a shooting while on duty, only 35 were convicted.

The fight against fake videos will complicate the fight for Black lives. Unless it is equally available to everyone, video verification technology may not help the movement for police accountability, and could even set it back. Technological guarantees of videos’ trustworthiness will make little difference if they are accessible only to the privileged, whose stories society already tends to believe. We might be able to tech our way out of the deepfakes threat, but we can’t tech our way out of America’s systemic racism. 

Riana Pfefferkorn is a research scholar at the Stanford Internet Observatory.

-

Daphne Keller

I am a huge fan of transparency about platform content moderation. I’ve considered it a top policy priority for years, and written about it in detail (with Paddy Leerssen, who also wrote this great piece about recommendation algorithms and transparency). I sincerely believe that without it, we are unlikely to correctly diagnose current problems or arrive at wise legal solutions.

So it pains me to admit that I don’t really know what “transparency” I’m asking for. I don’t think many other people do, either. Researchers and public interest advocates around the world can agree that more transparency is better. But, aside from people with very particular areas of interest (like political advertising), almost no one has a clear wish list. What information is really important? What information is merely nice to have? What are the trade-offs involved?

That imprecision is about to become a problem, though it’s a good kind of problem to have. A moment of real political opportunity is at hand. Lawmakers in the US, Europe, and elsewhere are ready to make some form of transparency mandatory. Whatever specific legal requirements they create will have huge consequences. The data, content, or explanations they require platforms to produce will shape our future understanding of platform operations, and our ability to respond — as consumers, as advocates, or as democracies. Whatever disclosures the laws don’t require may never happen.

It’s easy to respond to this by saying “platforms should track all the possible data, we’ll see what’s useful later!” Some version of this approach might be justified for the very biggest “gatekeeper” or “systemically important” platforms. Of course, making Facebook or Google save all that data would be somewhat ironic, given the trouble they’ve landed in by storing similar not-clearly-needed data about their users in the past. (And the more detailed data we store about particular takedowns, the likelier it is to be personally identifiable.)

For any platform, though, we should recognize that the new practices required for transparency reporting come at a cost. That cost might include driving platforms to adopt simpler, blunter content rules in their Terms of Service. That would reduce their expenses in classifying or explaining decisions, but presumably lead to overly broad or narrow content prohibitions. It might raise the cost of adding “social features” like user comments enough that some online businesses, like retailers or news sites, just give up on them. That would reduce some forms of innovation, and eliminate useful information for Internet users. For small and midsized platforms, transparency obligations (like other expenses related to content moderation) might add yet another reason to give up on competing with today’s giants, and accept an acquisition offer from an incumbent that already has moderation and transparency tools. Highly prescriptive transparency obligations might also drive de facto standardization and homogeneity in platform rules, moderation practices, and features.

None of these costs provides a reason to give up on transparency — or even to greatly reduce our expectations. But all of them are reasons to be thoughtful about what we ask for. It would be helpful if we could better quantify these costs, or get a handle on which kinds of transparency reporting are easier or harder to do in practice.

I’ve made a (very in the weeds) list of operational questions about transparency reporting, to illustrate some issues that are likely to arise in practice. I think detailed examples like these are helpful in thinking through both which kinds of data matter most, and how much precision we need within particular categories. For example, I personally want to know with great precision how many government orders a platform received, how it responded, and whether any orders led to later judicial review. But to me it seems OK to allow some margin of error for platforms that don’t have standardized tracking and queuing tools, and that as a result might modestly mis-count TOS takedowns (either by absolute numbers or percent).

I’ll list that and some other recommendations below. But these “recommendations” are very tentative. I don’t know enough to have a really clear set of preferences yet. There are things I wish I could learn from technologists, activists, and researchers first. The venues where those conversations would ordinarily happen — and, importantly, where observers from very different backgrounds and perspectives could have compared the issues they see, and the data they most want — have been sadly reduced for the past year.

So here is my very preliminary list:

  • Transparency mandates should be flexible enough to accommodate widely varying platform practices and policies. Any de facto push toward standardization should be limited to the very most essential data.
  • The most important categories of data are probably the main ones listed in the DSA: number of takedowns, number of appeals, number of successful appeals. But as my list demonstrates, those all can become complicated in practice (a hypothetical sketch of such counting follows this list).
  • It’s worth taking the time to get legal transparency mandates right. That may mean delegating exact transparency rules to regulatory agencies in some countries, or conducting studies prior to lawmaking in others.
  • Once rules are set, lawmakers should be very reluctant to move the goalposts. If a platform (especially a smaller one) invests in rebuilding its content moderation tools to track certain categories of data, it should not have to overhaul those tools soon because of changed legal requirements.
  • We should insist on precise data in some cases, and tolerate more imprecision in others (based on the importance of the issue, platform capacity, etc.). And we should take the time to figure out which is which.
  • Numbers aren’t everything. Aggregate data in transparency reports ultimately just tell us what platforms themselves think is going on. To understand what mistakes they make, or what biases they may exhibit, independent researchers need to see the actual content involved in takedown decisions. (This in turn raises a slew of issues about storing potentially unlawful content, user privacy and data protection, and more.)
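
To make the counting concrete, here is a minimal, purely hypothetical sketch of the kind of aggregation a transparency mandate might require. The record fields, category names, and the choice to count one record per removed item are my own illustrative assumptions, not anything specified in the DSA or in any platform's actual tooling; the comments flag where even the simple headline counts involve judgment calls.

```python
# Hypothetical transparency-report aggregation. Field names and categories are
# illustrative assumptions, not any law's or platform's actual schema.
from dataclasses import dataclass
from collections import Counter

@dataclass
class TakedownRecord:
    item_id: str
    basis: str           # e.g. "tos", "court_order", "government_request"
    appealed: bool
    appeal_upheld: bool  # True if the removal was reversed on appeal

def summarize(records: list[TakedownRecord]) -> dict:
    """Headline counts a report might list: takedowns, appeals, reversals.

    Even these hide judgment calls: is a re-upload of the same item a new
    takedown? Does a partial reinstatement count as a successful appeal?
    """
    return {
        "takedowns": len(records),
        "appeals": sum(r.appealed for r in records),
        "successful_appeals": sum(r.appealed and r.appeal_upheld for r in records),
        "by_basis": dict(Counter(r.basis for r in records)),
    }
```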

It’s time to prioritize. Researchers and civil society should assume we are operating with a limited transparency “budget,” which we must spend wisely — asking for the information we can best put to use, and factoring in the cost. We need better understanding of both research needs and platform capabilities to do this cost-benefit analysis well. I hope that the window of political opportunity does not close before we manage to do that.

Daphne Keller is the Director of the Program on Platform Regulation at Stanford's Cyber Policy Center.

-

End-to-end encrypted (E2EE) communications have been around for decades, but the deployment of default E2EE on billion-user platforms has new implications for user privacy and safety. The deployment brings benefits to both individuals and society, but it also creates new risks, as long-existing patterns of messenger abuse can now flourish in an environment that automated or human review cannot reach. New E2EE products raise the prospect of less understood risks by adding discoverability to encrypted platforms, allowing contact from strangers and increasing the risk of certain types of abuse. This workshop will focus in particular on platform benefits and risks that affect civil society organizations, especially in the Global South. Through a series of workshops and policy papers, the Stanford Internet Observatory is facilitating open and productive dialogue on this contentious topic to find common ground.

An important defining principle behind this workshop series is the explicit assumption that E2EE is here to stay. To that end, our workshops have set aside any discussion of exceptional access (a.k.a. backdoor) designs: that debate has raged among industry, academic cryptographers, and law enforcement for decades, and little progress has been made. We focus instead on less widely explored or implemented interventions that can reduce the harms of E2EE communication products.

Submissions for working papers and requests to attend will be accepted up to 10 days before the event. Accepted submitters will be invited to present or attend our upcoming workshops. 

-
Roberta Gatti

The Middle East and North Africa (MENA) economies are not catching up with the rest of the world. The region’s average per capita income has increased by just 62 percent over the last 50 years. In comparison, over the same period, the increase was fourfold in emerging market and developing economies (EMDEs) and twofold in advanced ones. Only a few developing MENA economies have avoided diverging further from the richest countries’ living standards (what economists call the frontier), and those where conflicts erupted have accelerated in the wrong direction. In this presentation, Roberta Gatti will discuss the factors that shape MENA’s long-term growth potential, with special attention to the role of the state in the economy, the persistent effects of conflict, and the boost that closing the gender gap in the labor force can deliver in terms of growth.

This event is co-sponsored by the Program on Arab Reform and Development and the Program on Capitalism and Democracy, as well as the Middle Eastern Studies Forum.

ABOUT THE SPEAKER

Roberta Gatti is the Chief Economist for the Middle East and North Africa (MENA) region at the World Bank, where she oversees the analytical agenda of the region and the publication of the semi-annual MENA Economic Updates. She is the founder of the MENA Central Banks Regional Research Network. In her prior capacity as Chief Economist for the Human Development Practice Group, Roberta co-led the conceptualization and launch of the World Bank Human Capital Index and the scale up of the Service Delivery Indicators data initiative.

Roberta joined the World Bank as a Young Professional in the Macro unit of the Development Research Group, and she has since led and overseen both operational and analytical work in her roles of Manager and of Global Lead for Labor Policies.

Roberta’s research, spanning a broad set of topics such as growth, firm productivity, the economics of corruption, gender equity, and labor markets, has been published in lead field journals such as the Journal of Public Economics, the Journal of Economic Growth, and the Journal of Development Economics. She is also the lead author of a number of flagship reports, including Jobs for Shared Prosperity: Time for Action in the Middle East and North Africa; Striving for Better Jobs: The Challenge of Informality in Middle East and North Africa; The Human Capital Index 2020 Update: Human Capital in the Time of COVID-19; and Service Delivery in Education and Health across Africa.

Roberta has taught courses at the undergraduate, master's, and Ph.D. levels at Georgetown and Johns Hopkins Universities. She is a frequent lecturer on development economics, most recently at Dartmouth College, Princeton University, and Cornell University. Roberta holds a B.A. from Università Bocconi and a Ph.D. in Economics from Harvard University.

In-person: Encina Hall E008, Garden-level East (616 Jane Stanford Way, Stanford)

Online: Via Zoom

-
Natalia Forrat

Why are some authoritarian regimes highly competitive and others highly unified? Do they function differently? And what does it mean for our understanding of democracy and democratization? The Social Roots of Authoritarianism unpacks the grassroots mechanisms maintaining unity-based and division-based authoritarianisms. The two types develop in societies with opposite visions of the state: the state as team leader or the state as outsider. Depending on which vision is dominant in society, autocrats must use different tools to consolidate their regimes or risk pushback. The book demonstrates the grassroots mechanisms of authoritarian power by comparing four Russian regions with opposite patterns of electoral performance—the Rostov region, the Kemerovo region, the Republic of Tatarstan, and the Republic of Altai. The theory of unity- and division-based authoritarianisms developed in the book implies that these types of authoritarian regimes lack opposite elements of democracy, and that democratization depends on cultivating these missing institutions over time.

ABOUT THE SPEAKER

Natalia Forrat is a social scientist studying democracy, authoritarianism, state power, and civil society. She obtained her PhD from Northwestern University and held academic appointments at Stanford University, the University of Notre Dame, and the University of Michigan. Currently, she is a lecturer at the Center for Russian, East European, and Eurasian Studies at the University of Michigan.  

Virtual attendance is open to the public. Only those with an active Stanford ID with access to Room E008 in Encina Hall may attend in person.

-
Clémence Tricaud

We assemble a comprehensive database of historical electoral results for the US House, Senate and presidential contests, from the 19th century until today. We analyze long run trends in election vote margins and party seat margins. Seat margins declined in the recent period, so the margins of control of the House, Senate, and Electoral College by either party have become smaller. However, this was not accompanied by a decline in the margins of victory at the constituency level. We interpret these facts in the context of a simple model of electoral competition where seat margins and vote margins depend on the availability of information about voter preferences, as well as the ability of political candidates to tailor their platforms locally. We argue that the gradual increase in politicians' information about voter preferences, as well as the growing nationalization of politics can explain the long-run decrease in seat margins and the concurrent stability in vote margins.

ABOUT THE SPEAKER

Clemence Tricaud is an Assistant Professor of Economics at the UCLA Anderson School of Management. She is also a research affiliate of the NBER and CEPR. She received her Ph.D. in Economics from Ecole Polytechnique and CREST in 2020. Her research lies at the intersection of political economy and public economics. Her work combines quasi-experimental designs with administrative data to better understand the determinants and consequences of citizen and policymaker behaviors. The first part of her research studies the factors affecting voters' and candidates' behavior during elections and the consequences of their choices on electoral outcomes. The second part of her work explores how the identity of policymakers and the level of governance affect the design of local public policies and the provision of public goods.

Virtual attendance is open to the public. Only those with an active Stanford ID with access to Room E008 in Encina Hall may attend in person.

-
Danila Serra

We examine the impact of ethics and integrity training on police officers in Ghana through a randomized field experiment. The program, informed by theoretical work on the role of identity and motivation in organizations, aimed to re-activate intrinsic motivations to serve the public and to create a new shared identity of "Agent of Change." Data generated by an endline survey conducted 20 months post-training show that the program positively affected officers' values and beliefs regarding on-the-job unethical behavior and improved their attitudes toward citizens. The training also lowered officers' propensity to behave unethically, as measured by an incentivized cheating game conducted at endline. District-level administrative data for a subsample of districts are consistent with a significant impact of the program on officers' field behavior in the short run.

ABOUT THE SPEAKER

Danila Serra is Associate Professor of Economics at Texas A&M University. She received her PhD in Economics from the University of Oxford. She is an applied behavioral economist employing experimental methods to address policy-relevant questions in political economy, development, education, and gender economics. Her work has been funded by the Alfred P. Sloan Foundation, the Russell Sage Foundation, the Spencer Foundation, the World Bank, the IZA G²LM|LIC program, the Social Science Research Council (SSRC), the Abdul Latif Jameel Poverty Action Lab (JPAL) and the Arnold Foundation. In 2017, she was the inaugural recipient of the Vernon Smith Ascending Scholar Prize, given by the International Foundation for Research in Experimental Economics (IFREE) to an exceptional scholar using experiments in economics research.

Virtual attendance is open to the public. Only those with an active Stanford ID with access to Room E008 in Encina Hall may attend in person.

-
Soledad Artiz Prillaman: Does Affirmative Action Worsen Quality? Theory and Evidence to the Contrary from Elections

Affirmative action improves the representation of women and minorities, but critics worry that it is at odds with meritocracy. We argue that quotas can improve quality under conditions of discrimination, as quota recipients are held to a higher standard despite facing structural inequalities that make meeting these standards difficult. The net effect of quotas on observable proxies for quality -- qualifications -- therefore depends on the degrees of selection and structural discrimination. We test our argument by examining the effects of electoral quotas on politicians' education and quality in India. Using two censuses covering more than 40 million residents and 13 states, we show that randomly and quasi-randomly assigned quota politicians have lower average education than non-quota politicians but the same or higher quality. We further provide evidence of both voter and structural discrimination. Our results show that quotas can both enhance the representativeness and quality of politicians.

ABOUT THE SPEAKER

Soledad Artiz Prillaman is an assistant professor of political science at Stanford University. Her research lies at the intersections of comparative political economy, development, and gender, with a focus on South Asia. She investigates the political consequences of development; the political behavior and representation of minorities, specifically women; inequalities in political engagement; and the translation of voter demands. She is the faculty director of the Inclusive Democracy and Development Lab and recently published a book with Cambridge University Press titled "The Patriarchal Political Order: The Making and Unraveling of the Gendered Participation Gap in India."

Virtual attendance is open to the public. Only those with an active Stanford ID with access to Room E008 in Encina Hall may attend in person.

-
Francis Fukuyama (April 3, 2025)

Delegation of authority is central to the functioning of bureaucracies and, indeed, to political institutions as a whole. It is at the center of the contemporary assault on the "administrative state," yet its importance is widely misunderstood. In this seminar, Francis Fukuyama will argue that a well-functioning government needs to give bureaucrats sufficient authority, and that this is something the US has failed to do.

ABOUT THE SPEAKER

Francis Fukuyama is Olivier Nomellini Senior Fellow at Stanford University's Freeman Spogli Institute for International Studies (FSI), and a faculty member of FSI's Center on Democracy, Development, and the Rule of Law (CDDRL). He is also Director of Stanford's Ford Dorsey Master’s in International Policy Program, and a professor (by courtesy) of Political Science.

Dr. Fukuyama has written widely on issues in development and international politics. His 1992 book, The End of History and the Last Man, has appeared in over twenty foreign editions. His most recent book,  Liberalism and Its Discontents, was published in the spring of 2022.

Francis Fukuyama received his B.A. from Cornell University in classics, and his Ph.D. from Harvard in Political Science. He was a member of the Political Science Department of the RAND Corporation and of the Policy Planning Staff of the US Department of State. From 1996-2000 he was Omer L. and Nancy Hirst Professor of Public Policy at the School of Public Policy at George Mason University, and from 2001-2010 he was Bernard L. Schwartz Professor of International Political Economy at the Paul H. Nitze School of Advanced International Studies, Johns Hopkins University. He served as a member of the President’s Council on Bioethics from 2001-2004.  

Dr. Fukuyama holds honorary doctorates from Connecticut College, Doane College, Doshisha University (Japan), Kansai University (Japan), Aarhus University (Denmark), and the Pardee Rand Graduate School. He is a non-resident fellow at the Carnegie Endowment for International Peace. He is a member of the Board of Trustees of the Rand Corporation, the Board of Trustees of Freedom House, and the Board of the Volcker Alliance. He is a fellow of the National Academy for Public Administration, a member of the American Political Science Association, and of the Council on Foreign Relations. He is married to Laura Holmgren and has three children.

Virtual attendance is open to the public. Only those with an active Stanford ID with access to Room E008 in Encina Hall may attend in person.

-
Turkey in Syria: The Intersection of Domestic Politics and Changing International Dynamics

The recent regime change in Syria and Turkey’s role in that process underscore a complex interplay of geopolitical shifts in the Middle East, the regional ambitions of the Erdoğan government, and its domestic political calculations. This panel will analyze the political trajectory of Syria under its new leadership and its evolving relationship with Turkey, while also examining Turkey’s policy toward Kurdish demands on both sides of the border and the recently revived talks with the imprisoned PKK leader.

This event is co-sponsored by The Program on Turkey at the Center on Democracy, Development and the Rule of Law and the Middle Eastern Studies Forum.

PANELISTS

Evren Balta, Visiting Scholar, Weatherhead Scholars Program and Professor, Department of International Relations, Özyeğin University

Nora Barakat, Assistant Professor of History, Stanford Department of History

Halil İbrahim Yenigün, Associate Director, Abbasi Program in Islamic Studies

Ayça Alemdaroğlu

William J. Perry Conference Room (Encina Hall, 2nd floor, 616 Jane Stanford Way, Stanford)
