Information Technology

In October 2024, Meta, in collaboration with the Stanford Deliberative Democracy Lab, implemented the third Meta Community Forum. This Community Forum expanded on the October 2023 deliberations regarding Generative AI. For this Community Forum, participants deliberated on ‘how should AI agents provide proactive, personalized experiences for users?’ and ‘how should AI agents and users interact?’ Since the last Community Forum, the development of Generative AI has moved beyond AI chatbots, and users have begun to explore the use of AI agents — a type of AI that can respond to written or verbal prompts by performing actions for them or on their behalf. Beyond text-generating AI, users have also begun to explore multimodal AI, where tools can generate images, videos, and audio as well. The growing landscape of Generative AI raises more questions about users’ preferences when it comes to interacting with AI agents. This Community Forum focused its deliberations on how interactive and proactive AI agents should be when engaging with users. Participants considered a variety of tradeoffs regarding consent, transparency, and human-like behaviors of AI agents. These deliberations shed light on what users are thinking now amidst the changing technology landscape of Generative AI.

Nearly 900 participants from five countries (India, Nigeria, Saudi Arabia, South Africa, and Turkey) took part in this deliberative event. The sample for each country was recruited independently, so this Community Forum should be seen as five independent deliberations. In addition, 1,033 people participated in a control group that did not take part in any deliberative discussions and only completed the pre- and post-event surveys. The purpose of the control group is to establish that any changes observed after deliberation are a result of the deliberative event itself.

Publication Type: Reports
Publication Date: April 2025
Authors: James S. Fishkin, Alice Siu

News Type: News

In October 2024, Meta, in collaboration with the Stanford Deliberative Democracy Lab, implemented the third Meta Community Forum. This Community Forum expanded on the October 2023 deliberations regarding Generative AI. For this Community Forum, the participants deliberated on ‘how should AI agents provide proactive, personalized experiences for users?’ and ‘how should AI agents and users interact?’

At a high level, Meta used this Forum to:

  • Expand public input into AI development beyond the Global North, and into the Global South. This latest Forum involved roughly 1,000 people from India, Turkey, Nigeria, Saudi Arabia, and South Africa.
  • Push the boundaries on the topics that the public will have input into. We moved from the foundational principles people wanted to see in GenAI towards addressing specific value and risk tradeoffs associated with issues like personalization and human-like AI.


The Forum resulted in several key findings on the principles that should underpin AI agents, including:
 

  • Participants supported AI agents remembering their prior conversations to personalize their experience, especially if transparency and user controls are in place.
  • Participants were more supportive of culturally/regionally-tailored AI agents compared to standardized AI agents.
  • Participants were in favor of human-like AI agents that can respond to emotional cues.
  • Across topics, participants consistently favored options for AI to include transparency and user control features.

Maturing our Community Forum Program


Beyond the findings of any one Forum, the Deliberative Democracy Lab and Meta have heard important feedback from stakeholders and have implemented several programmatic changes to mature our program. These include:
 

  • More disclosure around the impact of results: Meta will share more information about how results are being actioned within the company on its Transparency Center page, which will be updated throughout the year.
  • Following up with participants: We heard the importance of going back to participants to explain what we learned from their input and what we are doing with it. The Deliberative Democracy Lab will be hosting calls with participants from each of our past Community Forums, dating back to 2022, to update them on the findings from the Forum and Meta’s response.
  • Supporting AI deliberation: A team of Meta AI experts has begun partnering with the Deliberative Democracy Lab to conduct research on how AI might further scale deliberation and optimize the Community Forum process. This includes, but is not limited to, using AI to aggregate themes that are emerging in discussions in real time and support engagement between participants and experts in plenary sessions.
  • Supporting external research: Meta is supporting a consortium of independent researchers from around the world who will evaluate the data from its Forums and publish research papers on the deliberations and results. This will culminate in an academic conference later this year.

Read More

News
Navigating the Future of AI: Insights from the Second Meta Community Forum
A multinational Deliberative Poll unveils the global public's nuanced views on AI chatbots and their integration into society.

News
Results of First Global Deliberative Poll® Announced by Stanford’s Deliberative Democracy Lab
More than 6,300 deliberators from 32 countries and nine regions around the world participated in the Metaverse Community Forum on Bullying and Harassment.

Economic growth is uneven within many developing countries as some sectors and industries grow faster than others. India is no exception, where anemic performance in manufacturing has been offset by robust growth in services. Standard scholarly explanations fail to explain this kind of variation. For instance, the factor endowments that are required for services—such as an educated workforce or access to electricity and other infrastructure—should also complement manufacturing. Reciprocally, if a state’s institutions hold back manufacturing, they should also impair growth in services. Why have services in India outperformed manufacturing? We examine India’s performance in the computing industry, where a dynamic software services sector has emerged even as its computer hardware manufacturing sector has flagged. We argue that the uneven outcomes between the software and hardware sectors are due to the variable needs of the respective sectors and the state’s capacity to coordinate agencies. The policies required to promote the software sector needed minimal coordination between state agencies, whereas the computer hardware sector required a more centralized state apparatus for successful state-business engagement. Domestic and transnational political networks were critical for the success of the software sector, but similar networks could not deliver the same benefits to the computer hardware industry, which required more coordination-intensive policies than software. A state’s ability to coordinate industrial policy is thus a critical determinant for effective sectoral political networks, shaping sectoral variations within an economy.

Publication Type: Journal Articles
Journal Publisher: Studies in Comparative International Development

The Chinese government is revolutionizing digital surveillance at home and exporting these technologies abroad. Do these technology transfers help recipient governments expand digital surveillance, impose internet shutdowns, filter the internet, and target repression for online content? We focus on Huawei, the world’s largest telecommunications provider, which is partly state-owned and increasingly regarded as an instrument of its foreign policy. Using a global sample and an identification strategy based on generalized synthetic controls, we show that the effect of Huawei transfers depends on preexisting political institutions in recipient countries. In the world’s autocracies, Huawei technology facilitates digital repression. We find no effect in the world’s democracies, which are more likely to have laws that regulate digital privacy, institutions that punish government violations, and vibrant civil societies that step in when institutions come under strain. Most broadly, this article advances a large literature about the geopolitical implications of China’s rise.

Publication Type: Journal Articles
Journal Publisher: Perspectives on Politics
Number: Published online 2025:1-20

We are on the verge of a revolution in public sector decision-making processes, where computers will take over many of the governance tasks previously assigned to human bureaucrats. Governance decisions based on algorithmic information processing are increasing in number and scope, contributing to decisions that impact the lives of individual citizens. While significant attention in recent years has been devoted to normative discussions on fairness, accountability, and transparency related to algorithmic decision-making based on artificial intelligence, less is known about citizens’ considered views on this issue. To put society in-the-loop, a Deliberative Poll was thus carried out on the topic of using artificial intelligence in the public sector, as a form of in-depth public consultation. The three use cases selected for deliberation were refugee reallocation, a welfare-to-work program, and parole. A key finding was that, after having acquired more knowledge about the concrete use cases, participants were overall more supportive of using artificial intelligence in the decision processes. The event was set up with a pretest/post-test control group experimental design, and as such, the results offer experimental evidence that complements extant observational studies showing positive associations between knowledge and support for using artificial intelligence.

Publication Type: Journal Articles
Journal Publisher: AI & SOCIETY
Authors: James S. Fishkin, Alice Siu

AI in Education Deliberative Poll for High School Educators

Are you worried about the impact AI can have on your classroom or excited about its potential? Do you wonder how you can utilize AI in your teaching or do you feel like it dehumanizes the learning process? Are you eager to learn about what “Artificial Intelligence” entails and how it can impact your classroom? 

If any of these questions have crossed your mind, we invite you to join Stanford's Deliberative Democracy Lab on Saturday, May 18, from 10:00 am to 2:45 pm (Pacific Time) to discuss with fellow educators how AI should be used and regulated in schools. You will discuss policies regarding the use of AI in schools — whether it should be banned from school Wi-Fi or left up to teachers and students to discern what “appropriate usage” means. You will also get to meet and ask questions of experts in the field.

This will be an online event hosted on Stanford's Online Deliberation Platform. It will include small-group deliberation sessions among teachers and plenary sessions with expert panels, where there will be time for Q&A. Further details will be emailed to you.

SCHEDULE

10:00 am - 11:15 am: First Small Group Deliberation Session

11:15 am - 12:00 pm: Plenary Session 1

12:00 pm - 12:45 pm: Break

12:45 pm - 2:00 pm: Second Small Group Deliberation Session

2:00 pm - 2:45 pm: Plenary Session 2

This event is being led by students at The Quarry Lane School, Saratoga High School, and Lynbrook High School.

Online.

Open to high school educators only.

Workshops

As the Russian government seeks to improve its economic performance, it must pay greater attention to the role of technology and digitalization in stimulating the Russian economy. While digitalization presents many opportunities for the Russian economy, a few key challenges – cumbersome government regulations and an unequal playing field for foreign companies – restrict Russia's potential in digitalization. In the future, how the Russian government designs its technology and regulatory policies will likely have a significant impact both on the domestic front and on its international initiatives and relationships. This paper provides an overview of recent Russian digital initiatives, the regulatory barriers for U.S. technology companies in Russia, and the intellectual property challenges of doing business in Russia. It also discusses recent digital initiatives from China, the United States, and other countries, and considers what such programs mean for Russia. In this context, we also discuss Chinese and U.S. efforts to shape the future of global technological standards, alongside new programs from countries like Chile and Estonia to attract foreign startup companies. Finally, this paper examines the future challenges that the Russian government needs to address in order to improve its digital business environment. The paper concludes by providing recommendations for designing market-friendly regulations, creating a level playing field for foreign businesses in Russia, promoting Russian engagement with Western companies and governments, and undertaking more outreach efforts to make Russia's digital business environment more inclusive.

Publication Type: Conference Memos
Journal Publisher: The Stanford US-Russia Journal
Number: No. 1

Authors: Rachel Owens
News Type: News

In a CDDRL seminar series talk, Daniel Chen — Director of Research at the French National Center for Scientific Research and Professor at the Toulouse School of Economics — examined whether data science can improve the functioning of courts and unlock their impact on economic development. Improving courts’ efficiency is paramount to citizens' confidence in legal institutions and proceedings.

In a nationwide experiment in Kenya, Chen and his co-authors employed data science techniques to identify the causes of case backlog in the judicial system. They developed an algorithm to identify major sources of court delays for each of Kenya’s 124 court stations. Based on the algorithm, they compiled a one-page report — specific to the local court and tailored to that month’s proceedings — which provided an analysis of court adjournments, reasons for delay, and tangible action items.

To measure the effect of these one-pagers, Chen established two treatment groups and one control group. Those in the first treatment group received a single one-pager, sent just to the courts. The second received one for the courts and one for a Court User Committee (CUC). The committee, which consists of lawyers, police, and members of civil society, was asked to discuss the one-pagers during its quarterly meetings.

To measure the relevant effects, the authors examined three primary outcomes, namely: (1) adjournment (or case delay) rates; (2) quality and citizen satisfaction; and (3) measures of economic development, including contracting, investment, and business creation. 

Results showed the intervention was associated with a 22 percent reduction in adjournments and a decline in trial length of 120 days. The authors found no effect on either the number of cases filed or proxies for quality. Citizen satisfaction also rose, with fewer complaints about speed and quality, and the intervention was associated with an increase in formal written contracts and higher wages.

Read More

News
Do Institutional Safeguards Undermine Rebel Parties?
CDDRL postdoctoral fellow’s findings show that institutional safeguards meant to guarantee the representation of parties formed by former rebel groups may actually weaken such parties’ grassroots support.

News
Is the World Still in a Democratic Recession?
Is the world still in a democratic recession? Larry Diamond — the Mosbacher Senior Fellow in Global Democracy at FSI — believes it is.

News
Can Markets Save the Rule of Law?: Insights from the EU
CDDRL postdoctoral fellow challenges the conventional wisdom that deterioration in the rule of law generates decline in economic vitality.

Shorenstein APARC's annual report for the academic year 2022-23 is now available.

Learn about the research, publications, and events produced by the Center and its programs over the last academic year. Read the feature sections, which look at Shorenstein APARC's 40th-anniversary celebration and its conference series examining the shape of Asia in 2030; learn about the research our postdoctoral fellows engaged in; and catch up on the Center's policy work, education initiatives, publications, and policy outreach. Download your copy or read below:

Publication Type: Annual Reports