Special Envoy Jacinda Ardern Assembles Stanford Scholars for Discussion on Technology Governance and Regulation

Led by former Prime Minister of New Zealand Rt. Hon. Dame Jacinda Ardern, a delegation from the Christchurch Call joined Stanford scholars to discuss how to address the challenges posed by emerging technologies.
The Right Honorable Dame Jacinda Ardern and a delegation from the Christchurch Call joined Stanford researchers at the Freeman Spogli Institute for International Studies for a roundtable discussion on technology governance and regulation.

Stanford researchers from the Cyber Policy Center, the Stanford Internet Observatory, the Stanford Institute for Human-Centered AI (HAI), Stanford Law School, and the McCoy Family Center for Ethics in Society welcomed a delegation from New Zealand for a roundtable discussion on technology governance and regulation in Encina Hall on Friday, June 9, 2023.

The delegation, led by Jacinda Ardern, the Prime Minister's Special Envoy for the Christchurch Call, was joined by Frédéric Jung, Consul General of France in San Francisco; Henri Verdier, Digital Ambassador of France; Maxime Benallaoua, a policy analyst on digital affairs to the French government; and Anne Marie Engtoft Larsen, the Tech Ambassador of Denmark.

The Christchurch Call is a community of over 120 governments, online service providers, and civil society organizations with a shared goal of eliminating terrorist and violent extremist content online. The initiative grew out of the response to the terrorist attack which took place in New Zealand on March 15, 2019, when a gunman killed 51 people and injured 50 more at two mosques in the city of Christchurch. The perpetrator livestreamed his attack for 17 minutes, generating thousands of views, before it was shut down. 

Following this horrific event, then-Prime Minister Jacinda Ardern, along with French President Emmanuel Macron and other heads of state and industry leaders, developed and put forth the Christchurch Call, an appeal to governments and online service providers to create online spaces that are free and secure but that also uphold the values of democracy and protect citizens.

Speaking at the roundtable held at the Freeman Spogli Institute for International Studies, Paul Ash, the Prime Minister's Special Representative on Cyber and Digital and Christchurch Call Coordinator, acknowledged the challenges inherent in building and maintaining this kind of broad, multi-nation, multi-stakeholder, public-private initiative.

“There’s a new form of diplomacy required around this,” he told the Stanford researchers. “It requires each of us to meet each other somewhere in the Venn diagram where our interests overlap. And that’s not comfortable at all, but it’s critically important.”

Addressing the group, Ardern, who has served as Special Envoy for the Christchurch Call since stepping down as New Zealand’s Prime Minister in January 2023, outlined why the work of the Christchurch Call remains pressing four years after its founding.

“We always knew that the Call would not really have an end point so long as there are new, emerging technologies — be it AI or other immersive technologies — that contribute to radicalization, violent extremism, and terrorism online,” she explained.

With the release of new AI-based tools and products over the last year, the efforts of initiatives like the Christchurch Call are critical as governments, companies, and societies try to navigate rapidly evolving technology and the impacts it is having on the world.

As Erik Brynjolfsson, a senior fellow at Stanford HAI, told the delegation, “As we start to measure the economic and productivity effects this technology is going to have in the next decades, it’s going to be staggering. But there’s also a lot of room for this to go really wrong.”

Special Envoy Ardern invited the scholars at the discussion to weigh in on two foundational questions: as experts, what worries you most in the space of technology governance and regulation, and what can be done by groups like the Christchurch Call to help address those concerns? A selection of their answers is shared below.



Responses have been edited for length and clarity.
 


As researchers, what are some of the major challenges you see right now in technology development, safety, and regulation?

Losing Norms and Accelerating Disruption

Nathaniel Persily, Co-Director of the Stanford Cyber Policy Center

We had been at a point of more-or-less equilibrium with content moderation, but that’s really been blown apart by Twitter in the last year. And the effect of that is not limited to that platform; it’s metastasizing across Silicon Valley to other platforms. Trust and Safety teams are also being hollowed out, often for economic reasons, and all of that has an impact on the kind of content that ends up online. We’re at a point where the tectonic plates are shifting on established areas, and then new technology like AI, blockchain, VR, and AR is coming in and disrupting things even further. The cumulative effect of that is destabilizing.

Legal Fights Over Research

Alex Stamos, Director of the Stanford Internet Observatory

We are now past peak Trust and Safety. The pinnacle of protecting things like elections in the United States and Europe was probably within the 2022 timeframe, and that is all now falling apart. The transparency of platforms is key to addressing that. It’s not just about the platforms providing technical access to us as researchers; we’ve historically had workarounds that have allowed us to still do research in an appropriate way. But we’re getting to a point where we can’t use those workarounds. Companies are starting to sue academics for doing research, and we’re beginning to see the weaponization of social platforms’ terms of service and intellectual property law to prevent academics from doing their work. That’s a big problem, and it’s going to be the big story in this space through 2023 and 2024.
 

No Funding, No Results

Daphne Keller, Director of the Program on Platform Regulation

It’s going to be very interesting to watch how the Digital Services Act (DSA) in Europe unfolds and what we learn about what works and what doesn’t. I’m a little worried that other countries will rush to emulate it before seeing how it plays out. One of the interesting aspects of the DSA is that it deliberately set out to create a multi-stakeholder ecosystem. There are built-in roles for researchers and auditors and other parties. That’s great. But my big concern is that most of those so-called “essential roles” are not funded. There’s an expectation that civil society will spring into action to do a bunch of things in terms of oversight, but it’s very unclear if civil society will be able to afford to do that. 
 

The Unknowns of AI

Rob Reich, Director of the Center for Ethics in Society

In the space of AI right now, one of the biggest debates is deciding whether open-source, open-access, generative AI models are a good way forward. To say the obvious, the concern is with what happens when you put powerful tools like this in the hands of adversarial actors. We all agree that open-sourcing access and information about uranium and plutonium is not a good idea. Now there’s a growing tension about whether that same mentality needs to be brought to AI, and whether that is also an existential threat to humanity in some way. 
 

The Optics of Regulations

Renée DiResta, Technical Research Manager at the Stanford Internet Observatory

One of the challenges to regulating these technologies is that the optics look terrible for politicians. No one wants to be seen as trying to curtail free speech or as trying to sway elections in their own favor. But by the same token, it needs to be done. And this is the question: how do you pass regulation that protects ordinary people who don’t have the resources to fight back — whether that’s women suffering from revenge porn or children being exploited through the digital distribution of child sexual abuse materials — when you’re a politician who has power but may be seen as having a direct self-interest in regulating technology that portrays you unfavorably? It’s not an easy needle to thread.
 

Who Has a Seat at the Table

Russell Wald, Director of Policy at the Stanford Institute for Human-Centered Artificial Intelligence (HAI)

Who has a seat at the table when it comes to these discussions about technology and its place in society? Right now, it’s just a handful of the same industry leaders who stand to benefit from its adoption. They’re the ones in the room with policymakers. Academia and civil society have a lot to add to these conversations, but they are not at the table.
 


If you could have regulators do one thing today that would make a difference or impact in the space of tech governance, what would it be?

Put More Chefs in the Kitchen

Nathaniel Persily, Co-Director of the Stanford Cyber Policy Center

One of the big places we need to start is platform transparency and researcher access. And this is not just a ploy to ensure employment for researchers and academics like myself. There is simply not enough expertise in any government anywhere in the world to tackle this effectively, and the only way we are going to make a difference in the short window this inflection point gives us is to deploy the resources and knowledge of civil society and academia to help government.
 

Find Common Rules of Engagement

Alex Stamos, Director of the Stanford Internet Observatory

Democracies need to set a baseline framework for what they require and expect of tech companies and social platforms. A lot of groups are starting to copy the moves Twitter is making to restrict transparency and keep outside eyes — whether they’re regulators, academics, or researchers — from seeing what’s going on with the data inside these companies. Equally, there are plenty of people who don’t care about rules, don’t follow them, and are violating people’s privacy, selling their data, and making money off of it.

Those of us who are part of legitimate institutions, with compliance obligations and rules to follow, have to care. We’re rapidly getting to a situation where the good guys are kept from looking and the bad guys get a free pass. Democracy can’t work in that type of environment. So even if they don’t all do it in the same way, countries need to try, at some level, to establish standards and principles that apply across jurisdictions, especially as we are rapidly moving towards really difficult legal scenarios involving things like AI-generated CSAM.
 

Transparency, Transparency, Transparency

Daphne Keller, Director of the Program on Platform Regulation

We need a mix of things: affirmative access rights, like those legislation such as the Platform Accountability and Transparency Act (PATA) would give researchers in the U.S., but also action to remove the barriers researchers face in doing their work, such as exposure to lawsuits for conducting research. I think there’s a lot of productive, low-hanging fruit that can be picked right now to kickstart broader transparency efforts.
 

Build Strength Through Interoperability

Mark Lemley, Director of the Stanford Program in Law, Science and Technology

There’s lots of room to think about how researchers and companies can better interoperate across different digital platforms and how they can move their data and networks from one to the other should one platform, for example, be overrun by hostile actors. Right now, there are legal frameworks standing in the way. But by that same token, some of the questions surrounding regulation and protection will need to come through legal frameworks. The model I keep coming back to is that of cybersecurity, where you combine regulations with technological solutions to create defense and resiliency. So maybe we have AI disinformation that we create rules against, but we also have AI technology that is working to identify and flag the AI disinformation. That’s going to take both a robust technology sector and a smart court system.
 

Center People in Policy

Renée DiResta, Technical Research Manager at the Stanford Internet Observatory

I think interjecting the personal back into the policy can help with some of the traction needed to move the needle on these issues. There are lots of sympathetic cases that occasionally get covered in the media that have a lot of potential to make an impact. A lot of the golden era of content moderation in the United States happened through the work of activists, civil society, and media arguing in favor of something being done. Those kinds of voices and stories can be powerful reminders about what’s at stake and why we need norms and regulations.
 

Cultivate Consciousness of the Issues

Michael McFaul, Director of the Freeman Spogli Institute for International Studies

Right now, these conversations about technology are mostly happening in very small, generally very elite circles. When I look at examples of successful political movements and successful sea changes in history, those all have a very broad, class-conscious band of support. I would wager that most people — at least most Americans — aren’t thinking about these issues in the way that, say, researchers at Stanford, in Paris, or in New Zealand are. If we want to do something big about this issue, I think we need to make sure there is consciousness among people about what these tech companies are doing, how this technology is working, and what its effects are. That awareness doesn’t come from nowhere, but we really need it if we’re going to get somewhere on this.
 

Read More


Policy Impact Spotlight: Marietje Schaake on Taming Underregulated Tech

A transatlantic background and a decade of experience as a lawmaker in the European Parliament have given Marietje Schaake a unique perspective as a researcher investigating the harms technology is causing to democracy and human rights.

Stanford Internet Observatory launches the Trust and Safety Teaching Consortium

A new teaching consortium will share open access teaching material for developing classes on online trust and safety.

Barack Obama Addresses the Intersection of Online Disinformation, Regulation and Democracy at Stanford Event

At a conference hosted by the Cyber Policy Center and the Obama Foundation, former U.S. President Barack Obama delivered the keynote address about how information is created and consumed, and the threat that disinformation poses to democracy.