Weighing AI’s Impact on the Global Order and Security
At a gathering for alumni, the Ford Dorsey Master’s in International Policy program hosted four experts to discuss the ramifications of AI on global security, the environment, and political systems.
While the potential benefits of artificial intelligence are significant and far-reaching, AI’s potential dangers to the global order necessitate an astute governance and policy-making approach, panelists said at the Freeman Spogli Institute for International Studies (FSI) on May 23.
An alumni event at the Ford Dorsey Master’s in International Policy (MIP) program featured a panel discussion on “The Impact of AI on the Global Order.” Participants included Anja Manuel, Jared Dunnmon, David Lobell, and Nathaniel Persily. The moderator was Francis Fukuyama, Olivier Nomellini senior fellow at FSI and director of the master’s program.
Manuel, an affiliate at FSI’s Center for International Security and Cooperation and executive director of the Aspen Strategy Group, said that what “artificial intelligence is starting to already do is it creates superpowers in the way it intersects with other technologies.”
An alumna of the MIP program, Manuel cited an experiment in Switzerland a year ago in which researchers asked an AI tool to come up with new nerve agents; it rapidly generated 40,000 of them. On the subject of strategic nuclear deterrence, AI capabilities may upend existing policy approaches. Although about 30 countries have voluntarily committed to governance standards for how AI would be used in military conflicts, the future remains unclear.
“I worry a lot,” said Manuel, noting that AI-controlled fighter jets will likely be more effective than human-piloted craft. “There is a huge incentive to escalate and to let the AI do more and more and more of the fighting, and I think the U.S. government is thinking it through very carefully.”
Geopolitical Competition
Dunnmon, a CISAC affiliate and senior advisor to the director of the Defense Innovation Unit, spoke about the “holistic geopolitical competition” among world powers in the AI realm as these systems offer “unprecedented speed and unprecedented scale.”
“Within that security lens, there’s actually competition across the entirety of the technical AI stack,” he said.
Dunnmon said an underlying security question is whether a given piece of AI software runs on libraries sourced from Western companies or is built on an underlying library stack owned by state enterprises. “That’s a different world.”
He said that “countries are competing for data, and it’s becoming a battlefield of geopolitical competition.”
Societal, Environmental Implications
Lobell, a senior fellow at FSI and the director of the Center for Food Security and the Environment, said his biggest concern is about how AI might change the functioning of societies as well as possible bioterrorism.
“Any environment issue is basically a collective action problem, and you need well-functioning societies with good governance and political institutions, and if that crumbles, I don’t think we have much hope.”
On the positive aspects of AI, he said the combination of AI, synthetic biology, and gene editing is starting to produce much faster production cycles for agricultural products, new breeds of animals, and novel foods. One company, for instance, found a way to make a good milk substitute from pineapple, cabbage, and other ingredients.
Lobell said that AI can identify which ships are illegally harvesting seafood and then trace where they eventually offload such cargo. In addition, AI can help create deforestation-free supply chains, and AI systems mounted on farm tractors can cut by 90% the use of chemicals that pose environmental risks.
“There’s clear tangible progress being made with these technologies in the realm of the environment, and we can continue to build on that,” he added.
AI and Democracy
Persily, a senior fellow and co-director of FSI’s Cyber Policy Center, said, “AI amplifies the abilities of all good and bad actors in the system to achieve all the same goals they’ve always had.”
He noted, “AI is not social media,” even though it can interact with social media. Persily said AI is far more pervasive and significant than any single platform such as Facebook. Problems arise in the areas of privacy, antitrust, bias, and disinformation, but AI issues are “characteristically different” from those of social media.
“One of the ways that AI is different than social media is the fact that they are open-source tools. We need to think about this in a little bit of a different way, which is that it is not just a few companies that can be regulated on closed systems,” Persily said.
As a result, AI tools are available to all of us, he said. “There is the possibility that some of the benefits of AI could be realized more globally,” but there are also risks. For example, in the year and a half since OpenAI released ChatGPT, child sexual abuse material generated with openly available AI tools has multiplied on the Internet.
“The democratization of AI will lead to fundamental challenges to established legacy infrastructure for the governance of the propagation of content,” Persily said.
Balance of AI Power
Fukuyama pointed out that an AI lab at Stanford could not afford leading-edge technology, yet countries such as the U.S. and China have deeper resources to fund AI endeavors.
“This is something obviously that people are worried about,” he said, “whether these two countries are going to dominate the AI race and the AI world and disadvantage everybody.”
Manuel said that most of AI is now operating with voluntary governance – “patchwork” – and that dangerous things involving AI can be done now. “In the end, we’re going to have to adopt a negotiation and an arms control approach to the national security side of this.”
Lobell said that while it might seem universities can’t keep pace with industry, researchers have shown they can reproduce the performance of leading models just days after their release.
On regulation (the European Union is currently weighing legislation), Persily said regulations would be difficult to enforce and risk assessments hard to interpret. What is needed instead, he argued, is a “transparency regime” and an infrastructure that gives civil society a clear view of which models are being released, though building it will be complex.
“I don’t think we even really understand what a sophisticated, full-on AI audit of these systems would look like,” he said.
Dunnmon suggested that an AI governance entity could be created, similar to how the U.S. Food and Drug Administration reviews pharmaceuticals before release.
In terms of AI and military conflicts, he spoke about the need for both AI systems and humans to understand the rewards and risks involved, and, in the case of risk, how it compares to the “next best option.”
“How do you communicate that risk, how do you assess that risk, and how do you make sure the right person with the right equities and the right understanding of those risks is making that risk trade-off decision?” he asked.
The Ford Dorsey Master’s in International Policy program was established in 1982 to provide students with the knowledge and skills necessary to analyze and address complex global challenges in a rapidly changing world, and to prepare the next generation of leaders for public and private sector careers in international policymaking and implementation.