Cybersecurity
Authors
Charles Mok
Kenny Huang
News Type
News
Date
Paragraphs

In new work, Global Digital Policy Incubator (GDPi) Research Scholar Charles Mok, along with Kenny Huang, a leader in Asia’s internet communities, examines Taiwan’s reliance on fragile external systems and how that reliance exposes Taiwan to threats like geopolitical conflicts, cyberattacks, and natural disasters. The key, write Mok and Huang, is strengthening governance, enhancing investment, and fostering international cooperation in order to secure a resilient future.

For more, read the full paper, out now and free to download.

Subtitle

A new paper from Charles Mok of GDPi examines the current landscape of Taiwan’s internet infrastructure.

Authors
Stanford Internet Observatory
News Type
News
Date
Paragraphs

The agenda for the 2024 Trust & Safety Research Conference is now available. The conference includes two packed days of lightning talks, research presentations, panels, workshops, and a poster session. The conference has an amazing lineup of speakers, including keynote speakers Camille François (Associate Professor of the Practice of International and Public Affairs, Columbia University) and Arvind Narayanan (Professor of Computer Science and Director of the Center for Information Technology Policy, Princeton University).

The Trust & Safety Research Conference convenes a diverse group of academics, researchers, and practitioners from fields including computer science, sociology, law, and political science. It features networking opportunities including happy hours, and complimentary breakfast and lunch are provided on both days.

Register now and save a spot before early bird pricing ends on August 1.

More details on the conference website

Subtitle

Speaker line-up for the third annual Trust & Safety Research Conference announced.

Authors
Stanford Internet Observatory
News Type
News
Date
Paragraphs

Registration is now open for the third annual Trust & Safety Research Conference at Stanford University from September 26-27, 2024. Join us for two days of cross-professional presentations and conversations designed to push forward research on trust and safety.

Hosted at Stanford University’s Frances C. Arrillaga Alumni Center, the Trust & Safety Research Conference convenes participants working on trust and safety issues across academia, industry, civil society, and government. The event brings together a cross-disciplinary group of academics and researchers in fields including computer science, sociology, law, and political science to connect with practitioners and policymakers on challenges and new ideas for studying and addressing online trust and safety issues.

Your ticket provides access to:

  • Two days of talks, panels, workshops and breakouts
  • Breakfast and lunch both days of the conference
  • Networking opportunities, including happy hours and poster sessions

Early bird tickets are $150 for attendees from academia, civil society and government, and $600 for attendees from industry. Ticket prices go up August 1, 2024.

CONFERENCE WEBSITE • REGISTER

Subtitle

Tickets on sale for the third annual Trust & Safety Research Conference to be held September 26-27, 2024. Lock in early bird prices by registering before August 1.

Authors
Clifton B. Parker
News Type
News
Date
Paragraphs

While the potential benefits of artificial intelligence are significant and far-reaching, AI’s potential dangers to the global order necessitate an astute governance and policy-making approach, panelists said at the Freeman Spogli Institute for International Studies (FSI) on May 23.

An alumni event hosted by the Ford Dorsey Master’s in International Policy (MIP) program featured a panel discussion on “The Impact of AI on the Global Order.” Participants included Anja Manuel, Jared Dunnmon, David Lobell, and Nathaniel Persily. The moderator was Francis Fukuyama, Olivier Nomellini Senior Fellow at FSI and director of the master’s program.

Manuel, an affiliate at FSI’s Center for International Security and Cooperation and executive director of the Aspen Strategy Group, said that what “artificial intelligence is starting to already do is it creates superpowers in the way it intersects with other technologies.”

An alumna of the MIP program, Manuel noted an experiment in Switzerland a year ago in which researchers asked an AI tool to come up with new nerve agents; it generated 40,000 of them very rapidly. On the subject of strategic nuclear deterrence, AI capabilities may upend existing policy approaches. Though about 30 countries have voluntarily signed up to follow governance standards for how AI would be used in military conflicts, the future is unclear.

“I worry a lot,” said Manuel, noting that AI-controlled fighter jets will likely be more effective than human-piloted craft. “There is a huge incentive to escalate and to let the AI do more and more and more of the fighting, and I think the U.S. government is thinking it through very carefully.”
 




Geopolitical Competition


Dunnmon, a CISAC affiliate and senior advisor to the director of the Defense Innovation Unit, spoke about the “holistic geopolitical competition” among world powers in the AI realm as these systems offer “unprecedented speed and unprecedented scale.”

“Within that security lens, there’s actually competition across the entirety of the technical AI stack,” he said.

Dunnmon said an underlying security question is whether a given piece of AI software is running on top of libraries sourced from Western companies or on top of an underlying library stack owned by state enterprises. “That’s a different world.”

He said that “countries are competing for data, and it’s becoming a battlefield of geopolitical competition.”

Societal, Environmental Implications


Lobell, a senior fellow at FSI and the director of the Center for Food Security and the Environment, said his biggest concern is about how AI might change the functioning of societies as well as possible bioterrorism.

“Any environment issue is basically a collective action problem, and you need well-functioning societies with good governance and political institutions, and if that crumbles, I don’t think we have much hope.”

On the positive aspects of AI, he said the combination of AI, synthetic biology, and gene editing is starting to produce much faster production cycles of agricultural products, new breeds of animals, and novel foods. One company, for example, found a way to make a good substitute for milk using pineapple, cabbage, and other ingredients.

Lobell said that AI can identify which ships are illegally capturing seafood and trace that cargo back to where it is eventually offloaded. In addition, AI can help create deforestation-free supply chains, and AI mounted on farm tractors can help reduce by 90% the use of chemicals that pose environmental risks.

“There’s clear tangible progress being made with these technologies in the realm of the environment, and we can continue to build on that,” he added.
 




AI and Democracy


Persily, a senior fellow and co-director of FSI’s Cyber Policy Center, said, “AI amplifies the abilities of all good and bad actors in the system to achieve all the same goals they’ve always had.”

He noted, “AI is not social media,” even though it can interact with social media. Persily said AI is far more pervasive and significant than any given platform such as Facebook. Problems arise in the areas of privacy, antitrust, bias, and disinformation, but AI issues are “characteristically different” from those of social media.

“One of the ways that AI is different than social media is the fact that they are open-source tools. We need to think about this in a little bit of a different way, which is that it is not just a few companies that can be regulated on closed systems,” Persily said.

As a result, AI tools are available to all of us, he said. “There is the possibility that some of the benefits of AI could be realized more globally,” but there are also risks. For example, in the year and a half since OpenAI released ChatGPT, child pornography has multiplied on the Internet.

“The democratization of AI will lead to fundamental challenges to established legacy infrastructure for the governance of the propagation of content,” Persily said.

Balance of AI Power


Fukuyama pointed out that an AI lab at Stanford could not afford leading-edge technology, while countries such as the U.S. and China have far deeper resources to fund AI endeavors.

“This is something obviously that people are worried about,” he said, “whether these two countries are going to dominate the AI race and the AI world and disadvantage everybody.”

Manuel said that most of AI is now operating with voluntary governance – “patchwork” – and that dangerous things involving AI can be done now. “In the end, we’re going to have to adopt a negotiation and an arms control approach to the national security side of this.” 

Lobell said that while it might seem universities can’t stay up to speed with industry, people have shown they can reproduce the performance of industry models just days after their release.
 




On regulation (the European Union is currently weighing legislation), Persily said it would be difficult to enforce regulations and interpret risk assessments. What is needed instead, he said, is a “transparency regime” and an infrastructure that gives civil entities a clear view of what models are being released, though this will be complex.

“I don’t think we even really understand what a sophisticated, full-on AI audit of these systems would look like,” he said.

Dunnmon suggested that an AI governance entity could be created, similar to how the U.S. Food and Drug Administration reviews pharmaceuticals before release.

In terms of AI and military conflicts, he spoke about the need for AI and humans to understand the rewards and risks involved, and in the case of the latter, how the risk compares to the “next best option.”

“How do you communicate that risk, how do you assess that risk, and how do you make sure the right person with the right equities and the right understanding of those risks is making that risk trade-off decision?” he asked.



The Ford Dorsey Master’s in International Policy program was established in 1982 to provide students with the knowledge and skills necessary to analyze and address complex global challenges in a rapidly changing world, and to prepare the next generation of leaders for public and private sector careers in international policymaking and implementation.

Subtitle

At a gathering for alumni, the Ford Dorsey Master's in International Policy program hosted four experts to discuss the ramifications of AI on global security, the environment, and political systems.

Authors
News Type
Blogs
Date
Paragraphs

For many, spring break is synonymous with time away on laid-back beaches. But for the hardworking students in the Ford Dorsey Master's in International Policy Class of 2024, the break from their normal classes was the perfect opportunity to meet with partners all over the world and conduct field research for their capstone projects.

Each year, second-year master's students participate in a two-quarter course called the Policy Change Studio. Built on the idea that hands-on experience navigating the realities of bureaucracy, resource constraints, and politics is just as important for students as book learning and theory, this capstone course pairs groups of students with governments, NGOs, and research institutes around the world to practice crafting policy solutions that help local communities.

From agricultural policy in Mongolia to public transportation in Ghana, cyber resilience in Taiwan, and AI governance in Brazil, keep reading to see how our students have been making an impact!

 

Brazil

Poramin Insom, Justin Yates, Thay Graciano, and Rosie Lebel traveled to Rio de Janeiro to work with the Institute for Technology and Society to investigate ways to design a governance strategy for digital and AI tools in public defenders' offices.

Artificial intelligence promises to transform public defenders' offices in Brazil, as we saw throughout our fieldwork trip in Rio de Janeiro. Our team spent the week discussing the integration of AI in legal practice with defenders from 13 states and experts from the Instituto de Tecnologia e Sociedade (ITS Rio) and COPPE/UFRJ. We focused on developing AI tools tailored to reduce administrative burdens, enabling defenders to concentrate on advocacy. With nearly 80% of Brazilians entitled to free legal aid, AI can automate routine tasks like document categorization and grammatical corrections.

Significant challenges relate to privacy and potential biases in algorithms, underscoring the need for collaborative governance to ethically implement these solutions. Thus, a unified technological strategy is crucial. We hope that through our work, we can create a collaborative governance framework that will facilitate the development of digital and AI tools, ultimately helping citizens at large. We appreciated the opportunity to learn from incredibly dedicated professionals who are excited to find new ways to jointly develop tools.

 

China-Taiwan

Sara Shah, Elliot Stewart, Nickson Quak, and Gaute Friis traveled to Taiwan to gain a firsthand perspective on China’s foreign information manipulation and influence (FIMI), with a specific focus on the role that commercial firms are playing in supporting these campaigns.

We met with government agencies, legislators, military and national security officials, private sector actors, and civil society figures within Taiwan's vibrant ecosystem for countering Foreign Information Manipulation and Interference (FIMI). On the ground, the team found that China’s FIMI operations are evolving and increasingly subtle and complex. As generative AI empowers malign actors, our team assessed that the battle against sophisticated, state-sponsored influence campaigns requires a more integrated and strategic approach that spans legal, technological, and societal responses.

 

Ghana

Skylar Coleman and Maya Rosales traveled to Accra and Cape Coast in Ghana while Rosie Ith traveled to Washington DC and Toronto to better understand the transit ecosystem in Ghana and the financial and governing barriers to executing accessible and reliable transportation.

During their time in Ghana, Skylar and Maya met with various stakeholders in the Ghanaian transportation field, including government agencies, ride-share apps, freight businesses, academics, and paratransit operators. Presently, paratransit operators, known locally as "tro tros," dominate the public transportation space. Through a variety of meetings with union officials and drivers in terminals around Accra, they were able to learn about the nature of the tro tro business and its relationships (and lack thereof) with the government.

In D.C., Rosie met with development organizations and transport officials and attended the World Bank’s Transforming Transportation Conference and their paratransit and finance roundtable. Collectively, they learned about the issues facing the transport industry primarily related to problems surrounding bankability, infrastructure and vehicle financing, and lack of government collaboration with stakeholders. Insights from the trip spurred their team away from conventional physical interventions and toward solutions that will bridge stakeholder gaps and improve transport governance and policy implementation.

 

Mongolia

Ashwini Thakare, Kelsey Freeman, Olivia Hampsher-Monk, and Sarah Brakebill-Hacke traveled to Mongolia and Washington D.C. to better understand grassland degradation, the role that livestock overgrazing plays in exacerbating the problem, and what is currently being done to address it.

Our team had the opportunity to go to Mongolia and Washington DC where we conducted over twenty structured interviews with a variety of stakeholders. We spoke with people including local and central government officials, officials of international organizations, representatives from mining and cashmere industries, community organizations, academic researchers, herder households, NGOs and Mongolian politicians. Though we knew the practice of nomadic herding is core to Mongolia’s national identity, we didn’t fully realize just how integrated this practice, and the problem of grassland degradation, are in the economy, society and politics of Mongolia.

In the run-up to Mongolia’s election in June, this issue was especially top of mind for those we interviewed. Everyone we spoke with had some form of direct connection with herding, mostly through their own families. Our interviews, along with our time in Ulaanbaatar and the surrounding provinces, helped us deepen our understanding of the context in which possible interventions operate. Above all, we observed the extensive work already being done to tackle grassland degradation, and saw that institutionalizing and supporting these existing approaches could help address the issue.

 

New Zealand

Andrea Purwandaya, Chase Lee, Raul Ruiz, and Sebastian Ogando traveled to Auckland and Wellington in New Zealand to support Netsafe’s efforts in combating online harms among 18- to 30-year-olds of Chinese descent. This partnership aims to enhance online safety messages to build safer online environments for everyone.

While on the ground, our team met with members from Chinese student organizations and professional associations to gather primary evidence on the online harms they face. We also met with Tom Udall, the U.S. Ambassador to New Zealand, his team, and university faculty to brainstorm solutions to tackle this problem. We learned about the prevalent use of “super-apps” beyond WeChat in crowdsourcing solutions and support, and were able to better grasp the complexities of the relationships between public safety organizations and the focus demographic. In retrospect, it was insightful to hear from actors across the public, private, and civic sectors about the prevalence of online harms and how invested major stakeholders are in finding common solutions through a joint, holistic approach.

 

Sierra Leone

Felipe Galvis-Delgado, Ibilola Owoyele, Javier Cantu, and Pamella Ahairwe traveled to Freetown, Sierra Leone to analyze headwinds affecting the country's solar mini grid industry as well as potential avenues to bolster the industry's current business models.

Our team met with private sector mini grid developers, government officials from the public utilities commission and energy ministry, and rural communities benefiting from mini grid electrification. While we saw first-hand the significant impact that solar mini grids can have on communities living in energy poverty, we also developed a deeper understanding of the macroeconomic, market, and policy conditions preventing the industry from reaching its full potential of providing energy access to millions of Sierra Leoneans. Moving forward, we will explore innovative climate finance solutions and leverage our policy experience to develop feasible recommendations specific to the local environment.

 

Taiwan

Dwight Knightly, Hamzah Daud, Francesca Verville, and Tabatha Anderson traveled to Taipei, Keelung, and Hsinchu, Taiwan to explore the island democracy’s current posture and future preparedness regarding the security of its critical communications infrastructure—with a special focus on its undersea fiber-optic cables.

During our travels around Taiwan and our many meetings, we were surprised by the lack of consensus among local decision-makers regarding which potential solution pathways were likely to yield the most timely and effective results. These discrepancies often reflected information asymmetries and divergent institutional interests across stakeholders, both of which run counter to Taiwan’s most urgent strategic priorities. Revising existing bureaucratic authorities and facilitating the spread of technical expertise would enable and enrich investment in future resilience.

While we anticipated that structural inefficiencies would impede change to some degree, our onsite interviews gave us a clearer picture of where policy interventions will likely have the most positive effect for Taiwan's defense. With the insights from our fieldwork, we intend to spend the remainder of the quarter exploring new leads, delving into theory of change, and designing a set of meaningful policy recommendations.

 

The Ford Dorsey Master's in International Policy

Want to learn more? MIP holds admission events throughout the year, including graduate fairs and webinars, where you can meet our staff and ask questions about the program.

Subtitle

Each spring, second year students in the Ford Dorsey Master's in International Policy spread out across the globe to work on projects affecting communities from Sierra Leone to Mongolia, New Zealand, and beyond.

Paragraphs

Our increasingly internet-connected world has yielded exponential demand for cybersecurity. However, protecting cyber infrastructure is technically complex, constantly changing, and expensive. Small organizations or corporations with legacy systems may struggle to implement best practices. To increase cybersecurity for organizations in Russia, we propose fostering a culture of ethical hacking by supporting bug bounty programs. To date, bug bounties have not had the same level of success or investment in Russia as in the United States; yet, we argue that bug bounty programs, when properly established, institutionalize a culture of ethical hacking by establishing trust between talented hackers and host organizations. This paper will first define ethical hacking and bug bounty programs. It will explore the current bug bounty landscape in Russia and the United States. Based on issues identified, we will proceed to offer a set of best practices for establishing a successful bug bounty program. Finally, we will discuss some considerations for setting up bug bounty programs in Russia.

Publication Type
Conference Memos
Publication Date
Journal Publisher
The Stanford US-Russia Journal
Authors
Evgeniia Rudenko
Anastasia Gnatenko
Andrew Milich
Kathryn Hedgecock
Zhanna Malekos Smith
Number
No. 1

Gustavs Zilgalvis is a Ford Dorsey Master’s in International Policy candidate at Stanford’s Freeman Spogli Institute for International Studies, a National Security Innovation Scholar at Stanford’s Gordian Knot Center for National Security Innovation, a founding Director at the Center for Space Governance, and a Summer Associate at Lux Capital. At Stanford, he is specializing in Cyber Policy and Security and is interested in the geopolitical and economic implications of the development of artificial intelligence and the space domain. 

Previously, Gustavs consulted on Policy Development & Strategy at Google DeepMind and held a Summer Research Fellowship at Oxford’s Future of Humanity Institute; his research in computational high-energy physics has appeared in SciPost Physics and SciPost Physics Core. Gustavs holds a Bachelor of Science with First-Class Honours in Theoretical Physics from University College London, and graduated first in his class from the European School Brussels II. Gustavs is an enthusiastic golfer who has won two national championships, and enjoys skiing, surfing, cycling, swimming, and listening to music in his spare time.

Master's in International Policy Class of 2025

Tiffany Saade is a coterminal master's candidate in the Stanford Ford Dorsey Master's in International Policy, specializing in Cyber Policy and Security. She is also completing the final year of her undergraduate degree at Stanford in political science and international relations, focusing on geopolitical risk, with regional expertise in East Asia and the Middle East. In her master's, Tiffany focuses on digital transformation, AI policy, and data privacy.

Previously, Tiffany worked as a political intern for Ambassador David Hale, at the U.S. Institute of Peace, and at the Carnegie Endowment for International Peace, focusing on conflict resolution, security, and state-building in the Middle East. Most recently, during her time at the European Council on Foreign Relations in the London, Berlin, and Madrid offices, Tiffany focused on geotechnology, AI regulation, and transatlantic cooperation on cybersecurity.

She has been a World Economic Forum Global Shaper for the Palo Alto Hub since March 2022, and Vice Curator since July 2023, steering social impact and innovation toward the issue she is most passionate about: artificial intelligence and its applications in education, policymaking, and economic empowerment. Currently, Tiffany is a research assistant at Stanford HAI for Jennifer King, working at the intersection of data privacy, manipulative design, genetic privacy, IoT, and digital surveillance. She recently joined the Trusted Election Analysis and Monitoring (TEAM) working group at the Harvard Belfer Center for Science and International Affairs, led by Senior Fellow the Honorable Ellen McCarthy, researching the problem of malign election information, its threats to political processes, and the role of an AI-powered near-real-time data dashboard and chat interface in providing the public with accurate information to preserve electoral integrity and institutional fairness. She is also completing an individual research project on digital surveillance and nation-branding in East Asia and MENA, advised and supervised by Andrew Grotto.

Tiffany’s interests range from geopolitical risk and peacebuilding, to the intersection of AI and defense, to the ways in which policymaking could enhance data privacy, especially in an era riddled with disinformation, cyberattacks, and zero-sum power struggles. In her first year at MIP, Tiffany hopes to continue her research on digital surveillance and disinformation, delve deeper into AI governance and regulation, and learn more about how open-source large language models can pose a national security risk in the context of rising tensions in the South China Sea and of autonomous systems in warfare. She is from Beirut, Lebanon, speaks French, English, and Arabic, and is currently learning Mandarin.

Master's in International Policy Class of 2025

Sae is a member of the Master’s in International Policy Class of ’25 at Stanford, with a profound interest in economic security and trade policies. She recently earned her LL.M. from Georgetown University Law Center, specializing in international trade law and conducting extensive research on investment screening as her final project. Sae aims to broaden her knowledge of cyber and tech policy at Stanford.

Prior to her Master’s program, Sae served at the Ministry of Foreign Affairs of Japan, where she was in charge of international trade affairs involving the World Trade Organization (WTO), G20, and G7. Sae previously interned with the WTO, writing a trade policy report on China, and with the Economic Section at the Embassy of Japan in the US, as well as with several think tanks, to delve deeper into her areas of expertise.

Sae holds a B.A. from Keio University, Faculty of Law in Japan, and completed a year-long exchange program focusing on political science and anthropology at the University of California, Berkeley.

Master's in International Policy Class of 2025

Kevin Klyman is a technology policy strategist focused on artificial intelligence, U.S.-China competition, and regulating emerging technologies. In addition to being an MIP candidate at Stanford, he is a Technology Policy Researcher at Harvard’s Avoiding Great Power War Project, an Emerging Expert at the Forum on the Arms Trade, and a prospective JD candidate at Harvard Law School.

Klyman’s writing on technology and geopolitics has been published in Foreign Policy, TechCrunch, Just Security, The American Prospect, The Diplomat, Inkstick, The National Interest, and the South China Morning Post. He is the author of “The Great Tech Rivalry: China vs. the U.S.” with Professor Graham Allison, which has been cited by The Wall Street Journal, The Economist, and NPR, among others.

Klyman’s research primarily addresses responsible development and use of large AI models in the United States, Europe, and China. He also conducts research related to compute governance, quantum computing export controls, telecommunications infrastructure deployment, clean energy supply chains, biotechnology supply chains, digital trade agreements, digital technology regulators, and digital development institutions.

Klyman has led tech policy initiatives for a variety of the world’s leading international organizations. As an Artificial Intelligence and Digital Rights Fellow at United Nations Global Pulse, the AI lab of the UN Secretary-General, he headed the organization’s work on national AI strategies and coordinated the UN’s Privacy Policy Group. Klyman helped lead the development of a risks, harms, and benefits assessment for algorithmic systems that is now used across the UN. His other projects included working with engineers to address risks posed by the UN’s machine learning-based tools, organizing international consultations on data governance frameworks, and drafting data sharing agreements between the UN and the private sector. After the onset of the pandemic, Klyman coauthored a new privacy policy in partnership with the World Health Organization—the “Joint Statement on Data Protection and Privacy in the COVID-19 Response”—which was adopted by the UN as a whole.

As a Policy Fellow at the UN Foundation’s Digital Impact Alliance, Klyman built a database that is now used by the World Bank and the UN Development Programme to assess countries’ readiness for digital investment. He also worked with the German and Estonian governments to launch the GovStack initiative, which assists governments in providing digital services. At the Campaign to Stop Killer Robots, Klyman directed research on countries’ policies regarding autonomous weapons, resulting in the landmark report “Stopping Killer Robots: Country Positions on Banning Fully Autonomous Weapons and Retaining Human Control.”

Klyman has also contributed to a number of policy arenas aside from technology. At Human Rights Watch, he helped expose war crimes in Syria and Yemen through open-source intelligence gathering and coauthored a report about the illegal use of cluster munitions. As a Legislative Assistant to the Mayor of Berkeley, California, he drafted a dozen pieces of legislation that nearly doubled the city’s investments in affordable housing. Additionally, as a Legislative Assistant to an elected commissioner on the Berkeley Rent Stabilization Board, he authored enabling legislation that paved the way for Berkeley to become one of the first and only cities in the country to ban housing discrimination against formerly incarcerated tenants.

Klyman attended UC Berkeley as an undergraduate, graduating with highest honors in political science along with a degree in applied mathematics concentrating in computer science. He is an award-winning debater who achieved the highest ranking in Berkeley’s history in American parliamentary debate and served as Co-President of Berkeley’s parliamentary debate team; he has also coached multiple national debate champions. His thesis on Chinese foreign policy won the Owen D. Young Prize as the top paper in international relations, and he received the John Gardner Public Service Fellowship as one of Berkeley’s top three public service-oriented graduates. He serves as Co-President of the John Gardner Fellowship Association, a 501(c)(3).

Master's in International Policy Class of 2025