Deputy Director of the Center for International Security and Strategy (CISS) at Tsinghua University
Non-resident Fellow, CISS, Tsinghua University
Visitors take pictures of robot arms at the 2023 World Robot Conference in Beijing.
The rapid advancement of artificial intelligence technology and its applications has drawn significant attention globally. In 2023, the release of OpenAI’s powerful GPT-4 model sparked intense competition among enterprises and laboratories to develop large language models. This year, OpenAI unveiled Sora, a tool for creating realistic short videos, demonstrating AI’s immense potential.
Despite the exciting upside, however, AI’s potential risks present critical challenges that must be addressed. As major AI powers, China and the United States have initiated communication on AI governance. In November, at the Filoli House and Garden estate in Woodside, California, Presidents Xi Jinping and Joe Biden agreed to establish a U.S.-China intergovernmental dialogue mechanism on AI. In January it was agreed that the first dialogue would be held this spring.
Given the context of great power competition and cooperation, building consensus will likely be challenging because of AI’s dual-use nature and the low level of trust between the two countries. Holding policy dialogues and reaching consensus in the broader realm of social governance may be more feasible than in the sensitive domain of military security. Nevertheless, preventing the militarization of AI applications from undermining strategic stability remains extremely important.
Since 2018, Tsinghua University’s Center for International Security and Strategy has explored the impact of AI on national security and international relations, as well as the possibility of establishing common norms. In 2019, CISS and the Brookings Institution launched a joint research project on artificial intelligence and international security.
From 2019 to 2024, academic discussions between the two sides progressed haltingly because of escalating strategic differences and declining mutual trust. But scholars generally believed that the governance of AI-enabled weapons, particularly autonomous weapons, was paramount. The project’s halting pace reflected the difficulty of sustaining cutting-edge academic cooperation amid great power rivalry, but its progress was encouraging.
The initial focus was on establishing a degree of consensus and rules regarding AI-enabled weapons. Experts explored issues such as target identification, compliance with international law, training data and system security. They recognized each other’s perspectives on the new challenges posed by AI military applications and explored measures to reduce risks at different stages.
Despite theoretical and practical difficulties, the two teams carried out exploratory work on establishing confidence-building measures from the perspective of strategic stability. Experts acknowledged that factors such as declining mutual trust, intensifying military confrontation and fierce technological competition increased the uncertainty of establishing confidence-building measures.
To overcome the shortage of AI military application cases, the two sides designed various discussion scenarios, including the use of lethal and non-lethal autonomous weapons, the deployment of autonomous and intelligent technologies in space and the deep ocean, as well as the use of large models to support battlefield decision-making. Notably, experts from both sides emphasized the important role of humans in crisis management.
To resolve divergent understandings of terminology, the two sides established a terminology working group and engaged in exchanges that resulted in consensus on 25 terms. However, differences remained in distinguishing between autonomy and intelligence, reflecting the complexity of AI technology and the difficulties involved in implementing relevant arms control measures.
The joint research project was regarded as a form of Track II diplomacy, involving unofficial and informal contacts and interactions among intellectual elites, particularly policy research experts. The project was initiated by Chinese Ambassador Fu Ying and retired U.S. Admiral John Allen, bringing together scholars and experts from various fields.
Organizers arranged dialogue venues, topics, formats and participants through friendly consultations. Even during the lowest point of Sino-U.S. relations, the project teams maintained a rhythm of two dialogues per year, selecting third countries as venues for in-person meetings. The two think tanks invited official observers, submitted summary reports to government departments and reported project progress, thereby maintaining interaction and communication with their respective governments.
During the dialogue process, CISS gradually established a stable interdisciplinary research team and formed an academic community in the field of AI military applications. Additionally, the team initiated dialogues and exchanges with think tanks from Europe and Southeast Asia.
Generally, when official diplomatic communication is not smooth, non-governmental dialogues can maintain communication channels and provide information and intellectual support for policymaking. As China and the United States prepare for an intergovernmental dialogue on AI, such Track II dialogues play a crucial role in supporting diplomatic communication.
Track II dialogues can conduct research on long-term issues of interest to the intergovernmental dialogue, such as clarifying key terminology, exploring legal and policy foundations for governance cooperation and identifying consensus norms and rules for maintaining strategic stability. Regarding the upcoming dialogue, the broader topic of AI governance may be more readily included on the agenda than the sensitive issue of military AI. Given the U.S. presidential election and the uncertainties surrounding America’s China strategy, the likelihood of communication specifically on AI military issues may be low, but Track II dialogues in the military domain remain necessary.
Additionally, Track II dialogues can undertake research on AI risk identification, such as assessing the global security risks posed by the practical applications of large AI models and analyzing potential areas where AI militarization could increase strategic instability. One direction to explore is a framework for classifying levels of autonomy and intelligence in weapons systems and categorizing the corresponding weapons systems based on open-source intelligence.
Dialogue and cooperation between China and the United States are paramount, as they are the countries with the most rapid development of AI technology and applications. Such interactions can help mitigate potential risks that may affect bilateral relations and international security, as well as contribute insights useful in establishing the international norms and institutions needed for global AI governance.