Xiao Qian: Steps toward AI governance in the military domain

2025-11-18

Sisson has made a commendable effort in developing a hierarchy to prioritize which initiators and responses to govern, and in identifying which of these the United States and China may be able to agree on as one of the first steps toward AI governance in the military domain. Her work deserves respect.

I share her view that China and the United States, as two major countries of AI development and diffusion, have a special responsibility to govern AI in the military domain and to prevent it from harming civilians. However, I am also acutely aware of the significant challenges that hinder genuine dialogue and cooperation between the two nations under the current climate of strategic rivalry. To move forward, both sides must break free from the existing security dilemma.

The concept of the security dilemma was proposed by American political scientist John Herz in 1950 and describes a situation in which states, in pursuing their own security, inadvertently undermine the security of others. The absence of mutual trust often leads to arms races, escalating tensions, and potential conflict. Its root cause lies in uncertainty about others’ intentions and the pervasive sense of insecurity inherent in an anarchic international system.

According to Herz, the security dilemma involves several defining elements:

  1. An anarchic international environment lacking authoritative institutions to regulate state behavior or ensure protection.

  2. Mutual suspicion and fear arising from uncertainty about other actors’ intentions.

  3. Competitive behaviors and security measures adopted in pursuit of self-protection.

  4. A resulting decrease in the overall security of all actors involved.

The ongoing competition between China and the United States in the field of artificial intelligence exhibits many characteristics of such a security dilemma.

First, despite the rapid advancement of artificial intelligence, there is still no unified or binding international treaty governing AI technologies. Global AI governance remains fragmented and regionally driven, with no universally authoritative institution in place. Major powers differ widely in their core concepts, value orientations, and regulatory approaches, leaving the international system in a state of governance anarchy.

Second, mutual suspicion, fear, and uncertainty about intentions have long shaped the trajectory of U.S.-China relations. Since the early 21st century, China’s rapid economic and military rise has sparked intense debate within the United States on how to respond. Beijing’s launch of “Made in China 2025,” its active promotion of the Belt and Road Initiative, and its increased investment in high-tech industries prompted U.S. policymakers to view China with growing caution, fearing a challenge to U.S. global leadership.

In the technological domain, Washington’s AI Action Plan explicitly emphasizes “winning the AI race” against China. To this end, the United States has broadened its definition of national security, strengthened foreign investment screening, tightened export controls on critical technologies, restricted technology transfers to China, and placed several Chinese firms on the Entity List—all in the name of safeguarding national security.

From China’s perspective, these actions are viewed as efforts to contain its AI development, preserve U.S. hegemony, and deny China its legitimate right to technological progress. Within the broader context of strategic rivalry and geopolitical competition, such deepening mistrust and uncertainty have severely narrowed the space for dialogue and cooperation on AI governance between the two powers.

Third, the world has entered an era of profound uncertainty and systemic risk. Regional conflicts and geopolitical tensions have intensified—the Ukraine crisis continues to undermine European security, while the Israel-Palestine conflict has further destabilized the Middle East. Simultaneously, emerging technologies such as AI, 5G, and quantum computing are reshaping global power structures. Non-traditional security threats—including climate change, terrorism, cybersecurity, and public health challenges—are exerting increasing influence on international politics.

In this context, many countries are reassessing the economic and security risks associated with globalization and are turning toward protectionist or security-oriented policies, thereby weakening global supply chain resilience and eroding transnational cooperation. The growing tendency of states to prioritize security competition has become a defining feature of global politics.

Fourth, the strategic rivalry between China and the United States—exemplified by Washington’s policy of constructing “small yards with high fences” and forming exclusive technology alliances with like-minded partners—has not enhanced global security. Both the Reagan National Defense Survey and the Munich Security Report 2025 reveal a widespread sense of insecurity across nations. Meanwhile, the global technological ecosystem’s fragmentation has further complicated international cooperation, heightening the risks of intensified AI competition and potential loss of control over technological development.

Sisson in fact touches upon the security dilemma in her article when she observes that there is no "method for creating a hierarchy to prioritize which initiators and responses to govern," and that "there is no widely accepted analytical means through which to estimate the probability that any one, or even that any broad category, of AI-powered military crisis will occur." Her three proposed methods for defining the hierarchy offer hints on how to move beyond the security dilemma.

It is fair to say that any confidence-building measures between China and the United States on AI governance require efforts to move beyond the security dilemma, and in the current context, rebuilding basic mutual trust is the only viable path forward. Both sides must take concrete measures and make incremental adjustments.

Here are some suggested steps:

Reframing perceptions and managing competition.

Both countries need to reassess how they perceive and position each other in the AI domain. While a certain degree of strategic competition may be unavoidable, it is essential to avoid framing it as a zero-sum or existential struggle. Managed competition—grounded in transparency and predictability—can coexist with selective cooperation. The technology we are living with is characterized by a high degree of unpredictability and an unprecedented pace of development—traits unparalleled in human history. China and the United States, as two leading countries in AI development, need to find a way to communicate on such important issues as how to prevent existential risks and how to maintain human control. Initiating dialogue through a rigorously academic and technically grounded discussion—such as constructing a hierarchy of risks, as proposed by Sisson—offers a constructive starting point. Such an approach not only facilitates mutual understanding but also contributes to reducing misperceptions and misunderstandings between the two sides.

Establishing clear red lines for national security.

Defining and mutually recognizing security "red lines" is critical. Both sides should clearly delineate sensitive areas where cooperation is infeasible, while avoiding the tendency to continuously expand these boundaries. On this basis, they can pursue exchanges and cooperation in low-sensitivity areas, such as AI safety principles, best practices, and capacity-building efforts. Such engagement would enhance mutual understanding and foster interpretability and predictability in each other's actions. At the same time, it is essential for both sides to keep official channels of communication open, even in times of crisis. The understanding reached between Xi and Biden—that humans must always maintain control over the decision to use nuclear weapons—sets a valuable precedent. It demonstrates that national security concerns should not preclude the two countries from achieving agreements on issues of existential significance that bear directly on the future of humanity.

Strengthening collaboration in AI safety.

Beyond the imperative of building trust between nations, we must also confront the challenge of cultivating trust in technology itself. The International AI Safety Report, led by Professor Yoshua Bengio, classifies the risks associated with general-purpose AI into three broad categories: malicious use risks, risks arising from malfunctions, and systemic risks. Addressing these challenges demands collective, coordinated, and truly global efforts. Without effective guardrails or commonly accepted standards, humanity will struggle to feel secure in its reliance on advanced technologies. Encouragingly, a growing community of scientists and experts—both from China and the United States, as well as from industry and academia—has already devoted itself to this shared endeavor, offering promising avenues for constructive collaboration across borders.

Advancing global AI capacity and development goals.

In 2024, China and the United States each cosponsored resolutions on AI at the United Nations General Assembly, setting a positive precedent for multilateral engagement. Building on this momentum, the two sides should continue to promote global AI governance under the U.N. framework and explore the creation of an authoritative international mechanism to regulate AI-related behavior and safeguard all actors. As the two leading powers in AI technology and innovation, the United States and China share a responsibility to contribute to AI capacity-building in developing countries, help narrow the global digital divide, and advance the United Nations Sustainable Development Goals. Joint initiatives in these areas would not only strengthen global governance but also demonstrate that responsible AI development can serve common human interests.
