On December 18, 2020, the Center for International Security and Strategy (CISS) at Tsinghua University organized a thematic session of the Tsinghua University International AI Cooperation and Governance Forum 2020 titled “AI and International Security: Challenges and Opportunities.” The session highlighted the preliminary outcomes of AI and International Security, a joint research project between CISS and the Brookings Institution supported by the Berggruen Institute and the Minderoo Foundation.
Mme. FU Ying, Chair of CISS and Honorary Dean of the Institute for AI International Governance of Tsinghua University, and John Allen, President of the Brookings Institution, delivered keynote speeches on the theme of the session. LI Bin, Senior Fellow at CISS and Professor of International Relations at Tsinghua University, gave a detailed presentation on the joint research and the Chinese team’s insights on the risks and governance of AI-enabled weapons. Other project members included HAO Yinghao, Senior Engineer at the Academy of Electronics and Information Technology of China Electronics Technology Group; LI Chijiang, Vice President and Secretary-General of the China Arms Control and Disarmament Association; Nils Gilman, Vice President of Programs at the Berggruen Institute; Chris Meserole, Fellow at the Brookings Institution; and Tom Stefanick, Visiting Fellow at the Brookings Institution. In their remarks, they commented on the “off-limits areas” for AI-enabled weapons, data security risks, challenges to international law, and U.S.-China cooperation on global governance. The thematic session was moderated by XIAO Qian, Deputy Director of CISS.
In his keynote speech, John Allen noted that the strategic competition between the U.S. and China in AI and other novel technologies has become increasingly visible, and narratives of an “AI arms race” have not been in short supply. To avoid a runaway arms race, Allen expressed his hope that the security concerns discussed in the AI and International Security project will be further explored, both at the governmental level and in non-governmental fora, particularly with regard to the national security risks posed by AI, in order to reduce the likelihood that those risks come to pass. With the world at a critical juncture in regulating the development and application of these technologies, Allen called on the governments and expert communities of both the U.S. and China to seize the opportunity that now exists to develop agreements on practical steps to reduce the national security risks posed by AI.
Mme. FU Ying focused her keynote speech on China’s position on international cooperation in AI governance, recalling Chinese President XI Jinping’s message at the G20 Summit this November, which emphasized China’s support for further dialogues and meetings on AI to push for the implementation of the G20 AI Principles and set the course for the constructive development of AI globally. This September, Chinese State Councilor and Foreign Minister WANG Yi proposed a “global initiative on data security,” in the hope that the international community would reach agreement on AI security on the basis of universal participation and affirm the initiative’s commitments through bilateral or multilateral agreements. He laid out three principles for effectively addressing data security risks: upholding multilateralism, balancing security and development, and ensuring fairness and justice.
As Mme. FU Ying pointed out, the joint U.S.-China research project on AI security governance is a success story from which both sides have benefited greatly, and its findings are particularly relevant today. As we come to terms with the inevitable weaponization of AI technology, pathways to AI security governance must be identified, and the lessons of history must not be forgotten. Our consensus on the governance of nuclear weapons, for instance, came too late, after those weapons had already posed an immense threat to all humankind; the early absence of cybersecurity governance offers a similar lesson. This time, we hope that the governance of AI technology, especially AI-enabled weapon systems, will stay ahead of technological advances, and that we will fully understand the risks and reach consensus on governance earlier. Pugwash is a name that often came up in the discussions about global AI governance: the Pugwash Conferences on Science and World Affairs is an institution reputed for its expertise in the governance of nuclear weapons and has played a positive role in nuclear arms control. A similar approach could be taken for AI governance by establishing a Select Committee on AI International Governance that brings together researchers, government officials, and policy professionals.
During his presentation on the progress and findings of the Chinese team’s research, Team Lead LI Bin argued that the basic norms of international law should be embedded into the design and deployment of AI-enabled weapons to ensure compliance with those norms when such weapons are used in armed attacks. Speaking of the application of the Law on the Use of Force and the Law of Armed Conflict, he advocated using the basic principles of international law to steer the development of AI-enabled military technology. In addition, both the developers and the operators of AI-enabled weapon systems should be trained to ensure a better understanding of, and full compliance with, the principle of proportionality in the Law on the Use of Force and the Law of Armed Conflict.
In the panel session, participants took an in-depth look at how the U.S. and China might cooperate on AI governance, and agreed that both are major AI powers that need to guard against the risks of AI-enabled military technology and improve its governance. The abuse or misuse of AI technology could undermine strategic stability, change the rules of engagement, and raise military risks. The U.S. and China share a strong need for risk management as well as a range of common interests, creating tremendous potential for cooperation in international AI governance. As some panelists noted, the use of AI must comply with the principles of proportionality and distinction in international law throughout data collection, algorithm training, and battlefield operations, and AI-enabled weapons must not be used to attack civilians or civilian facilities or cause them excessive harm. In this connection, Chinese experts proposed borrowing the “traffic light” rule for data collection to prohibit AI-enabled weapons from attacking civilian targets and to restrict their targets to fully identified military ones, thereby bringing such weapons into greater conformity with international law.
The thematic session concluded with agreement among the participants that countries around the world, including the U.S. and China, should build trust and dispel misgivings through policy and document exchanges as well as academic dialogues, striving for consensus on global AI security governance.