Fu Ying And John Allen: Together, The U.S. And China Can Reduce The Risks From AI

2020-12-18
  • This article was produced by and originally published in Noema Magazine.

    Despite a deterioration in U.S.-China relations, both countries should seize this rare historical moment to develop common norms on AI.

    BY FU YING AND JOHN ALLEN

    DECEMBER 17, 2020

    Over the past 15 months, we helped lead a series of track-two (unofficial) dialogues on the development and application of artificial intelligence-enabled military systems, organized by the Center for International Security and Strategy at Tsinghua University, the Brookings Institution, the Berggruen Institute and the Minderoo Foundation. The series consisted of three day-long dialogues. The first was held in October of last year in Beijing and the second in February of this year in Munich, Germany. Due to travel restrictions related to COVID-19, the third dialogue was held virtually in October. We agreed to keep these discussions and the findings that flowed from them confidential until now.

    What follows begins with an essay by Fu and concludes with an essay by Allen.

    FU YING:

    BEIJING — In recent years, the rapid advance of artificial intelligence technology has brought about enormous opportunities, but technological revolution often comes with unforeseeable security challenges. In particular, the moral and technological hazards of weaponized AI call for attention.

    Some experts and scholars from around the world advocate an outright ban on developing intelligent weapons that can autonomously identify and kill human targets, arguing that such weapons should not be allowed to hold power over human life. As things stand, however, a global consensus on a total ban on AI weapons will be hard to reach, and discussions and negotiations, if they happen at all, will be protracted. Judging from the current pace of AI development, militarization is all but inevitable.

    It may be more viable to require the development of AI-enabled weapons to align with existing norms of international law. Countries ought to work together on risk prevention for AI-enabled weapons, with the aim of reaching consensus and jointly building governance mechanisms.

    During track-two discussions with our colleagues from the U.S., we focused on how to define the boundaries of “off-limits areas” for AI-enabled weapons, how to regulate such weapons in accordance with international law and norms, and how to encourage restraint so as to prevent the abuse of AI data in military applications.

    The Military And Security Challenges Of AI

    There are several potential challenges of AI-enabled weapons systems. First, AI has inherent technical defects that may make it hard for attackers to limit the range of their strikes, exposing those attacked to excessive collateral damage and potentially escalating conflicts. AI-enabled weapons should not only distinguish between military and civilian targets but also avoid causing excessive collateral or indirect damage to civilian targets. However, it is uncertain whether existing AI technology can ensure that these conditions are satisfied in the use of force.

    Second, the AI technology wave, driven by machine learning, needs reams of data, but it is not yet possible to completely prevent algorithms and training datasets from introducing biases into the application of AI systems. This creates the risk that decision-makers will be given wrong recommendations. If a training dataset is contaminated by another country, leading systems to produce incorrect reconnaissance information, military decision-makers may reach misguided judgments and order ill-founded military deployments.
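
    To make this mechanism concrete, here is a minimal, purely illustrative Python sketch: the data, labels and classifier are toy assumptions of ours, not any real military system. It shows how silently flipping a fraction of training labels, a simple form of data poisoning, degrades a classifier’s accuracy on clean test data:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Two well-separated synthetic classes, e.g. hypothetical "vehicle" (0) vs. "decoy" (1).
    def make_data(n):
        X0 = rng.normal(loc=-2.0, scale=1.0, size=(n, 2))
        X1 = rng.normal(loc=+2.0, scale=1.0, size=(n, 2))
        return np.vstack([X0, X1]), np.array([0] * n + [1] * n)

    X_train, y_train = make_data(200)
    X_test, y_test = make_data(100)

    def knn1_predict(X_train, y_train, X_test):
        # 1-nearest-neighbor: each test point takes the label of its closest training point.
        d = np.linalg.norm(X_test[:, None, :] - X_train[None, :, :], axis=2)
        return y_train[d.argmin(axis=1)]

    def accuracy(y_true, y_pred):
        return float((y_true == y_pred).mean())

    # Clean training labels.
    print("clean accuracy:", accuracy(y_test, knn1_predict(X_train, y_train, X_test)))  # ~0.98

    # An adversary silently flips 30% of the training labels ("data poisoning").
    y_poisoned = y_train.copy()
    flip = rng.choice(len(y_train), size=int(0.3 * len(y_train)), replace=False)
    y_poisoned[flip] = 1 - y_poisoned[flip]
    print("poisoned accuracy:", accuracy(y_test, knn1_predict(X_train, y_poisoned, X_test)))  # ~0.70
    ```

    The model, its inputs and the reported figures are all synthetic; the point is only that a contaminated dataset degrades outputs without any visible change to the system itself.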

    Third, the difficulty of human-machine collaboration is the ultimate challenge for the militarization of AI. Machine-learning and big-data processing mechanisms have limitations. The various approaches to AI, such as behaviorist reinforcement learning, connectionist deep learning and symbolic expert systems, cannot accurately reflect human cognitive capabilities such as intuition, emotion, responsibility and values. The military application of AI is an integrated human-machine-environment collaborative process, and the machine’s deficiencies in interpretability, learning and common sense will magnify the risks of battlefield conflict and may even escalate international crises.

    Pathways To AI Security Governance

    Both sides of the discussions agreed that countries need to exercise restraint in the military field to prevent humankind from suffering catastrophic damage from the weaponization of AI technology. Countries should prohibit assisted decision-making systems that are not cognizant of responsibility or risk. When AI-enabled weapons are used, the scope of strikes must be limited in order to prevent excessive collateral damage and avoid the escalation of conflict.

    In addition, public education should reflect the need for military restraint. Since AI technology is prone to proliferation, it may end up in the hands of hackers who use it to endanger public security.

    How to align the use of AI-enabled weapons with the basic principles of international law is another focus for security and governance. The U.N. Charter stipulates that member states may not use force unless authorized by the Security Council or for the purpose of self-defense. Accordingly, when states use force in self-defense, the intensity and scale of that force must be commensurate with the seriousness of the attack or threat.


    “Since AI technology is prone to proliferation, it may end up in the hands of hackers who use it to endanger public security.”

    During our discussions, Chinese experts in particular pointed out that countries should assume legal responsibilities and be proactive in promoting international norm-building on military actions involving AI. In the meantime, a threshold for human involvement needs to be determined to make sure that the use of intelligent weapons does not cause superfluous injury. Since AI-enabled weapons platforms can hardly judge which attacks are necessary, appropriate and proportionate, the judgment of human commanders must be respected.

    Furthermore, the security of AI data must be guaranteed. Data mining, collection, labeling, classification, use and monitoring should be subject to norms and restrictions. The process and means of collecting training data for intelligent weapons should comply with international law, and the amount of data collected should be required to reach a certain scale. The quality and accuracy of data labeling and classification should be ensured to avoid generating wrong models, which could lead decision-makers to wrong judgments. And attention should be paid to the possibility that targets and data have been corrupted.
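
    As a hypothetical sketch of what such norms might look like in practice, the Python fragment below runs a few automated checks over a labeled dataset before training. The thresholds, names and checks are illustrative assumptions of ours, not any agreed standard:

    ```python
    import numpy as np

    MIN_EXAMPLES = 10_000          # "data collected should reach a certain scale" (assumed value)
    MAX_CLASS_SHARE = 0.90         # no single label may dominate the dataset (assumed value)
    MAX_DUPLICATE_SHARE = 0.01     # many exact duplicates suggest collection faults (assumed value)

    def validate_dataset(X: np.ndarray, y: np.ndarray) -> list[str]:
        """Return human-readable problems; an empty list means all checks passed."""
        problems = []
        if len(X) < MIN_EXAMPLES:
            problems.append(f"dataset too small: {len(X)} < {MIN_EXAMPLES}")
        _, counts = np.unique(y, return_counts=True)
        if counts.max() / counts.sum() > MAX_CLASS_SHARE:
            problems.append("one label dominates; labeling or sampling may be skewed")
        duplicates = len(X) - len(np.unique(X, axis=0))
        if duplicates / len(X) > MAX_DUPLICATE_SHARE:
            problems.append(f"{duplicates} duplicate examples; possible collection fault")
        return problems

    # Example: a dataset that is both too small and badly imbalanced.
    X = np.random.default_rng(0).normal(size=(500, 8))
    y = np.array([0] * 490 + [1] * 10)
    for problem in validate_dataset(X, y):
        print("FAIL:", problem)
    ```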

    The Chinese side proposed developing a grading system for degrees of weapon autonomy. For example, AI weapons’ functional capabilities may be categorized into five levels: semi-automated, partially automated, conditionally automated, highly automated and fully automated. By clarifying the different levels of autonomy, we may be able to better identify and ensure the role of humans and exercise more effective management and control of AI and autonomous weapon systems.
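
    As a purely illustrative sketch, the five proposed levels could be encoded as a simple data structure; the level names follow the proposal above, while the oversight rule attached to them is our own assumption, not part of the proposal:

    ```python
    from enum import IntEnum

    class AutonomyLevel(IntEnum):
        SEMI_AUTOMATED = 1
        PARTIALLY_AUTOMATED = 2
        CONDITIONALLY_AUTOMATED = 3
        HIGHLY_AUTOMATED = 4
        FULLY_AUTOMATED = 5

    def requires_human_authorization(level: AutonomyLevel) -> bool:
        # Illustrative policy, not part of the proposal itself: any engagement at
        # or above conditional automation still needs an explicit human decision.
        return level >= AutonomyLevel.CONDITIONALLY_AUTOMATED

    for level in AutonomyLevel:
        print(f"{level.name}: human authorization required = "
              f"{requires_human_authorization(level)}")
    ```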

    China-U.S. Cooperation On Global AI Governance

    We have at this moment an important window of opportunity for establishing international norms on AI security. China and the U.S. are in a good position to coordinate and cooperate in this area, as the research into and application of AI technology in both countries are growing rapidly. Security concerns about AI applications are shared by other countries as well, which indicates that the challenges are common to humanity and cannot be solved by any one or two countries alone. Dialogue and cooperation between China and the U.S. can contribute wisdom to global collaboration in this area.

    It is therefore important that the two countries start official discussions on how to build international norms and regimes, explore areas of cooperation based on their respective interests and concerns, exchange and translate relevant documents, and carry out policy dialogue and academic exchanges so as to reduce the potential risks to bilateral relations and global security.


    “China is ready to work with the U.S. and the rest of the world on the governance of AI.”

    In recent years, China has been sending clear signals of cooperation. At the G20 on Nov. 21, President Xi Jinping proposed a meeting on AI to advance the group’s AI principles and set the course for the healthy development of AI globally. Around two months earlier, Foreign Minister Wang Yi proposed a “global initiative on data security” that would be open to other countries and support bilateral, regional and international agreements. He laid out three principles for effectively addressing the risks and challenges to data security: upholding multilateralism, balancing security and development, and ensuring fairness and justice.

    While developing AI technology, China is also actively building relevant governance frameworks at home. China released a white paper on artificial intelligence standardization in 2018, which set out four ethical principles: human interest, responsibility, transparency, and consistency between rights and obligations.

    China is ready to work with the U.S. and the rest of the world on the governance of AI. It is our belief that technological breakthroughs should ultimately benefit all of humanity and should not be pushed into a zero-sum situation.


    JOHN ALLEN:

    WASHINGTON — In recent years, as the U.S. and China have made advances in artificial intelligence, there has been a marked absence of any sustained government-to-government dialogue on AI and national security. Instead, the relationship between the two countries has deteriorated, leading to a breakdown in direct communication across a range of issues. What was already a latent security dilemma has grown in intensity, as each country warily eyes the other’s efforts to develop novel AI-enabled weapons systems and attempts to evaluate the implications of the other’s progress.

    From the outset of our track-two discussions, both sides agreed that the primary value of the dialogue would be in developing a better understanding of how each side reaches decisions on AI-enabled weapons systems — and, if possible, to identify areas where both sides might be able to agree upon common norms.

    The dialogues had four focus areas: 

    1. Off-limits targets. Do AI-enabled weapons systems need new limitations restricting their use against specific targets? Should there be any “AI-free zones” or geographic restrictions on their use?

    2. Proportionality and human oversight. How might countries avoid unintended military escalation by applying the principle of proportionality to AI-enabled platforms and involving human oversight?

    3. Off-limits data. What limits, if any, should there be on the type of data that is used to train AI-enabled weapons systems? Relatedly, should there be agreements among countries to prevent attacks that corrupt the data a machine-learning system is trained on and fool it into thinking that civilian targets are military ones or vice versa?

    4. International norm-building. What role, if any, should the U.S. and China play in international norm-building and risk-reduction efforts related to AI-enabled military systems?

    Several areas of agreement emerged. One shared concern is that modern AI systems based on machine learning often behave and fail in ways that are unexpected or poorly understood. Thus, we agree that we need more research into AI that can be more easily explained (known as “explainable AI”), and we need more testing and evaluation regimes. Other concerns include: the speed at which AI systems operate, which could lead to the rapid escalation of conflicts; the growing proliferation of AI-enabled weapons systems and the potential for non-state actors to exploit such systems; and the potential risks of such weapons to inflict civilian casualties, particularly if they mistakenly engage a target or are exploited by malicious actors.
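
    One concrete instance of an explainability technique is permutation importance: shuffle each input feature in turn and measure how much the model’s accuracy drops. The minimal, self-contained Python sketch below uses toy data and a stand-in model of our own devising, not any deployed system:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Toy data: feature 0 fully determines the label; feature 1 is pure noise.
    X = rng.normal(size=(1000, 2))
    y = (X[:, 0] > 0).astype(int)

    def model_predict(X):
        # Stand-in for a trained model: thresholds the first feature.
        return (X[:, 0] > 0).astype(int)

    def permutation_importance(predict, X, y, n_repeats=10):
        """How much does accuracy drop when each feature is shuffled?"""
        base = (predict(X) == y).mean()
        importances = []
        for j in range(X.shape[1]):
            drops = []
            for _ in range(n_repeats):
                Xp = X.copy()
                Xp[:, j] = rng.permutation(Xp[:, j])
                drops.append(base - (predict(Xp) == y).mean())
            importances.append(float(np.mean(drops)))
        return importances

    print(permutation_importance(model_predict, X, y))
    # Expect a large drop (~0.5) for feature 0 and roughly zero for feature 1.
    ```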

    Both the U.S. and Chinese teams affirmed three important things. First, the use of AI-enabled weapons systems should comport with the principles of customary international law, particularly the principles of distinction and proportionality. This would require both the developers of AI-enabled weapons systems and the operators of those systems to be trained and held responsible for ensuring full compliance with customary international law.

    Second, AI-enabled weapons systems should operate under appropriate human oversight or control, particularly given the technical limitations of those systems. And third, we should hold further dialogues on the risks and challenges posed by AI and how to respond to them most effectively. 


    “A runaway arms race is not inevitable.”

    These shared understandings should be seen as reflecting the views of the American and Chinese experts involved and not necessarily those of their respective governments. Hopefully, though, officials in both capitals, over time, will be able to identify practical steps to manage risks relating to AI and national security. There is certainly more agreement than disagreement among national security technology experts in the U.S. and China over the risks and challenges posed by AI.

    Without sustained, candid, diplomatic dialogue between Washington and Beijing over how best to manage competition and associated risks with AI, narratives around a new technological arms race will add strain to an already highly competitive relationship.

    Yet a runaway arms race is not inevitable. The U.S. and China need not revert to the kind of brinksmanship with AI that led the U.S. and Soviet Union to create ever-larger — and potentially catastrophic — stockpiles of nuclear weapons at the height of the Cold War. Further discussions around the national security risks posed by AI — at both the governmental level and in non-governmental fora — have the potential to reduce the likelihood that those risks come to pass.


    “We have an opportunity to develop new norms, confidence-building measures and boundaries around acceptable uses of novel technologies.”

    As the leading producers of AI, the U.S. and China currently have a rare opportunity. Unlike cybersecurity, where debates started in earnest only after a global cyber infrastructure had been built out and exploited, AI is still under development. Indeed, most AI-enabled weapons systems are still relatively immature and have not yet been widely deployed; AI has yet to approach its full potential in national security applications and in conflict.

    We have an opportunity to develop new norms, confidence-building measures and boundaries around acceptable uses of novel technologies. Once such technologies are deployed and integrated into plans and doctrine, it will be more difficult to build support for restraints on their uses. The more deeply AI is embedded into military systems and applications before new norms and agreements are reached, the less willingness there will be to roll back any new capabilities they afford, particularly given how costly such systems are to develop.

    Both the government and expert communities in the U.S. and China would be wise to examine the opportunity that now exists to develop agreements around practical steps to reduce national security risks, rather than defer such discussions to the future. By then, the security dilemma may have intensified, with each side accelerating the development of new technologies to offset the other’s advances, and the window of opportunity for meaningful discussion of risk reduction may have swung shut.

    Fu Ying is the chairperson of the Center for International Security and Strategy at Tsinghua University. She was previously vice minister of foreign affairs of China.

    John Allen is president of the Brookings Institution, a retired U.S. Marine Corps four-star general and former commander of the NATO International Security Assistance Force and U.S. Forces in Afghanistan.

    To read the original essay and other similar essays, visit noemamag.com.

