On February 12, the South China Morning Post compiled and published the English text of a speech delivered by Fu Ying, former vice-minister of foreign affairs of China, at a side event of the Paris AI Action Summit, titled "Cooperation for AI safety must transcend geopolitical interference". Fu Ying argues that, in the face of the challenges to AI safety cooperation, China's practical experience in building a pluralistic, multi-stakeholder governance ecosystem, advancing international initiatives and balancing innovation with risk regulation offers Chinese wisdom for addressing the problem. She notes that geopolitical interference has left many pessimistic about China-US AI cooperation, but that China remains relatively calm on the matter, advocating respect for each other's core interests and major concerns where differences exist, and adhering to the principles of mutual respect, peaceful coexistence and win-win cooperation.
Amid a tech explosion, when we need to mobilise all our wisdom and energy for cooperation, some countries are trying to shut down collaborative platforms.
As the global community grapples with artificial intelligence (AI) governance, a crucial dialogue is emerging on how different nations approach AI safety and development. At a recent Paris AI Action Summit side event, these discussions highlighted the urgent need for international cooperation despite geopolitical headwinds.
Drawing on my participation in international AI forums and multilateral discussions, I believe China’s experience offers valuable insights into balancing technological advancement with safety considerations while highlighting the challenges and opportunities in global AI governance.
China’s approach to AI safety reflects its national reality and interests. The AI Safety and Development Network, established with government support, is equivalent to other countries’ AI safety institutes. China has a diverse and pluralistic ecosystem of AI application and safety governance, with government departments, institutions and enterprises focusing on and investing in AI safety issues.
The network’s establishment enables every stakeholder to participate, share knowledge and information, and enhance capacity building; there is active participation in international dialogue and cooperation in spite of external challenges. China joined Britain’s Bletchley summit and has followed the development of other countries’ AI safety institutes. China’s technological community also maintains close communication with international counterparts.
Concerns about AI safety in China operate mostly on two levels. The first is application. China's State Council released the New Generation AI Development Plan in 2017, emphasising safe, controlled and sustainable AI progress. China's AI applications are spreading fast, including in finance, urban management, healthcare and scientific research. Risks and challenges have emerged, creating urgent demand for government regulation and technical solutions.
Based on its experience in cyber regulation, the Chinese government issued AI laws and normative documents, guided by the principle of maintaining a balance between encouraging innovation and mitigating risks. Meanwhile, tech companies specialising in AI safety have emerged to develop and deploy risk-mitigation solutions.
The second level is about future AI risks and international cooperation. China released the Global AI Governance Initiative in October 2023, emphasising the principle of developing AI for the benefit of humanity and advocating risk testing and evaluation systems. China also signed the Bletchley Declaration.
On July 1, 2024, at the 78th UN General Assembly, a China-led resolution on enhancing cooperation in AI capacity building was adopted by consensus, with the support of over 140 countries. China pays close attention to international discussions on the risks of generative AI. Clearly, leading figures in China’s science and technology community maintain close communication with their international peers and are broadly aligned in their thinking on AI safety.
Global AI cooperation requires China and the United States to work together but the tension in their relationship presents a challenge. Realistically, few can see much promise as geopolitical tensions continue to cast a shadow over scientific collaboration.
In 2019, Henry Kissinger and Eric Schmidt, who attended the China Development Forum, joined an AI safety discussion in Beijing. Dr Kissinger expressed his concern on AI safety and asked about the Chinese view. I responded that “as long as the US and China can work together with the rest of humanity, we should be able to find a way to keep AI power under control. But if the countries remain at odds and even use advanced AI systems against each other, the machines are more likely to gain the upper hand.”
Past years have witnessed consistent US efforts to block China’s technological progress, poisoning the atmosphere for cooperation. If we mapped the global landscape of the third decade of the 21st century, we would see an exponential curve of technological innovation rising steeply. At the same time, we would also observe the downward trajectory of China-US relations. The extension of these two lines has led to an intersection, one that should never have happened but has come to be seen as almost inevitable.
In this era of technological explosion, when humanity most needs to mobilise all wisdom and energy for cooperation, some major countries are attempting to shut down collaborative platforms.
The phenomenon has led to two trends. One is the American tech giants’ lead in the virtual world with rapid progress in cutting-edge AI innovation. The other is China’s lead in the real world with its wide application of AI. Both forces have strong momentum, with the former supported by enormous capital and the latter backed by powerful manufacturing and a vast market.
If history is a guide, one can expect the combination of these two forces to be the best path for the safe and responsible application of AI. But given the past few years, many see no such prospect. Therefore, geopolitical interference can be considered a third level of concern regarding AI safety.
China maintains a relatively calm attitude towards China-US cooperation and global governance. In a phone call with then US president-elect Donald Trump, Chinese President Xi Jinping emphasised that China and the US share extensive common interests and a broad space for cooperation.
Chinese companies predominantly favour open-source AI development. During our panel discussion, Yoshua Bengio, a leading expert in AI, had reservations. As I understand it, open source aligns with the Chinese belief in developing AI to benefit people. It carries risks but, in practice, enables more timely discovery of safety vulnerabilities and prompt technological improvements.
The opacity of some large companies’ models is more concerning. Professor Bengio believed open-source AI could be misused by bad actors. But he also acknowledged that from a safety perspective, open-source architecture made it easier to identify potential problems.
I mentioned at the panel that Professor Bengio’s recently published International AI Safety Report received much attention in China. The report’s statement that “it will be the decisions of societies and governments on how to navigate this uncertainty that determine which path we will take” was particularly thought-provoking.
This is an edited version of Fu Ying's speech at the side event hosted by the Tony Blair Institute for Global Change during the AI Action Summit in Paris on February 9.