The Paris AI Action Summit and its associated events were held from February 6 to 11, 2025, with the aim of strengthening international action and exploring AI applications and global governance. Among the side events was the "Governance in the Age of AI" forum hosted by the Tony Blair Institute for Global Change (TBI), where former Italian Prime Minister and TBI strategic advisor Matteo Renzi delivered the opening remarks.
Fu Ying was invited to join the panel "Advancing the Science of AI Safety," alongside Yoshua Bengio, professor at the Université de Montréal and founder of the Montreal Institute for Learning Algorithms (Mila), and Alondra Nelson, professor at the Institute for Advanced Study in Princeton. The discussion was lively and serious, yet not without humor. Fu Ying later spoke at a closed-door dinner and, over the course of the summit, exchanged views with experts and scholars from various countries on issues of shared concern.
When asked how people in China view AI safety issues and why China has not established an "AI Safety Institute (AISI)", Fu Ying said that China has, with government support, established the "Chinese AI Safety and Development Network," which is the equivalent of other countries' AISIs. China has a diverse and pluralistic ecosystem of AI application and safety governance, with multiple government departments, institutions, and enterprises focusing on and investing in AI safety issues. The aim of establishing a network, rather than a single institute, is to allow everyone to participate, share knowledge and information, strengthen capacity building, and actively engage in international dialogue and cooperation. China participated in the UK's Bletchley Summit and has been following the development of various countries' AISIs, and China's scientific and technological community maintains close communication with its international counterparts.
When asked how the Chinese view the risks brought by the development and application of AI technology, Fu Ying said that most people look at the safety issue on two levels. One is risk in application. The Chinese government released the "New Generation AI Development Plan" as early as 2017, emphasizing safe, controllable, and sustainable AI progress. AI applications are now spreading rapidly across China, including in the economy, finance, urban management, healthcare, and scientific research. Risks and challenges have emerged alongside them, creating urgent demand both for government regulation and for technical solutions to address these risks. Building on its experience with cyber laws and regulations, the Chinese government has successively issued laws and normative documents on AI governance, guided by the principle of maintaining a balance between encouraging innovation and mitigating risks. Meanwhile, a number of tech companies specializing in AI safety have also emerged.
The other is the risks accompanying the development of AI technology itself. China released the "Global AI Governance Initiative" in October 2023, emphasizing the principle of AI for good and advocating the establishment of risk-level testing and evaluation systems. China also signed the Bletchley Declaration. On July 1, 2024, at the 78th UN General Assembly, a China-led resolution on enhancing international cooperation in AI capacity building was adopted by consensus, with the support of over 140 countries. China pays close attention to international discussions of AGI risks, and its science and technology community maintains close communication with international peers and is broadly aligned with them in its thinking on AI safety. The "International AI Safety Report" led by Yoshua Bengio received much attention in China. The report's statement that "it will be the decisions of societies and governments on how to navigate this uncertainty that determine which path we will take" was found particularly thought-provoking.
When asked about China-U.S. relations and how the two countries could cooperate on AI, Fu Ying said that few see much promise for China-U.S. AI cooperation, as geopolitical tensions continue to cast a shadow over scientific collaboration. She recalled that in 2019, when Henry Kissinger and Eric Schmidt attended an AI safety roundtable in Beijing, they voiced concerns about the future risks of AI. She responded then: "As long as the U.S. and China can cooperate and work together with the rest of humanity, we'll find a way to keep AI under control. But if countries remain at odds and even use advanced AI systems against each other, the machines are more likely to gain the upper hand."
Recent years have witnessed consistent U.S. efforts to block China's technological progress, poisoning the atmosphere for cooperation. If we mapped the global landscape of the third decade of the 21st century, we would see an exponential curve of technological innovation rising steeply, alongside the downward trajectory of China-U.S. relations. These two lines have inevitably intersected. In this era of technological explosion, just when humanity most needs to mobilize all its wisdom and energy for cooperation, some major countries are attempting to shut down collaborative platforms.
Against this backdrop, two parallel trends can be observed. One is the American tech giants' lead in the virtual world, with rapid progress in cutting-edge AI innovation supported by enormous capital. The other is China's lead in real-world vertical applications, driving broad and deep technological application and innovation, backed by powerful manufacturing and a vast market. If history is any guide, one would naturally expect the combination of these two forces to be the best path forward for the safe and responsible application of AI. Yet judging from the past few years, many see little prospect of that. Geopolitical interference can therefore be considered a third level of concern for AI safety.
China maintains a relatively calm and constructive attitude toward China-U.S. cooperation and global governance, advocating that the two countries respect each other's core interests and major concerns on issues where they diverge, and adhere to the principles of mutual respect, peaceful coexistence, and win-win cooperation.
Regarding the international debate over open-source versus closed-source paths for AI development, Fu Ying noted that from the perspective of China's academic community and enterprises, open source, despite its risks, helps identify and address safety vulnerabilities in a timely manner and aligns with the Chinese belief that AI technology should be developed to benefit the people. By comparison, the current complete opacity of some large companies' models is more concerning. Yoshua Bengio believed that open-source AI technology could be misused by bad actors, but he also acknowledged that open-source architectures make it easier to identify potential problems.
(Fu Ying is a former Vice Minister of Foreign Affairs of China. This article is based on her speech and remarks during the AI Action Summit in Paris.)