Seminar on the Responsible Use of Technology in Humanitarian Action Successfully Held

2025-12-16

From December 4 to 5, 2025, the Seminar on the Responsible Use of Technology in Humanitarian Action was held in Beijing. The event was co-organized by the Center for International Security and Strategy (CISS) at Tsinghua University and the International Committee of the Red Cross (ICRC). More than 80 experts and scholars from China, the Netherlands, Switzerland, Sweden, Luxembourg, Russia, the Republic of Korea, and other countries participated in the discussions.

The seminar featured one plenary forum and four parallel sessions focusing respectively on responsible artificial intelligence, technology for good, artificial intelligence safety, and digital trust.


The seminar officially opened on the morning of December 4. The opening remarks and keynote session were moderated by Xiao Qian, Deputy Director of CISS. Yang Bin, Vice Chair of the University Council of Tsinghua University, and Balthasar Staehelin, Personal Envoy to China of the ICRC President and Head of the ICRC Regional Delegation for East Asia, attended the opening ceremony and delivered speeches.

“Throughout the course of human civilization, technology has always been a driving force for social progress. Yet precisely because of its immense potential, we must ensure that technology is used responsibly,” Yang Bin noted. He highlighted China’s and Tsinghua University’s efforts in advancing artificial intelligence ethics, governance, and technology for social good. He expressed confidence that the seminar would provide a valuable platform for participants to exchange views and share experiences, and to reach common understanding on the responsible use of technology in humanitarian action.


Staehelin stated, “We look forward to jointly exploring technological solutions that can support humanitarian action, engaging in dialogue and advancing progress on how technologies can be responsibly developed and used for the benefit of humanity.” He noted that China’s leadership in the technology sector presents significant opportunities for humanitarian action. Through cooperation, stakeholders can develop solutions that prioritize humanity, safety, and dignity while addressing the risks posed by emerging technologies.


The keynote speeches that followed were delivered by Gong Ke, Executive Dean of the Chinese Institute for New Generation Artificial Intelligence Development Strategies; Dai Huaicheng, Secretary-General of the China Arms Control and Disarmament Association; Els Debuf, Head of the ICRC Global Affairs Network in Luxembourg; and Zeng Yi, Founding Dean of the Beijing Institute of AI Safety and Governance.

Gong Ke proposed several practical pathways for the responsible use of artificial intelligence in humanitarian action, including prioritizing applications that address urgent humanitarian needs, ensuring the inclusiveness and representativeness of datasets, establishing global incident reporting and learning systems, and building accountability mechanisms.

Dai Huaicheng introduced China’s recently released white paper China’s Arms Control, Disarmament and Non-Proliferation in the New Era, noting that it is the first document to systematically elaborate China’s policy positions on military applications of artificial intelligence. The white paper advocates the vision of a community with a shared future for mankind, emphasizes the balanced development of security and innovation, and calls for ensuring that military applications of AI are safe, reliable, and controllable. It also stresses that such applications should comply with international humanitarian law and other applicable international legal frameworks.

Debuf emphasized the ICRC’s commitment to a principled digital transformation through its institutional strategy, its AI Policy, and the resolutions of the Red Cross and Red Crescent Movement on protecting civilians from digital harm in armed conflict. She underscored the need for closer collaboration between the humanitarian and technology sectors, calling for humanitarian ethics to be embedded at the design stage of new technologies and tools, as retrofitting ethical considerations later can make integration far more complex and difficult.

Zeng Yi warned that while AI systems are becoming increasingly powerful, corresponding safety measures have not kept pace. Findings from multiple evaluations he has led or participated in indicate that large AI models are vulnerable to attacks, including in high-risk scientific and military domains. He emphasized that, in principle, absolute safety cannot be guaranteed through digital proof alone. He called for the establishment of clear international “red lines” for unacceptable uses of AI and urged coordinated global action to prevent catastrophic risks and safeguard international peace and security.


The opening panel discussion was moderated by Balthasar Staehelin. Participants included Chen Qi, Deputy Director of CISS; Andrea Cavallaro, Director of the Idiap Research Institute and Professor at the École Polytechnique Fédérale de Lausanne (EPFL); Li Xiaodong, Founder and Director of the Fuxi Think Tank; and Blaise Robert, Global AI Adviser for Digital Transformation and Data at the ICRC. The discussion focused on the opportunities and challenges presented by emerging technologies in humanitarian action.


Parallel Session I, Responsible Artificial Intelligence: Ethics, Safety, and Accountable Governance, focused on global AI safety governance frameworks under the United Nations system. Discussions addressed responsibility across the entire AI lifecycle, with particular attention to accountability matrices in the military domain and crisis response mechanisms. Participants also engaged in in-depth exchanges on key issues such as managing dual-use risks, the choice between open-source and closed-source approaches, and narrowing the global digital divide through technology sharing.


Parallel Session II, Technology for Good: How Technology Can Support Efficient and Coordinated Crisis Response, examined how to ensure that “technology for good” translates into concrete action to protect vulnerable populations in crisis situations. Discussions centered on the practical implementation of initiatives led by technology companies.


Parallel Session III, AI Safety and Security: Feasible Safeguards for Responsible Deployment in Crisis Environments, explored diverse application scenarios of AI in crisis settings and the associated risks. The session focused on concrete risk mitigation recommendations and safeguard measures for the use of AI in humanitarian action, particularly in relation to model procurement, data use, and data governance.


Parallel Session IV, Digital Trust: Building and Sustaining Confidence in an Interconnected World, focused on how to build a sustainable, cross-system, secure, and trustworthy digital humanitarian space in a highly interconnected digital environment.


Experts and scholars from international organizations—including United Nations agencies—universities, think tanks, and industry participated in the discussions. The seminar aimed to further consolidate international consensus on the responsible use of technology, promote the formation of cross-sectoral and cross-national cooperation networks, and jointly explore pathways and solutions for responsible technology use in humanitarian action. The ultimate goal is to better empower vulnerable populations through technology and to harness technology for good in advancing global humanitarian efforts.

