CISS Organizes Sub-Forum on “Artificial Intelligence: Technology and Governance” During the 2019 World Peace Forum

2019-08-10

Organized by the CISS, the Sub-Forum on “Artificial Intelligence: Technology and Governance” of the 2019 World Peace Forum was held in Beijing on July 9, 2019. Attendees explored the impact of AI on humanity, its future development, its potential risks and challenges, and AI governance, among other issues.

The event was moderated by Fu Ying, Chair of the CISS, and attended and addressed by a number of celebrated experts and business leaders from home and abroad. The latest social research report by the AI project team of the CISS, Risks and Governance of AI from the Perspective of Chinese Youth, was also officially unveiled at the event.


 (Fu Ying moderated the session) 

In recent years, AI has drawn increasing attention from the international community. The opportunities, challenges, and ethical and legal issues raised by the R&D and application of AI technology have sparked intense discussions across fields and disciplines. Fu noted that, since every significant technological breakthrough brings with it potential risks and challenges, AI is now viewed as a disruptive technology that could shape mankind’s future, and developing norms in this regard has become increasingly urgent. “China has been leading the world in this field, as evidenced by the recently published Beijing AI Principles and the Principles of New Generation Artificial Intelligence Governance: Developing Responsible Artificial Intelligence,” said Fu. “However, the impact of AI technology is not confined to the borders of a single country. Relevant risks and challenges can cross borders and become a worldwide problem. Therefore, the international community should explore a path for the international governance of AI through dialogue and joint effort.”

Zhang Bo, academician of the Chinese Academy of Sciences (CAS) and Director of the Institute for Artificial Intelligence, Tsinghua University, spoke of two types of security challenges associated with AI: one triggered by the current immaturity of AI technology, and the other resulting from the fact that any new technology is a “double-edged sword”. He observed that these challenges can be addressed by focusing on three aspects: the self-discipline of mankind, rules and regulations, and security technologies. He also hoped to see collaboration among different countries, disciplines, research establishments and sectors, so as to prevent the abuse of AI and maximize its benefits to the well-being of human society.


(Zhang Bo, academician of the CAS and Director of the Institute for Artificial Intelligence, Tsinghua University, delivered a keynote speech)

John C. Havens of the Institute of Electrical and Electronics Engineers (IEEE) emphasized the importance of “standards” in the field of AI. As Executive Director of the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems, he noted that well-established standards should allow broad involvement and ought to be developed jointly by experts and practitioners from all walks of life, by more young people, and by people from different countries and cultural backgrounds. Building on Havens’s argument, Wendell Wallach, Senior Advisor to the Hastings Center and head of the working research group on Technology and Ethics at Yale University’s Interdisciplinary Center for Bioethics, added his views on how to put standards, norms and recommendations into practice.

Chinese scholars reviewed the recently published Beijing AI Principles and the Principles of New Generation Artificial Intelligence Governance: Developing Responsible Artificial Intelligence, and paid close attention to the interactions between China and the international community in this field.

“The world’s major economies, such as the US, Europe and Japan, have all rolled out their AI strategies,” said Luan Qun, Director of the Research Institute of Policy and Regulation of the China Center for Information Industry Development (CCID Academy for Industry and Information Technology). “But I believe that we need to proceed from the underlying principles and rules of Chinese culture.” As one of the major drafters of the Principles of New Generation Artificial Intelligence Governance, Zeng Yi, Deputy Director of the Research Center for Brain-inspired Intelligence at the CAS Institute of Automation (CASIA), noted that the principles for the global governance of AI are essentially people-oriented. “We need a strategic design to guide the development of AI over the next 30 to 40 years,” Zeng stated. “We need to share China’s own philosophy, thinking and ideas with the rest of the world, so as to jointly forge a shared future.”

Echoing the concept of “harmony” in traditional Chinese philosophy, which Zeng had also elaborated, Professor Helen Meng of the Department of Systems Engineering and Engineering Management, The Chinese University of Hong Kong, said, “Mankind and AI can build a symbiotic relationship, just as clownfish and anemones depend on and protect each other.” Drawing on research programs on AI and linguistic cognition, Meng hoped that humans can create “the best tools” with AI to serve social well-being.

Two rounds of panel discussions were held at the sub-forum, during which 16 experts and scholars from different sectors at home and abroad discussed AI and international governance and interacted actively with the press, observers, students and other audience members.


(Gregory Allen talked about how to make AI safer)

Experts and scholars from different countries shared a variety of views on the development of AI technology. Gregory Allen, former Adjunct Senior Fellow at the Center for a New American Security (CNAS), drew an analogy with electricity: when electricity first came into use, it caused many problems, including numerous accidents and fires; yet as people kept improving the technology and, in particular, devoted massive attention and effort to its safe application, electricity has become quite safe today. People therefore need to reflect more on how to make AI safer in the future. Malcolm Forster, Distinguished Professor at the School of Philosophy, Fudan University, argued that the concepts and words used to describe AI technology, and the way it is interpreted, should be accessible and understandable to the public; only on the basis of such “understanding”, he believed, can we develop better policies and achieve better governance. Professor Sun Maosong, Executive Deputy Director of the Institute for Artificial Intelligence, Tsinghua University, pointed out that AI is not “omnipotent”: even AlphaGo gains the upper hand only when performing a single task within certain confines or under specific circumstances. Just as machine translation is used by tourists but rarely seen on diplomatic occasions, he held, the tactic of “doing the unexpected” from the Art of War can be grasped only by humans, not machines, and there remain many things AI cannot do. He added that discussions on AI are valuable, but we should not get ahead of ourselves in formulating policies and norms.

The integration of AI into military capabilities also drew much attention. Lora Saalman, Senior Researcher at the Stockholm International Peace Research Institute (SIPRI), stated that the military application of AI is also a “double-edged sword”: while it boosts the military’s capacity for early warning, defense, reconnaissance and simulation, thereby enhancing stability, its machine-learning and autonomous capabilities can also improve non-strategic nuclear weapons and missile systems, fuel arms races, intensify asymmetry, and thus produce instability. Tom Stefanick, Visiting Researcher with the Foreign Policy Program at the Brookings Institution, believed that AI is unlikely to change most military systems in the foreseeable future, and that with continued international cooperation on basic AI and algorithm research, its development will not open a huge gap in military strength. Paul Scharre, Senior Researcher and Director at the CNAS, emphasized the importance of “humans”: human brains are the best and most advanced cognitive systems, and human beings, rather than machines, should retain control of AI in military decision-making.

Cybersecurity was also a major concern. Wu Shenkuo, Executive Director of the International Center of Cyber Law, Beijing Normal University, held that new forms of technological attack, content attack and crime will be among the cybersecurity challenges brought by AI. Rand Waltzman, Deputy Chief Technology Officer of the RAND Corporation, voiced worries about hackers: in his three decades of experience in the AI field, he said, the biggest problem is how to stop them. Although the issue is seldom raised in relevant discussions, its risks are immense and cannot be ignored.

Drawing on their companies’ innovation practices, Yu Kai, founder and CEO of Horizon Robotics, and Yang Peng, Director of Tencent’s Executive Committee for Information Security, delivered speeches on “trends of computing” and “sci-tech for good” respectively, expressing the resolve of enterprises and research agencies to join hands in making AI technology safer and more reliable.


(Yu Kai, CEO of Horizon Robotics, introduced “trends of computing”)

The sub-forum also gave young people a platform to make themselves heard. Led by Yu Yang, Associate Professor at the Institute for Interdisciplinary Information Sciences, Tsinghua University, Teng Long and Jiang Jianping, two junior students from the Department of Electronic Engineering at Tsinghua University, released the latest social research report by the AI project team of the CISS, titled Risks and Governance of AI from the Perspective of Chinese Youth. They said that, compared with experts and scholars who focus on the application of AI in the political, military and security arenas, young people are more concerned about its economic and social impacts. Their research shows that young people mainly focus on the impact of AI on employment, consumption, and personal information security.


(The report Risks and Governance of AI from the Perspective of Chinese Youth was unveiled at the event)

Guests, observers and other audience members at the sub-forum also held lively discussions with the experts and scholars, covering policy, education, the environment and other aspects of AI. As moderator, Fu thanked all the guests who attended and joined the discussions, as well as the young people devoting themselves to the field, and hoped they would deepen their studies and discussions. “The forum held by Tsinghua University is called the World Peace Forum,” said Fu. “Despite the complicated situation, we have every reason to keep moving forward as long as we all believe in peace.”


Journalist: Xie Mingqi
