(preamble)

"Coordinated global action on AI safety research and governance is critical to prevent uncontrolled frontier AI development from posing unacceptable risks to humanity." Global action, cooperation, and capacity building are key to managing risk from AI and enabling humanity to share in its benefits. AI safety is a global public good that should be supported by public and private investment, with advances in safety shared widely. Governments around the world — especially of leading AI nations — have a responsibility to develop measures to prevent worst case outcomes from malicious or careless actors and to rein in reckless competition. The international community should work to create an international coordination process for advanced AI in this vein.
Principle: IDAIS-Oxford, Oct 31, 2023

Published by IDAIS (International Dialogues on AI Safety)

Related Principles

· (4) Security

Positive utilization of AI means that many social systems will be automated and the safety of those systems improved. On the other hand, within the scope of today's technologies, it is impossible for AI to respond appropriately to rare events or deliberate attacks. The use of AI therefore introduces new security risks. Society should always be aware of the balance of benefits and risks, and should work to improve social safety and sustainability as a whole. Society must promote broad and deep research and development in AI (from immediate measures to deep understanding), such as the proper evaluation of risks in the utilization of AI and research to reduce those risks. Society must also pay attention to risk management, including cybersecurity awareness. Society should always pay attention to sustainability in the use of AI. In particular, society should not become solely dependent on any single AI system or on a few specified AI systems.

Published by Cabinet Office, Government of Japan in Social Principles of Human-centric AI, Dec 27, 2018

(preamble)

Artificial intelligence (AI) is a new area of human development. The fast development of AI around the globe has exerted profound influence on socioeconomic development and the progress of human civilization, and brought huge opportunities to the world. However, AI technologies also bring unpredictable risks and complicated challenges. The governance of AI, a common task faced by all countries in the world, bears on the future of humanity.

As global peace and development face various challenges, all countries should commit to a vision of common, comprehensive, cooperative, and sustainable security, and put equal emphasis on development and security. Countries should build consensus through dialogue and cooperation, and develop open, fair, and efficient governing mechanisms, so that AI technologies benefit humanity and contribute to building a community with a shared future for mankind.

We call on all countries to enhance information exchange and technological cooperation on the governance of AI. We should work together to prevent risks, and develop AI governance frameworks, norms, and standards based on broad consensus, so as to make AI technologies more secure, reliable, controllable, and equitable. We welcome governments, international organizations, companies, research institutes, civil organizations, and individuals to jointly promote the governance of AI under the principles of extensive consultation, joint contribution, and shared benefits. To make this happen, we would like to suggest the following:

Published by Cyberspace Administration of China in Global AI Governance Initiative, October 18, 2023

4

Reaching adequate safety levels for advanced AI will also require immense research progress. Advanced AI systems must be demonstrably aligned with their designers' intent, as well as with appropriate norms and values. They must also be robust against both malicious actors and rare failure modes, and sufficient human control over these systems must be ensured. Concerted effort by the global research community, in AI and other disciplines, is essential; we need a global network of dedicated AI safety research and governance institutions. We call on leading AI developers to commit a minimum of one-third of their AI R&D spending to AI safety, and on government agencies to fund academic and non-profit AI safety and governance research in at least the same proportion.

Published by IDAIS (International Dialogues on AI Safety) in IDAIS-Oxford, Oct 31, 2023

Conclusion

Decisive action is required to avoid catastrophic global outcomes from AI. The combination of concerted technical research efforts with a prudent international governance regime could mitigate most of the risks from AI, allowing its many potential benefits to be realized. International scientific and government collaboration on safety must continue and grow.

Published by IDAIS (International Dialogues on AI Safety) in IDAIS-Beijing, May 10, 2024

· Consensus Statement on AI Safety as a Global Public Good

Rapid advances in artificial intelligence (AI) systems' capabilities are pushing humanity closer to a world where AI meets and surpasses human intelligence. Experts agree these AI systems are likely to be developed in the coming decades, with many of them believing they will arrive imminently. Loss of human control or malicious use of these AI systems could lead to catastrophic outcomes for all of humanity. Unfortunately, we have not yet developed the necessary science to control and safeguard the use of such advanced intelligence. The global nature of these risks makes it necessary to recognize AI safety as a global public good and to work towards global governance of these risks. Collectively, we must prepare to avert the attendant catastrophic risks that could arrive at any time.

Promising initial steps by the international community show that cooperation on AI safety and governance is achievable despite geopolitical tensions. At two intergovernmental summits, states and AI developers around the world committed to foundational principles to foster responsible development of AI and minimize risks. Thanks to these summits, states established AI Safety Institutes or similar institutions to advance testing, research, and standards-setting. These efforts are laudable and must continue. States must sufficiently resource AI Safety Institutes, continue to convene summits, and support other global governance efforts. However, states must go further than they do today.

As an initial step, states should develop authorities to detect and respond to AI incidents and catastrophic risks within their jurisdictions. These domestic authorities should coordinate to develop a global contingency plan for responding to severe AI incidents and catastrophic risks. In the longer term, states should develop an international governance regime to prevent the development of models that could pose global catastrophic risks.

Deep and foundational research needs to be conducted to guarantee the safety of advanced AI systems. This work must begin swiftly to ensure that safety methods are developed and validated before advanced AI arrives. To enable this, we call on states to carve out AI safety as a cooperative area of academic and technical activity, distinct from broader geostrategic competition over the development of AI capabilities.

The international community should consider setting up three clear processes to prepare for a world where advanced AI systems pose catastrophic risks:

Published by IDAIS (International Dialogues on AI Safety) in IDAIS-Venice, Sept 5, 2024