· We will govern AI as a global effort

1. Global cooperation should be used to ensure the safe governance of AI.
2. Government will support the establishment of internationally recognised standards and best practices in AI, and when they are established, shall adhere to them.
Principle: Dubai's AI Principles, Jan 08, 2019

Published by Smart Dubai

Related Principles

· 2.5. International co-operation for trustworthy AI

a) Governments, including developing countries and with stakeholders, should actively co-operate to advance these principles and to progress on responsible stewardship of trustworthy AI.
b) Governments should work together in the OECD and other global and regional fora to foster the sharing of AI knowledge, as appropriate. They should encourage international, cross-sectoral and open multi-stakeholder initiatives to garner long-term expertise on AI.
c) Governments should promote the development of multi-stakeholder, consensus-driven global technical standards for interoperable and trustworthy AI.
d) Governments should also encourage the development, and their own use, of internationally comparable metrics to measure AI research, development and deployment, and gather the evidence base to assess progress in the implementation of these principles.

Published by G20 Ministerial Meeting on Trade and Digital Economy in G20 AI Principles, Jun 09, 2019

1. Artificial intelligence should be developed for the common good and benefit of humanity.

The UK must seek to actively shape AI's development and utilisation, or risk passively acquiescing to its many likely consequences. A shared ethical AI framework is needed to give clarity as to how AI can best be used to benefit individuals and society. By establishing these principles, the UK can lead by example in the international community. We recommend that the Government convene a global summit of governments, academia and industry to establish international norms for the design, development, regulation and deployment of artificial intelligence. The prejudices of the past must not be unwittingly built into automated systems, and such systems must be carefully designed from the beginning, with input from as diverse a group of people as possible.

Published by House of Lords of United Kingdom, Select Committee on Artificial Intelligence in AI Code, Apr 16, 2018


Reaching adequate safety levels for advanced AI will also require immense research progress. Advanced AI systems must be demonstrably aligned with their designers' intent, as well as with appropriate norms and values. They must also be robust against both malicious actors and rare failure modes, and sufficient human control over these systems must be ensured. Concerted effort by the global research community in both AI and other disciplines is essential; we need a global network of dedicated AI safety research and governance institutions. We call on leading AI developers to make a minimum spending commitment of one third of their AI R&D on AI safety, and for government agencies to fund academic and non-profit AI safety and governance research in at least the same proportion.

Published by IDAIS (International Dialogues on AI Safety) in IDAIS-Oxford, Oct 31, 2023

· Consensus Statement on AI Safety as a Global Public Good

Rapid advances in artificial intelligence (AI) systems' capabilities are pushing humanity closer to a world where AI meets and surpasses human intelligence. Experts agree these AI systems are likely to be developed in the coming decades, with many of them believing they will arrive imminently. Loss of human control or malicious use of these AI systems could lead to catastrophic outcomes for all of humanity. Unfortunately, we have not yet developed the necessary science to control and safeguard the use of such advanced intelligence. The global nature of these risks from AI makes it necessary to recognize AI safety as a global public good, and to work towards global governance of these risks. Collectively, we must prepare to avert the attendant catastrophic risks that could arrive at any time.

Promising initial steps by the international community show cooperation on AI safety and governance is achievable despite geopolitical tensions. At two intergovernmental summits, states and AI developers around the world committed to foundational principles to foster responsible development of AI and minimize risks. Thanks to these summits, states established AI Safety Institutes or similar institutions to advance testing, research and standards setting. These efforts are laudable and must continue. States must sufficiently resource AI Safety Institutes, continue to convene summits and support other global governance efforts.

However, states must go further than they do today. As an initial step, states should develop authorities to detect and respond to AI incidents and catastrophic risks within their jurisdictions. These domestic authorities should coordinate to develop a global contingency plan to respond to severe AI incidents and catastrophic risks. In the longer term, states should develop an international governance regime to prevent the development of models that could pose global catastrophic risks.
Deep and foundational research needs to be conducted to guarantee the safety of advanced AI systems. This work must begin swiftly to ensure they are developed and validated prior to the advent of advanced AIs. To enable this, we call on states to carve out AI safety as a cooperative area of academic and technical activity, distinct from broader geostrategic competition on development of AI capabilities. The international community should consider setting up three clear processes to prepare for a world where advanced AI systems pose catastrophic risks:

Published by IDAIS (International Dialogues on AI Safety) in IDAIS-Venice, Sept 5, 2024

· 2.5. International co-operation for trustworthy AI

a) Governments, including developing countries and with stakeholders, should actively co-operate to advance these principles and to progress on responsible stewardship of trustworthy AI.
b) Governments should work together in the OECD and other global and regional fora to foster the sharing of AI knowledge, as appropriate. They should encourage international, cross-sectoral and open multi-stakeholder initiatives to garner long-term expertise on AI.
c) Governments should promote the development of multi-stakeholder, consensus-driven global technical standards for interoperable and trustworthy AI.
d) Governments should also encourage the development, and their own use, of internationally comparable metrics to measure AI research, development and deployment, and gather the evidence base to assess progress in the implementation of these principles.

Published by The Organisation for Economic Co-operation and Development (OECD) in OECD Principles on Artificial Intelligence, May 22, 2019