· 5) Race Avoidance

Teams developing AI systems should actively cooperate to avoid corner cutting on safety standards.
Published by Future of Life Institute (FLI) in Asilomar AI Principles (Beneficial AI 2017), Jan 3-8, 2017

Related Principles

· 3. Be built and tested for safety.

We will continue to develop and apply strong safety and security practices to avoid unintended results that create risks of harm. We will design our AI systems to be appropriately cautious, and seek to develop them in accordance with best practices in AI safety research. In appropriate cases, we will test AI technologies in constrained environments and monitor their operation after deployment.

Published by Google in Artificial Intelligence at Google: Our Principles, Jun 7, 2018

· (6) Safety

Artificial Intelligence should be concretely designed to avoid known and potential safety issues (for itself, other AI, and humans) across different levels of risk.

Published by HAIP Initiative in Harmonious Artificial Intelligence Principles (HAIP), Sep 16, 2018

· Emergency Preparedness Agreements and Institutions

States should agree on technical and institutional measures required to prepare for advanced AI systems, regardless of their development timescale. To facilitate these agreements, we need an international body to bring together AI safety authorities, fostering dialogue and collaboration in the development and auditing of AI safety regulations across different jurisdictions. This body would ensure states adopt and implement a minimal set of effective safety preparedness measures, including model registration, disclosure, and tripwires. Over time, this body could also set standards for and commit to using verification methods to enforce domestic implementations of the Safety Assurance Framework. These methods can be mutually enforced through incentives and penalty mechanisms, such as conditioning access to markets on compliance with global standards. Experts and safety authorities should establish incident reporting and contingency plans, and regularly update the list of verified practices to reflect current scientific understanding. This body will be a critical initial coordination mechanism. In the long run, however, states will need to go further to ensure truly global governance of risks from advanced AI.

Published by IDAIS (International Dialogues on AI Safety) in IDAIS-Venice, Sep 5, 2024

· 5 Safety and Reliability

Organisations that develop, deploy or use AI systems, and any national laws that regulate such use, shall adopt design regimes and standards that ensure the high safety and reliability of AI systems on the one hand, while limiting the exposure of developers and deployers on the other.

Published by International Technology Law Association (ITechLaw) in The Eight Principles of Responsible AI, May 23, 2019

· AI systems should not be able to autonomously hurt, destroy or deceive humans

1. AI systems should be built to serve and inform, and not to deceive and manipulate.
2. Nations should collaborate to avoid an arms race in lethal autonomous weapons, and such weapons should be tightly controlled.
3. Active cooperation should be pursued to avoid corner cutting on safety standards.
4. Systems designed to inform significant decisions should do so impartially.

Published by Smart Dubai in Dubai's AI Principles, Jan 08, 2019