· 5) Race Avoidance

Teams developing AI systems should actively cooperate to avoid corner cutting on safety standards.
Published by Future of Life Institute (FLI) in Asilomar AI Principles (Beneficial AI 2017), Jan 3-8, 2017

Related Principles

· 3. Be built and tested for safety.

We will continue to develop and apply strong safety and security practices to avoid unintended results that create risks of harm. We will design our AI systems to be appropriately cautious, and seek to develop them in accordance with best practices in AI safety research. In appropriate cases, we will test AI technologies in constrained environments and monitor their operation after deployment.

Published by Google in Artificial Intelligence at Google: Our Principles, Jun 7, 2018

· (6) Safety

Artificial intelligence should be designed concretely to avoid known and potential safety issues (for itself, other AI, and humans) across different levels of risk.

Published by HAIP Initiative in Harmonious Artificial Intelligence Principles (HAIP), Sep 16, 2018

· Principle 3: Accountability

Issue: How can we assure that designers, manufacturers, owners, and operators of A/IS are responsible and accountable?

[Candidate Recommendations] To best address issues of responsibility and accountability:

1. Legislatures/courts should clarify issues of responsibility, culpability, liability, and accountability for A/IS where possible during development and deployment (so that manufacturers and users understand their rights and obligations).
2. Designers and developers of A/IS should remain aware of, and take into account when relevant, the diversity of existing cultural norms among the groups of users of these A/IS.
3. Multi-stakeholder ecosystems should be developed to help create norms (which can mature into best practices and laws) where they do not exist because A/IS-oriented technology and its impacts are too new (including representatives of civil society, law enforcement, insurers, manufacturers, engineers, lawyers, etc.).
4. Systems for registration and record keeping should be created so that it is always possible to find out who is legally responsible for a particular A/IS. Manufacturers/operators/owners of A/IS should register key, high-level parameters, including:
   • Intended use
   • Training data/training environment (if applicable)
   • Sensors/real-world data sources
   • Algorithms
   • Process graphs
   • Model features (at various levels)
   • User interfaces
   • Actuators/outputs
   • Optimization goal/loss function/reward function

Published by The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems in Ethically Aligned Design (v2): General Principles, (v1) Dec 13, 2016; (v2) Dec 12, 2017

· AI systems should not be able to autonomously hurt, destroy or deceive humans

1. AI systems should be built to serve and inform, and not to deceive and manipulate
2. Nations should collaborate to avoid an arms race in lethal autonomous weapons, and such weapons should be tightly controlled
3. Active cooperation should be pursued to avoid corner cutting on safety standards
4. Systems designed to inform significant decisions should do so impartially

Published by Smart Dubai in Dubai's AI Principles, Jan 08, 2019

· 9. Safety and Security

Agencies should promote the development of AI systems that are safe, secure, and operate as intended, and encourage the consideration of safety and security issues throughout the AI design, development, deployment, and operation process. Agencies should pay particular attention to the controls in place to ensure the confidentiality, integrity, and availability of the information processed, stored, and transmitted by AI systems. Agencies should give additional consideration to methods for guaranteeing systemic resilience, and for preventing bad actors from exploiting AI system weaknesses, including cybersecurity risks posed by AI operation, and adversarial use of AI against a regulated entity’s AI technology. When evaluating or introducing AI policies, agencies should be mindful of any potential safety and security risks, as well as the risk of possible malicious deployment and use of AI applications.

Published by The White House Office of Science and Technology Policy (OSTP), United States in Principles for the Stewardship of AI Applications, Jan 13, 2020