2. Equitable

The department will take deliberate steps to minimize unintended bias in AI capabilities.
Principle: DoD's AI ethical principles, Feb 24, 2020

Published by Department of Defense (DoD), United States

Related Principles

· (4) Security

Positive utilization of AI means that many social systems will be automated and their safety improved. On the other hand, within the scope of today's technologies, it is impossible for AI to respond appropriately to rare events or deliberate attacks. The use of AI therefore introduces new security risks. Society should always be aware of the balance of benefits and risks, and should work to improve social safety and sustainability as a whole. Society must promote broad and deep research and development in AI (from immediate measures to deep understanding), such as the proper evaluation of risks in the utilization of AI and research to reduce those risks. Society must also pay attention to risk management, including cybersecurity awareness. Society should always pay attention to sustainability in the use of AI. In particular, society should not become dependent on a single AI system or a few specified AI systems.

Published by Cabinet Office, Government of Japan in Social Principles of Human-centric AI (Draft), Dec 27, 2018

2. Equitable.

DoD should take deliberate steps to avoid unintended bias in the development and deployment of combat or non-combat AI systems that would inadvertently cause harm to persons.

Published by Defense Innovation Board (DIB), Department of Defense (DoD), United States in AI Ethics Principles for DoD, Oct 31, 2019

· 9. Safety

Safety is about ensuring that the system will indeed do what it is supposed to do, without harming users (human physical integrity), resources or the environment. It includes minimizing unintended consequences and errors in the operation of the system. Processes to clarify and assess potential risks associated with the use of AI products and services should be put in place. Moreover, formal mechanisms are needed to measure and guide the adaptability of AI systems.

Published by The European Commission’s High-Level Expert Group on Artificial Intelligence in Draft Ethics Guidelines for Trustworthy AI, Dec 18, 2018

6. Unlawful biases or discriminations that may result from the use of data in artificial intelligence should be reduced and mitigated, including by:

a. ensuring the respect of international legal instruments on human rights and non-discrimination,
b. investing in research into technical ways to identify, address and mitigate biases,
c. taking reasonable steps to ensure the personal data and information used in automated decision making is accurate, up to date and as complete as possible, and
d. elaborating specific guidance and principles in addressing biases and discrimination, and promoting individuals' and stakeholders' awareness.

Published by 40th International Conference of Data Protection and Privacy Commissioners (ICDPPC) in Declaration on Ethics and Data Protection in Artificial Intelligence, Oct 23, 2018

· We will make AI systems fair

1. Data ingested should, where possible, be representative of the affected population
2. Algorithms should avoid non-operational bias
3. Steps should be taken to mitigate and disclose the biases inherent in datasets
4. Significant decisions should be provably fair

Published by Smart Dubai in Dubai's AI Principles, Jan 08, 2019