5. Governable.

DoD AI systems should be designed and engineered to fulfill their intended function while possessing the ability to detect and avoid unintended harm or disruption, and for human or automated disengagement or deactivation of deployed systems that demonstrate unintended escalatory or other behavior.
Principle: AI Ethics Principles for DoD, Oct 31, 2019

Published by Defense Innovation Board (DIB), Department of Defense (DoD), United States

Related Principles

4. Principle of safety

Developers should take into consideration that AI systems will not harm the life, body, or property of users or third parties through actuators or other devices.

[Comment] AI systems subject to this principle are those that might harm the life, body, or property of users or third parties through actuators or other devices. Developers are encouraged to refer to relevant international standards and to pay attention to the following, with particular consideration of the possibility that outputs or programs might change as a result of learning or other methods of AI systems:

● To make efforts to conduct verification and validation in advance in order to assess and mitigate the risks related to the safety of the AI systems.

● To make efforts to implement measures, throughout the development stage of AI systems to the extent possible in light of the characteristics of the technologies to be adopted, to contribute to the intrinsic safety (reduction of essential risk factors such as kinetic energy of actuators) and the functional safety (mitigation of risks by operation of additional control devices such as automatic braking) when AI systems work with actuators or other devices.

● To make efforts to explain the designers' intent of AI systems and the reasons for it to stakeholders such as users, when developing AI systems to be used for making judgments regarding the safety of life, body, or property of users and third parties (for example, judgments that prioritize the life, body, or property to be protected at the time of an accident of a robot equipped with AI).

Published by Ministry of Internal Affairs and Communications (MIC), the Government of Japan in AI R&D Principles, Jul 28, 2017

E. Governability:

AI applications will be developed and used according to their intended functions and will allow for: appropriate human-machine interaction; the ability to detect and avoid unintended consequences; and the ability to take steps, such as disengagement or deactivation of systems, when such systems demonstrate unintended behaviour.

Published by The North Atlantic Treaty Organization (NATO) in NATO Principles of Responsible Use of Artificial Intelligence in Defence, Oct 22, 2021

Fourth principle: Bias and Harm Mitigation

Those responsible for AI-enabled systems must proactively mitigate the risk of unexpected or unintended biases or harms resulting from these systems, whether through their original rollout, or as they learn, change or are redeployed.

AI-enabled systems offer significant benefits for Defence. However, their use may also cause harms (beyond those already accepted under existing ethical and legal frameworks) to those using them or affected by their deployment. These may range from harms caused by a lack of suitable privacy for personal data to unintended military harms due to system unpredictability. Such harms may change over time as systems learn and evolve, or as they are deployed beyond their original setting. Of particular concern is the risk of discriminatory outcomes resulting from algorithmic bias or skewed data sets. Defence must ensure that its AI-enabled systems do not result in unfair bias or discrimination, in line with the MOD's ongoing strategies for diversity and inclusion.

A principle of bias and harm mitigation requires the assessment and, wherever possible, the mitigation of these biases or harms. This includes addressing bias in algorithmic decision making, carefully curating and managing datasets, setting safeguards and performance thresholds throughout the system lifecycle, managing environmental effects, and applying strict development criteria for new systems or existing systems being applied to a new context.

Published by The Ministry of Defence (MOD), United Kingdom in Ethical Principles for AI in Defence, Jun 15, 2022

Do no harm

AI systems should not be used in ways that cause or exacerbate harm, whether individual or collective, and including harm to social, cultural, economic, natural, and political environments. All stages of an AI system lifecycle should operate in accordance with the purposes, principles and commitments of the Charter of the United Nations. All stages of an AI system lifecycle should be designed, developed, deployed and operated in ways that respect, protect and promote human rights and fundamental freedoms. The intended and unintended impact of AI systems, at any stage in their lifecycle, should be monitored in order to avoid causing or contributing to harm, including violations of human rights and fundamental freedoms.

Published by United Nations System Chief Executives Board for Coordination in Principles for the Ethical Use of Artificial Intelligence in the United Nations System, Sep 20, 2022

5. Governable

The department will design and engineer AI capabilities to fulfill their intended functions while possessing the ability to detect and avoid unintended consequences, and the ability to disengage or deactivate deployed systems that demonstrate unintended behavior.

Published by Department of Defense (DoD), United States in DoD's AI ethical principles, Feb 24, 2020