A. Lawfulness:

AI applications will be developed and used in accordance with national and international law, including international humanitarian law and human rights law, as applicable.
Principle: NATO Principles of Responsible Use of Artificial Intelligence in Defence, Oct 22, 2021

Published by The North Atlantic Treaty Organization (NATO)

Related Principles

Right to privacy and data protection.

It is important that data for AI systems is collected, used, shared, archived and deleted in a manner consistent with international law and in accordance with the stated values and principles, while respecting relevant national, regional and international legal frameworks.

Published by Office of the Chief of Ministers, Undersecretary of Information Technologies in Recommendations for reliable artificial intelligence, June 2, 2023

Universal Guidelines for AI

These Guidelines should be incorporated into ethical standards, adopted into national law and international agreements, and built into the design of systems.

Published by Center for AI and Digital Policy in Universal Guidelines for AI, Oct 2018

(a) Human rights:

AI should be developed and implemented in accordance with international human rights standards.

Published by The Extended Working Group on Ethics of Artificial Intelligence (AI) of the World Commission on the Ethics of Scientific Knowledge and Technology (COMEST), UNESCO in Suggested generic principles for the development, implementation and use of AI, Mar 21, 2019

(f) Rule of law and accountability

Rule of law, access to justice and the right to redress and a fair trial provide the necessary framework for ensuring the observance of human rights standards and potential AI specific regulations. This includes protections against risks stemming from ‘autonomous’ systems that could infringe human rights, such as safety and privacy. The whole range of legal challenges arising in the field should be addressed with timely investment in the development of robust solutions that provide a fair and clear allocation of responsibilities and efficient mechanisms of binding law. In this regard, governments and international organisations ought to increase their efforts in clarifying with whom liabilities lie for damages caused by undesired behaviour of ‘autonomous’ systems. Moreover, effective harm mitigation systems should be in place.

Published by European Group on Ethics in Science and New Technologies, European Commission in Ethical principles and democratic prerequisites, Mar 9, 2018

Preamble: Our intent for the ethical use of AI in Defence

The MOD is committed to developing and deploying AI-enabled systems responsibly, in ways that build trust and consensus, setting international standards for the ethical use of AI for Defence. The MOD will develop and deploy AI-enabled systems for purposes that are demonstrably beneficial: driving operational improvements, supporting the Defence Purpose, and upholding human rights and democratic values. The MOD’s existing obligations under UK law and international law, including as applicable international humanitarian law (IHL) and international human rights law, act as a foundation for Defence’s development, deployment and operation of AI-enabled systems. These ethical principles do not affect or supersede existing legal obligations. Instead, they set out an ethical framework which will guide Defence’s approach to adopting AI, in line with rigorous existing codes of conduct and regulations. These principles are applicable across the full spectrum of use cases for AI in Defence, from battlespace to back office, and across the entire lifecycle of these systems.

Published by The Ministry of Defence (MOD), United Kingdom in Ethical Principles for AI in Defence, Jun 15, 2022