(Preamble)

The Principles of Artificial Intelligence (AI) Ethics for the Intelligence Community (IC) are intended to guide personnel on whether and how to develop and use AI, including machine learning, in furtherance of the IC’s mission. These Principles supplement the Principles of Professional Ethics for the IC and do not modify or supersede applicable laws, executive orders, or policies. Instead, they articulate the general norms that IC elements should follow in applying those authorities and requirements. To assist with the implementation of these Principles, the IC has also created an AI Ethics Framework to guide personnel who are determining whether and how to procure, design, build, use, protect, consume, and manage AI and other advanced analytics. The Intelligence Community commits to the design, development, and use of AI in accordance with the following principles:
Principle: Principles of Artificial Intelligence Ethics for the Intelligence Community, Jul 23, 2020

Published by Intelligence Community (IC), United States

Related Principles

(Preamble)

We reaffirm that the use of AI must take place within the context of the existing DoD ethical framework. Building on this foundation, we propose the following principles, which are more specific to AI, and note that they apply to both combat and non-combat systems. AI is a rapidly developing field, and no organization that currently develops or fields AI systems or espouses AI ethics principles can claim to have solved all the challenges embedded in the following principles. However, the Department should set the goal that its use of AI systems is:

Published by Defense Innovation Board (DIB), Department of Defense (DoD), United States in AI Ethics Principles for DoD, Oct 31, 2019

· 2. NEED FOR CONSCIOUS RESPONSIBILITY WHEN CREATING AND USING AI

2.1. Risk-based approach. The level of attention paid to ethical issues in AI, and the nature of the relevant actions of AI Actors, should be proportional to the assessed level of risk posed by specific technologies and AISs and to the interests of individuals and society. Risk assessment must take into account both known and possible risks, considering the probability of threats as well as their possible scale in the short and long term. In the field of AI development, decisions that are significant to society and the state should be accompanied by scientifically verified, interdisciplinary forecasting of socio-economic consequences and risks, as well as by examination of possible changes in the value and cultural paradigm of society's development, taking into account national priorities. In pursuance of this Code, the development and use of an AIS risk assessment methodology is recommended.

2.2. Responsible attitude. AI Actors should take a responsible approach to those aspects of AISs that influence society and citizens at every stage of the AIS life cycle. These include privacy; the ethical, safe and responsible use of personal data; the nature, degree and amount of damage that may result from the use of the technology and AIS; and the selection and use of companion hardware and software. The responsibility of AI Actors must correspond to the nature, degree and amount of damage that may occur as a result of the use of technologies and AISs, taking into account the role of the AI Actor in the AIS life cycle and the degree of each AI Actor's possible and actual influence on causing that damage, as well as its scale.

2.3. Precautions. When the activities of AI Actors can lead to morally unacceptable consequences for individuals and society, the occurrence of which the relevant AI Actor can reasonably foresee, measures should be taken to prevent or limit such consequences. To assess the moral acceptability of consequences and possible measures to prevent them, Actors can use the provisions of this Code, including the mechanisms specified in Section 2.

2.4. No harm. AI Actors should not allow the use of AI technologies for the purpose of causing harm to human life, the environment, and/or the health or property of citizens and legal entities. Any application of an AIS capable of purposefully causing harm to the environment, human life or health, or the property of citizens and legal entities at any stage, including design, development, testing, implementation or operation, is unacceptable.

2.5. Identification of AI in communication with a human. AI Actors are encouraged to ensure that users are informed when they are interacting with an AIS that affects their rights and critical areas of their lives, and to ensure that such interactions can be terminated at the user's request.

2.6. Data security. AI Actors must comply with the legislation of the Russian Federation in the field of personal data and legally protected secrets when using an AIS. Furthermore, they must ensure the security and protection of personal data processed by an AIS or by AI Actors in order to develop and improve the AIS, by developing and implementing innovative methods of controlling unauthorized third-party access to personal data and by using high-quality, representative datasets obtained from reliable sources without breaking the law.

2.7. Information security. AI Actors should provide the maximum possible protection against unauthorized third-party interference in the operation of an AIS by introducing adequate information security technologies, including internal mechanisms for protecting the AIS from unauthorized interventions, and by informing users and developers about such interventions. They must also inform users about the rules of information security when using the AIS.

2.8. Voluntary certification and Code compliance. AI Actors can implement voluntary certification of the compliance of developed AI technologies with the standards established by the legislation of the Russian Federation and this Code. AI Actors can create voluntary certification and AIS labeling systems indicating that these systems have passed voluntary certification procedures and meet quality standards.

2.9. Control of the recursive self-improvement of AISs. AI Actors are encouraged to collaborate in identifying and verifying methods and forms of creating universal ("strong") AISs and in preventing the possible threats they carry. The use of "strong" AI technologies should be under the control of the state.

Published by AI Alliance Russia in Artificial Intelligence Code of Ethics, Oct 26, 2021

(Preamble)

The AI Ethics Code (hereinafter referred to as the Code) establishes general ethical principles and standards of conduct to be followed by those involved in activities in the field of artificial intelligence (hereinafter referred to as AI Actors), as well as the mechanisms for implementing the Code’s provisions. The Code applies to relations covering the ethical aspects of the creation (design, construction, piloting), integration and use of AI technologies at all stages that are currently not regulated by national legislation, international rules, and/or acts of technical regulation. The recommendations of this Code are intended for artificial intelligence systems (hereinafter referred to as AI systems) used exclusively for civil (non-military) purposes. The provisions of the Code may be expanded and/or specified for individual groups of AI Actors in sectoral or local documents on ethics in the field of AI, considering the development of technologies, the specifics of the tasks being solved, the class and purpose of AI systems, the level of possible risks, and the specific context and environment in which AI systems are used.

Published by AI Alliance Russia in AI Ethics Code (revised version), Oct 21, 2022 (unconfirmed)

· 2. ACCESSION MECHANISM AND IMPLEMENTATION OF THE CODE

2.1. Voluntary accession. Joining the Code is voluntary. By joining the Code, AI Actors voluntarily agree to follow its recommendations. Joining and following the provisions of this Code may be taken into account when support measures are provided or in other interactions with an AI Actor or between AI Actors.

2.2. Ethics officers and/or Ethics commissions. To ensure the implementation of the Code's provisions and of existing legal norms when creating, applying and using AI systems, AI Actors appoint AI ethics officers who are responsible for implementing the Code and act as the AI Actor's contact persons on AI ethics, and/or can create collegial sectoral bodies, namely internal Ethics commissions in the field of AI, to consider the most relevant or controversial issues of AI ethics. AI Actors are encouraged to appoint an AI ethics officer upon accession to this Code or, alternatively, within two months of the date of accession.

2.3. Commission for the AI Ethics Code. A Commission for the AI Ethics Code (hereinafter referred to as the Commission) is established to fulfill the Code. The Commission may have working bodies and groups consisting of representatives of the business community, science, government agencies and other interested organizations. The Commission considers applications from AI Actors willing to join the Code and maintains the Register of AI Actors who have joined the Code. The functioning of the Commission and its secretariat is administered by the ________________ with the participation of other interested organizations.

2.4. Register of Code participants. To join this Code, an AI Actor shall send a corresponding application to the Commission. The Register of AI Actors who have joined the Code is maintained on a public website.

2.5. Development of methods and guidelines. To implement the Code, AI Actors are encouraged to develop methods, guidelines, checklists and other methodological materials that ensure more effective compliance with the provisions of the Code.

2.6. Set of practices. To ensure the timely exchange of best practices for the useful and safe application of AI systems built on the basic principles of this Code, increase the transparency of developers' activities, and maintain healthy and fair competition in the AI systems market, AI Actors can create a set of best and/or worst practical examples of how to resolve ethical issues emerging in the AI life cycle, selected according to criteria established by the professional community. Public access to this set of practices should be provided.

Published by AI Alliance Russia in AI Ethics Code (revised version), Oct 21, 2022 (unconfirmed)

Preamble: Our intent for the ethical use of AI in Defence

The MOD is committed to developing and deploying AI-enabled systems responsibly, in ways that build trust and consensus, setting international standards for the ethical use of AI in Defence. The MOD will develop and deploy AI-enabled systems for purposes that are demonstrably beneficial: driving operational improvements, supporting the Defence Purpose, and upholding human rights and democratic values. The MOD’s existing obligations under UK law and international law, including, as applicable, international humanitarian law (IHL) and international human rights law, act as a foundation for Defence’s development, deployment and operation of AI-enabled systems. These ethical principles do not affect or supersede existing legal obligations. Instead, they set out an ethical framework which will guide Defence’s approach to adopting AI, in line with rigorous existing codes of conduct and regulations. These principles are applicable across the full spectrum of use cases for AI in Defence, from battlespace to back office, and across the entire lifecycle of these systems.

Published by The Ministry of Defence (MOD), United Kingdom in Ethical Principles for AI in Defence, Jun 15, 2022