(Preamble)

The AI Ethics Code (hereinafter referred to as the Code) establishes general ethical principles and standards of conduct to be followed by those involved in activities in the field of artificial intelligence (hereinafter referred to as AI Actors), as well as the mechanisms for implementing the Code’s provisions. The Code applies to relations covering the ethical aspects of the creation (design, construction, piloting), integration and use of AI technologies at all stages that are currently not regulated by national legislation, international rules and/or acts of technical regulation. The recommendations of this Code are intended for artificial intelligence systems (hereinafter referred to as AI systems) used exclusively for civil (non-military) purposes. The provisions of the Code may be expanded and/or specified for individual groups of AI Actors in sectoral or local documents on ethics in the field of AI, taking into account the development of technologies, the specifics of the tasks being solved, the class and purpose of the AI systems and the level of possible risks, as well as the specific context and environment in which the AI systems are used.
Principle: AI Ethics Code (revised version), Oct 21, 2022 (unconfirmed)

Published by AI Alliance Russia

Related Principles

(Preamble)

The Code of Ethics in the Field of Artificial Intelligence (hereinafter referred to as the Code) establishes the general ethical principles and standards of conduct that should be followed by participants in relations in the field of artificial intelligence (hereinafter referred to as AI Actors) in their activities, as well as the mechanisms for implementing the provisions of this Code. The Code applies to relationships related to the ethical aspects of the creation (design, construction, piloting), implementation and use of AI technologies at all stages that are currently not regulated by the legislation of the Russian Federation and/or by acts of technical regulation. The recommendations of this Code are intended for artificial intelligence systems (hereinafter referred to as AIS) used exclusively for civil (non-military) purposes. The provisions of the Code may be expanded and/or specified for individual groups of AI Actors in industry-specific or local documents on ethics in the field of AI, taking into account the development of technologies, the specifics of the tasks being solved, the class and purpose of the AIS and the level of possible risks, as well as the specific context and environment in which the AIS are used.

Published by AI Alliance Russia in Artificial Intelligence Code of Ethics, Oct 26, 2021

· 1. Foundation of the Code's action

1.1. Legal basis of the Code. The Code takes into account the legislation of the Russian Federation, the Constitution of the Russian Federation and other regulatory legal acts and strategic planning documents. These include the National Strategy for the Development of Artificial Intelligence, the National Security Strategy of the Russian Federation and the Concept for the Regulation of Artificial Intelligence and Robotics. The Code also considers international treaties and agreements ratified by the Russian Federation that are applicable to ensuring the rights and freedoms of citizens in the context of the use of information technologies.

1.2. Terminology. Terms and definitions in this Code are defined in accordance with applicable regulatory legal acts, strategic planning documents and technical regulation in the field of AI.

1.3. AI Actors. For the purposes of this Code, AI Actors are defined as persons, including foreign ones, participating in the life cycle of an AIS during its implementation in the territory of the Russian Federation or in relation to persons who are in the territory of the Russian Federation, including those involved in the provision of goods and services. Such persons include, but are not limited to, the following:
• developers who create, train or test AI models/systems and develop or implement such models/systems, software and/or hardware systems, and who take responsibility for their design;
• customers (individuals or organizations) receiving a product or a service;
• data providers and persons involved in the formation of datasets for use in AIS;
• experts who measure and/or evaluate the parameters of the developed models/systems;
• manufacturers engaged in the production of AIS;
• AIS operators who legally own the relevant systems, use them for their intended purpose and directly implement the solution of problems using the AIS;
• operators (individuals or organizations) carrying out the work of the AIS;
• persons with a regulatory impact in the field of AI, including the developers of regulatory and technical documents, manuals, various regulations, requirements and standards in the field of AI;
• other persons whose actions can affect the results of the actions of an AIS or persons who make decisions on the use of AIS.

Published by AI Alliance Russia in Artificial Intelligence Code of Ethics, Oct 26, 2021

· 2. MECHANISM OF ACCESSION AND IMPLEMENTATION OF THE CODE

2.1. Voluntary Accession. Joining the Code is voluntary. By joining the Code, AI Actors agree to follow its recommendations. Joining and following the provisions of this Code may be taken into account when providing support measures or in interactions with an AI Actor or between AI Actors.

2.2. Ethics officers and/or ethics commissions. To ensure the implementation of the provisions of this Code and the current legal norms when creating, applying and using an AIS, AI Actors appoint officers on AI ethics who are responsible for the implementation of the Code and who act as contacts for AI Actors on ethical issues involving AI. These officers can create collegial industry bodies in the form of internal ethics commissions in the field of AI to consider the most relevant or controversial issues in the field of AI ethics. AI Actors are encouraged to identify an AI ethics officer whenever possible upon accession to this Code or within two months from the date of accession to the Code.

2.3. Commission for the Implementation of the National Code in AI Ethics. In order to implement the Code, a commission for the implementation of the Code in the field of AI ethics (hereinafter referred to as the Commission) is being established. The Commission may have working bodies and groups consisting of representatives of the business community, science, government agencies and other stakeholders. The Commission considers the applications of AI Actors wishing to join the Code and follow its provisions; it also maintains a register of Code members. The activities of the Commission and the conduct of its secretariat are carried out by the Alliance for Artificial Intelligence association with the participation of other interested organizations.

2.4. Register of Code participants. To accede to this Code, the AI Actor sends a corresponding application to the Commission. The register of AI Actors who have joined the Code is maintained on a public website portal.

2.5. Development of methods and guidelines. For the implementation of the Code, it is recommended to develop methods, guidelines, checklists and other methodological materials to ensure the most effective observance of the provisions of the Code by the AI Actors.

2.6. Code of Practice. For the timely exchange of best practices, the useful and safe application of AIS built on the basic principles of this Code, increasing the transparency of developers' activities, and maintaining healthy competition in the AIS market, AI Actors may create a set of best and/or worst practices for solving emerging ethical issues in the AI life cycle, selected according to the criteria established by the professional community. Public access to this code of practice should be provided.

Published by AI Alliance Russia in Artificial Intelligence Code of Ethics, Oct 26, 2021

· 2. RESPONSIBILITY MUST BE FULLY ACKNOWLEDGED WHEN CREATING AND USING AI

2.1. Risk-based approach. The degree of attention paid to ethical AI issues and the nature of the relevant actions of AI Actors should be proportional to the assessed level of risk posed by specific AI technologies and systems to the interests of individuals and society. Risk level assessment shall take into account both known and possible risks, whereby the probability level of threats, as well as their possible scale in the short and long term, shall be considered. Decisions in the field of AI use that significantly affect society and the state should be accompanied by a scientifically verified, interdisciplinary forecast of socio-economic consequences and risks and an examination of possible changes in the paradigm of value and cultural development of the society. The development and use of an AI system risk assessment methodology are encouraged in pursuance of this Code.

2.2. Responsible attitude. AI Actors should treat responsibly:
• issues related to the influence of AI systems on society and citizens at every stage of the AI systems’ life cycle, inter alia on privacy and the ethical, safe and responsible use of personal data;
• the nature, degree and extent of damage that may result from the use of AI technologies and systems;
• the selection and use of hardware and software utilized in different life cycles of AI systems.
At the same time, the responsibility of AI Actors should correspond to the nature, degree and extent of damage that may occur as a result of the use of AI technologies and systems. The role in the life cycle of the AI system, as well as the degree of possible and real influence of a particular AI Actor on causing damage and its extent, should also be taken into account.

2.3. Precautions. When the activities of AI Actors can lead to morally unacceptable consequences for individuals and society, which can be reasonably predicted by the relevant AI Actor, the latter should take measures to prohibit or limit the occurrence of such consequences. AI Actors shall use the provisions of this Code, including the mechanisms specified in Section 2, to assess the moral unacceptability of such consequences and discuss possible preventive measures.

2.4. No harm. AI Actors should not allow the use of AI technologies for the purpose of causing harm to human life and/or health, the property of citizens and legal entities, or the environment. Any use, including the design, development, testing, integration or operation of an AI system capable of purposefully causing harm to the environment, human life and/or health, or the property of citizens and legal entities, is prohibited.

2.5. Identification of AI in communication with a human. AI Actors are encouraged to ensure that users are duly informed of their interactions with AI systems when this affects human rights and critical areas of people’s lives, and to ensure that such interaction can be terminated at the request of the user.

2.6. Data security. AI Actors must comply with the national legislation in the field of personal data and secrets protected by law when using AI systems; ensure the security and protection of personal data processed by AI systems or by AI Actors in order to develop and improve the AI systems; develop and integrate innovative methods to counter unauthorized access to personal data by third parties; and use high-quality and representative datasets obtained from reliable sources without breaking the law.

2.7. Information security. AI Actors should ensure the maximum possible protection against unauthorized interference by third parties in the operation of AI systems; integrate adequate information security technologies, inter alia by using internal mechanisms designed to protect the AI system from unauthorized interventions and informing users and developers about such interventions; and promote informing users about the rules of information security when using AI systems.

2.8. Voluntary certification and Code compliance. AI Actors may implement voluntary certification systems to assess the compliance of developed AI technologies with the standards established by the national legislation and this Code. AI Actors may create voluntary certification and labeling systems for AI systems to indicate that these systems have passed voluntary certification procedures and confirm quality standards.

2.9. Control of the recursive self-improvement of AI systems. AI Actors are encouraged to cooperate in identifying and verifying information about ways and forms of designing so-called universal ("general") AI systems and in preventing the possible threats they carry. The issues concerning the use of "general" AI technologies should be under the control of the state.

Published by AI Alliance Russia in AI Ethics Code (revised version), Oct 21, 2022 (unconfirmed)

· 2. ACCESSION MECHANISM AND IMPLEMENTATION OF THE CODE

2.1. Voluntary Accession. Joining the Code is voluntary. By joining the Code, AI Actors voluntarily agree to follow its recommendations. Joining and following the provisions of this Code may be taken into account when support measures are provided or in other interactions with an AI Actor or between AI Actors.

2.2. Ethics officers and/or ethics commissions. In order to ensure the implementation of the Code's provisions and existing legal norms when creating, applying and using AI systems, AI Actors appoint officers on AI ethics who are responsible for the implementation of the Code and act as contact persons on AI ethics for the AI Actor, and/or can create collegial sectoral bodies, namely internal ethics commissions in the field of AI, to consider the most relevant or controversial issues of AI ethics. AI Actors are encouraged to appoint an AI ethics officer preferably upon accession to this Code or, alternatively, within two months from the date of accession to the Code.

2.3. Commission for the AI Ethics Code. A Commission for the AI Ethics Code (hereinafter referred to as the Commission) is established in order to fulfill the Code. The Commission may have working bodies and groups consisting of representatives of the business community, science, government agencies and other interested organizations. The Commission considers applications made by AI Actors willing to join the Code and maintains the Register of AI Actors who have joined the Code. The functioning of the Commission and its secretariat is administered by the ________________ with the participation of other interested organizations.

2.4. Register of the Code participants. An AI Actor shall send a corresponding application to the Commission to join this Code. The Register of AI Actors who have joined the Code is maintained on a public website portal.

2.5. Development of methods and guidelines. To implement the Code, it is encouraged to develop methods, guidelines, checklists and other methodological materials that ensure more effective compliance with the provisions of the Code by AI Actors.

2.6. Set of Practices. In order to ensure the timely exchange of best practices for the useful and safe application of AI systems built on the basic principles of this Code, increase the transparency of developers' activities and maintain healthy and fair competition in the AI systems market, AI Actors can create a set of best and/or worst practical examples of how to solve emerging ethical issues in the AI life cycle, selected according to criteria established by the professional community. Public access to this set of practices should be provided.

Published by AI Alliance Russia in AI Ethics Code (revised version), Oct 21, 2022 (unconfirmed)