7 DIVERSITY INCLUSION PRINCIPLE

The development and use of AIS must be compatible with maintaining social and cultural diversity and must not restrict the scope of lifestyle choices or personal experiences.
1) AIS development and use must not lead to the homogenization of society through the standardization of behaviours and opinions.
2) From the moment algorithms are conceived, AIS development and deployment must take into consideration the multitude of expressions of social and cultural diversity present in society.
3) AI development environments, whether in research or industry, must be inclusive and reflect the diversity of the individuals and groups of society.
4) AIS must avoid using acquired data to lock individuals into a user profile, fix their personal identity, or confine them to a filtering bubble, which would restrict and confine their possibilities for personal development, especially in fields such as education, justice, or business.
5) AIS must not be developed or used with the aim of limiting the free expression of ideas or the opportunity to hear diverse opinions, both of which are essential conditions of a democratic society.
6) For each service category, the AIS offering must be diversified to prevent de facto monopolies from forming and undermining individual freedoms.

Published by University of Montreal in The Montreal Declaration for a Responsible Development of Artificial Intelligence, Dec 4, 2018

Related Principles

2 RESPECT FOR AUTONOMY PRINCIPLE

AIS must be developed and used while respecting people’s autonomy, and with the goal of increasing people’s control over their lives and their surroundings.
1) AIS must allow individuals to fulfill their own moral objectives and their conception of a life worth living.
2) AIS must not be developed or used to impose a particular lifestyle on individuals, whether directly or indirectly, by implementing oppressive surveillance and evaluation or incentive mechanisms.
3) Public institutions must not use AIS to promote or discredit a particular conception of the good life.
4) It is crucial to empower citizens regarding digital technologies by ensuring access to the relevant forms of knowledge, promoting the learning of fundamental skills (digital and media literacy), and fostering the development of critical thinking.
5) AIS must not be developed to spread untrustworthy information, lies, or propaganda, and should be designed with a view to containing their dissemination.
6) The development of AIS must avoid creating dependencies through attention-capturing techniques or the imitation of human characteristics (appearance, voice, etc.) in ways that could cause confusion between AIS and humans.

Published by University of Montreal in The Montreal Declaration for a Responsible Development of Artificial Intelligence, Dec 4, 2018

3 PROTECTION OF PRIVACY AND INTIMACY PRINCIPLE

Privacy and intimacy must be protected from AIS intrusion and data acquisition and archiving systems (DAAS).
1) Personal spaces in which people are not subjected to surveillance or digital evaluation must be protected from the intrusion of AIS and DAAS.
2) The intimacy of thoughts and emotions must be strictly protected from AIS and DAAS uses capable of causing harm, especially uses that impose moral judgments on people or their lifestyle choices.
3) People must always have the right to digital disconnection in their private lives, and AIS should explicitly offer the option to disconnect at regular intervals, without encouraging people to stay connected.
4) People must have extensive control over information regarding their preferences. AIS must not create individual preference profiles to influence the behavior of individuals without their free and informed consent.
5) DAAS must guarantee data confidentiality and personal profile anonymity.
6) Every person must be able to exercise extensive control over their personal data, especially when it comes to its collection, use, and dissemination. Access to AIS and digital services by individuals must not be made conditional on their abandoning control or ownership of their personal data.
7) Individuals should be free to donate their personal data to research organizations in order to contribute to the advancement of knowledge.
8) The integrity of one’s personal identity must be guaranteed. AIS must not be used to imitate or alter a person’s appearance, voice, or other individual characteristics in order to damage that person’s reputation or manipulate other people.

Published by University of Montreal in The Montreal Declaration for a Responsible Development of Artificial Intelligence, Dec 4, 2018

6 EQUITY PRINCIPLE

The development and use of AIS must contribute to the creation of a just and equitable society.
1) AIS must be designed and trained so as not to create, reinforce, or reproduce discrimination based on, among other things, social, sexual, ethnic, cultural, or religious differences.
2) AIS development must help eliminate relationships of domination between groups and people based on differences of power, wealth, or knowledge.
3) AIS development must produce social and economic benefits for all by reducing social inequalities and vulnerabilities.
4) Industrial AIS development must be compatible with acceptable working conditions at every step of their life cycle, from natural resource extraction to recycling, including data processing.
5) The digital activity of users of AIS and digital services should be recognized as labor that contributes to the functioning of algorithms and creates value.
6) Access to fundamental resources, knowledge, and digital tools must be guaranteed for all.
7) We should support the development of commons algorithms, and of the open data needed to train them, and expand their use, as a socially equitable objective.

Published by University of Montreal in The Montreal Declaration for a Responsible Development of Artificial Intelligence, Dec 4, 2018

1. THE MAIN PRIORITY OF THE DEVELOPMENT OF AI TECHNOLOGIES IS PROTECTING THE INTERESTS AND RIGHTS OF HUMAN BEINGS COLLECTIVELY AND AS INDIVIDUALS

1.1. Human-centered and humanistic approach. In the development of AI technologies, the rights and freedoms of the individual should be given the greatest value. AI technologies developed by AI Actors should promote, or at least not hinder, the realization of humans’ capabilities to achieve harmony in social, economic and spiritual spheres, as well as the highest self-fulfillment of human beings. They should take into account key values such as the preservation and development of human cognitive abilities and creative potential; the preservation of moral, spiritual and cultural values; the promotion of cultural and linguistic diversity and identity; and the preservation of traditions and the foundations of nations, peoples and ethnic and social groups. A human-centered and humanistic approach is the basic ethical principle and central criterion for assessing the ethical behavior of AI Actors, which are listed in Section 2 of this Code.
1.2. Respect for human autonomy and freedom of will. AI Actors should take all necessary measures to preserve the autonomy and free will of a human’s decision-making ability, the right to choose, and, in general, the intellectual abilities of a human as an intrinsic value and a system-forming factor of modern civilization. AI Actors should, during AIS creation, assess the possible negative consequences for the development of human cognitive abilities and prevent the development of AIS that purposefully cause such consequences.
1.3. Compliance with the law. AI Actors must know and comply with the provisions of the legislation of the Russian Federation in all areas of their activities and at all stages of the creation, development and use of AI technologies, including in matters of the legal responsibility of AI Actors.
1.4. Non-discrimination. To ensure fairness and non-discrimination, AI Actors should take measures to verify that the algorithms, datasets and processing methods for machine learning that are used to group and/or classify data concerning individuals or groups do not intentionally discriminate. AI Actors are encouraged to create and apply methods and software solutions that identify and prevent discrimination based on race, nationality, gender, political views, religious beliefs, age, social and economic status, or information about private life. (At the same time, rules that an AI Actor explicitly declares for the functioning or application of AIS for different groups of users, where such factors are taken into account for segmentation, cannot be considered discrimination.)
1.5. Assessment of risks and humanitarian impact. AI Actors are encouraged to assess the potential risks of using an AIS, including the social consequences for individuals, society and the state, as well as the humanitarian impact of the AIS on human rights and freedoms at different stages, including during the formation and use of datasets. AI Actors should also carry out long-term monitoring of the manifestations of such risks and take into account the complexity of the behavior of AIS during risk assessment, including the relationship and interdependence of processes in the AIS’s life cycle. For critical applications of the AIS, in special cases, it is encouraged that a risk assessment be conducted with the involvement of a neutral third party or authorized official body, when doing so would not harm the performance and information security of the AIS and would ensure the protection of the intellectual property and trade secrets of the developer.

Published by AI Alliance Russia in Artificial Intelligence Code of Ethics, Oct 26, 2021

1. THE KEY PRIORITY OF AI TECHNOLOGIES DEVELOPMENT IS PROTECTION OF THE INTERESTS AND RIGHTS OF HUMAN BEINGS AT LARGE AND EVERY PERSON IN PARTICULAR

1.1. Human-centered and humanistic approach. Human rights and freedoms, and the human being as such, must be treated as the greatest value in the process of AI technologies development. AI technologies developed by AI Actors should promote, or at least not hinder, the full realization of all human capabilities to achieve harmony in social, economic and spiritual spheres, as well as the highest self-fulfillment of human beings. AI Actors should take into account core values such as the preservation and development of human cognitive abilities and creative potential; the preservation of moral, spiritual and cultural values; the promotion of cultural and linguistic diversity and identity; and the preservation of traditions and the foundations of nations, peoples, ethnic and social groups. A human-centered and humanistic approach is the basic ethical principle and central criterion for assessing the ethical behavior of AI Actors listed in Section 2 of this Code.
1.2. Recognition of human autonomy and free will. AI Actors should take necessary measures to preserve the autonomy and free will of humans in the process of decision-making, their right to choose, as well as human intellectual abilities in general, as an intrinsic value and a system-forming factor of modern civilization. AI Actors should forecast possible negative consequences for the development of human cognitive abilities at the earliest stages of AI systems creation and refrain from developing AI systems that purposefully cause such consequences.
1.3. Compliance with the law. AI Actors must know and comply with the provisions of the national legislation in all areas of their activities and at all stages of the creation, integration and use of AI technologies, inter alia in the sphere of legal responsibility of AI Actors.
1.4. Non-discrimination. To ensure fairness and non-discrimination, AI Actors should take measures to verify that the algorithms, datasets and processing methods for machine learning that are used to group and/or classify data concerning individuals or groups do not entail intentional discrimination. AI Actors are encouraged to create and apply methods and software solutions that identify and prevent manifestations of discrimination based on race, nationality, gender, political views, religious beliefs, age, social and economic status, or information about private life (at the same time, the rules of functioning or application of AI systems for different groups of users wherein such factors are taken into account for user segmentation, which are explicitly declared by an AI Actor, cannot be defined as discrimination).
1.5. Assessment of risks and humanitarian impact. AI Actors are encouraged to:
• assess the potential risks of the use of an AI system, including social consequences for individuals, society and the state, as well as the humanitarian impact of an AI system on human rights and freedoms at different stages of its life cycle, inter alia during the formation and use of datasets;
• monitor the manifestations of such risks in the long term;
• take into account the complexity of AI systems’ actions, including the interconnection and interdependence of processes in the AI systems’ life cycle, during risk assessment.
In special cases concerning critical applications of an AI system, it is encouraged that the risk assessment be conducted with the involvement of a neutral third party or authorized official body, provided that this does not harm the performance and information security of the AI system and ensures the protection of the intellectual property and trade secrets of the developer.

Published by AI Alliance Russia in AI Ethics Code (revised version), Oct 21, 2022 (unconfirmed)