· Precautionary principle

Ensure that any AGI or ASI that may appear in the future serves the interests of humanity
Principle: "ARCC": An Ethical Framework for Artificial Intelligence, Sep 18, 2018

Published by Tencent Research Institute

Related Principles

Transparency Principle

The elements of the Transparency Principle can be found in several modern privacy laws, including the US Privacy Act, the EU Data Protection Directive, the GDPR, and the Council of Europe Convention 108. The aim of this principle is to enable independent accountability for automated decisions, with a primary emphasis on the right of the individual to know the basis of an adverse determination. In practical terms, it may not be possible for an individual to interpret the basis of a particular decision, but this does not obviate the need to ensure that such an explanation is possible.

Published by Center for AI and Digital Policy in Universal Guidelines for AI, Oct, 2018

7. Principle of ethics

Developers should respect human dignity and individual autonomy in the R&D of AI systems.

[Comment] It is encouraged that, when developing AI systems that link with the human brain and body, developers pay particular consideration to respecting human dignity and individual autonomy, in light of discussions on bioethics, etc. It is also encouraged that, to the extent possible given the characteristics of the technologies adopted, developers take the measures necessary to avoid unfair discrimination resulting from prejudice included in the learning data of the AI systems. It is advisable that developers take precautions to ensure that AI systems do not unduly infringe on the value of humanity, based on International Human Rights Law and International Humanitarian Law.

Published by Ministry of Internal Affairs and Communications (MIC), the Government of Japan in AI R&D Principles, Jul 28, 2017

· 1. THE MAIN PRIORITY OF THE DEVELOPMENT OF AI TECHNOLOGIES IS PROTECTING THE INTERESTS AND RIGHTS OF HUMAN BEINGS COLLECTIVELY AND AS INDIVIDUALS

1.1. Human-centered and humanistic approach. In the development of AI technologies, the rights and freedoms of the individual should be given the greatest value. AI technologies developed by AI Actors should promote, or at least not hinder, the realization of humans' capabilities to achieve harmony in the social, economic and spiritual spheres, as well as the highest self-fulfillment of human beings. They should take into account key values such as the preservation and development of human cognitive abilities and creative potential; the preservation of moral, spiritual and cultural values; the promotion of cultural and linguistic diversity and identity; and the preservation of the traditions and foundations of nations, peoples, and ethnic and social groups. A human-centered and humanistic approach is the basic ethical principle and the central criterion for assessing the ethical behavior of the AI Actors listed in section 2 of this Code.

1.2. Respect for human autonomy and freedom of will. AI Actors should take all necessary measures to preserve the autonomy and free will of human decision making, the right to choose, and, in general, the intellectual abilities of a human as an intrinsic value and a system-forming factor of modern civilization. During the creation of an AIS, AI Actors should assess the possible negative consequences for the development of human cognitive abilities and prevent the development of AIS that purposefully cause such consequences.

1.3. Compliance with the law. AI Actors must know and comply with the provisions of the legislation of the Russian Federation in all areas of their activities and at all stages of the creation, development and use of AI technologies, including in matters of the legal responsibility of AI Actors.

1.4. Non-discrimination. To ensure fairness and non-discrimination, AI Actors should take measures to verify that the algorithms, datasets and processing methods for machine learning that are used to group and/or classify data concerning individuals or groups do not intentionally discriminate. AI Actors are encouraged to create and apply methods and software solutions that identify and prevent discrimination based on race, nationality, gender, political views, religious beliefs, age, social and economic status, or information about private life. (At the same time, rules that are explicitly declared by an AI Actor for the functioning or application of an AIS for different groups of users, with such factors taken into account for segmentation, cannot be considered discrimination.)

1.5. Assessment of risks and humanitarian impact. AI Actors are encouraged to assess the potential risks of using an AIS, including the social consequences for individuals, society and the state, as well as the humanitarian impact of the AIS on human rights and freedoms at different stages, including during the formation and use of datasets. AI Actors should also carry out long-term monitoring of the manifestations of such risks and, during risk assessment, take into account the complexity of the behavior of the AIS, including the relationships and interdependence of processes in the AIS's life cycle. For critical applications of the AIS, in special cases, it is encouraged that a risk assessment be conducted with the involvement of a neutral third party or authorized official body, provided this does not harm the performance and information security of the AIS and ensures the protection of the intellectual property and trade secrets of the developer.

Published by AI Alliance Russia in Artificial Intelligence Code of Ethics, Oct 26, 2021

3.2 Dignity

It is the duty of all members of society to mutually respect and protect this right as one of the basic and inviolable rights of every human being. Every individual has the right to protect their own dignity; violation or disrespect of this right is sanctioned by law. Human dignity (hereinafter: dignity) should be understood as a starting principle that focuses on the preservation of human integrity. On that premise, the persons to whom these Guidelines refer should at all times, regardless of the stage of the concrete artificial intelligence solution (development, application or use), keep in mind the person and his integrity as a central concept. In this regard, it is necessary to develop systems that, at every stage, make it imperative to respect the person's personality, freedom and autonomy.

Respecting human personality means creating a system that will respect the cognitive, social and cultural characteristics of each individual. The artificial intelligence systems being developed must be in accordance with the above; it is therefore necessary to take care that they cannot in any way lead to the subordination of man to the functions of the system, or endanger his dignity and integrity. In order to ensure respect for the principle of dignity, artificial intelligence systems must not, in the processes of their operation and application, grossly ignore the autonomy of human choice.

The Constitution of the Republic of Serbia emphasizes that dignity "is inviolable and everyone is obliged to respect and protect it. Everyone has the right to free personal development, if it does not violate the rights of others guaranteed by the Constitution." The Convention on Human Rights states the following: "Human dignity (dignity) is not only a basic human right but also the foundation of human rights." Human dignity is inherent in every human being. In the Republic of Serbia, this term is regulated in the following ways: "The dignity of the person (honor, reputation, or piety) of the person to whom the information refers is legally protected." "Whoever abuses another or treats him in a way that offends human dignity shall be punished by imprisonment for up to one year." "Work in the public interest is any socially useful work that does not offend human dignity and is not done for the purpose of making a profit."

This principle emphasizes that the integrity and dignity of all who may be affected by the artificial intelligence system must be taken care of at all times. As it is a general concept, to which life, in addition to the law, gives different aspects although the essence remains the same, it is appropriate to attach to the concept itself: honor, reputation, that is, piety.

Published by Republic of Serbia in ETHICAL GUIDELINES FOR THE DEVELOPMENT, APPLICATION AND USE OF RELIABLE AND RESPONSIBLE ARTIFICIAL INTELLIGENCE, February 2023

· Proportionality and Do No Harm

25. It should be recognized that AI technologies do not necessarily, per se, ensure human, environmental and ecosystem flourishing. Furthermore, none of the processes related to the AI system life cycle shall exceed what is necessary to achieve legitimate aims or objectives, and they should be appropriate to the context. In the event of possible occurrence of any harm to human beings, human rights and fundamental freedoms, communities and society at large, or the environment and ecosystems, the implementation of procedures for risk assessment and the adoption of measures to preclude the occurrence of such harm should be ensured.

26. The choice to use AI systems and which AI method to use should be justified in the following ways: (a) the AI method chosen should be appropriate and proportional to achieve a given legitimate aim; (b) the AI method chosen should not infringe upon the foundational values captured in this document; in particular, its use must not violate or abuse human rights; and (c) the AI method should be appropriate to the context and should be based on rigorous scientific foundations. In scenarios where decisions are understood to have an impact that is irreversible or difficult to reverse, or that may involve life and death decisions, final human determination should apply. In particular, AI systems should not be used for social scoring or mass surveillance purposes.

Published by The United Nations Educational, Scientific and Cultural Organization (UNESCO) in The Recommendation on the Ethics of Artificial Intelligence, Nov 24, 2021