Note:

1. For the definition of "children", it is recommended that reference be made to the United Nations Convention on the Rights of the Child (UNCRC) and the provisions of each country or region.
2. The description of values and children's rights mentioned here is partially based on the text of the UNCRC.
Principle: Artificial Intelligence for Children: Beijing Principles, Sep 14, 2020

Published by Beijing Academy of Artificial Intelligence (BAAI), Peking University, Tsinghua University and the Chinese Academy of Sciences, together with enterprises that focus on AI development.

Related Principles

· 2. Data Governance

The quality of the data sets used is paramount for the performance of trained machine learning solutions. Even if the data is handled in a privacy-preserving way, further requirements have to be fulfilled in order to achieve high-quality AI. The data sets gathered inevitably contain biases, and one has to be able to prune these away before engaging in training. This may also be done during training itself by requiring symmetric behaviour over known issues in the training set.

In addition, it must be ensured that the division of the data into training, validation, and test sets is carefully conducted in order to obtain a realistic picture of the performance of the AI system. It must particularly be ensured that anonymisation of the data is done in a way that still enables this division, so that certain data – for instance, images of the same person – do not end up in both the training and test sets, as this would disqualify the latter.

The integrity of the data gathering has to be ensured. Feeding malicious data into the system may change the behaviour of the AI solutions; this is especially important for self-learning systems. It is therefore advisable to always keep a record of the data that is fed to the AI systems.

When data is gathered from human behaviour, it may contain misjudgements, errors, and mistakes. In large enough data sets these will be diluted, since correct actions usually outnumber the errors, yet a trace thereof remains in the data. To establish trust in the data gathering process, it must be ensured that such data will not be used against the individuals who provided it. Instead, findings of bias should be used to look forward and lead to better processes and instructions – improving our decision making and strengthening our institutions.
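The leakage risk described above – data from the same person ending up in both the training and test sets – is typically avoided by splitting on a group identifier rather than on individual records. The following is a minimal, illustrative sketch (not part of the guidelines themselves); the record structure and the `person` key are hypothetical, and a production system would more likely use a library utility such as scikit-learn's `GroupShuffleSplit`:

```python
import hashlib

def group_split(records, group_key, test_fraction=0.2):
    """Split records into train/test so that all records sharing a group
    (e.g. images of the same person) land in exactly one of the two sets,
    preventing leakage between training and testing."""
    train, test = [], []
    for rec in records:
        # Deterministically hash the group identifier into [0, 1), so every
        # record with the same group value falls into the same bucket.
        digest = hashlib.sha256(str(rec[group_key]).encode()).hexdigest()
        bucket = int(digest[:8], 16) / 0xFFFFFFFF
        (test if bucket < test_fraction else train).append(rec)
    return train, test

# Hypothetical image records keyed by the person they depict.
images = [{"person": f"p{i % 10}", "file": f"img_{i}.png"} for i in range(100)]
train_set, test_set = group_split(images, group_key="person")

train_people = {r["person"] for r in train_set}
test_people = {r["person"] for r in test_set}
assert train_people.isdisjoint(test_people)  # no person appears in both sets
```

Because the split key is the person rather than the image, the resulting test set measures generalisation to unseen individuals, which is the realistic performance picture the guideline calls for.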

Published by The European Commission’s High-Level Expert Group on Artificial Intelligence in Draft Ethics Guidelines for Trustworthy AI, Dec 18, 2018

(Preamble)

The Code of Ethics in the Field of Artificial Intelligence (hereinafter referred to as the Code) establishes the general ethical principles and standards of conduct that should be followed by participants in relation to the field of artificial intelligence (hereinafter referred to as AI Actors) in their activities, as well as the mechanisms for the implementation of the provisions of this Code. The Code applies to relationships related to the ethical aspects of the creation (design, construction, piloting), implementation and use of AI technologies at all stages that are currently not regulated by the legislation of the Russian Federation and/or by acts of technical regulation. The recommendations of this Code are designed for artificial intelligence systems (hereinafter referred to as AIS) used exclusively for civil (not military) purposes. The provisions of the Code can be expanded and/or specified for individual groups of AI Actors in industry-specific or local documents on ethics in the field of AI, considering the development of technologies, the specifics of the tasks being solved, the class and purpose of the AIS and the level of possible risks, as well as the specific context and environment in which the AIS are being used.

Published by AI Alliance Russia in Artificial Intelligence Code of Ethics, Oct 26, 2021

· 3. HUMANS ARE ALWAYS RESPONSIBLE FOR THE CONSEQUENCES OF THE APPLICATION OF AN AIS

3.1. Supervision. AI Actors should provide comprehensive human supervision of any AIS to the extent and manner depending on the purpose of the AIS, including, for example, recording significant human decisions at all stages of the AIS life cycle or making provisions for the registration of the work of the AIS. They should also ensure the transparency of AIS use, including the possibility of cancellation by a person and/or the prevention of making socially and legally significant decisions and actions by the AIS at any stage in its life cycle, where reasonably applicable.

3.2. Responsibility. AI Actors should not allow the transfer of rights of responsible moral choice to the AIS or delegate responsibility for the consequences of the AIS’s decision making. A person (an individual or legal entity recognized as the subject of responsibility in accordance with the legislation in force of the Russian Federation) must always be responsible for the consequences of the work of an AIS. AI Actors are encouraged to take all measures to determine the responsibilities of specific participants in the life cycle of the AIS, taking into account each participant’s role and the specifics of each stage.

Published by AI Alliance Russia in Artificial Intelligence Code of Ethics, Oct 26, 2021

· 2. MECHANISM OF ACCESSION AND IMPLEMENTATION OF THE CODE

2.1. Voluntary Accession. Joining the Code is voluntary. By joining the Code, AI Actors agree to follow its recommendations. Joining and following the provisions of this Code may be taken into account when providing support measures or in interactions with an AI Actor or between AI Actors.

2.2. Ethics officers and/or ethics commissions. To ensure the implementation of the provisions of this Code and the current legal norms when creating, applying and using an AIS, AI Actors appoint officers on AI ethics who are responsible for the implementation of the Code and who act as contacts for AI Actors on ethical issues involving AI. These officers can create collegial industry bodies in the form of internal ethics commissions in the field of AI to consider the most relevant or controversial issues in the field of AI ethics. AI Actors are encouraged to identify an AI ethics officer whenever possible upon accession to this Code or within two months from the date of accession to the Code.

2.3. Commission for the Implementation of the National Code in AI Ethics. In order to implement the Code, a commission for the implementation of the Code in the field of AI ethics (hereinafter referred to as the Commission) is being established. The Commission may have working bodies and groups consisting of representatives of the business community, science, government agencies and other stakeholders. The Commission considers the applications of AI Actors wishing to join the Code and follow its provisions; it also maintains a register of Code members. The activities of the Commission and the conduct of its secretariat are carried out by the Alliance for Artificial Intelligence association with the participation of other interested organizations.

2.4. Register of Code participants. To accede to this Code, the AI Actor sends a corresponding application to the Commission. The register of AI Actors who have joined the Code is maintained on a public website portal.

2.5. Development of methods and guidelines. For the implementation of the Code, it is recommended to develop methods, guidelines, checklists and other methodological materials to ensure the most effective observance of the provisions of the Code by the AI Actors.

2.6. Code of Practice. For the timely exchange of best practices, the useful and safe application of AIS built on the basic principles of this Code, increasing the transparency of developers' activities, and maintaining healthy competition in the AIS market, AI Actors may create a set of best and/or worst practices for solving emerging ethical issues in the AI life cycle, selected according to the criteria established by the professional community. Public access to this code of practice should be provided.

Published by AI Alliance Russia in Artificial Intelligence Code of Ethics, Oct 26, 2021

1. Right to Transparency.

All individuals have the right to know the basis of an AI decision that concerns them. This includes access to the factors, the logic, and techniques that produced the outcome. [Explanatory Memorandum] The elements of the Transparency Principle can be found in several modern privacy laws, including the US Privacy Act, the EU Data Protection Directive, the GDPR, and the Council of Europe Convention 108. The aim of this principle is to enable independent accountability for automated decisions, with a primary emphasis on the right of the individual to know the basis of an adverse determination. In practical terms, it may not be possible for an individual to interpret the basis of a particular decision, but this does not obviate the need to ensure that such an explanation is possible.

Published by The Public Voice coalition, established by Electronic Privacy Information Center (EPIC) in Universal Guidelines for Artificial Intelligence, Oct 23, 2018