Blueprint for an AI Bill of Rights: A Vision for Protecting Our Civil Rights in the Algorithmic Age

Published by the White House Office of Science and Technology Policy (OSTP), Oct 4, 2022

Related Principles

(f) Rule of law and accountability

Rule of law, access to justice and the right to redress and a fair trial provide the necessary framework for ensuring the observance of human rights standards and potential AI-specific regulations. This includes protections against risks stemming from ‘autonomous’ systems that could infringe human rights, such as safety and privacy. The whole range of legal challenges arising in the field should be addressed with timely investment in the development of robust solutions that provide a fair and clear allocation of responsibilities and efficient mechanisms of binding law. In this regard, governments and international organisations ought to increase their efforts in clarifying with whom liabilities lie for damages caused by undesired behaviour of ‘autonomous’ systems. Moreover, effective harm mitigation systems should be in place.

Published by European Group on Ethics in Science and New Technologies, European Commission in Ethical principles and democratic prerequisites, Mar 9, 2018

a) The protection of human rights and liberties

The protection of human rights and liberties: ensuring the protection of the human rights and liberties guaranteed by Russian and international laws, including the right to work, and affording individuals the opportunity to obtain the knowledge and acquire the skills needed in order to successfully adapt to the conditions of a digital economy;

Published by Office of the President of the Russian Federation, Decree of the President of the Russian Federation on the Development of Artificial Intelligence in the Russian Federation in Basic Principles of the Development and Use of Artificial Intelligence Technologies, Oct 10, 2019

1. THE MAIN PRIORITY OF THE DEVELOPMENT OF AI TECHNOLOGIES IS PROTECTING THE INTERESTS AND RIGHTS OF HUMAN BEINGS COLLECTIVELY AND AS INDIVIDUALS

1.1. Human-centered and humanistic approach. In the development of AI technologies, the rights and freedoms of the individual should be given the greatest value. AI technologies developed by AI Actors should promote or not hinder the realization of humans’ capabilities to achieve harmony in social, economic and spiritual spheres, as well as in the highest self-fulfillment of human beings. They should take into account key values such as the preservation and development of human cognitive abilities and creative potential; the preservation of moral, spiritual and cultural values; the promotion of cultural and linguistic diversity and identity; and the preservation of traditions and the foundations of nations, peoples and ethnic and social groups. A human-centered and humanistic approach is the basic ethical principle and central criterion for assessing the ethical behavior of AI Actors, which are listed in section 2 of this Code.

1.2. Respect for human autonomy and freedom of will. AI Actors should take all necessary measures to preserve the autonomy and free will of a human’s decision-making ability, the right to choose, and, in general, the intellectual abilities of a human as an intrinsic value and a system-forming factor of modern civilization. During the creation of an AIS, AI Actors should assess the possible negative consequences for the development of human cognitive abilities and prevent the development of AIS that purposefully cause such consequences.

1.3. Compliance with the law. AI Actors must know and comply with the provisions of the legislation of the Russian Federation in all areas of their activities and at all stages of the creation, development and use of AI technologies, including in matters of the legal responsibility of AI Actors.

1.4. Non-discrimination. To ensure fairness and non-discrimination, AI Actors should take measures to verify that the algorithms, datasets and processing methods for machine learning that are used to group and/or classify data concerning individuals or groups do not intentionally discriminate. AI Actors are encouraged to create and apply methods and software solutions that identify and prevent discrimination based on race, nationality, gender, political views, religious beliefs, age, social and economic status, or information about private life. (At the same time, rules that are explicitly declared by an AI Actor for the functioning or application of an AIS for different groups of users, with such factors taken into account for segmentation, cannot be considered discrimination.)

1.5. Assessment of risks and humanitarian impact. AI Actors are encouraged to assess the potential risks of using an AIS, including the social consequences for individuals, society and the state, as well as the humanitarian impact of the AIS on human rights and freedoms at different stages, including during the formation and use of datasets. AI Actors should also carry out long-term monitoring of the manifestations of such risks and take into account the complexity of the behavior of AIS during risk assessment, including the relationship and interdependence of processes in the AIS’s life cycle. For critical applications of the AIS, in special cases, it is encouraged that a risk assessment be conducted with the involvement of a neutral third party or an authorized official body, provided this does not harm the performance and information security of the AIS and ensures the protection of the intellectual property and trade secrets of the developer.

Published by AI Alliance Russia in Artificial Intelligence Code of Ethics, Oct 26, 2021

Preamble: Our intent for the ethical use of AI in Defence

The MOD is committed to developing and deploying AI-enabled systems responsibly, in ways that build trust and consensus, setting international standards for the ethical use of AI for Defence. The MOD will develop and deploy AI-enabled systems for purposes that are demonstrably beneficial: driving operational improvements, supporting the Defence Purpose, and upholding human rights and democratic values. The MOD’s existing obligations under UK law and international law, including, as applicable, international humanitarian law (IHL) and international human rights law, act as a foundation for Defence’s development, deployment and operation of AI-enabled systems. These ethical principles do not affect or supersede existing legal obligations. Instead, they set out an ethical framework which will guide Defence’s approach to adopting AI, in line with rigorous existing codes of conduct and regulations. These principles are applicable across the full spectrum of use cases for AI in Defence, from battlespace to back office, and across the entire lifecycle of these systems.

Published by The Ministry of Defence (MOD), United Kingdom in Ethical Principles for AI in Defence, Jun 15, 2022