2. AI systems should be designed with respect for the rule of law, human rights, democratic values and diversity, and should include appropriate mechanisms – such as allowing human intervention where necessary – to ensure a fair and equitable society.

Principle: Recommendations for the treatment of personal data derived from the use of Artificial Intelligence, May 8, 2022

Published by National Institute of Transparency, Access to Information and Protection of Personal Data (INAI)

Related Principles

Human centred values

Throughout their lifecycle, AI systems should respect human rights, diversity, and the autonomy of individuals. This principle aims to ensure that AI systems are aligned with human values. Machines should serve humans, and not the other way around.

AI systems should enable an equitable and democratic society by respecting, protecting and promoting human rights, enabling diversity, respecting human freedom and the autonomy of individuals, and protecting the environment. Human rights risks need to be carefully considered, as AI systems can equally enable and hamper such fundamental rights. It's permissible to interfere with certain human rights where it's reasonable, necessary and proportionate.

All people interacting with AI systems should be able to keep full and effective control over themselves. AI systems should not undermine the democratic process, and should not undertake actions that threaten individual autonomy, like deception, unfair manipulation, unjustified surveillance, and failing to maintain alignment between a disclosed purpose and true action. AI systems should be designed to augment, complement and empower human cognitive, social and cultural skills.

Organisations designing, developing, deploying or operating AI systems should ideally hire staff from diverse backgrounds, cultures and disciplines to ensure a wide range of perspectives, and to minimise the risk of missing important considerations only noticeable by some stakeholders.

Published by Department of Industry, Innovation and Science, Australian Government in AI Ethics Principles, Nov 7, 2019

(f) Rule of law and accountability

Rule of law, access to justice and the right to redress and a fair trial provide the necessary framework for ensuring the observance of human rights standards and potential AI specific regulations. This includes protections against risks stemming from ‘autonomous’ systems that could infringe human rights, such as safety and privacy. The whole range of legal challenges arising in the field should be addressed with timely investment in the development of robust solutions that provide a fair and clear allocation of responsibilities and efficient mechanisms of binding law. In this regard, governments and international organisations ought to increase their efforts in clarifying with whom liabilities lie for damages caused by undesired behaviour of ‘autonomous’ systems. Moreover, effective harm mitigation systems should be in place.

Published by European Group on Ethics in Science and New Technologies, European Commission in Ethical principles and democratic prerequisites, Mar 9, 2018

1.2. Human centered values and fairness

a) AI actors should respect the rule of law, human rights and democratic values, throughout the AI system lifecycle. These include freedom, dignity and autonomy, privacy and data protection, non-discrimination and equality, diversity, fairness, social justice, and internationally recognized labor rights.

b) To this end, AI actors should implement mechanisms and safeguards, such as capacity for human determination, that are appropriate to the context and consistent with the state of the art.

Published by G20 Ministerial Meeting on Trade and Digital Economy in G20 AI Principles, Jun 09, 2019

1. Principle 1 — Human Rights

Issue: How can we ensure that A/IS do not infringe upon human rights?

[Candidate Recommendations] To best honor human rights, society must assure the safety and security of A/IS so that they are designed and operated in a way that benefits humans:

1. Governance frameworks, including standards and regulatory bodies, should be established to oversee processes assuring that the use of A/IS does not infringe upon human rights, freedoms, dignity, and privacy, and of traceability to contribute to the building of public trust in A/IS.

2. A way to translate existing and forthcoming legal obligations into informed policy and technical considerations is needed. Such a method should allow for differing cultural norms as well as legal and regulatory frameworks.

3. For the foreseeable future, A/IS should not be granted rights and privileges equal to human rights: A/IS should always be subordinate to human judgment and control.

Published by The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems in Ethically Aligned Design (v2): General Principles, (v1) Dec 13, 2016. (v2) Dec 12, 2017

1.2. Human centred values and fairness

a) AI actors should respect the rule of law, human rights and democratic values, throughout the AI system lifecycle. These include freedom, dignity and autonomy, privacy and data protection, non-discrimination and equality, diversity, fairness, social justice, and internationally recognised labour rights.

b) To this end, AI actors should implement mechanisms and safeguards, such as capacity for human determination, that are appropriate to the context and consistent with the state of the art.

Published by The Organisation for Economic Co-operation and Development (OECD) in OECD Principles on Artificial Intelligence, May 22, 2019