2. Right to Human Determination.

All individuals have the right to a final determination made by a person. [Explanatory Memorandum] The Right to a Human Determination reaffirms that individuals, and not machines, are responsible for automated decision making. In many instances, such as the operation of an autonomous vehicle, it would be neither possible nor practical to insert a human decision prior to an automated decision. But the aim remains to ensure accountability. Thus, where an automated system fails, this principle should be understood as requiring that a human assessment of the outcome be made.
Principle: Universal Guidelines for Artificial Intelligence, Oct 23, 2018

Published by The Public Voice coalition, established by the Electronic Privacy Information Center (EPIC)

Related Principles

Proportionality and harmlessness.

It should be recognised that AI technologies do not, in and of themselves, guarantee the prosperity of humans or of the environment and ecosystems. Where any harm to humans may occur, risk assessment procedures should be applied and measures taken to prevent that harm from occurring.

For a human person to be legally responsible for the decisions he or she makes in carrying out one or more actions, there must be discernment (full human mental faculties), intention (human drive or desire) and freedom (to act in a calculated and premeditated manner). Therefore, to avoid anthropomorphisms that could hinder eventual regulation and/or lead to wrong attributions, it is important to conceive of artificial intelligences as artifices: as technology, a thing, an artificial means of achieving human objectives that should not be confused with a human person. That is, the algorithm can execute, but the decision, and therefore the responsibility, must necessarily fall on the person. It follows that an algorithm possesses neither self-determination nor the agency to make decisions freely (although in colloquial language the term "decision" is often used to describe a classification executed by an algorithm after training), and it therefore cannot be held responsible for the actions executed through it.

Published by OFFICE OF THE CHIEF OF MINISTERS UNDERSECRETARY OF INFORMATION TECHNOLOGIES in Recommendations for reliable artificial intelligence, June 2, 2023

Transparency Principle

The elements of the Transparency Principle can be found in several modern privacy laws, including the US Privacy Act, the EU Data Protection Directive, the GDPR, and the Council of Europe Convention 108. The aim of this principle is to enable independent accountability for automated decisions, with a primary emphasis on the right of the individual to know the basis of an adverse determination. In practical terms, it may not be possible for an individual to interpret the basis of a particular decision, but this does not obviate the need to ensure that such an explanation is possible.

Published by Center for AI and Digital Policy in Universal Guidelines for AI, Oct 2018

Human Determination

The Right to a Human Determination reaffirms that individuals, and not machines, are responsible for automated decision making. In many instances, such as the operation of an autonomous vehicle, it would be neither possible nor practical to insert a human decision prior to an automated decision. But the aim remains to ensure accountability. Thus, where an automated system fails, this principle should be understood as requiring that a human assessment of the outcome be made.

Published by Center for AI and Digital Policy in Universal Guidelines for AI, Oct 2018

1. Right to Transparency.

All individuals have the right to know the basis of an AI decision that concerns them. This includes access to the factors, the logic, and techniques that produced the outcome. [Explanatory Memorandum] The elements of the Transparency Principle can be found in several modern privacy laws, including the US Privacy Act, the EU Data Protection Directive, the GDPR, and the Council of Europe Convention 108. The aim of this principle is to enable independent accountability for automated decisions, with a primary emphasis on the right of the individual to know the basis of an adverse determination. In practical terms, it may not be possible for an individual to interpret the basis of a particular decision, but this does not obviate the need to ensure that such an explanation is possible.

Published by The Public Voice coalition, established by the Electronic Privacy Information Center (EPIC), in Universal Guidelines for Artificial Intelligence, Oct 23, 2018

1. Demand That AI Systems Are Transparent

A transparent artificial intelligence system is one in which it is possible to discover how, and why, the system made a decision, or in the case of a robot, acted the way it did. In particular:

A. We stress that open source code is neither necessary nor sufficient for transparency; clarity cannot be obfuscated by complexity.

B. For users, transparency is important because it builds trust in, and understanding of, the system, by providing a simple way for the user to understand what the system is doing and why.

C. For validation and certification of an AI system, transparency is important because it exposes the system’s processes for scrutiny.

D. If accidents occur, the AI will need to be transparent and accountable to an accident investigator, so that the internal process that led to the accident can be understood.

E. Workers must have the right to demand transparency in the decisions and outcomes of AI systems as well as the underlying algorithms (see principle 4 below). This includes the right to appeal decisions made by AI algorithms and to have them reviewed by a human being.

F. Workers must be consulted on AI systems’ implementation, development and deployment.

G. Following an accident, judges, juries, lawyers, and expert witnesses involved in the trial process require transparency and accountability to inform evidence and decision making.

The principle of transparency is a prerequisite for ascertaining that the remaining principles are observed. See Principle 2 below for an operational solution.

Published by UNI Global Union in Top 10 Principles For Ethical Artificial Intelligence, Dec 11, 2017