4. (Humans will need) Judgment and accountability

We may be willing to accept a computer-generated diagnosis or legal decision, but we will still expect a human to be ultimately accountable for the outcomes.
Principle: 10 AI rules, Jun 28, 2016

Published by Satya Nadella, CEO of Microsoft

Related Principles

Proportionality and harmlessness

It should be recognised that AI technologies do not, in and of themselves, guarantee the prosperity of humans or of the environment and ecosystems. Where harm to humans may occur, risk assessment procedures should be applied and measures taken to prevent that harm. Moreover, for a human person to be legally responsible for the decisions he or she makes in carrying out one or more actions, there must be discernment (full human mental faculties), intention (human drive or desire) and freedom (to act in a calculated and premeditated manner). To avoid anthropomorphisms that could hinder eventual regulation or lead to wrong attributions of responsibility, it is therefore important to conceive of artificial intelligences as artifices: as technology, a thing, an artificial means of achieving human objectives that should not be confused with a human person. The algorithm can execute, but the decision, and therefore the responsibility, must necessarily fall on the person. It follows that an algorithm possesses neither self-determination nor the agency to make decisions freely (even though colloquial language often uses "decision" to describe a classification executed by a trained algorithm), and therefore it cannot be held responsible for the actions executed through it.

Published by OFFICE OF THE CHIEF OF MINISTERS UNDERSECRETARY OF INFORMATION TECHNOLOGIES in Recommendations for reliable artificial intelligence, June 2, 2023

5 DEMOCRATIC PARTICIPATION PRINCIPLE

AIS must meet intelligibility, justifiability, and accessibility criteria, and must be subjected to democratic scrutiny, debate, and control.
1) AIS processes that make decisions affecting a person’s life, quality of life, or reputation must be intelligible to their creators.
2) The decisions made by AIS affecting a person’s life, quality of life, or reputation should always be justifiable in a language that is understood by the people who use them or who are subjected to the consequences of their use. Justification consists in making transparent the most important factors and parameters shaping the decision, and should take the same form as the justification we would demand of a human making the same kind of decision.
3) The code for algorithms, whether public or private, must always be accessible to the relevant public authorities and stakeholders for verification and control purposes.
4) The discovery of AIS operating errors, unexpected or undesirable effects, security breaches, and data leaks must imperatively be reported to the relevant public authorities, stakeholders, and those affected by the situation.
5) In accordance with the transparency requirement for public decisions, the code for decision-making algorithms used by public authorities must be accessible to all, with the exception of algorithms that present a high risk of serious danger if misused.
6) For public AIS that have a significant impact on the life of citizens, citizens should have the opportunity and skills to deliberate on the social parameters of these AIS, their objectives, and the limits of their use.
7) We must at all times be able to verify that AIS are doing what they were programmed for and what they are used for.
8) Any person using a service should know if a decision concerning them or affecting them was made by an AIS.
9) Any user of a service employing chatbots should be able to easily identify whether they are interacting with an AIS or a real person.
10) Artificial intelligence research should remain open and accessible to all.

Published by University of Montreal in The Montreal Declaration for a Responsible Development of Artificial Intelligence, Dec 4, 2018
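
Sub-principle 2 above asks that a justification make transparent the most important factors and parameters shaping a decision. As a minimal, illustrative sketch (not part of the Declaration), the Python below scores a toy linear decision rule and reports its top contributing factors in plain language; every name, weight, and threshold here is a hypothetical assumption.

```python
# Minimal sketch: surfacing the "most important factors and parameters
# shaping the decision". All names, weights, and thresholds are
# illustrative; a real system would use audited explanation tooling.

def justify_decision(weights: dict[str, float],
                     applicant: dict[str, float],
                     threshold: float,
                     top_k: int = 3) -> str:
    """Score a linear decision rule and report the top contributing factors."""
    contributions = {f: weights[f] * applicant.get(f, 0.0) for f in weights}
    score = sum(contributions.values())
    decision = "approved" if score >= threshold else "declined"
    # Rank factors by the absolute size of their contribution to the score.
    top = sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True)[:top_k]
    factors = ", ".join(f"{name} (contribution {value:+.2f})" for name, value in top)
    return (f"Decision: {decision} (score {score:.2f} vs. threshold {threshold}). "
            f"Main factors: {factors}.")

if __name__ == "__main__":
    weights = {"income": 0.5, "debt_ratio": -0.8, "years_employed": 0.3}
    applicant = {"income": 4.0, "debt_ratio": 2.5, "years_employed": 1.0}
    print(justify_decision(weights, applicant, threshold=0.0))
```

For a simple linear rule, the contributions are exact; for opaque models the same reporting shape would have to be filled by an approximate attribution method, which is precisely where the intelligibility criterion bites.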

9 RESPONSIBILITY PRINCIPLE

The development and use of AIS must not contribute to lessening the responsibility of human beings when decisions must be made.
1) Only human beings can be held responsible for decisions stemming from recommendations made by AIS, and for the actions that proceed therefrom.
2) In all areas where a decision that affects a person’s life, quality of life, or reputation must be made, where time and circumstance permit, the final decision must be taken by a human being, and that decision should be free and informed.
3) The decision to kill must always be made by human beings, and responsibility for this decision must not be transferred to an AIS.
4) People who authorize AIS to commit a crime or an offence, or who demonstrate negligence by allowing AIS to commit them, are responsible for this crime or offence.
5) When damage or harm has been inflicted by an AIS, and the AIS is proven to be reliable and to have been used as intended, it is not reasonable to place blame on the people involved in its development or use.

Published by University of Montreal in The Montreal Declaration for a Responsible Development of Artificial Intelligence, Dec 4, 2018
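
Sub-principles 1 and 2 describe a concrete design pattern: the AIS stays advisory, and the final decision is recorded against a named human being. The sketch below is one hypothetical way to express that; all class names and fields are assumptions, not part of the Declaration.

```python
# Minimal sketch: an AIS may recommend, but the final, attributable
# decision is recorded against a named human. All names are illustrative.

from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class Recommendation:
    action: str          # what the AIS suggests
    rationale: str       # machine-generated justification

@dataclass(frozen=True)
class FinalDecision:
    action: str
    decided_by: str      # a human being, never the AIS
    followed_ais: bool   # whether the human accepted the recommendation
    timestamp: str

def record_decision(rec: Recommendation, human: str, chosen_action: str) -> FinalDecision:
    """The human chooses; the record names the accountable person."""
    return FinalDecision(
        action=chosen_action,
        decided_by=human,
        followed_ais=(chosen_action == rec.action),
        timestamp=datetime.now(timezone.utc).isoformat(),
    )

rec = Recommendation(action="deny_parole", rationale="risk score 0.82")
decision = record_decision(rec, human="Judge A. Smith", chosen_action="grant_parole")
print(decision)
```

Keeping the human's choice and the AIS recommendation in the same record makes later accountability questions (did the person follow, override, or ignore the system?) answerable from the audit trail.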

2. AI must be held to account – and so must users

Users build a relationship with AI and start to trust it after just a few meaningful interactions. With trust comes responsibility, and AI needs to be held accountable for its actions and decisions, just like humans. Technology should not be allowed to become too clever to be accountable. We don’t accept this kind of behaviour from other ‘expert’ professions, so why should technology be the exception?

Published by Sage in The Ethics of Code: Developing AI for Business with Five Core Principles, Jun 27, 2017

1 Protect autonomy

Adoption of AI can lead to situations in which decision making could be, or is in fact, transferred to machines. The principle of autonomy requires that any extension of machine autonomy not undermine human autonomy. In the context of health care, this means that humans should remain in full control of health care systems and medical decisions. AI systems should be designed demonstrably and systematically to conform to the principles and human rights with which they cohere; more specifically, they should be designed to assist humans, whether they be medical providers or patients, in making informed decisions. Human oversight may depend on the risks associated with an AI system but should always be meaningful and should thus include effective, transparent monitoring of human values and moral considerations. In practice, this could include deciding whether to use an AI system for a particular health care decision, varying the level of human discretion and decision making, and developing AI technologies that can rank decisions when appropriate (as opposed to providing a single decision). These practices can ensure that a clinician can override decisions made by AI systems and that machine autonomy can be restricted and made “intrinsically reversible”.

Respect for autonomy also entails the related duties to protect privacy and confidentiality and to ensure informed, valid consent by adopting appropriate legal frameworks for data protection. These should be fully supported and enforced by governments and respected by companies and their system designers, programmers, database creators and others. AI technologies should not be used for experimentation on or manipulation of humans in a health care system without valid informed consent. The use of machine learning algorithms in diagnosis, prognosis and treatment plans should be incorporated into the process of informed and valid consent. Essential services should not be circumscribed or denied if an individual withholds consent, and additional incentives or inducements should not be offered by either governments or private parties to individuals who do provide consent.

Data protection laws are one means of safeguarding individual rights, and they place obligations on data controllers and data processors. Such laws are necessary to protect privacy and the confidentiality of patient data and to establish patients’ control over their data. Construed broadly, data protection laws should also make it easy for people to access their own health data and to move or share those data as they like. Because machine learning requires large amounts of data – big data – these laws are increasingly important.

Published by World Health Organization (WHO) in Key ethical principles for use of artificial intelligence for health, Jun 28, 2021
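
The clinician-override and “intrinsically reversible” requirements above suggest a familiar software pattern: apply machine-proposed changes only after human sign-off, and keep an undo history so any applied change can be rolled back. The sketch below is one hypothetical way to express that in Python; the class and method names are assumptions, not a WHO-specified interface.

```python
# Minimal sketch of meaningful human oversight: an AI proposal is applied
# only after clinician sign-off, and every applied change can be reversed.
# All names are hypothetical, not a WHO-specified API.

class ReversibleCarePlan:
    """Keeps an undo history so machine-initiated changes stay reversible."""

    def __init__(self) -> None:
        self._history: list[str] = []
        self.current: str | None = None

    def apply(self, proposal: str, approved_by_clinician: bool) -> None:
        """Apply a proposal only with explicit clinician approval."""
        if not approved_by_clinician:
            print(f"Proposal '{proposal}' held for clinician review; not applied.")
            return
        if self.current is not None:
            self._history.append(self.current)
        self.current = proposal
        print(f"Applied: {proposal}")

    def revert(self) -> None:
        """Clinician override: restore the previous state."""
        self.current = self._history.pop() if self._history else None
        print(f"Reverted; current plan: {self.current}")

plan = ReversibleCarePlan()
plan.apply("standard dosage", approved_by_clinician=True)
plan.apply("AI-suggested increased dosage", approved_by_clinician=True)
plan.revert()  # the clinician overrides the AI-driven change
```

The point of the pattern is that machine autonomy is bounded in both directions: nothing takes effect without a human decision, and anything that did take effect can be undone by one.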