9 RESPONSIBILITY PRINCIPLE

The development and use of AIS must not contribute to lessening the responsibility of human beings when decisions must be made.
1) Only human beings can be held responsible for decisions stemming from recommendations made by AIS, and the actions that proceed therefrom.
2) In all areas where a decision that affects a person’s life, quality of life, or reputation must be made, where time and circumstance permit, the final decision must be taken by a human being, and that decision should be free and informed.
3) The decision to kill must always be made by human beings, and responsibility for this decision must not be transferred to an AIS.
4) People who authorize AIS to commit a crime or an offence, or demonstrate negligence by allowing AIS to commit them, are responsible for this crime or offence.
5) When damage or harm has been inflicted by an AIS, and the AIS is proven to be reliable and to have been used as intended, it is not reasonable to place blame on the people involved in its development or use.
Published by University of Montreal in The Montreal Declaration for a Responsible Development of Artificial Intelligence, Dec 4, 2018

Related Principles

Responsible Deployment

Principle: The capacity of an AI agent to act autonomously, and to adapt its behavior over time without human direction, calls for significant safety checks before deployment, and ongoing monitoring.

Recommendations:
Humans must be in control: Any autonomous system must allow for a human to interrupt an activity or shut down the system (an “off switch”). There may also be a need to incorporate human checks on new decision-making strategies in AI system design, especially where the risk to human life and safety is great.
Make safety a priority: Any deployment of an autonomous system should be extensively tested beforehand to ensure the AI agent’s safe interaction with its environment (digital or physical) and that it functions as intended. Autonomous systems should be monitored while in operation, and updated or corrected as needed.
Privacy is key: AI systems must be data responsible. They should use only what they need and delete it when it is no longer needed (“data minimization”). They should encrypt data in transit and at rest, and restrict access to authorized persons (“access control”). AI systems should only collect, use, share and store data in accordance with privacy and personal data laws and best practices.
Think before you act: Careful thought should be given to the instructions and data provided to AI systems. AI systems should not be trained with data that is biased, inaccurate, incomplete or misleading.
If they are connected, they must be secured: AI systems that are connected to the Internet should be secured not only for their protection, but also to protect the Internet from malfunctioning or malware-infected AI systems that could become the next generation of botnets. High standards of device, system and network security should be applied.
Responsible disclosure: Security researchers acting in good faith should be able to responsibly test the security of AI systems without fear of prosecution or other legal action. At the same time, researchers and others who discover security vulnerabilities or other design flaws should responsibly disclose their findings to those who are in the best position to fix the problem.

Published by Internet Society in "Artificial Intelligence and Machine Learning: Policy Paper", Guiding Principles and Recommendations, Apr 18, 2017

5 DEMOCRATIC PARTICIPATION PRINCIPLE

AIS must meet intelligibility, justifiability, and accessibility criteria, and must be subjected to democratic scrutiny, debate, and control.
1) AIS processes that make decisions affecting a person’s life, quality of life, or reputation must be intelligible to their creators.
2) The decisions made by AIS affecting a person’s life, quality of life, or reputation should always be justifiable in a language that is understood by the people who use them or who are subjected to the consequences of their use. Justification consists in making transparent the most important factors and parameters shaping the decision, and should take the same form as the justification we would demand of a human making the same kind of decision.
3) The code for algorithms, whether public or private, must always be accessible to the relevant public authorities and stakeholders for verification and control purposes.
4) The discovery of AIS operating errors, unexpected or undesirable effects, security breaches, and data leaks must imperatively be reported to the relevant public authorities, stakeholders, and those affected by the situation.
5) In accordance with the transparency requirement for public decisions, the code for decision-making algorithms used by public authorities must be accessible to all, with the exception of algorithms that present a high risk of serious danger if misused.
6) For public AIS that have a significant impact on the life of citizens, citizens should have the opportunity and skills to deliberate on the social parameters of these AIS, their objectives, and the limits of their use.
7) We must at all times be able to verify that AIS are doing what they were programmed for and what they are used for.
8) Any person using a service should know if a decision concerning them or affecting them was made by an AIS.
9) Any user of a service employing chatbots should be able to easily identify whether they are interacting with an AIS or a real person.
10) Artificial intelligence research should remain open and accessible to all.

Published by University of Montreal in The Montreal Declaration for a Responsible Development of Artificial Intelligence, Dec 4, 2018

8 PRUDENCE PRINCIPLE

Every person involved in AI development must exercise caution by anticipating, as far as possible, the adverse consequences of AIS use and by taking the appropriate measures to avoid them.
1) It is necessary to develop mechanisms that consider the potential for double use, both beneficial and harmful, of AI research and AIS development (whether public or private) in order to limit harmful uses.
2) When the misuse of an AIS endangers public health or safety and has a high probability of occurrence, it is prudent to restrict open access to and public dissemination of its algorithm.
3) Before being placed on the market, and whether they are offered for a charge or for free, AIS must meet strict reliability, security, and integrity requirements and be subjected to tests that do not put people’s lives in danger, harm their quality of life, or negatively impact their reputation or psychological integrity. These tests must be open to the relevant public authorities and stakeholders.
4) The development of AIS must preempt the risks of user data misuse and protect the integrity and confidentiality of personal data.
5) The errors and flaws discovered in AIS and SAAD should be publicly shared, on a global scale, by public institutions and businesses in sectors that pose a significant danger to personal integrity and social organization.

Published by University of Montreal in The Montreal Declaration for a Responsible Development of Artificial Intelligence, Dec 4, 2018

2. Right to Human Determination.

All individuals have the right to a final determination made by a person. [Explanatory Memorandum] The Right to a Human Determination reaffirms that individuals and not machines are responsible for automated decision making. In many instances, such as the operation of an autonomous vehicle, it would not be possible or practical to insert a human decision prior to an automated decision. But the aim remains to ensure accountability. Thus where an automated system fails, this principle should be understood as a requirement that a human assessment of the outcome be made.

Published by The Public Voice coalition, established by the Electronic Privacy Information Center (EPIC), in Universal Guidelines for Artificial Intelligence, Oct 23, 2018

4 Foster responsibility and accountability

Humans require clear, transparent specification of the tasks that systems can perform and the conditions under which they can achieve the desired level of performance; this helps to ensure that health care providers can use an AI technology responsibly. Although AI technologies perform specific tasks, it is the responsibility of human stakeholders to ensure that they can perform those tasks and that they are used under appropriate conditions.

Responsibility can be assured by application of “human warranty”, which implies evaluation by patients and clinicians in the development and deployment of AI technologies. In human warranty, regulatory principles are applied upstream and downstream of the algorithm by establishing points of human supervision. The critical points of supervision are identified by discussions among professionals, patients and designers. The goal is to ensure that the algorithm remains on a machine learning development path that is medically effective, can be interrogated and is ethically responsible; it involves active partnership with patients and the public, such as meaningful public consultation and debate (101). Ultimately, such work should be validated by regulatory agencies or other supervisory authorities.

When something does go wrong in the application of an AI technology, there should be accountability. Appropriate mechanisms should be adopted to ensure questioning by, and redress for, individuals and groups adversely affected by algorithmically informed decisions. This should include access to prompt, effective remedies and redress from governments and companies that deploy AI technologies for health care. Redress should include compensation, rehabilitation, restitution, sanctions where necessary and a guarantee of non-repetition.

The use of AI technologies in medicine requires attribution of responsibility within complex systems in which responsibility is distributed among numerous agents. When medical decisions by AI technologies harm individuals, responsibility and accountability processes should clearly identify the relative roles of manufacturers and clinical users in the harm. This is an evolving challenge and remains unsettled in the laws of most countries. Institutions have not only legal liability but also a duty to assume responsibility for decisions made by the algorithms they use, even if it is not feasible to explain in detail how the algorithms produce their results.

To avoid diffusion of responsibility, in which “everybody’s problem becomes nobody’s responsibility”, a faultless responsibility model (“collective responsibility”), in which all the agents involved in the development and deployment of an AI technology are held responsible, can encourage all actors to act with integrity and minimize harm. In such a model, the actual intentions of each agent (or actor) or their ability to control an outcome are not considered.

Published by World Health Organization (WHO) in Key ethical principles for use of artificial intelligence for health, Jun 28, 2021