8 PRUDENCE PRINCIPLE

Every person involved in AI development must exercise caution by anticipating, as far as possible, the adverse consequences of AIS use and by taking the appropriate measures to avoid them.
1) It is necessary to develop mechanisms that consider the potential for the double use, beneficial and harmful, of AI research and AIS development (whether public or private) in order to limit harmful uses.
2) When the misuse of an AIS endangers public health or safety and has a high probability of occurrence, it is prudent to restrict open access to, and public dissemination of, its algorithms.
3) Before being placed on the market, and whether they are offered for a fee or free of charge, AIS must meet strict reliability, security, and integrity requirements and be subjected to tests that do not put people’s lives in danger, harm their quality of life, or negatively impact their reputation or psychological integrity. These tests must be open to the relevant public authorities and stakeholders.
4) The development of AIS must preempt the risks of user data misuse and protect the integrity and confidentiality of personal data.
5) The errors and flaws discovered in AIS and SAAD should be publicly shared, on a global scale, by public institutions and businesses in sectors that pose a significant danger to personal integrity and social organization.
Principle: The Montreal Declaration for a Responsible Development of Artificial Intelligence, Dec 4, 2018

Published by University of Montreal

Related Principles

· 2. The Principle of Non-maleficence: “Do no Harm”

AI systems should not harm human beings. By design, AI systems should protect the dignity, integrity, liberty, privacy, safety, and security of human beings in society and at work. AI systems should not threaten the democratic process, freedom of expression, freedoms of identity, or the possibility to refuse AI services. At the very least, AI systems should not be designed in a way that enhances existing harms or creates new harms for individuals. Harms can be physical, psychological, financial, or social. AI-specific harms may stem from the treatment of data on individuals (i.e. how it is collected, stored, used, etc.). To avoid harm, data must be collected and used for the training of AI algorithms in a way that avoids discrimination, manipulation, or negative profiling. Of equal importance, AI systems should be developed and implemented in a way that protects societies from ideological polarization and algorithmic determinism. Vulnerable demographics (e.g. children, minorities, disabled persons, elderly persons, or immigrants) should receive greater attention in the prevention of harm, given their unique status in society. Inclusion and diversity are key ingredients for the prevention of harm, ensuring the suitability of these systems across cultures, genders, ages, life choices, etc. Therefore, not only should AI be designed with its impact on various vulnerable demographics in mind, but those demographics should also have a place in the design process (whether through testing, validating, or other roles). Avoiding harm may also be viewed in terms of harm to the environment and animals; thus the development of environmentally friendly AI may be considered part of the principle of avoiding harm. The Earth’s resources can be valued in and of themselves or as a resource for humans to consume. In either case, it is necessary to ensure that the research, development, and use of AI are done with an eye towards environmental awareness.

Published by The European Commission’s High-Level Expert Group on Artificial Intelligence in Draft Ethics Guidelines for Trustworthy AI, Dec 18, 2018

5. The autonomous power to hurt, destroy or deceive human beings should never be vested in artificial intelligence.

There is a significant risk that well-intended AI research will be misused in ways which harm people. AI researchers and developers must consider the ethical implications of their work. The Cabinet Office's final Cyber Security & Technology Strategy must explicitly consider the risks of AI with respect to cyber security, and the Government should conduct further research into how to protect data sets from any attempts at data sabotage. The Government and Ofcom must commission research into the possible impact of AI on conventional and social media outlets, and investigate, as a matter of urgency, measures which might counteract the use of AI to mislead or distort public opinion.

Published by House of Lords, Select Committee on Artificial Intelligence in AI Code, Apr 16, 2018

5. Principle 5 — A/IS Technology Misuse and Awareness of It

Issue: How can we extend the benefits and minimize the risks of A/IS technology being misused?
[Candidate Recommendations] Raise public awareness around the issues of potential A/IS technology misuse in an informed and measured way by:
1. Providing ethics education and security awareness that sensitizes society to the potential risks of misuse of A/IS (e.g., by providing “data privacy” warnings that some smart devices will collect their users’ personal data).
2. Delivering this education in scalable and effective ways, beginning with those having the greatest credibility and impact, in ways that also minimize generalized (e.g., non-productive) fear about A/IS (e.g., via credible research institutions or think tanks using social media such as Facebook or YouTube).
3. Educating government, lawmakers, and enforcement agencies surrounding these issues so citizens work collaboratively with them to avoid fear or confusion (e.g., in the same way police officers have given public safety lectures in schools for years; in the near future they could provide workshops on safe A/IS).

Published by The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems in Ethically Aligned Design (v2): General Principles, (v1) Dec 13, 2016; (v2) Dec 12, 2017

5 DEMOCRATIC PARTICIPATION PRINCIPLE

AIS must meet intelligibility, justifiability, and accessibility criteria, and must be subjected to democratic scrutiny, debate, and control.
1) AIS processes that make decisions affecting a person’s life, quality of life, or reputation must be intelligible to their creators.
2) The decisions made by AIS affecting a person’s life, quality of life, or reputation should always be justifiable in a language that is understood by the people who use them or who are subjected to the consequences of their use. Justification consists in making transparent the most important factors and parameters shaping the decision, and should take the same form as the justification we would demand of a human making the same kind of decision.
3) The code for algorithms, whether public or private, must always be accessible to the relevant public authorities and stakeholders for verification and control purposes.
4) The discovery of AIS operating errors, unexpected or undesirable effects, security breaches, and data leaks must imperatively be reported to the relevant public authorities, stakeholders, and those affected by the situation.
5) In accordance with the transparency requirement for public decisions, the code for decision-making algorithms used by public authorities must be accessible to all, with the exception of algorithms that present a high risk of serious danger if misused.
6) For public AIS that have a significant impact on the life of citizens, citizens should have the opportunity and skills to deliberate on the social parameters of these AIS, their objectives, and the limits of their use.
7) We must at all times be able to verify that AIS are doing what they were programmed for and what they are used for.
8) Any person using a service should know if a decision concerning them or affecting them was made by an AIS.
9) Any user of a service employing chatbots should be able to easily identify whether they are interacting with an AIS or a real person.
10) Artificial intelligence research should remain open and accessible to all.

Published by University of Montreal in The Montreal Declaration for a Responsible Development of Artificial Intelligence, Dec 4, 2018

6. Human Centricity and Well-being

a. To aim for an equitable distribution of the benefits of data practices and avoid data practices that disproportionately disadvantage vulnerable groups.
b. To aim to create the greatest possible benefit from the use of data and advanced modelling techniques.
c. To engage in data practices that encourage the practice of virtues that contribute to human flourishing, human dignity, and human autonomy.
d. To give weight to the considered judgements of people or communities affected by data practices and to be aligned with the values and ethical principles of the people or communities affected.
e. To make decisions that should cause no foreseeable harm to the individual, or should at least minimise such harm (in necessary circumstances, when weighed against the greater good).
f. To allow users to maintain control over the data being used, the context in which such data is being used, and the ability to modify that use and context.
g. To ensure that the overall well-being of the user is central to the AI system’s functionality.

Published by Personal Data Protection Commission (PDPC), Singapore in A compilation of existing AI ethical principles (Annex A), Jan 21, 2020