1 WELL-BEING PRINCIPLE
Publisher: University of Montreal
The development and use of artificial intelligence systems (AIS) must permit the growth of the well-being of all sentient beings.
1) AIS must help individuals improve their living conditions, their health, and their working conditions.
2) AIS must allow individuals to pursue their preferences, so long as they do not cause harm to other sentient beings.
3) AIS must allow people to exercise their mental and physical capacities.
4) AIS must not become a source of ill-being, unless it allows us to achieve a well-being superior to what one could otherwise attain.
5) AIS use should not contribute to increasing stress, anxiety, or a sense of being harassed by one’s digital environment.
2. The Principle of Non-maleficence: “Do no Harm”
Published by: The European Commission’s High-Level Expert Group on Artificial Intelligence in Draft Ethics Guidelines for Trustworthy AI
AI systems should not harm human beings. By design, AI systems should protect the dignity, integrity, liberty, privacy, safety, and security of human beings in society and at work. AI systems should not threaten the democratic process, freedom of expression, freedom of identity, or the possibility to refuse AI services. At the very least, AI systems should not be designed in a way that enhances existing harms or creates new harms for individuals. Harms can be physical, psychological, financial, or social. AI-specific harms may stem from the treatment of data on individuals (i.e. how it is collected, stored, used, etc.). To avoid harm, data must be collected and used for the training of AI algorithms in a way that avoids discrimination, manipulation, or negative profiling. Of equal importance, AI systems should be developed and implemented in a way that protects societies from ideological polarization and algorithmic determinism.
Vulnerable demographics (e.g. children, minorities, disabled persons, elderly persons, or immigrants) should receive greater attention to the prevention of harm, given their unique status in society. Inclusion and diversity are key ingredients for the prevention of harm, ensuring the suitability of these systems across cultures, genders, ages, life choices, etc. Therefore, not only should AI be designed with its impact on various vulnerable demographics in mind, but the above-mentioned demographics should also have a place in the design process (whether through testing, validation, or other means).
Avoiding harm may also be viewed in terms of harm to the environment and animals; thus, the development of environmentally friendly AI may be considered part of the principle of avoiding harm. The Earth’s resources can be valued in and of themselves or as a resource for humans to consume. In either case, it is necessary to ensure that the research, development, and use of AI are carried out with an eye toward environmental awareness.
2 RESPECT FOR AUTONOMY PRINCIPLE
AIS must be developed and used while respecting people’s autonomy, and with the goal of increasing people’s control over their lives and their surroundings.
1) AIS must allow individuals to fulfill their own moral objectives and their conception of a life worth living.
2) AIS must not be developed or used to impose a particular lifestyle on individuals, whether directly or indirectly, by implementing oppressive surveillance and evaluation or incentive mechanisms.
3) Public institutions must not use AIS to promote or discredit a particular conception of the good life.
4) It is crucial to empower citizens regarding digital technologies by ensuring access to the relevant forms of knowledge, promoting the learning of fundamental skills (digital and media literacy), and fostering the development of critical thinking.
5) AIS must not be developed to spread untrustworthy information, lies, or propaganda, and should be designed with a view to containing their dissemination.
6) The development of AIS must avoid creating dependencies through attention-capturing techniques or the imitation of human characteristics (appearance, voice, etc.) in ways that could cause confusion between AIS and humans.
3 PROTECTION OF PRIVACY AND INTIMACY PRINCIPLE
Privacy and intimacy must be protected from AIS intrusion and data acquisition and archiving systems (DAAS).
1) Personal spaces in which people are not subjected to surveillance or digital evaluation must be protected from the intrusion of AIS and data acquisition and archiving systems (DAAS).
2) The intimacy of thoughts and emotions must be strictly protected from AIS and DAAS uses capable of causing harm, especially uses that impose moral judgments on people or their lifestyle choices.
3) People must always have the right to digital disconnection in their private lives, and AIS should explicitly offer the option to disconnect at regular intervals, without encouraging people to stay connected.
4) People must have extensive control over information regarding their preferences. AIS must not create individual preference profiles to influence the behavior of the individuals without their free and informed consent.
5) DAAS must guarantee data confidentiality and personal profile anonymity.
6) Every person must be able to exercise extensive control over their personal data, especially when it comes to its collection, use, and dissemination. Access to AIS and digital services by individuals must not be made conditional on their abandoning control or ownership of their personal data.
7) Individuals should be free to donate their personal data to research organizations in order to contribute to the advancement of knowledge.
8) The integrity of one’s personal identity must be guaranteed. AIS must not be used to imitate or alter a person’s appearance, voice, or other individual characteristics in order to damage one’s reputation or manipulate other people.
4 SOLIDARITY PRINCIPLE
The development of AIS must be compatible with maintaining the bonds of solidarity among people and generations.
1) AIS must not threaten the preservation of fulfilling moral and emotional human relationships, and should be developed with the goal of fostering these relationships and reducing people’s vulnerability and isolation.
2) AIS must be developed with the goal of collaborating with humans on complex tasks and should foster collaborative work between humans.
3) AIS should not be implemented to replace people in duties that require quality human relationships, but should be developed to facilitate these relationships.
4) Health care systems that use AIS must take into consideration the importance of a patient’s relationships with family and health care staff.
5) AIS development should not encourage cruel behavior toward robots designed to resemble human beings or non-human animals in appearance or behavior.
6) AIS should help improve risk management and foster conditions for a society with a more equitable and mutual distribution of individual and collective risks.
5 DEMOCRATIC PARTICIPATION PRINCIPLE
AIS must meet intelligibility, justifiability, and accessibility criteria, and must be subjected to democratic scrutiny, debate, and control.
1) AIS processes that make decisions affecting a person’s life, quality of life, or reputation must be intelligible to their creators.
2) The decisions made by AIS affecting a person’s life, quality of life, or reputation should always be justifiable in a language that is understood by the people who use them or who are subjected to the consequences of their use. Justification consists in making transparent the most important factors and parameters shaping the decision, and should take the same form as the justification we would demand of a human making the same kind of decision.
3) The code for algorithms, whether public or private, must always be accessible to the relevant public authorities and stakeholders for verification and control purposes.
4) The discovery of AIS operating errors, unexpected or undesirable effects, security breaches, and data leaks must imperatively be reported to the relevant public authorities, stakeholders, and those affected by the situation.
5) In accordance with the transparency requirement for public decisions, the code for decision-making algorithms used by public authorities must be accessible to all, with the exception of algorithms that present a high risk of serious danger if misused.
6) For public AIS that have a significant impact on the life of citizens, citizens should have the opportunity and skills to deliberate on the social parameters of these AIS, their objectives, and the limits of their use.
7) We must at all times be able to verify that AIS are doing what they were programmed for and what they are used for.
8) Any person using a service should know if a decision concerning them or affecting them was made by an AIS.
9) Any user of a service employing chatbots should be able to easily identify whether they are interacting with an AIS or a real person.
10) Artificial intelligence research should remain open and accessible to all.