2. The Principle of Non-maleficence: “Do No Harm”
Published by: The European Commission’s High-Level Expert Group on Artificial Intelligence in Draft Ethics Guidelines for Trustworthy AI
AI systems should not harm human beings. By design, AI systems should protect the dignity, integrity, liberty, privacy, safety, and security of human beings in society and at work. AI systems should not threaten the democratic process, freedom of expression, freedom of identity, or the possibility to refuse AI services. At the very least, AI systems should not be designed in a way that enhances existing harms or creates new harms for individuals. Harms can be physical, psychological, financial, or social. AI-specific harms may stem from the treatment of data on individuals (i.e. how it is collected, stored, used, etc.). To avoid harm, data must be collected and used for the training of AI algorithms in a way that avoids discrimination, manipulation, or negative profiling. Of equal importance, AI systems should be developed and implemented in a way that protects societies from ideological polarization and algorithmic determinism.
Vulnerable demographics (e.g. children, minorities, disabled persons, elderly persons, or immigrants) should receive greater attention in the prevention of harm, given their unique status in society. Inclusion and diversity are key ingredients for the prevention of harm, ensuring the suitability of these systems across cultures, genders, ages, life choices, etc. Therefore, not only should AI be designed with its impact on various vulnerable demographics in mind, but these demographics should also have a place in the design process (whether through testing, validation, or other means).
Avoiding harm may also be viewed in terms of harm to the environment and animals; thus the development of environmentally friendly AI may be considered part of the principle of avoiding harm. The Earth’s resources can be valued in and of themselves or as a resource for humans to consume. In either case, it is necessary to ensure that the research, development, and use of AI are done with an eye towards environmental awareness.
1 WELL-BEING PRINCIPLE
The development and use of artificial intelligence systems (AIS) must permit the growth of the well-being of all sentient beings.
1) AIS must help individuals improve their living conditions, their health, and their working conditions.
2) AIS must allow individuals to pursue their preferences, so long as they do not cause harm to other sentient beings.
3) AIS must allow people to exercise their mental and physical capacities.
4) AIS must not become a source of ill-being, unless doing so allows us to achieve a well-being superior to what one could otherwise attain.
5) AIS use should not contribute to increasing stress, anxiety, or a sense of being harassed by one’s digital environment.
2 RESPECT FOR AUTONOMY PRINCIPLE
AIS must be developed and used while respecting people’s autonomy, and with the goal of increasing people’s control over their lives and their surroundings.
1) AIS must allow individuals to fulfill their own moral objectives and their conception of a life worth living.
2) AIS must not be developed or used to impose a particular lifestyle on individuals, whether directly or indirectly, by implementing oppressive surveillance and evaluation or incentive mechanisms.
3) Public institutions must not use AIS to promote or discredit a particular conception of the good life.
4) It is crucial to empower citizens regarding digital technologies by ensuring access to the relevant forms of knowledge, promoting the learning of fundamental skills (digital and media literacy), and fostering the development of critical thinking.
5) AIS must not be developed to spread untrustworthy information, lies, or propaganda, and should be designed with a view to containing their dissemination.
6) The development of AIS must avoid creating dependencies through attention-capturing techniques or the imitation of human characteristics (appearance, voice, etc.) in ways that could cause confusion between AIS and humans.
6 EQUITY PRINCIPLE
The development and use of AIS must contribute to the creation of a just and equitable society.
1) AIS must be designed and trained so as not to create, reinforce, or reproduce discrimination based on — among other things — social, sexual, ethnic, cultural, or religious differences.
2) AIS development must help eliminate relationships of domination between groups and people based on differences of power, wealth, or knowledge.
3) AIS development must produce social and economic benefits for all by reducing social inequalities and vulnerabilities.
4) Industrial AIS development must be compatible with acceptable working conditions at every step of their life cycle, from natural resources extraction to recycling, and including data processing.
5) The digital activity of users of AIS and digital services should be recognized as labor that contributes to the functioning of algorithms and creates value.
6) Access to fundamental resources, knowledge and digital tools must be guaranteed for all.
7) We should support the development of commons algorithms — and of open data needed to train them — and expand their use, as a socially equitable objective.
7 DIVERSITY INCLUSION PRINCIPLE
The development and use of AIS must be compatible with maintaining social and cultural diversity and must not restrict the scope of lifestyle choices or personal experiences.
1) AIS development and use must not lead to the homogenization of society through the standardization of behaviours and opinions.
2) From the moment algorithms are conceived, AIS development and deployment must take into consideration the multitude of expressions of social and cultural diversity present in society.
3) AI development environments, whether in research or industry, must be inclusive and reflect the diversity of the individuals and groups in society.
4) AIS must avoid using acquired data to lock individuals into a user profile, fix their personal identity, or confine them to a filter bubble, which would restrict and confine their possibilities for personal development — especially in fields such as education, justice, or business.
5) AIS must not be developed or used with the aim of limiting the free expression of ideas or the opportunity to hear diverse opinions, both of which are essential conditions of a democratic society.
6) For each service category, the AIS offering must be diversified to prevent de facto monopolies from forming and undermining individual freedoms.