3. Design for all
Published by: The European Commission’s High-Level Expert Group on Artificial Intelligence in Draft Ethics Guidelines for Trustworthy AI
Systems should be designed in a way that allows all citizens to use the products or services, regardless of their age, disability status or social status. It is particularly important to consider the accessibility of AI products and services for people with disabilities, who form a horizontal category of society, present in all societal groups independently of gender, age or nationality. AI applications should hence not take a one-size-fits-all approach, but be user-centric and consider the whole range of human abilities, skills and requirements. Design for all implies the accessibility and usability of technologies by anyone at any place and at any time, ensuring their inclusion in any living context and thus enabling equitable access and active participation of potentially all people in existing and emerging computer-mediated human activities. This requirement links to the United Nations Convention on the Rights of Persons with Disabilities.
2 RESPECT FOR AUTONOMY PRINCIPLE
AIS must be developed and used while respecting people’s autonomy, and with the goal of increasing people’s control over their lives and their surroundings.
1) AIS must allow individuals to fulfill their own moral objectives and their conception of a life worth living.
2) AIS must not be developed or used to impose a particular lifestyle on individuals, whether directly or indirectly, by implementing oppressive surveillance and evaluation or incentive mechanisms.
3) Public institutions must not use AIS to promote or discredit a particular conception of the good life.
4) It is crucial to empower citizens regarding digital technologies by ensuring access to the relevant forms of knowledge, promoting the learning of fundamental skills (digital and media literacy), and fostering the development of critical thinking.
5) AIS must not be developed to spread untrustworthy information, lies, or propaganda, and should be designed with a view to containing their dissemination.
6) The development of AIS must avoid creating dependencies through attention capturing techniques or the imitation of human characteristics (appearance, voice, etc.) in ways that could cause confusion between AIS and humans.
3 PROTECTION OF PRIVACY AND INTIMACY PRINCIPLE
Privacy and intimacy must be protected from intrusion by AIS and by data acquisition and archiving systems (DAAS).
1) Personal spaces in which people are not subjected to surveillance or digital evaluation must be protected from the intrusion of AIS and data acquisition and archiving systems (DAAS).
2) The intimacy of thoughts and emotions must be strictly protected from AIS and DAAS uses capable of causing harm, especially uses that impose moral judgments on people or their lifestyle choices.
3) People must always have the right to digital disconnection in their private lives, and AIS should explicitly offer the option to disconnect at regular intervals, without encouraging people to stay connected.
4) People must have extensive control over information regarding their preferences. AIS must not create individual preference profiles to influence the behavior of the individuals without their free and informed consent.
5) DAAS must guarantee data confidentiality and personal profile anonymity.
6) Every person must be able to exercise extensive control over their personal data, especially when it comes to its collection, use, and dissemination. Access to AIS and digital services by individuals must not be made conditional on their abandoning control or ownership of their personal data.
7) Individuals should be free to donate their personal data to research organizations in order to contribute to the advancement of knowledge.
8) The integrity of one’s personal identity must be guaranteed. AIS must not be used to imitate or alter a person’s appearance, voice, or other individual characteristics in order to damage that person’s reputation or manipulate other people.
5 DEMOCRATIC PARTICIPATION PRINCIPLE
AIS must meet intelligibility, justifiability, and accessibility criteria, and must be subjected to democratic scrutiny, debate, and control.
1) AIS processes that make decisions affecting a person’s life, quality of life, or reputation must be intelligible to their creators.
2) The decisions made by AIS affecting a person’s life, quality of life, or reputation should always be justifiable in a language that is understood by the people who use them or who are subjected to the consequences of their use. Justification consists in making transparent the most important factors and parameters shaping the decision, and should take the same form as the justification we would demand of a human making the same kind of decision.
3) The code for algorithms, whether public or private, must always be accessible to the relevant public authorities and stakeholders for verification and control purposes.
4) The discovery of AIS operating errors, unexpected or undesirable effects, security breaches, and data leaks must imperatively be reported to the relevant public authorities, stakeholders, and those affected by the situation.
5) In accordance with the transparency requirement for public decisions, the code for decision-making algorithms used by public authorities must be accessible to all, with the exception of algorithms that present a high risk of serious danger if misused.
6) For public AIS that have a significant impact on the life of citizens, citizens should have the opportunity and skills to deliberate on the social parameters of these AIS, their objectives, and the limits of their use.
7) We must at all times be able to verify that AIS are doing what they were programmed to do and what they are used for.
8) Any person using a service should know if a decision concerning them or affecting them was made by an AIS.
9) Any user of a service employing chatbots should be able to easily identify whether they are interacting with an AIS or a real person.
10) Artificial intelligence research should remain open and accessible to all.
6 EQUITY PRINCIPLE
The development and use of AIS must contribute to the creation of a just and equitable society.
1) AIS must be designed and trained so as not to create, reinforce, or reproduce discrimination based on — among other things — social, sexual, ethnic, cultural, or religious differences.
2) AIS development must help eliminate relationships of domination between groups and people based on differences of power, wealth, or knowledge.
3) AIS development must produce social and economic benefits for all by reducing social inequalities and vulnerabilities.
4) Industrial AIS development must be compatible with acceptable working conditions at every step of their life cycle, from natural resources extraction to recycling, and including data processing.
5) The digital activity of users of AIS and digital services should be recognized as labor that contributes to the functioning of algorithms and creates value.
6) Access to fundamental resources, knowledge and digital tools must be guaranteed for all.
7) We should support the development of commons algorithms — and of open data needed to train them — and expand their use, as a socially equitable objective.