I. Human agency and oversight
AI systems should support individuals in making better, more informed choices in accordance with their goals. They should act as enablers of a flourishing and equitable society by supporting human agency and fundamental rights, and should not decrease, limit, or misguide human autonomy. The overall well-being of the user should be central to the system's functionality.
Human oversight helps ensure that an AI system does not undermine human autonomy or cause other adverse effects. Depending on the specific AI-based system and its application area, appropriate degrees of control measures, including the adaptability, accuracy, and explainability of AI-based systems, should be ensured. Oversight may be achieved through governance mechanisms such as a human-in-the-loop, human-on-the-loop, or human-in-command approach. It must be ensured that public authorities are able to exercise their oversight powers in line with their mandates. All other things being equal, the less oversight a human can exercise over an AI system, the more extensive the testing and the stricter the governance required.
(a) Human dignity
The principle of human dignity, understood as the recognition of the inherent human state of being worthy of respect, must not be violated by ‘autonomous’ technologies. This means, for instance, that there are limits to determinations and classifications concerning persons made on the basis of algorithms and ‘autonomous’ systems, especially when those affected by them are not informed about them. It also implies that there must be (legal) limits to the ways in which people can be led to believe that they are dealing with human beings when in fact they are dealing with algorithms and smart machines. A relational conception of human dignity, characterised by our social relations, requires that we are aware of whether and when we are interacting with a machine or another human being, and that we reserve the right to vest certain tasks in the human or the machine.
3. The Principle of Autonomy: “Preserve Human Agency”
Published by: The European Commission’s High-Level Expert Group on Artificial Intelligence in Draft Ethics Guidelines for Trustworthy AI
Autonomy of human beings in the context of AI development means freedom from subordination to, or coercion by, AI systems. Human beings interacting with AI systems must retain full and effective self-determination. For a consumer or user of an AI system, this entails a right to decide whether to be subject to direct or indirect AI decision-making, a right to knowledge of direct or indirect interaction with AI systems, a right to opt out, and a right of withdrawal.
Self-determination in many instances requires assistance from governmental or non-governmental organizations to ensure that individuals and minorities are afforded opportunities similar to those available under the status quo. Furthermore, to ensure human agency, systems should be in place to assign responsibility and accountability. It is paramount that AI not undermine the need for human responsibility in protecting fundamental rights.
How can AI contribute to greater autonomy for human beings?
Must we fight against the phenomenon of attention-seeking which has accompanied advances in AI?
Should we be worried that humans prefer the company of AI to that of other humans or animals?
Can someone give informed consent when faced with increasingly complex autonomous technologies?
Must we limit the autonomy of intelligent computer systems? Should a human always make the final decision?
The development of AI should promote the autonomy of all human beings and control, in a responsible way, the autonomy of computer systems.
5. DEMOCRATIC PARTICIPATION PRINCIPLE
AIS must meet intelligibility, justifiability, and accessibility criteria, and must be subjected to democratic scrutiny, debate, and control.
1) AIS processes that make decisions affecting a person’s life, quality of life, or reputation must be intelligible to their creators.
2) The decisions made by AIS affecting a person’s life, quality of life, or reputation should always be justifiable in a language that is understood by the people who use them or who are subjected to the consequences of their use. Justification consists in making transparent the most important factors and parameters shaping the decision, and should take the same form as the justification we would demand of a human making the same kind of decision.
3) The code for algorithms, whether public or private, must always be accessible to the relevant public authorities and stakeholders for verification and control purposes.
4) The discovery of AIS operating errors, unexpected or undesirable effects, security breaches, and data leaks must be reported to the relevant public authorities, stakeholders, and those affected by the situation.
5) In accordance with the transparency requirement for public decisions, the code for decision making algorithms used by public authorities must be accessible to all, with the exception of algorithms that present a high risk of serious danger if misused.
6) For public AIS that have a significant impact on the life of citizens, citizens should have the opportunity and skills to deliberate on the social parameters of these AIS, their objectives, and the limits of their use.
7) We must at all times be able to verify that AIS are doing what they were programmed to do and what they are used for.
8) Any person using a service should know if a decision concerning them or affecting them was made by an AIS.
9) Any user of a service employing chatbots should be able to easily identify whether they are interacting with an AIS or a real person.
10) Artificial intelligence research should remain open and accessible to all.