(a) Human dignity
The principle of human dignity, understood as the recognition of the inherent human state of being worthy of respect, must not be violated by ‘autonomous’ technologies.
A relational conception of human dignity, characterised by our social relations, requires that we are aware of whether and when we are interacting with a machine or another human being, and that we reserve the right to vest certain tasks in either the human or the machine.
(b) Autonomy
The principle of autonomy implies the freedom of the human being.
This translates into human responsibility for, and thus control over and knowledge about, ‘autonomous’ systems: they must not impair the freedom of human beings to set their own standards and norms and to live according to them.
(c) Responsibility
The principle of responsibility implies that ‘autonomous’ systems should be designed so that their effects align with a plurality of fundamental human values and rights.
Applications of AI and robotics should not pose unacceptable risks of harm to human beings, nor compromise human freedom and autonomy by illegitimately and surreptitiously reducing citizens’ options and knowledge.
(e) Democracy
The right to receive education or to access information on new technologies and their ethical implications will help ensure that everyone understands the risks and opportunities involved and is empowered to participate in the decisional processes that crucially shape our future.
The principles of human dignity and autonomy centrally involve the human right to self-determination through the means of democracy.
These democratic processes and rights must not be jeopardised, subverted or equalised by new technologies that inhibit or influence political decision-making or that infringe on the freedom of expression and the right to receive and impart information without interference.
(f) Rule of law and accountability
The rule of law, access to justice, and the right to redress and a fair trial provide the necessary framework for ensuring the observance of human rights standards and of any potential AI-specific regulations.
This includes protections against risks stemming from ‘autonomous’ systems that could infringe human rights such as those to safety and privacy.
(g) Security, safety, bodily and mental integrity
AI developers must take all dimensions of safety into account and test them rigorously before release, in order to ensure that ‘autonomous’ systems do not infringe the human right to bodily and mental integrity and to a safe and secure environment.
(i) Sustainability
AI technology must be in line with the human responsibility to ensure the basic preconditions for life on our planet, the continued prospering of humankind, and the preservation of a good environment for future generations.