(e) Democracy
The principles of human dignity and autonomy centrally involve the human right to self-determination through the means of democracy.
(f) Rule of law and accountability
Rule of law, access to justice and the right to redress and a fair trial provide the necessary framework for ensuring the observance of human rights standards and of potential AI-specific regulations.
This includes protections against risks stemming from ‘autonomous’ systems that could infringe human rights, such as threats to safety and privacy.
(g) Security, safety, bodily and mental integrity
All dimensions of safety must be taken into account by AI developers and strictly tested before release, in order to ensure that ‘autonomous’ systems do not infringe on the human right to bodily and mental integrity and to a safe and secure environment.
(i) Sustainability
AI technology must be in line with the human responsibility to ensure the basic preconditions for life on our planet, the continued prospering of humankind and the preservation of a good environment for future generations.