4. The Principle of Justice: “Be Fair”
Published by: The European Commission’s High-Level Expert Group on Artificial Intelligence in Draft Ethics Guidelines for Trustworthy AI
For the purposes of these Guidelines, the principle of justice requires that the development, use, and regulation of AI systems be fair. Developers and implementers need to ensure that individuals and minority groups remain free from bias, stigmatisation and discrimination. Additionally, the positives and negatives resulting from AI should be evenly distributed, avoiding placing vulnerable demographics in a position of greater vulnerability and striving for equal opportunity in terms of access to education, goods, services and technology amongst human beings, without discrimination. Justice also means that AI systems must provide users with effective redress if harm occurs, or effective remedy if data practices are no longer aligned with human beings’ individual or collective preferences. Lastly, the principle of justice requires that those developing or implementing AI be held to high standards of accountability. Humans might benefit from procedures enabling the benchmarking of AI performance against (ethical) expectations.
1. Principle 1 — Human Rights
Issue: How can we ensure that A/IS do not infringe upon human rights?
To best honor human rights, society must assure the safety and security of A/IS so that they are designed and operated in a way that benefits humans:
1. Governance frameworks, including standards and regulatory bodies, should be established to oversee processes that ensure the use of A/IS does not infringe upon human rights, freedoms, dignity, and privacy, and that traceability contributes to building public trust in A/IS.
2. A way to translate existing and forthcoming legal obligations into informed policy and technical considerations is needed. Such a method should allow for differing cultural norms as well as legal and regulatory frameworks.
3. For the foreseeable future, A/IS should not be granted rights and privileges equal to human rights: A/IS should always be subordinate to human judgment and control.
1.2. Human-centred values and fairness
a) AI actors should respect the rule of law, human rights and democratic values throughout the AI system lifecycle. These include freedom, dignity and autonomy, privacy and data protection, non-discrimination and equality, diversity, fairness, social justice, and internationally recognised labour rights.
b) To this end, AI actors should implement mechanisms and safeguards, such as capacity for human determination, that are appropriate to the context and consistent with the state of the art.
1. AI shall not impair, and, where possible, shall advance the equality in rights, dignity, and freedom to flourish of all humans.
Published by: The Future Society, Science, Law and Society (SLS) Initiative in Principles for the Governance of AI
Accordingly, the purpose of governing artificial intelligence is to develop policy frameworks, voluntary codes of practice, practical guidelines, national and international regulations, and ethical norms that safeguard and promote the equality in rights, dignity, and freedom to flourish of all humans.
3. Make AI Serve People and Planet
This includes codes of ethics for the development, application and use of AI so that, throughout their entire operational process, AI systems remain compatible with, and strengthen, the principles of human dignity, integrity, freedom, privacy and cultural and gender diversity, as well as fundamental human rights.
In addition, AI systems must protect and even improve our planet’s ecosystems and biodiversity.