4. The Principle of Justice: “Be Fair”
Published by: The European Commission’s High-Level Expert Group on Artificial Intelligence in Draft Ethics Guidelines for Trustworthy AI
For the purposes of these Guidelines, the principle of justice imparts that the development, use, and regulation of AI systems must be fair. Developers and implementers need to ensure that individuals and minority groups remain free from bias, stigmatisation and discrimination. Additionally, the positives and negatives resulting from AI should be evenly distributed, avoiding placing vulnerable demographics in a position of greater vulnerability and striving for equal opportunity in terms of access to education, goods, services and technology amongst human beings, without discrimination. Justice also means that AI systems must provide users with effective redress if harm occurs, or effective remedy if data practices are no longer aligned with human beings’ individual or collective preferences. Lastly, the principle of justice also requires that those developing or implementing AI be held to high standards of accountability. Humans might benefit from procedures enabling the benchmarking of AI performance against (ethical) expectations.
1. Principle 1 — Human Rights
Issue: How can we ensure that A/IS do not infringe upon human rights?
To best honor human rights, society must assure the safety and security of A/IS so that they are designed and operated in a way that benefits humans:
1. Governance frameworks, including standards and regulatory bodies, should be established to oversee processes assuring that the use of A/IS does not infringe upon human rights, freedoms, dignity, and privacy, and that traceability is maintained to contribute to the building of public trust in A/IS.
2. A way to translate existing and forthcoming legal obligations into informed policy and technical considerations is needed. Such a method should allow for differing cultural norms as well as legal and regulatory frameworks.
3. For the foreseeable future, A/IS should not be granted rights and privileges equal to human rights: A/IS should always be subordinate to human judgment and control.
How should AI research and its applications, at the institutional level, be controlled?
In what areas would this be most pertinent?
Who should decide, and according to which modalities, the norms and moral values determining this control?
Who should establish ethical guidelines for self-driving cars?
Should ethical labeling that respects certain standards be developed for AI, websites and businesses?
The development of AI should promote informed participation in public life, cooperation and democratic debate.
1. We are driven by our values
We recognize that, as with any technology, there is scope for AI to be used in ways that are not aligned with these guiding principles and the operational guidelines we are developing. In developing AI software we will remain true to our Human Rights Commitment Statement, the UN Guiding Principles on Business and Human Rights, applicable laws, and widely accepted international norms. Wherever necessary, our AI Ethics Steering Committee will advise our teams on how specific use cases are affected by these guiding principles. Where there is a conflict with our principles, we will endeavor to prevent the inappropriate use of our technology.
9. Ban the Attribution of Responsibility to Robots
Robots should be designed and operated, as far as is practicable, to comply with existing laws, fundamental rights and freedoms, including privacy. This is linked to the question of legal responsibility. In line with Bryson et al. (2011), UNI Global Union asserts that legal responsibility for a robot should be attributed to a person. Robots are not responsible parties under the law.