Fairness and inclusion
AI systems should make the same recommendations for everyone with similar characteristics or qualifications. Employers should be required to test AI in the workplace on a regular basis to ensure that the system is fit for purpose and is not harmfully influenced by bias of any kind — gender, race, sexual orientation, age, religion, income, family status and so on. Those building AI should adopt inclusive design practices to anticipate any potential deployment issues that could unintentionally exclude people. Workplace AI should be tested to ensure that it does not discriminate against vulnerable individuals or communities. Governments should review the impact of workplace, governmental and social AI on the opportunities and rights of poor people, Indigenous peoples and vulnerable members of society. In particular, the combined effect of overlapping AI systems on profiling and marginalization should be identified and countered.
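The regular bias testing described above can be made concrete with a simple audit over logged decisions. The sketch below is a minimal, hypothetical example (the group labels, audit data and the 80% "four-fifths" screening threshold are illustrative assumptions, not part of the original principles): it compares positive-recommendation rates across groups.

```python
from collections import defaultdict

def selection_rates(decisions):
    """Compute the positive-recommendation rate for each group.

    `decisions` is a list of (group, recommended) pairs, where
    `recommended` is True if the system made a positive recommendation.
    """
    totals = defaultdict(int)
    positives = defaultdict(int)
    for group, recommended in decisions:
        totals[group] += 1
        if recommended:
            positives[group] += 1
    return {g: positives[g] / totals[g] for g in totals}

def passes_four_fifths_rule(rates):
    """Screening heuristic: the lowest group selection rate should be
    at least 80% of the highest (the 'four-fifths' rule of thumb)."""
    lo, hi = min(rates.values()), max(rates.values())
    return hi == 0 or lo / hi >= 0.8

# Hypothetical audit log: (protected-group label, system recommendation)
audit = [("A", True), ("A", True), ("A", False), ("A", True),
         ("B", True), ("B", False), ("B", False), ("B", True)]
rates = selection_rates(audit)
print(rates)                        # {'A': 0.75, 'B': 0.5}
print(passes_four_fifths_rule(rates))  # False -> flag for review
```

A failing check would not by itself prove discrimination, but it is the kind of routine, repeatable test the principle calls for.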
4. The Principle of Justice: “Be Fair”
Published by: The European Commission’s High-Level Expert Group on Artificial Intelligence in Draft Ethics Guidelines for Trustworthy AI
For the purposes of these Guidelines, the principle of justice conveys that the development, use, and regulation of AI systems must be fair. Developers and implementers need to ensure that individuals and minority groups maintain freedom from bias, stigmatisation and discrimination. Additionally, the positives and negatives resulting from AI should be evenly distributed, avoiding placing vulnerable demographics in a position of greater vulnerability and striving for equal opportunity in terms of access to education, goods, services and technology amongst human beings, without discrimination. Justice also means that AI systems must provide users with effective redress if harm occurs, or effective remedy if data practices are no longer aligned with human beings’ individual or collective preferences. Lastly, the principle of justice also commands those developing or implementing AI to be held to high standards of accountability. Humans might benefit from procedures enabling the benchmarking of AI performance against (ethical) expectations.
3. Inclusion and Sharing
1. AI should promote green development to meet the requirements of environmental friendliness and resource conservation
2. AI should promote coordinated development by promoting the transformation and upgrading of all industries, and by narrowing regional disparities
3. AI should promote inclusive development through better education and training, support for vulnerable groups to adapt, and efforts to eliminate the digital divide
4. AI should promote shared development by avoiding data and platform monopolies, and promoting open and fair competition
We will make AI systems fair
1. Data ingested should, where possible, be representative of the affected population
2. Algorithms should avoid non-operational bias
3. Steps should be taken to mitigate and disclose the biases inherent in datasets
4. Significant decisions should be provably fair
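The first and third points above — representative data and disclosed dataset bias — can be checked mechanically. The sketch below is a minimal, assumed example (the group labels, reference population shares and the 5-point review threshold are illustrative, not from the original text): it compares each group's share of a dataset against its share of the affected population.

```python
def representation_gap(dataset_counts, population_shares):
    """Compare each group's share of the dataset with its share of the
    affected population; returns per-group gaps (data share - population
    share), rounded for readability."""
    n = sum(dataset_counts.values())
    return {g: round(dataset_counts.get(g, 0) / n - share, 3)
            for g, share in population_shares.items()}

# Hypothetical reference shares (e.g. census-style) and dataset counts
population = {"A": 0.5, "B": 0.3, "C": 0.2}
data = {"A": 700, "B": 250, "C": 50}

gaps = representation_gap(data, population)
# Groups under-represented by more than 5 points get flagged for review
flagged = [g for g, gap in gaps.items() if gap < -0.05]
print(gaps)     # {'A': 0.2, 'B': -0.05, 'C': -0.15}
print(flagged)  # ['C']
```

Disclosing the computed gaps alongside a released dataset is one straightforward way to satisfy the "mitigate and disclose" requirement.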
We will promote human values, freedom and dignity
1. AI should improve society, and society should be consulted in a representative fashion to inform the development of AI
2. Humanity should retain the power to govern itself and make the final decision, with AI in an assisting role
3. AI systems should conform to international norms and standards with respect to human values, people’s rights and acceptable behaviour