Fairness and inclusion
AI systems should make the same recommendations for everyone with similar characteristics or qualifications. Employers should be required to test workplace AI on a regular basis to ensure that the system is fit for purpose and is not harmfully influenced by bias of any kind, whether related to gender, race, sexual orientation, age, religion, income, or family status. AI development should incorporate inclusive design practices that anticipate deployment issues which could unintentionally exclude people. Workplace AI should also be tested to ensure that it does not discriminate against vulnerable individuals or communities. Governments should review the impact of workplace, governmental, and social AI on the opportunities and rights of poor people, Indigenous peoples, and other vulnerable members of society. In particular, the compounding effect of overlapping AI systems on profiling and marginalization should be identified and countered.
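As one minimal sketch of what such regular testing could look like in practice, the example below computes selection rates per demographic group for a hypothetical screening system and flags groups falling below the "four-fifths" rule of thumb used in US employment-selection guidance. The data, column names, and threshold here are illustrative assumptions, not part of the principle above.

```python
# Sketch of a periodic workplace-AI bias audit. The decision log,
# group labels, and 0.8 threshold are illustrative assumptions.
import pandas as pd

def selection_rates(df: pd.DataFrame, group_col: str, selected_col: str) -> pd.Series:
    """Share of applicants the system selected, per demographic group."""
    return df.groupby(group_col)[selected_col].mean()

def disparate_impact_flags(rates: pd.Series, threshold: float = 0.8) -> pd.Series:
    """Flag groups whose selection rate falls below `threshold` times the
    highest group's rate (the 'four-fifths' rule of thumb)."""
    return rates < threshold * rates.max()

# Illustrative data; in practice this would be a log of real decisions.
decisions = pd.DataFrame({
    "gender":   ["f", "m", "f", "m", "f", "m", "f", "m"],
    "selected": [0,    1,   0,   1,   1,   1,   0,   1],
})

rates = selection_rates(decisions, "gender", "selected")
flags = disparate_impact_flags(rates)
print(rates)   # f: 0.25, m: 1.00
print(flags)   # f: True -> disparity warrants investigation
```

A flag in such an audit is not by itself proof of unfair discrimination, but it identifies where the kind of review the principle calls for should begin.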
2. Avoid creating or reinforcing unfair bias.
AI algorithms and datasets can reflect, reinforce, or reduce unfair biases. We recognize that distinguishing fair from unfair biases is not always simple, and differs across cultures and societies. We will seek to avoid unjust impacts on people, particularly those related to sensitive characteristics such as race, ethnicity, gender, nationality, income, sexual orientation, ability, and political or religious belief.
How do we ensure that the benefits of AI are available to everyone?
Must we fight against the concentration of power and wealth in the hands of a small number of AI companies?
What types of discrimination could AI create or exacerbate?
Should the development of AI be neutral or should it seek to reduce social and economic inequalities?
What types of legal decisions can we delegate to AI?
The development of AI should promote justice and seek to eliminate all types of discrimination, notably those linked to gender, age, mental and physical abilities, sexual orientation, ethnic and social origins, and religious beliefs.
1. Fair AI
We seek to ensure that the applications of AI technology lead to fair results. This means that they should not lead to discriminatory impacts on people in relation to race, ethnic origin, religion, gender, sexual orientation, disability or any other personal condition. We will apply technology to minimize the likelihood that the training data sets we use create or reinforce unfair bias or discrimination.
When optimizing a machine learning algorithm for accuracy in terms of false positives and false negatives, we will consider the impact of the algorithm in its specific domain.
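As a hedged sketch of that kind of domain-aware evaluation, the example below compares false-positive and false-negative rates across groups. Which error type is more harmful depends on the domain (a false positive in fraud screening burdens an innocent person; a false negative in hiring denies a qualified one), so the rates are reported separately. All names and data here are illustrative.

```python
# Sketch of a per-group error-rate comparison; labels, predictions,
# and the protected attribute are illustrative assumptions.
import numpy as np

def error_rates(y_true: np.ndarray, y_pred: np.ndarray) -> tuple[float, float]:
    """Return (false positive rate, false negative rate)."""
    fp = np.sum((y_pred == 1) & (y_true == 0))
    fn = np.sum((y_pred == 0) & (y_true == 1))
    negatives = np.sum(y_true == 0)
    positives = np.sum(y_true == 1)
    fpr = fp / negatives if negatives else 0.0
    fnr = fn / positives if positives else 0.0
    return fpr, fnr

# Illustrative ground truth and model outputs, split by group.
y_true = np.array([1, 0, 1, 0, 1, 0, 1, 0])
y_pred = np.array([1, 0, 0, 1, 1, 0, 0, 0])
group  = np.array(["a", "a", "a", "a", "b", "b", "b", "b"])

for g in np.unique(group):
    mask = group == g
    fpr, fnr = error_rates(y_true[mask], y_pred[mask])
    print(f"group {g}: FPR={fpr:.2f}, FNR={fnr:.2f}")
```

A single aggregate accuracy number can hide exactly the disparity this loop exposes, which is why the principle calls for weighing error types in the context of the specific domain.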
4. Fairness Obligation.
Published by: The Public Voice coalition, established by the Electronic Privacy Information Center (EPIC), in the Universal Guidelines for Artificial Intelligence
Institutions must ensure that AI systems do not reflect unfair bias or make impermissible discriminatory decisions.
The Fairness Obligation recognizes that all automated systems make decisions that reflect bias and discrimination, but such decisions should not be normatively unfair. There is no simple answer to the question of what is unfair or impermissible; the evaluation often depends on context. But the Fairness Obligation makes clear that an assessment of objective outcomes alone is not sufficient to evaluate an AI system. Normative consequences must be assessed as well, including those that preexist or may be amplified by an AI system.