Ensure that algorithmic decisions do not create discriminatory or unjust impacts when comparing across different demographics (e.g. race, sex, etc.).
Principle: Principles for Accountable Algorithms, Jul 22, 2016 (unconfirmed)

Published by Fairness, Accountability, and Transparency in Machine Learning (FAT/ML)

Related Principles

· 2. Avoid creating or reinforcing unfair bias.

AI algorithms and datasets can reflect, reinforce, or reduce unfair biases. We recognize that distinguishing fair from unfair biases is not always simple, and differs across cultures and societies. We will seek to avoid unjust impacts on people, particularly those related to sensitive characteristics such as race, ethnicity, gender, nationality, income, sexual orientation, ability, and political or religious belief.

Published by Google in Artificial Intelligence at Google: Our Principles, Jun 7, 2018

3. Justice

[QUESTIONS] How do we ensure that the benefits of AI are available to everyone? Must we fight against the concentration of power and wealth in the hands of a small number of AI companies? What types of discrimination could AI create or exacerbate? Should the development of AI be neutral, or should it seek to reduce social and economic inequalities? What types of legal decisions can we delegate to AI? [PRINCIPLES] The development of AI should promote justice and seek to eliminate all types of discrimination, notably those linked to gender, age, mental and physical abilities, sexual orientation, ethnic or social origins and religious beliefs.

Published by University of Montreal, Forum on the Socially Responsible Development of AI in The Montreal Declaration for a Responsible Development of Artificial Intelligence, Nov 3, 2017

5. Fairness

a. Ensure that algorithmic decisions do not create discriminatory or unjust impacts across different demographic lines (e.g. race, sex, etc.).

b. To develop and include monitoring and accounting mechanisms to avoid unintentional discrimination when implementing decision making systems.

c. To consult a diversity of voices and demographics when developing systems, applications and algorithms.

Published by Personal Data Protection Commission (PDPC), Singapore in A compilation of existing AI ethical principles (Annex A), Jan 21, 2020

1. Fair AI

We seek to ensure that the applications of AI technology lead to fair results. This means that they should not lead to discriminatory impacts on people in relation to race, ethnic origin, religion, gender, sexual orientation, disability or any other personal condition. We will apply technology to minimize the likelihood that the training data sets we use create or reinforce unfair bias or discrimination. When optimizing a machine learning algorithm for accuracy in terms of false positives and negatives, we will consider the impact of the algorithm in the specific domain.

Published by Telefónica in AI Principles of Telefónica, Oct 30, 2018

4. Fairness Obligation.

Institutions must ensure that AI systems do not reflect unfair bias or make impermissible discriminatory decisions. [Explanatory Memorandum] The Fairness Obligation recognizes that all automated systems make decisions that reflect bias and discrimination, but such decisions should not be normatively unfair. There is no simple answer to the question as to what is unfair or impermissible. The evaluation often depends on context. But the Fairness Obligation makes clear that an assessment of objective outcomes alone is not sufficient to evaluate an AI system. Normative consequences must be assessed, including those that preexist or may be amplified by an AI system.

Published by The Public Voice coalition, established by Electronic Privacy Information Center (EPIC) in Universal Guidelines for Artificial Intelligence, Oct 23, 2018