Fairness

AI systems should treat all people fairly.
Principle: Microsoft AI Principles, Jan 17, 2018 (unconfirmed)

Published by Microsoft

Related Principles

Fairness

All AI systems that process social or demographic data pertaining to features of human subjects must be designed to meet a minimum threshold of discriminatory non-harm. This entails that the datasets they use be equitable; that their model architectures include only reasonable features, processes, and analytical structures; that they do not have inequitable impact; and that they are implemented in an unbiased way.

Published by The Alan Turing Institute in The FAST Track Principles, Jun 10, 2019

Human centred values

Throughout their lifecycle, AI systems should respect human rights, diversity, and the autonomy of individuals. This principle aims to ensure that AI systems are aligned with human values. Machines should serve humans, and not the other way around.

AI systems should enable an equitable and democratic society by respecting, protecting and promoting human rights, enabling diversity, respecting human freedom and the autonomy of individuals, and protecting the environment. Human rights risks need to be carefully considered, as AI systems can equally enable and hamper such fundamental rights. It’s permissible to interfere with certain human rights where it’s reasonable, necessary and proportionate.

All people interacting with AI systems should be able to keep full and effective control over themselves. AI systems should not undermine the democratic process, and should not undertake actions that threaten individual autonomy, like deception, unfair manipulation, unjustified surveillance, and failing to maintain alignment between a disclosed purpose and true action.

AI systems should be designed to augment, complement and empower human cognitive, social and cultural skills. Organisations designing, developing, deploying or operating AI systems should ideally hire staff from diverse backgrounds, cultures and disciplines to ensure a wide range of perspectives, and to minimise the risk of missing important considerations only noticeable by some stakeholders.

Published by Department of Industry, Innovation and Science, Australian Government in AI Ethics Principles, Nov 7, 2019

Fairness

Throughout their lifecycle, AI systems should be inclusive and accessible, and should not involve or result in unfair discrimination against individuals, communities or groups. This principle aims to ensure that AI systems are fair and that they enable inclusion throughout their entire lifecycle. AI systems should be user-centric and designed in a way that allows all people interacting with them to access the related products or services. This includes both appropriate consultation with stakeholders, who may be affected by the AI system throughout its lifecycle, and ensuring people receive equitable access and treatment. This is particularly important given concerns about the potential for AI to perpetuate societal injustices and have a disparate impact on vulnerable and underrepresented groups including, but not limited to, groups relating to age, disability, race, sex, intersex status, gender identity and sexual orientation. Measures should be taken to ensure that AI-produced decisions comply with anti-discrimination laws.

Published by Department of Industry, Innovation and Science, Australian Government in AI Ethics Principles, Nov 7, 2019

③ Respect for Diversity

Throughout every stage of AI development and utilization, the diversity and representativeness of the AI users should be ensured, and bias and discrimination based on personal characteristics, such as gender, age, disability, region, race, religion, and nationality, should be minimized. Commercialized AI systems should be generally applicable to all individuals. The socially disadvantaged and vulnerable should be guaranteed access to AI technologies and services. Efforts should be made to ensure equal distribution of AI benefits to all people rather than to certain groups.

Published by The Ministry of Science and ICT (MSIT) and the Korea Information Society Development Institute (KISDI) in National AI Ethical Guidelines, Dec 23, 2020