6. Unlawful biases or discriminations that may result from the use of data in artificial intelligence should be reduced and mitigated, including by:

a. ensuring the respect of international legal instruments on human rights and non-discrimination,
b. investing in research into technical ways to identify, address and mitigate biases,
c. taking reasonable steps to ensure the personal data and information used in automated decision making are accurate, up to date and as complete as possible, and
d. elaborating specific guidance and principles in addressing biases and discrimination, and promoting individuals' and stakeholders' awareness.
Principle: Declaration On Ethics And Data Protection In Artificial Intelligence, Oct 23, 2018

Published by 40th International Conference of Data Protection and Privacy Commissioners (ICDPPC)
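
A minimal sketch of what item (b) above can look like in practice: one basic technical way to identify bias is to compare positive-outcome rates across groups. The column names ("group", "approved"), the toy records and the 0.2 tolerance are illustrative assumptions, not part of the Declaration.

```python
# Illustrative sketch only: column names, data and threshold are assumptions.
import pandas as pd

def demographic_parity_gap(df: pd.DataFrame, group_col: str, outcome_col: str) -> float:
    """Largest difference in positive-outcome rate between any two groups."""
    rates = df.groupby(group_col)[outcome_col].mean()
    return float(rates.max() - rates.min())

# Toy decision records (hypothetical data).
decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B"],
    "approved": [1,   1,   0,   1,   0,   0],
})

gap = demographic_parity_gap(decisions, "group", "approved")
print(f"demographic parity gap: {gap:.2f}")  # 0.33 for this toy data
if gap > 0.2:  # illustrative tolerance only
    print("warning: outcome rates diverge across groups; review for bias")
```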

Related Principles

V. Diversity, non-discrimination and fairness

Data sets used by AI systems (both for training and operation) may suffer from the inclusion of inadvertent historic bias, incompleteness and bad governance models. The continuation of such biases could lead to (in)direct discrimination. Harm can also result from the intentional exploitation of (consumer) biases or by engaging in unfair competition. Moreover, the way in which AI systems are developed (e.g. the way in which the programming code of an algorithm is written) may also suffer from bias. Such concerns should be tackled from the beginning of the system's development. Establishing diverse design teams and setting up mechanisms ensuring participation, in particular of citizens, in AI development can also help to address these concerns. It is advisable to consult stakeholders who may directly or indirectly be affected by the system throughout its life cycle. AI systems should consider the whole range of human abilities, skills and requirements, and ensure accessibility through a universal design approach, striving to achieve equal access for persons with disabilities.

Published by European Commission in Key requirements for trustworthy AI, Apr 8, 2019
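
A brief, hypothetical illustration of tackling such concerns "from the beginning of the system's development": checking a training set's group composition against a reference population before any model is trained. The reference shares, column name and 50% under-representation threshold are assumptions for the sketch, not prescribed by the requirement.

```python
# Illustrative sketch only: reference shares and threshold are assumptions.
import pandas as pd

reference_shares = {"A": 0.50, "B": 0.30, "C": 0.20}  # assumed population makeup

# Toy training set with group "C" badly under-sampled.
train = pd.DataFrame({"group": ["A"] * 70 + ["B"] * 25 + ["C"] * 5})
observed = train["group"].value_counts(normalize=True)

for group, expected in reference_shares.items():
    actual = float(observed.get(group, 0.0))
    flag = "UNDER-REPRESENTED" if actual < 0.5 * expected else "ok"
    print(f"{group}: expected {expected:.0%}, observed {actual:.0%} [{flag}]")
```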

· 5. Non-Discrimination

Discrimination concerns the variability of AI results between individuals or groups of people based on the exploitation of differences in their characteristics that can be considered either intentionally or unintentionally (such as ethnicity, gender, sexual orientation or age), which may negatively impact such individuals or groups. Direct or indirect discrimination through the use of AI can serve to exploit prejudice and marginalise certain groups. Those in control of algorithms may intentionally try to achieve unfair, discriminatory or biased outcomes in order to exclude certain groups of persons. Intentional harm can, for instance, be achieved by explicit manipulation of the data to exclude certain groups. Harm may also result from exploitation of consumer biases or unfair competition, such as homogenisation of prices by means of collusion or non-transparent markets. Discrimination in an AI context can occur unintentionally due to, for example, problems with data such as bias, incompleteness and bad governance models. Machine learning algorithms identify patterns or regularities in data, and will therefore also follow the patterns resulting from biased and/or incomplete data sets. An incomplete data set may not reflect the target group it is intended to represent. While it might be possible to remove clearly identifiable and unwanted bias when collecting data, data always carries some kind of bias. Therefore, the upstream identification of possible bias, which can later be rectified, is important to build into the development of AI. Moreover, it is important to acknowledge that AI technology can be employed to identify this inherent bias, and hence to support awareness training on our own inherent bias. Accordingly, it can also assist us in making less biased decisions.

Published by The European Commission’s High-Level Expert Group on Artificial Intelligence in Draft Ethics Guidelines for Trustworthy AI, Dec 18, 2018
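
The "upstream identification of possible bias, which can later be rectified" can be made concrete with a small sketch. One established rectification technique (not named in the guidelines) is the reweighing scheme of Kamiran and Calders, which weights each (group, label) combination so that group membership and outcome become statistically independent in the training set. The column names and toy data below are hypothetical.

```python
# Illustrative reweighing sketch (Kamiran & Calders, 2012); data are toy values.
import pandas as pd

df = pd.DataFrame({
    "group": ["A", "A", "A", "A", "B", "B", "B", "B"],
    "label": [1,   1,   1,   0,   1,   0,   0,   0],
})

n = len(df)
p_group = df["group"].value_counts(normalize=True)
p_label = df["label"].value_counts(normalize=True)
p_joint = df.groupby(["group", "label"]).size() / n

# Weight = expected probability under independence / observed joint probability,
# so under-represented (group, label) pairs are up-weighted.
df["weight"] = [
    p_group[g] * p_label[y] / p_joint[(g, y)]
    for g, y in zip(df["group"], df["label"])
]
print(df.drop_duplicates())  # e.g. (A, 0) and (B, 1) receive weight 2.0
```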

· Transparency and explainability

The transparency and explainability of AI systems are often essential preconditions to ensure the respect, protection and promotion of human rights, fundamental freedoms and ethical principles. Transparency is necessary for relevant national and international liability regimes to work effectively. A lack of transparency could also undermine the possibility of effectively challenging decisions based on outcomes produced by AI systems, and may thereby infringe the right to a fair trial and effective remedy, and limit the areas in which these systems can be legally used. While efforts need to be made to increase transparency and explainability of AI systems, including those with extra-territorial impact, throughout their life cycle to support democratic governance, the level of transparency and explainability should always be appropriate to the context and impact, as there may be a need to balance between transparency and explainability and other principles such as privacy, safety and security.

People should be fully informed when a decision is informed by or is made on the basis of AI algorithms, including when it affects their safety or human rights, and in those circumstances should have the opportunity to request explanatory information from the relevant AI actor or public sector institutions. In addition, individuals should be able to access the reasons for a decision affecting their rights and freedoms, and have the option of making submissions to a designated staff member of the private sector company or public sector institution able to review and correct the decision. AI actors should inform users in a proper and timely manner when a product or service is provided directly or with the assistance of AI systems.

From a socio-technical lens, greater transparency contributes to more peaceful, just, democratic and inclusive societies. It allows for public scrutiny that can decrease corruption and discrimination, and can also help detect and prevent negative impacts on human rights. Transparency aims at providing appropriate information to the respective addressees to enable their understanding and foster trust. Specific to the AI system, transparency can enable people to understand how each stage of an AI system is put in place, appropriate to the context and sensitivity of the AI system. It may also include insight into factors that affect a specific prediction or decision, and whether or not appropriate assurances (such as safety or fairness measures) are in place. In cases of serious threats of adverse human rights impacts, transparency may also require the sharing of code or datasets.

Explainability refers to making intelligible and providing insight into the outcome of AI systems. The explainability of AI systems also refers to the understandability of the input, output and the functioning of each algorithmic building block, and how it contributes to the outcome of the systems. Thus, explainability is closely related to transparency, as outcomes and the sub-processes leading to outcomes should aim to be understandable and traceable, appropriate to the context. AI actors should commit to ensuring that the algorithms developed are explainable. In the case of AI applications that impact the end user in a way that is not temporary, easily reversible or otherwise low risk, it should be ensured that a meaningful explanation is provided with any decision that resulted in the action taken, in order for the outcome to be considered transparent.
Transparency and explainability relate closely to adequate responsibility and accountability measures, as well as to the trustworthiness of AI systems.

Published by The United Nations Educational, Scientific and Cultural Organization (UNESCO) in Draft Text of The Recommendation on the Ethics of Artificial Intelligence, Nov 24, 2021
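
As a rough illustration of the "insight into factors that affect a specific prediction or decision" that the Recommendation describes: for a linear model, per-feature contributions to the decision score give one simple, traceable explanation. The model, feature names and data below are hypothetical, and real deployments would need context-appropriate explanation methods.

```python
# Illustrative sketch only: features, data and model are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

features = ["income", "debt_ratio", "years_employed"]
X = np.array([[40, 0.6, 1], [80, 0.2, 8], [55, 0.4, 3],
              [90, 0.1, 12], [35, 0.7, 0], [70, 0.3, 6]], dtype=float)
y = np.array([0, 1, 0, 1, 0, 1])  # toy approval outcomes

model = LogisticRegression().fit(X, y)

applicant = np.array([50, 0.5, 2], dtype=float)
decision = model.predict(applicant.reshape(1, -1))[0]
print(f"decision: {'approved' if decision else 'declined'}")

# Per-feature contribution to the decision score (log-odds): one traceable
# account of why this specific decision came out as it did.
for name, coef, value in zip(features, model.coef_[0], applicant):
    print(f"  {name}: {coef * value:+.2f}")
print(f"  intercept: {model.intercept_[0]:+.2f}")
```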

7. Fairness and Non-Discrimination

Agencies should consider in a transparent manner the impacts that AI applications may have on discrimination. AI applications have the potential to reduce present-day discrimination caused by human subjectivity. At the same time, applications can, in some instances, introduce real-world bias that produces discriminatory outcomes or decisions that undermine public trust and confidence in AI. When considering regulations or non-regulatory approaches related to AI applications, agencies should consider, in accordance with law, issues of fairness and non-discrimination with respect to outcomes and decisions produced by the AI application at issue, as well as whether the AI application at issue may reduce levels of unlawful, unfair, or otherwise unintended discrimination as compared to existing processes.

Published by The White House Office of Science and Technology Policy (OSTP), United States in Principles for the Stewardship of AI Applications, Nov 17, 2020
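
One hypothetical way to operationalise the comparison "as compared to existing processes": score both the incumbent process and the candidate AI application with the same fairness metric. The adverse-impact ratio (the "four-fifths rule" from US employment practice) serves here purely as an illustrative yardstick, and all figures are invented.

```python
# Illustrative sketch only: metric choice and all counts are assumptions.
def adverse_impact_ratio(selected_a: int, total_a: int,
                         selected_b: int, total_b: int) -> float:
    """Ratio of the lower group's selection rate to the higher group's."""
    rate_a, rate_b = selected_a / total_a, selected_b / total_b
    return min(rate_a, rate_b) / max(rate_a, rate_b)

legacy = adverse_impact_ratio(30, 100, 60, 100)   # existing human-run process
ai     = adverse_impact_ratio(48, 100, 60, 100)   # candidate AI application

for name, ratio in [("existing process", legacy), ("AI application", ai)]:
    status = "passes" if ratio >= 0.8 else "fails"
    print(f"{name}: adverse-impact ratio {ratio:.2f} ({status} 4/5 rule)")
```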
