Diversity

A.I. technology should be developed by inherently diverse teams.
Principle: Seeking Ground Rules for A.I.: The Recommendations, Mar 1, 2019

Published by New Work Summit, hosted by The New York Times

Related Principles

Principle 1 – Fairness

The fairness principle requires taking necessary actions to eliminate bias, discrimination or stigmatization of individuals, communities, or groups in the design, data, development, deployment and use of AI systems. Bias may occur due to data, representation or algorithms and could lead to discrimination against historically disadvantaged groups. When designing, selecting, and developing AI systems, it is essential to ensure just, fair, non-biased, non-discriminatory and objective standards that are inclusive, diverse, and representative of all or targeted segments of society. The functionality of an AI system should not be limited to a specific group based on gender, race, religion, disability, age, or sexual orientation. In addition, the potential risks, overall benefits, and purpose of utilizing sensitive personal data should be well motivated and defined or articulated by the AI System Owner. To ensure consistent AI systems that are based on fairness and inclusiveness, AI systems should be trained on data that are cleansed of bias and are representative of affected minority groups. AI algorithms should be built and developed in a manner that makes their composition free from bias and correlation fallacy.

Published by SDAIA in AI Ethics Principles, Sept 14, 2022

· ③ Respect for Diversity

Throughout every stage of AI development and utilization, the diversity and representativeness of the AI users should be ensured, and bias and discrimination based on personal characteristics, such as gender, age, disability, region, race, religion, and nationality, should be minimized. Commercialized AI systems should be generally applicable to all individuals. The socially disadvantaged and vulnerable should be guaranteed access to AI technologies and services. Efforts should be made to ensure equal distribution of AI benefits to all people rather than to certain groups.

Published by The Ministry of Science and ICT (MSIT) and the Korea Information Society Development Institute (KISDI) in National AI Ethical Guidelines, Dec 23, 2020

5 Ensure inclusiveness and equity

Inclusiveness requires that AI used in health care is designed to encourage the widest possible appropriate, equitable use and access, irrespective of age, gender, income, ability or other characteristics. Institutions (e.g. companies, regulatory agencies, health systems) should hire employees from diverse backgrounds, cultures and disciplines to develop, monitor and deploy AI. AI technologies should be designed by and evaluated with the active participation of those who are required to use the system or will be affected by it, including providers and patients, and such participants should be sufficiently diverse. Participation can also be improved by adopting open source software or making source codes publicly available. AI technology – like any other technology – should be shared as widely as possible. AI technologies should be available not only in high-income countries (HIC) and for contexts and needs that apply to high-income settings; they should also be adaptable to the types of devices, telecommunications infrastructure and data transfer capacity in low- and middle-income countries (LMIC). AI developers and vendors should also consider the diversity of languages, ability and forms of communication around the world to avoid barriers to use. Industry and governments should strive to ensure that the "digital divide" within and between countries is not widened and ensure equitable access to novel AI technologies.

AI technologies should not be biased. Bias is a threat to inclusiveness and equity because it represents a departure, often arbitrary, from equal treatment. For example, a system designed to diagnose cancerous skin lesions that is trained with data on one skin colour may not generate accurate results for patients with a different skin colour, increasing the risk to their health. Unintended biases that may emerge with AI should be avoided or identified and mitigated.
AI developers should be aware of the possible biases in their design, implementation and use and the potential harm that biases can cause to individuals and society. These parties also have a duty to address potential bias and avoid introducing or exacerbating health care disparities, including when testing or deploying new AI technologies in vulnerable populations. AI developers should ensure that AI data, and especially training data, do not include sampling bias and are therefore accurate, complete and diverse. If a particular racial or ethnic minority (or other group) is underrepresented in a dataset, oversampling of that group relative to its population size may be necessary to ensure that an AI technology achieves the same quality of results in that population as in better represented groups. AI technologies should minimize inevitable power disparities between providers and patients or between companies that create and deploy AI technologies and those that use or rely on them. Public sector agencies should have control over the data collected by private health care providers, and their shared responsibilities should be defined and respected. Everyone – patients, health care providers and health care systems – should be able to benefit from an AI technology and not just the technology providers. AI technologies should be accompanied by means to provide patients with knowledge and skills to better understand their health status and to communicate effectively with health care providers. Future health literacy should include an element of information technology literacy. The effects of use of AI technologies must be monitored and evaluated, including disproportionate effects on specific groups of people when they mirror or exacerbate existing forms of bias and discrimination. Special provision should be made to protect the rights and welfare of vulnerable persons, with mechanisms for redress if such bias and discrimination emerges or is alleged.

Published by World Health Organization (WHO) in Key ethical principles for use of artificial intelligence for health, Jun 28, 2021