(5) Fair Competition

A fair competitive environment must be maintained to create new businesses and services, to sustain economic growth, and to solve social issues. The concentration of AI resources must not enable unreasonable data collection or infringement of sovereignty under the dominant position of a particular country, nor unreasonable data collection or unfair competition under the dominant position of a particular company. The use of AI should not cause wealth and social influence to become overly concentrated in certain stakeholders in society.
Principle: Social Principles of Human-centric AI, Dec 27, 2018

Published by Cabinet Office, Government of Japan

Related Principles

Fairness and inclusion

AI systems should make the same recommendations for everyone with similar characteristics or qualifications. Employers should be required to test AI in the workplace on a regular basis to ensure that the system is built for purpose and is not harmfully influenced by bias of any kind: gender, race, sexual orientation, age, religion, income, family status and so on. AI developers should adopt inclusive design practices to anticipate any potential deployment issues that could unintentionally exclude people. Workplace AI should be tested to ensure that it does not discriminate against vulnerable individuals or communities. Governments should review the impact of workplace, governmental and social AI on the opportunities and rights of poor people, Indigenous peoples and vulnerable members of society. In particular, the combined impact of overlapping AI systems on profiling and marginalization should be identified and countered.

Published by Centre for International Governance Innovation (CIGI), Canada in Toward a G20 Framework for Artificial Intelligence in the Workplace, Jul 19, 2018
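
The regular workplace bias testing that the CIGI principle calls for can be made concrete with a simple statistical check. The Python sketch below compares a hypothetical hiring model's positive-recommendation rates across demographic groups and flags a large gap; the data, group labels and the 0.2 tolerance are illustrative assumptions, not part of the CIGI text.

```python
from collections import defaultdict

def recommendation_rates(predictions, groups):
    """Share of positive recommendations per demographic group."""
    totals, positives = defaultdict(int), defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += int(pred)
    return {g: positives[g] / totals[g] for g in totals}

def parity_gap(rates):
    """Largest difference in recommendation rate between any two groups."""
    values = list(rates.values())
    return max(values) - min(values)

# Hypothetical audit data: 1 = recommended, 0 = not recommended.
preds  = [1, 1, 1, 1, 0, 1, 0, 0, 0, 1]
groups = ["A", "A", "A", "B", "B", "B", "B", "A", "B", "A"]

rates = recommendation_rates(preds, groups)
print(rates)  # {'A': 0.8, 'B': 0.4}
if parity_gap(rates) > 0.2:  # 0.2 is an illustrative tolerance, not a standard
    print("Parity gap exceeds tolerance; investigate for bias.")
```

In practice such a check would run on fresh production data at a fixed cadence, and it would be one test among several, since equal recommendation rates alone do not establish fairness.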

VI. Societal and environmental well-being

For AI to be trustworthy, its impact on the environment and other sentient beings should be taken into account. Ideally, all humans, including future generations, should benefit from biodiversity and a habitable environment. Sustainability and ecological responsibility of AI systems should hence be encouraged. The same applies to AI solutions addressing areas of global concern, such as the UN Sustainable Development Goals. Furthermore, the impact of AI systems should be considered not only from an individual perspective, but also from the perspective of society as a whole. The use of AI systems should be given careful consideration particularly in situations relating to the democratic process, including opinion formation, political decision making or electoral contexts. Moreover, AI’s social impact should be considered. While AI systems can be used to enhance social skills, they can equally contribute to their deterioration.

Published by European Commission in Key requirements for trustworthy AI, Apr 8, 2019

Chapter 4. The Norms of Supply

14. Respect market rules. Strictly abide by the rules and regulations for market access, competition, and trading activities, actively maintain market order, and create a market environment conducive to the development of AI. Data monopolies, platform monopolies, and the like must not be used to disrupt orderly market competition, and any means that infringe on the intellectual property rights of others are forbidden.

15. Strengthen quality control. Strengthen quality monitoring and the evaluation of the use of AI products and services, avoid infringements on personal safety, property safety, user privacy, and the like caused by product defects introduced during the design and development phases, and do not operate, sell, or provide products and services that fail to meet quality standards.

16. Protect the rights and interests of users. Users should be clearly informed that AI technology is used in products and services. The functions and limitations of AI products and services should be clearly identified, and users’ rights to know and to consent should be ensured. Simple and easy-to-understand ways for users to choose to use or to quit the AI mode should be provided (see the sketch after this entry), and it is forbidden to set up obstacles to users’ fair use of AI products and services.

17. Strengthen emergency protection. Emergency mechanisms and plans and measures for loss compensation should be investigated and formulated. AI systems need to be monitored in a timely manner, user feedback should be responded to and processed promptly, systemic failures should be prevented in time, and providers should be ready to assist relevant entities in intervening in AI systems, in accordance with laws and regulations, to reduce losses and avoid risks.

Published by National Governance Committee for the New Generation Artificial Intelligence, China in Ethical Norms for the New Generation Artificial Intelligence, Sep 25, 2021
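
Norm 16 above is the most mechanism-like of these requirements: inform users that AI is in use, and give them a simple way to quit AI mode. A minimal, hypothetical Python sketch of that pattern follows; the function names, settings fields, and fallback path are illustrative assumptions, not anything specified in the norms.

```python
from dataclasses import dataclass

def notify_user(message: str) -> None:
    print(f"[notice] {message}")                  # stand-in for a real UI notice

def ai_generated_answer(query: str) -> str:
    return f"(AI-generated answer to: {query})"   # stand-in for a model call

def rule_based_answer(query: str) -> str:
    return f"(scripted answer to: {query})"       # equivalent non-AI path

@dataclass
class UserSettings:
    ai_mode_enabled: bool = True     # the user can switch this off at any time
    informed_of_ai_use: bool = False

def answer_query(query: str, settings: UserSettings) -> str:
    # Norm 16: users must be clearly told that AI technology is in use.
    if not settings.informed_of_ai_use:
        notify_user("This feature uses AI to generate responses.")
        settings.informed_of_ai_use = True
    # Norm 16: honour the user's choice to use or quit the AI mode.
    if settings.ai_mode_enabled:
        return ai_generated_answer(query)
    return rule_based_answer(query)

settings = UserSettings()
print(answer_query("When does the clinic open?", settings))
settings.ai_mode_enabled = False     # the user quits AI mode
print(answer_query("When does the clinic open?", settings))
```

The key design point is that the non-AI path stays a first-class option, so opting out does not create the obstacles the norm forbids.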

Principle 1 – Fairness

The fairness principle requires taking the necessary actions to eliminate bias, discrimination, or stigmatization of individuals, communities, or groups in the design, data, development, deployment, and use of AI systems. Bias may arise from data, representation, or algorithms and could lead to discrimination against historically disadvantaged groups. When designing, selecting, and developing AI systems, it is essential to ensure just, fair, unbiased, non-discriminatory, and objective standards that are inclusive, diverse, and representative of all or of the targeted segments of society. The functionality of an AI system should not be limited to a specific group based on gender, race, religion, disability, age, or sexual orientation. In addition, the potential risks, overall benefits, and purpose of utilizing sensitive personal data should be well motivated and clearly defined or articulated by the AI System Owner. To ensure consistent AI systems that are based on fairness and inclusiveness, AI systems should be trained on data that are cleansed of bias and representative of affected minority groups. AI algorithms should be built and developed in a manner that keeps their composition free from bias and correlation fallacies.

Published by SDAIA in AI Ethics Principles, Sep 14, 2022
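
Two of the checks this principle implies, that training data be representative of affected groups and free of spurious correlations with sensitive attributes, can be approximated before training. The Python sketch below is a hypothetical illustration; the column values, population shares, and thresholds are assumptions rather than anything the SDAIA text prescribes.

```python
def pearson(x, y):
    """Pearson correlation between two equal-length numeric sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    sx = (sum((a - mx) ** 2 for a in x) / n) ** 0.5
    sy = (sum((b - my) ** 2 for b in y) / n) ** 0.5
    return cov / (sx * sy)  # assumes neither column is constant

def representation_gaps(groups, population_shares):
    """Difference between each group's dataset share and population share."""
    n = len(groups)
    return {g: round(groups.count(g) / n - share, 3)
            for g, share in population_shares.items()}

# Hypothetical dataset: sensitive attribute (0/1) and outcome label (0/1).
sensitive = [0, 0, 0, 0, 0, 0, 0, 1, 1, 1]
labels    = [1, 1, 1, 0, 1, 1, 0, 0, 0, 1]

print(representation_gaps(["a"] * 7 + ["b"] * 3, {"a": 0.5, "b": 0.5}))
r = pearson(sensitive, labels)
if abs(r) > 0.3:  # illustrative threshold for a suspicious correlation
    print(f"Label correlates with sensitive attribute (r = {r:.2f}).")
```

A strong correlation does not prove the trained model will discriminate, but it flags a proxy the model could learn, which is the kind of correlation fallacy the principle warns against.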

5 Ensure inclusiveness and equity

Inclusiveness requires that AI used in health care be designed to encourage the widest possible appropriate and equitable use and access, irrespective of age, gender, income, ability or other characteristics. Institutions (e.g. companies, regulatory agencies, health systems) should hire employees from diverse backgrounds, cultures and disciplines to develop, monitor and deploy AI. AI technologies should be designed by and evaluated with the active participation of those who are required to use the system or will be affected by it, including providers and patients, and such participants should be sufficiently diverse. Participation can also be improved by adopting open-source software or making source code publicly available.

AI technology – like any other technology – should be shared as widely as possible. AI technologies should be available not only in high-income countries (HIC) and for contexts and needs that apply to high-income settings, but should also be adaptable to the types of devices, telecommunications infrastructure and data transfer capacity in low- and middle-income countries (LMIC). AI developers and vendors should also consider the diversity of languages, abilities and forms of communication around the world to avoid barriers to use. Industry and governments should strive to ensure that the “digital divide” within and between countries is not widened and should ensure equitable access to novel AI technologies.

AI technologies should not be biased. Bias is a threat to inclusiveness and equity because it represents a departure, often arbitrary, from equal treatment. For example, a system designed to diagnose cancerous skin lesions that is trained with data on one skin colour may not generate accurate results for patients with a different skin colour, increasing the risk to their health. Unintended biases that may emerge with AI should be avoided or identified and mitigated. AI developers should be aware of the possible biases in their design, implementation and use and of the potential harm that biases can cause to individuals and society. These parties also have a duty to address potential bias and avoid introducing or exacerbating health care disparities, including when testing or deploying new AI technologies in vulnerable populations. AI developers should ensure that AI data, and especially training data, do not include sampling bias and are therefore accurate, complete and diverse. If a particular racial or ethnic minority (or other group) is underrepresented in a dataset, oversampling of that group relative to its population size may be necessary to ensure that an AI technology achieves the same quality of results in that population as in better represented groups.

AI technologies should minimize inevitable power disparities between providers and patients, or between companies that create and deploy AI technologies and those that use or rely on them. Public sector agencies should have control over the data collected by private health care providers, and their shared responsibilities should be defined and respected. Everyone – patients, health care providers and health care systems – should be able to benefit from an AI technology, not just the technology providers. AI technologies should be accompanied by means to provide patients with knowledge and skills to better understand their health status and to communicate effectively with health care providers. Future health literacy should include an element of information technology literacy.
The effects of use of AI technologies must be monitored and evaluated, including disproportionate effects on specific groups of people when they mirror or exacerbate existing forms of bias and discrimination. Special provision should be made to protect the rights and welfare of vulnerable persons, with mechanisms for redress if such bias and discrimination emerges or is alleged.

Published by World Health Organization (WHO) in Key ethical principles for use of artificial intelligence for health, Jun 28, 2021
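
The oversampling remedy described above can be sketched in a few lines. In this hypothetical Python example, records from an underrepresented group are re-sampled with replacement until every group contributes equally to the training set; the record layout and the group key are assumptions made for illustration.

```python
import random

def oversample_minority(records, group_key):
    """Duplicate minority-group records until all groups are the same size."""
    by_group = {}
    for rec in records:
        by_group.setdefault(rec[group_key], []).append(rec)
    target = max(len(rs) for rs in by_group.values())
    balanced = []
    for rs in by_group.values():
        balanced.extend(rs)
        # Sample extra records with replacement to reach the target size.
        balanced.extend(random.choices(rs, k=target - len(rs)))
    random.shuffle(balanced)
    return balanced

# Hypothetical training data: skin-lesion records keyed by skin tone.
data = ([{"skin_tone": "light", "label": 1}] * 900
        + [{"skin_tone": "dark", "label": 1}] * 100)
balanced = oversample_minority(data, "skin_tone")
print(len(balanced))  # 1800: both groups now contribute 900 records
```

Simple duplication is only one option; reweighting examples or collecting more data from the underrepresented group addresses the same sampling bias while avoiding repeated identical records.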