· (8) Bias on Human

AI must not introduce bias into its understanding of and interactions with humanity, and should actively engage with humans to identify and remove any potential bias it generates.
Published by HAIP Initiative in Harmonious Artificial Intelligence Principles (HAIP), Sep 16, 2018

Related Principles

Human centred values

Throughout their lifecycle, AI systems should respect human rights, diversity, and the autonomy of individuals. This principle aims to ensure that AI systems are aligned with human values. Machines should serve humans, and not the other way around. AI systems should enable an equitable and democratic society by respecting, protecting and promoting human rights, enabling diversity, respecting human freedom and the autonomy of individuals, and protecting the environment. Human rights risks need to be carefully considered, as AI systems can equally enable and hamper such fundamental rights. It’s permissible to interfere with certain human rights where it’s reasonable, necessary and proportionate. All people interacting with AI systems should be able to keep full and effective control over themselves. AI systems should not undermine the democratic process, and should not undertake actions that threaten individual autonomy, like deception, unfair manipulation, unjustified surveillance, and failing to maintain alignment between a disclosed purpose and true action. AI systems should be designed to augment, complement and empower human cognitive, social and cultural skills. Organisations designing, developing, deploying or operating AI systems should ideally hire staff from diverse backgrounds, cultures and disciplines to ensure a wide range of perspectives, and to minimise the risk of missing important considerations only noticeable by some stakeholders.

Published by Department of Industry, Innovation and Science, Australian Government in AI Ethics Principles, Nov 7, 2019

· 2. Avoid creating or reinforcing unfair bias.

AI algorithms and datasets can reflect, reinforce, or reduce unfair biases. We recognize that distinguishing fair from unfair biases is not always simple, and differs across cultures and societies. We will seek to avoid unjust impacts on people, particularly those related to sensitive characteristics such as race, ethnicity, gender, nationality, income, sexual orientation, ability, and political or religious belief.

Published by Google in Artificial Intelligence at Google: Our Principles, Jun 7, 2018

· (15) Bias on Machine

Without clear technical judgment, humans should not hold a bias against AI when humans and AI present similar risks.

Published by HAIP Initiative in Harmonious Artificial Intelligence Principles (HAIP), Sep 16, 2018

4 SOLIDARITY PRINCIPLE

The development of AIS must be compatible with maintaining the bonds of solidarity among people and generations. 1) AIS must not threaten the preservation of fulfilling moral and emotional human relationships, and should be developed with the goal of fostering these relationships and reducing people’s vulnerability and isolation. 2) AIS must be developed with the goal of collaborating with humans on complex tasks and should foster collaborative work between humans. 3) AIS should not be implemented to replace people in duties that require quality human relationships, but should be developed to facilitate these relationships. 4) Health care systems that use AIS must take into consideration the importance of a patient’s relationships with family and health care staff. 5) AIS development should not encourage cruel behavior toward robots designed to resemble human beings or non-human animals in appearance or behavior. 6) AIS should help improve risk management and foster conditions for a society with a more equitable and mutual distribution of individual and collective risks.

Published by University of Montreal in The Montreal Declaration for a Responsible Development of Artificial Intelligence, Dec 4, 2018

Principle 3 – Humanity

The humanity principle highlights that AI systems should be built using an ethical methodology so that they are just and ethically permissible, grounded in intrinsic and fundamental human rights and cultural values, and generate a beneficial impact on individual stakeholders and communities in both the short and long term, for the good of humanity. Predictive models should not be designed to deceive, manipulate, or condition behavior in ways that do not empower, aid, or augment human skills; instead, they should adopt a more human-centric design approach that allows for human choice and determination.

Published by SDAIA in AI Ethics Principles, Sep 14, 2022