· (19) Coordination

When conflicts emerge from interactions between humans and AI, the benefits for humanity and the benefits for AI should be actively coordinated on the basis of empathy and altruism.
Principle: Harmonious Artificial Intelligence Principles (HAIP), Sep 16, 2018

Published by HAIP Initiative

Related Principles

· (1) Human-centric

Utilization of AI should not infringe upon the fundamental human rights guaranteed by the Constitution and international norms. AI should be developed, utilized, and implemented in society to expand people's abilities and to allow diverse people to pursue their own diverse concepts of happiness. In an AI-utilizing society, it is desirable to implement appropriate mechanisms of literacy education and to promote proper use, so that people neither become over-dependent on AI nor have their decisions ill-manipulated through the exploitation of AI. AI can expand human abilities and creativity not only by replacing part of human tasks but also by assisting humans as an advanced instrument. When using AI, people must judge and decide for themselves how to use it. Appropriate stakeholders involved in the development, provision, and utilization of AI should be responsible for the results of AI utilization, depending on the nature of the issue. To avoid creating a digital divide and to allow all people to reap the benefits of AI regardless of their digital expertise, each stakeholder should take the user-friendliness of the system into consideration in the process of AI deployment.

Published by Cabinet Office, Government of Japan in Social Principles of Human-centric AI (Draft), Dec 27, 2018

· 3. The Principle of Autonomy: “Preserve Human Agency”

Autonomy of human beings in the context of AI development means freedom from subordination to, or coercion by, AI systems. Human beings interacting with AI systems must retain full and effective self-determination over themselves. For a consumer or user of an AI system, this entails a right to decide whether to be subject to direct or indirect AI decision making, a right to know when one is directly or indirectly interacting with AI systems, a right to opt out, and a right of withdrawal. Self-determination in many instances requires assistance from government or non-governmental organizations to ensure that individuals and minorities are afforded opportunities similar to the status quo. Furthermore, to ensure human agency, systems should be in place to ensure responsibility and accountability. It is paramount that AI does not undermine the necessity of human responsibility for the protection of fundamental rights.

Published by The European Commission’s High-Level Expert Group on Artificial Intelligence in Draft Ethics Guidelines for Trustworthy AI, Dec 18, 2018

· 4. The Principle of Justice: “Be Fair”

For the purposes of these Guidelines, the principle of justice imparts that the development, use, and regulation of AI systems must be fair. Developers and implementers need to ensure that individuals and minority groups remain free from bias, stigmatisation, and discrimination. Additionally, the positives and negatives resulting from AI should be evenly distributed, avoiding placing vulnerable demographics in a position of greater vulnerability and striving for equal opportunity in access to education, goods, services, and technology among human beings, without discrimination. Justice also means that AI systems must provide users with effective redress if harm occurs, or effective remedy if data practices are no longer aligned with human beings' individual or collective preferences. Lastly, the principle of justice commands that those developing or implementing AI be held to high standards of accountability. Humans might benefit from procedures enabling the benchmarking of AI performance against (ethical) expectations.

Published by The European Commission’s High-Level Expert Group on Artificial Intelligence in Draft Ethics Guidelines for Trustworthy AI, Dec 18, 2018

· 7. Principles of human dignity and individual autonomy

Users should respect human dignity and individual autonomy in the utilization of AI systems or AI services.

[Main points to discuss]

A) Respect for human dignity and individual autonomy: In consideration of social contexts in the utilization of AI, users may be expected to respect human dignity and individual autonomy.

B) Attention to the manipulation of human decision making, emotions, etc. by AI: Users may be expected to pay attention to the risks of AI manipulating human decision making and emotions, and to the risks of excessive dependence on AI. It is crucial to consider who takes what measures against such risks.

C) Reference to discussions of bioethics, etc. when linking AI systems with the human brain and body: When linking AI with the human brain and body, users may be required to take particular care that human dignity and individual autonomy are not violated, in light of discussions on bioethics, etc.

Published by Ministry of Internal Affairs and Communications (MIC), the Government of Japan in Draft AI Utilization Principles, Jul 17, 2018

· 4. SOLIDARITY PRINCIPLE

The development of AIS must be compatible with maintaining the bonds of solidarity among people and generations.

1) AIS must not threaten the preservation of fulfilling moral and emotional human relationships, and should be developed with the goal of fostering these relationships and reducing people's vulnerability and isolation.
2) AIS must be developed with the goal of collaborating with humans on complex tasks and should foster collaborative work between humans.
3) AIS should not be implemented to replace people in duties that require quality human relationships, but should be developed to facilitate these relationships.
4) Health care systems that use AIS must take into consideration the importance of a patient's relationships with family and health care staff.
5) AIS development should not encourage cruel behavior toward robots designed to resemble human beings or non-human animals in appearance or behavior.
6) AIS should help improve risk management and foster conditions for a society with a more equitable and mutual distribution of individual and collective risks.

Published by University of Montreal in The Montreal Declaration for a Responsible Development of Artificial Intelligence, Dec 4, 2018