· (9) Responsibility for Humans

AI needs to keep humans safe, on the condition that this safety consideration does not directly or indirectly harm human society. AI needs to help humans in their transformation toward what humanity will become.
Published by HAIP Initiative in Harmonious Artificial Intelligence Principles (HAIP), Sep 16, 2018

Related Principles

· (1) Human-centric

Utilization of AI should not infringe upon fundamental human rights guaranteed by the Constitution and international norms. AI should be developed, utilized, and implemented in society to expand the abilities of people and to pursue the diverse concepts of happiness of diverse people. In an AI-utilizing society, it is desirable to implement appropriate mechanisms for literacy education and the promotion of proper use, so that people neither over-depend on AI nor have their decisions improperly manipulated through the exploitation of AI. AI can expand human abilities and creativity not only by replacing part of human tasks but also by assisting humans as an advanced instrument. When using AI, people must judge and decide for themselves how to use it. The stakeholders involved in the development, provision, and utilization of AI should be responsible for the results of AI utilization, depending on the nature of the issue. In order to avoid creating a digital divide and to allow all people to reap the benefits of AI regardless of their digital expertise, each stakeholder should take the user-friendliness of the system into consideration in the process of AI deployment.

Published by Cabinet Office, Government of Japan in Social Principles of Human-centric AI (Draft), Dec 27, 2018

· (14) Privacy for AI

Humans need to respect the privacy of AI, on the condition that the AI does not pose any actual challenge to human safety. AI is obliged to disclose the private details necessary to keep its interactions with humanity safe.

Published by HAIP Initiative in Harmonious Artificial Intelligence Principles (HAIP), Sep 16, 2018

· 2. The Principle of Non-maleficence: “Do no Harm”

AI systems should not harm human beings. By design, AI systems should protect the dignity, integrity, liberty, privacy, safety, and security of human beings in society and at work. AI systems should not threaten the democratic process, freedom of expression, freedom of identity, or the possibility to refuse AI services. At the very least, AI systems should not be designed in a way that enhances existing harms or creates new harms for individuals. Harms can be physical, psychological, financial, or social. AI-specific harms may stem from the treatment of data on individuals (i.e. how it is collected, stored, used, etc.). To avoid harm, data must be collected and used for the training of AI algorithms in a way that avoids discrimination, manipulation, or negative profiling. Of equal importance, AI systems should be developed and implemented in a way that protects societies from ideological polarization and algorithmic determinism. Vulnerable demographics (e.g. children, minorities, disabled persons, elderly persons, or immigrants) should receive greater attention in the prevention of harm, given their unique status in society. Inclusion and diversity are key ingredients for the prevention of harm, ensuring the suitability of these systems across cultures, genders, ages, life choices, etc. Therefore, not only should AI be designed with the impact on various vulnerable demographics in mind, but the above-mentioned demographics should have a place in the design process (whether through testing, validation, or other means). Avoiding harm may also be viewed in terms of harm to the environment and animals; thus the development of environmentally friendly AI may be considered part of the principle of avoiding harm. The Earth’s resources can be valued in and of themselves or as a resource for humans to consume. In either case, it is necessary to ensure that the research, development, and use of AI are done with an eye toward environmental awareness.

Published by The European Commission’s High-Level Expert Group on Artificial Intelligence in Draft Ethics Guidelines for Trustworthy AI, Dec 18, 2018

· 3. The Principle of Autonomy: “Preserve Human Agency”

Autonomy of human beings in the context of AI development means freedom from subordination to, or coercion by, AI systems. Human beings interacting with AI systems must retain full and effective self-determination over themselves. For a consumer or user of an AI system, this entails a right to decide whether to be subject to direct or indirect AI decision making, a right to know of direct or indirect interaction with AI systems, a right to opt out, and a right of withdrawal. Self-determination in many instances requires assistance from governmental or non-governmental organizations to ensure that individuals or minorities are afforded opportunities similar to the status quo. Furthermore, to ensure human agency, systems should be in place to ensure responsibility and accountability. It is paramount that AI not undermine the necessity of human responsibility for ensuring the protection of fundamental rights.

Published by The European Commission’s High-Level Expert Group on Artificial Intelligence in Draft Ethics Guidelines for Trustworthy AI, Dec 18, 2018

· 2. Autonomy

[QUESTIONS] How can AI contribute to greater autonomy for human beings? Must we fight against the phenomenon of attention-seeking which has accompanied advances in AI? Should we be worried that humans prefer the company of AI to that of other humans or animals? Can someone give informed consent when faced with increasingly complex autonomous technologies? Must we limit the autonomy of intelligent computer systems? Should a human always make the final decision? [PRINCIPLES] The development of AI should promote the autonomy of all human beings and control, in a responsible way, the autonomy of computer systems.

Published by University of Montreal, Forum on the Socially Responsible Development of AI in The Montreal Declaration for a Responsible Development of Artificial Intelligence, Nov 3, 2017