· (20) Mutual Trust

Humans and AI need to develop and raise the level of trustworthiness between each other.
Principle: Harmonious Artificial Intelligence Principles (HAIP), Sep 16, 2018

Published by HAIP Initiative

Related Principles

· (1) Human centric

Utilization of AI should not infringe upon fundamental human rights that are guaranteed by the Constitution and international norms. AI should be developed, utilized, and implemented in society to expand the abilities of people and to pursue the diverse concepts of happiness of diverse people. In an AI-utilizing society, it is desirable that we implement appropriate mechanisms of literacy education and promotion of proper use, so that people neither over-depend on AI nor have their decisions improperly manipulated through the exploitation of AI. AI can expand human abilities and creativity not only by replacing part of human tasks but also by assisting humans as an advanced instrument. When using AI, people must judge and decide for themselves how to use it. Appropriate stakeholders involved in the development, provision, and utilization of AI should be responsible for the results of AI utilization, depending on the nature of the issue. In order to avoid creating a digital divide and to allow all people to reap the benefits of AI regardless of their digital expertise, each stakeholder should take the user-friendliness of the system into consideration in the process of AI deployment.

Published by Cabinet Office, Government of Japan in Social Principles of Human-centric AI, Dec 27, 2018

4

Reaching adequate safety levels for advanced AI will also require immense research progress. Advanced AI systems must be demonstrably aligned with their designers' intent, as well as with appropriate norms and values. They must also be robust against both malicious actors and rare failure modes. Sufficient human control needs to be ensured for these systems. Concerted effort by the global research community, in both AI and other disciplines, is essential; we need a global network of dedicated AI safety research and governance institutions. We call on leading AI developers to make a minimum spending commitment of one third of their AI R&D on AI safety, and for government agencies to fund academic and non-profit AI safety and governance research in at least the same proportion.

Published by IDAIS (International Dialogues on AI Safety) in IDAIS-Oxford, Oct 31, 2023

4 SOLIDARITY PRINCIPLE

The development of AIS must be compatible with maintaining the bonds of solidarity among people and generations. 1) AIS must not threaten the preservation of fulfilling moral and emotional human relationships, and should be developed with the goal of fostering these relationships and reducing people's vulnerability and isolation. 2) AIS must be developed with the goal of collaborating with humans on complex tasks and should foster collaborative work between humans. 3) AIS should not be implemented to replace people in duties that require quality human relationships, but should be developed to facilitate these relationships. 4) Health care systems that use AIS must take into consideration the importance of a patient's relationships with family and health care staff. 5) AIS development should not encourage cruel behavior toward robots designed to resemble human beings or non-human animals in appearance or behavior. 6) AIS should help improve risk management and foster conditions for a society with a more equitable and mutual distribution of individual and collective risks.

Published by University of Montreal in The Montreal Declaration for a Responsible Development of Artificial Intelligence, Dec 4, 2018

· We will give AI systems human values and make them beneficial to society

1. Government will support the research of the beneficial use of AI 2. AI should be developed to align with human values and contribute to human flourishing 3. Stakeholders throughout society should be involved in the development of AI and its governance

Published by Smart Dubai in Dubai's AI Principles, Jan 08, 2019

· ⑩ Transparency

In order to build social trust, efforts should be made, while taking into account possible conflicts with other principles, to improve the transparency and explainability of AI to a level suitable for the use cases of the AI system. When providing AI-powered products or services, the AI provider should inform users in advance about what the AI does and what risks may arise during its use.

Published by The Ministry of Science and ICT (MSIT) and the Korea Information Society Development Institute (KISDI) in National AI Ethical Guidelines, Dec 23, 2020