Use

The use of AI should observe the following principles:
Principle: Beijing AI Principles, May 25, 2019

Published by Beijing Academy of Artificial Intelligence (BAAI); Peking University; Tsinghua University; Institute of Automation, Chinese Academy of Sciences; Institute of Computing Technology, Chinese Academy of Sciences; Artificial Intelligence Industry Innovation Strategy Alliance (AITISA); etc.

Related Principles

(1) Human-centric

Utilization of AI should not infringe upon fundamental human rights guaranteed by the Constitution and international norms. AI should be developed, utilized, and implemented in society to expand people's abilities and to allow diverse people to pursue their own diverse conceptions of happiness. In an AI-utilizing society, it is desirable to implement appropriate mechanisms for literacy education and the promotion of proper use, so that people neither over-depend on AI nor have their decisions improperly manipulated through the exploitation of AI. AI can expand human abilities and creativity not only by replacing part of human tasks but also by assisting humans as an advanced instrument. When using AI, people must judge and decide for themselves how to use it. Appropriate stakeholders involved in the development, provision, and utilization of AI should be responsible for the results of its utilization, depending on the nature of the issue. In order to avoid creating a digital divide and to allow all people to reap the benefits of AI regardless of their digital expertise, each stakeholder should take the user-friendliness of the system into consideration in the process of AI deployment.

Published by Cabinet Office, Government of Japan in Social Principles of Human-centric AI (Draft), Dec 27, 2018

(6) Fairness, Accountability, and Transparency

Under the "AI-Ready Society," when AI is used, fair and transparent decision-making and accountability for its results should be appropriately ensured, and trust in the technology should be secured, so that people using AI are not discriminated against on the grounds of their background or treated unjustly in light of human dignity. Under the AI design concept, all people must be treated fairly, without unjustified discrimination on the grounds of diverse backgrounds such as race, sex, nationality, age, political beliefs, or religion. Appropriate explanations should be provided according to the situation in which AI is used, such as the fact that AI is being used, the method of obtaining and using the data employed in the AI, and the mechanisms that ensure the appropriateness of the AI's operational results. So that people can understand and judge AI proposals, there should be appropriate opportunities for open dialogue on the use, adoption, and operation of AI, as needed. To ensure the above viewpoints and to utilize AI safely in society, a mechanism must be established to secure trust in AI and the data it uses.

Published by Cabinet Office, Government of Japan in Social Principles of Human-centric AI (Draft), Dec 27, 2018

(Preamble)

We reaffirm that the use of AI must take place within the context of the existing DoD ethical framework. Building on this foundation, we propose the following principles, which are more specific to AI, and note that they apply to both combat and non-combat systems. AI is a rapidly developing field, and no organization that currently develops or fields AI systems or espouses AI ethics principles can claim to have solved all the challenges embedded in the following principles. However, the Department should set the goal that its use of AI systems is:

Published by Defense Innovation Board (DIB), Department of Defense (DoD), United States in AI Ethics Principles for DoD, Oct 31, 2019

2. Artificial intelligence should operate on principles of intelligibility and fairness.

Companies and organisations need to improve the intelligibility of their AI systems. Without this, regulators may need to step in and prohibit the use of opaque technology in significant and sensitive areas of life and society. To ensure that our use of AI does not inadvertently prejudice the treatment of particular groups in society, we call for the Government to incentivise the development of new approaches to the auditing of datasets used in AI, and to encourage greater diversity in the training and recruitment of AI specialists.

Published by House of Lords, Select Committee on Artificial Intelligence in AI Code, Apr 16, 2018

1. Principle of proper utilization

Users should make efforts to utilize AI systems or AI services in a proper scope and manner, under a proper assignment of roles between humans and AI systems, or among users.

[Main points to discuss]

A) Utilization in the proper scope and manner
On the basis of information and explanations provided by developers, etc., and with consideration of social contexts and circumstances, users may be expected to use AI in the proper scope and manner. In addition, users may be expected to recognize benefits and risks, understand proper uses, and acquire the necessary knowledge and skills before using AI, according to the characteristics, usage situations, etc. of the AI. Furthermore, users may be expected to check regularly whether they are using AI in an appropriate scope and manner.

B) Proper balance of benefits and risks of AI
AI service providers and business users may be expected to take into consideration the proper balance between the benefits and risks of AI, including consideration of the active use of AI for productivity and work-efficiency improvements, after appropriately assessing the risks of AI.

C) Updates of AI software and inspections, repairs, etc. of AI
Through the process of utilization, users may be expected to make efforts to update AI software and to perform inspections, repairs, etc. of AI in order to improve its functions and to mitigate risks.

D) Human intervention
Regarding judgments made by AI, in cases where it is necessary and possible (e.g., medical care using AI), humans may be expected to decide whether and how to use the judgments of AI. In those cases, what can be considered criteria for the necessity of human intervention? In the utilization of AI that operates through actuators, etc., where a shift to human operation is planned under certain conditions, what kinds of matters should be paid attention to?

[Points of view as criteria (example)]
• The nature of the rights and interests of indirect users, et al., and their intents, affected by the judgments of AI.
• The degree of reliability of the judgment of AI (compared with the reliability of human judgment).
• The allowable time necessary for human judgment.
• The abilities users are expected to possess.

E) Role assignments among users
With consideration of the capabilities and knowledge of AI that each user is expected to have, and the ease of implementing necessary measures, users may be expected to play the roles that seem appropriate and also to bear the corresponding responsibility.

F) Cooperation among stakeholders
Users and data providers may be expected to cooperate with stakeholders and to work on preventive or remedial measures (including information sharing, stopping and restoration of AI, elucidation of causes, measures to prevent recurrence, etc.) in accordance with the nature, conditions, etc. of damages caused by accidents, security breaches, privacy infringements, etc. that may occur in the future or have occurred through the use of AI. What is reasonably expected from a user's point of view to ensure the effectiveness of the above?

Published by Ministry of Internal Affairs and Communications (MIC), the Government of Japan in Draft AI Utilization Principles, Jul 17, 2018