2. Principle of transparency

Developers should pay attention to the verifiability of the inputs and outputs of AI systems and the explainability of their judgments.

[Comment] The AI systems subject to this principle are those that might affect the life, body, freedom, privacy, or property of users or third parties. It is desirable that developers pay attention to the verifiability of the inputs and outputs of AI systems, as well as the explainability of their judgments, within a reasonable scope in light of the characteristics of the technologies to be adopted and their use, so as to obtain the understanding and trust of society, including the users of AI systems.

[Note] This principle is not intended to ask developers to disclose algorithms, source code, or learning data. In interpreting this principle, consideration for privacy and trade secrets is also required.
Principle: AI R&D Principles, Jul 28, 2017

Published by Ministry of Internal Affairs and Communications (MIC), the Government of Japan

Related Principles

IV. Transparency

The traceability of AI systems should be ensured; it is important to log and document both the decisions made by the systems and the entire process (including a description of data gathering and labelling, and a description of the algorithm used) that yielded those decisions. Linked to this, explainability of the algorithmic decision-making process, adapted to the persons involved, should be provided to the extent possible. Ongoing research to develop explainability mechanisms should be pursued. In addition, explanations of the degree to which an AI system influences and shapes the organisational decision-making process, the design choices of the system, and the rationale for deploying it should be available (hence ensuring not just data and system transparency, but also business-model transparency). Finally, it is important to adequately communicate the AI system’s capabilities and limitations to the different stakeholders involved, in a manner appropriate to the use case at hand. Moreover, AI systems should be identifiable as such, ensuring that users know they are interacting with an AI system and which persons are responsible for it.
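The logging and documentation of decisions called for above can be sketched as a minimal audit-log helper. This is only an illustrative assumption of what such a record might contain — the field names, the `model_version` label, and the JSON Lines format are not prescribed by the requirement itself:

```python
import json
import time
import uuid

def log_decision(log_path, model_version, inputs, output, rationale=None):
    """Append one AI decision record to an append-only audit log
    (JSON Lines: one JSON object per line), so that each decision
    remains traceable to its inputs and the model that produced it."""
    record = {
        "id": str(uuid.uuid4()),          # unique id for later reference
        "timestamp": time.time(),          # when the decision was made
        "model_version": model_version,    # which system/version decided
        "inputs": inputs,                  # what the system was given
        "output": output,                  # what it decided
        "rationale": rationale,            # e.g. top features or rule fired
    }
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
    return record["id"]
```

In practice such a log would also need retention, access-control, and privacy safeguards, in line with the note that transparency must not override privacy or trade secrets.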

Published by European Commission in Key requirements for trustworthy AI, Apr 8, 2019

4. Principle of safety

Developers should take into consideration that AI systems will not harm the life, body, or property of users or third parties through actuators or other devices.

[Comment] The AI systems subject to this principle are those that might harm the life, body, or property of users or third parties through actuators or other devices. Developers are encouraged to refer to relevant international standards and to pay attention to the following, with particular consideration of the possibility that outputs or programs might change as a result of learning or other methods of AI systems:

● To make efforts to conduct verification and validation in advance, in order to assess and mitigate the risks related to the safety of the AI systems.
● To make efforts to implement measures, throughout the development stage of AI systems and to the extent possible in light of the characteristics of the technologies to be adopted, that contribute to intrinsic safety (reduction of essential risk factors such as the kinetic energy of actuators) and functional safety (mitigation of risks by operation of additional control devices such as automatic braking) when AI systems work with actuators or other devices.
● To make efforts to explain the designers’ intent of AI systems, and the reasons for it, to stakeholders such as users, when developing AI systems to be used for making judgments regarding the safety of the life, body, or property of users and third parties (for example, judgments that prioritize the life, body, or property to be protected at the time of an accident involving a robot equipped with AI).
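The distinction the comment draws between intrinsic safety and functional safety can be illustrated with a toy control guard: a speed cap limits the actuator's kinetic energy (intrinsic), and an automatic-braking override sits on top of the AI controller (functional). All names, units, and thresholds here are illustrative assumptions, not anything the principle specifies:

```python
def safety_guard(commanded_speed, obstacle_distance_m,
                 max_speed=1.0, stop_distance_m=0.5):
    """Two-layer safety guard around an AI controller's speed command.

    Intrinsic safety: clamp the command to a hard speed limit,
    reducing the actuator's maximum kinetic energy by design.
    Functional safety: an additional control rule (automatic braking)
    overrides the AI's command when an obstacle is too close.
    """
    # Intrinsic safety: the actuator physically never exceeds max_speed.
    speed = max(-max_speed, min(max_speed, commanded_speed))
    # Functional safety: braking overrides whatever the AI commanded.
    if obstacle_distance_m < stop_distance_m:
        return 0.0
    return speed
```

The point of the layering is that the guard's behaviour stays fixed even if the learned controller's outputs change over time through further learning, which is exactly the possibility the comment highlights.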

Published by Ministry of Internal Affairs and Communications (MIC), the Government of Japan in AI R&D Principles, Jul 28, 2017

7. Principle of ethics

Developers should respect human dignity and individual autonomy in the R&D of AI systems.

[Comment] When developing AI systems that link with the human brain and body, developers are encouraged to pay particularly due consideration to respecting human dignity and individual autonomy, in light of discussions on bioethics, etc. Developers are also encouraged, to the extent possible in light of the characteristics of the technologies to be adopted, to make efforts to take necessary measures so as not to cause unfair discrimination resulting from prejudice included in the learning data of the AI systems. It is advisable that developers take precautions to ensure that AI systems do not unduly infringe on the value of humanity, based on International Human Rights Law and International Humanitarian Law.

Published by Ministry of Internal Affairs and Communications (MIC), the Government of Japan in AI R&D Principles, Jul 28, 2017

8. Principle of fairness

AI service providers, business users, and data providers should take into consideration that individuals will not be discriminated against unfairly by the judgments of AI systems or AI services.

[Main points to discuss]

A) Attention to the representativeness of data used for learning or other methods of AI
AI service providers, business users, and data providers may be expected to pay attention to the representativeness of data used for learning or other methods of AI, and to the social bias inherent in the data, so that individuals are not unfairly discriminated against due to their race, religion, gender, etc. as a result of the judgment of AI. In light of the characteristics of the technologies to be used and their usage, in what cases and to what extent is attention expected to be paid to the representativeness of the data and the social bias inherent in it? Note: Representativeness of data means that the data sampled and used do not distort the propensity of the population of data.

B) Attention to unfair discrimination by algorithms
AI service providers and business users may be expected to pay attention to the possibility that individuals may be unfairly discriminated against due to their race, religion, gender, etc. by the algorithm of an AI.

C) Human intervention
Regarding judgments made by AI, AI service providers and business users may be expected to decide whether and how to use those judgments, with consideration of social contexts and the reasonable expectations of people in the utilization of AI, so that individuals are not unfairly discriminated against due to their race, religion, gender, etc. In light of the characteristics of the technologies to be used and their usage, in what cases and to what extent is human intervention expected?
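The representativeness check described in point A) can be sketched as a toy comparison of group shares in a training sample against known population shares. The function name, the `tolerance` threshold, and the idea of reducing representativeness to share-of-group proportions are all illustrative assumptions; real bias auditing is considerably more involved:

```python
from collections import Counter

def representativeness_gaps(sample_groups, population_shares, tolerance=0.05):
    """Compare each group's share of the training sample against its
    share of the population, and flag groups whose over- or
    under-representation exceeds `tolerance`."""
    counts = Counter(sample_groups)
    total = sum(counts.values())
    gaps = {}
    for group, pop_share in population_shares.items():
        sample_share = counts.get(group, 0) / total
        if abs(sample_share - pop_share) > tolerance:
            # Positive gap: over-represented; negative: under-represented.
            gaps[group] = round(sample_share - pop_share, 3)
    return gaps
```

A flagged gap does not by itself establish unfair discrimination, but it is the kind of signal that would prompt the attention and human intervention described in points A) and C).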

Published by Ministry of Internal Affairs and Communications (MIC), the Government of Japan in Draft AI Utilization Principles, Jul 17, 2018

9. Principle of transparency

AI service providers and business users should pay attention to the verifiability of the inputs and outputs of AI systems or AI services and the explainability of their judgments. Note: This principle is not intended to ask for the disclosure of algorithms, source code, or learning data. In interpreting this principle, the privacy of individuals and the trade secrets of enterprises are also taken into account.

[Main points to discuss]

A) Recording and preserving the inputs and outputs of AI
In order to ensure the verifiability of the inputs and outputs of AI, AI service providers and business users may be expected to record and preserve them. In light of the characteristics of the technologies to be used and their usage, in what cases and to what extent are the inputs and outputs expected to be recorded and preserved? For example, when AI is used in fields where AI systems might harm life, body, or property, such as autonomous driving, the inputs and outputs of AI may be expected to be recorded and preserved to the extent necessary for investigating the causes of accidents and preventing their recurrence.

B) Ensuring explainability
AI service providers and business users may be expected to ensure the explainability of the judgments of AI. In light of the characteristics of the technologies to be used and their usage, in what cases and to what extent is explainability expected to be ensured? Especially when AI is used in fields where its judgments might have significant influence on individual rights and interests, such as medical care, personnel evaluation and recruitment, and financing, explainability of the judgments of AI may be expected to be ensured. (For example, attention must be paid to the current situation in which deep learning achieves high prediction accuracy but its judgments are difficult to explain.)
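One simple form of the explainability sought in point B) is to report, alongside a judgment, how much each input feature contributed to it. This is only possible so directly for models whose score decomposes additively, such as the linear scorer sketched below; the model, feature names, and threshold are assumed for illustration, and, as the parenthetical note observes, deep-learning models generally do not decompose this cleanly:

```python
def explain_linear_judgment(weights, features, threshold=0.0):
    """For a linear scoring model (score = sum of weight * value),
    return the judgment together with each feature's contribution,
    sorted by magnitude, so the decision can be explained to the
    person it affects."""
    contributions = {
        name: weights[name] * value for name, value in features.items()
    }
    score = sum(contributions.values())
    return {
        "judgment": "approve" if score >= threshold else "deny",
        "score": score,
        # Largest-magnitude contributions first: the main reasons.
        "contributions": dict(
            sorted(contributions.items(), key=lambda kv: -abs(kv[1]))
        ),
    }
```

Note that such an explanation reveals only how the judgment was reached, not the training data or source code, which is consistent with the principle's note that disclosure of those is not being asked for.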

Published by Ministry of Internal Affairs and Communications (MIC), the Government of Japan in Draft AI Utilization Principles, Jul 17, 2018