10. Principle of accountability

AI service providers and business users should make efforts to fulfill their accountability to stakeholders, including consumer users and indirect users.

[Main points to discuss]

A) Efforts to fulfill accountability
In light of the characteristics and purpose of the AI to be used, AI service providers and business users may be expected to make efforts to establish appropriate accountability to consumer users, indirect users, and third parties affected by the use of AI, in order to gain sufficient trust in AI from people and society.

B) Notification and publication of usage policy on AI systems or AI services
AI service providers and business users may be expected to notify or announce their usage policy on AI (the fact that they provide AI services, the scope and manner of proper AI utilization, the risks associated with the utilization, and the establishment of a consultation desk) so that consumer users and indirect users can properly recognize that AI is being used. In light of the characteristics of the technologies to be used and their usage, we have to focus on the cases in which the usage policy is expected to be notified or announced, as well as on what content is expected to be included in the usage policy.
Principle: Draft AI Utilization Principles, Jul 17, 2018

Published by Ministry of Internal Affairs and Communications (MIC), the Government of Japan

Related Principles

1. Principle of proper utilization

Users should make efforts to utilize AI systems or AI services in a proper scope and manner, under the proper assignment of roles between humans and AI systems, or among users.

[Main points to discuss]

A) Utilization in the proper scope and manner
On the basis of the information and explanations provided by developers, etc., and with consideration of social contexts and circumstances, users may be expected to use AI in the proper scope and manner. In addition, users may be expected to recognize benefits and risks, understand proper uses, and acquire the necessary knowledge and skills before using AI, according to the characteristics, usage situations, etc. of the AI. Furthermore, users may be expected to check regularly whether they are using AI in an appropriate scope and manner.

B) Proper balance of benefits and risks of AI
AI service providers and business users may be expected to take into consideration a proper balance between the benefits and risks of AI, including the active use of AI for productivity and work-efficiency improvements, after appropriately assessing the risks of AI.

C) Updates of AI software and inspections, repairs, etc. of AI
Through the process of utilization, users may be expected to make efforts to update AI software and perform inspections, repairs, etc. of AI in order to improve its functions and to mitigate risks.

D) Human intervention
Regarding judgments made by AI, in cases where it is necessary and possible (e.g., medical care using AI), humans may be expected to decide whether to use the judgments of AI, how to use them, etc. In those cases, what can be considered as criteria for the necessity of human intervention? In the utilization of AI that operates through actuators, etc., where a shift to human operation under certain conditions is planned, what kinds of matters are expected to be paid attention to?

[Points of view as criteria (example)]
• The nature of the rights and interests of indirect users and others affected by the judgments of AI, and their intents.
• The degree of reliability of the judgments of AI (compared with the reliability of human judgment).
• The time allowable for human judgment.
• The abilities users are expected to possess.

E) Role assignments among users
With consideration of the capabilities and knowledge of AI that each user is expected to have, and the ease of implementing the necessary measures, users may be expected to play the roles that seem appropriate and to bear the corresponding responsibility.

F) Cooperation among stakeholders
Users and data providers may be expected to cooperate with stakeholders and to work on preventive or remedial measures (including information sharing, stopping and restoring AI, elucidating causes, and measures to prevent recurrence) in accordance with the nature, conditions, etc. of damage caused by accidents, security breaches, privacy infringements, etc. that may occur, or have occurred, through the use of AI. What can reasonably be expected from a user's point of view to ensure the effectiveness of the above?

Published by Ministry of Internal Affairs and Communications (MIC), the Government of Japan in Draft AI Utilization Principles, Jul 17, 2018

6. Principle of privacy

Users and data providers should take into consideration that the utilization of AI systems or AI services does not infringe on the privacy of users or others.

[Main points to discuss]

A) Respect for the privacy of others
With consideration of social contexts and people's reasonable expectations regarding the utilization of AI, users may be expected to respect the privacy of others in the utilization of AI. In addition, users may be expected to consider in advance the measures to be taken against privacy infringements caused by AI.

B) Respect for the privacy of others in the collection, analysis, provision, etc. of personal data
Users and data providers may be expected to respect the privacy of others in the collection, analysis, provision, etc. of personal data used for the learning or other methods of AI.

C) Consideration for the privacy, etc. of the subject of profiling which uses AI
In the case of profiling by using AI in fields where the judgments of AI might have significant influences on individual rights and interests, such as personnel evaluation, recruitment, and financing, AI service providers and business users may be expected to pay due consideration to the privacy, etc. of the subject of profiling.

D) Attention to the infringement of the privacy of users or others
Consumer users may be expected to take care not to carelessly give highly confidential information (including information on others as well as on users themselves) to AI, for example by empathizing excessively with AI such as pet robots.

E) Prevention of personal data leakage
AI service providers, business users, and data providers may be expected to take appropriate measures so that personal data is not provided to third parties by the judgments of AI without the consent of the person concerned.

Published by Ministry of Internal Affairs and Communications (MIC), the Government of Japan in Draft AI Utilization Principles, Jul 17, 2018

9. Principle of transparency

AI service providers and business users should pay attention to the verifiability of the inputs and outputs of AI systems or AI services and to the explainability of their judgments.

Note: This principle is not intended to ask for the disclosure of algorithms, source code, or learning data. In interpreting this principle, the privacy of individuals and the trade secrets of enterprises are also taken into account.

[Main points to discuss]

A) Recording and preserving the inputs and outputs of AI
In order to ensure the verifiability of the inputs and outputs of AI, AI service providers and business users may be expected to record and preserve them. In light of the characteristics of the technologies to be used and their usage, in what cases and to what extent are the inputs and outputs expected to be recorded and preserved? For example, when AI is used in fields where AI systems might harm life, body, or property, such as autonomous driving, the inputs and outputs of AI may be expected to be recorded and preserved to the extent necessary for investigating the causes of accidents and preventing their recurrence.

B) Ensuring explainability
AI service providers and business users may be expected to ensure the explainability of the judgments of AI. In light of the characteristics of the technologies to be used and their usage, in what cases and to what extent is explainability expected to be ensured? Especially when AI is used in fields where its judgments might have significant influences on individual rights and interests, such as medical care, personnel evaluation, recruitment, and financing, the explainability of the judgments of AI may be expected to be ensured. (For example, we have to pay attention to the current situation in which deep learning achieves high prediction accuracy but its judgments are difficult to explain.)
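As a concrete illustration of point A), the sketch below shows one possible way a provider might record and preserve AI inputs and outputs in an append-only log for later verification. It is a minimal example only: the log file name, record fields, and the use of SHA-256 digests for tamper evidence are assumptions for illustration, not requirements stated in the MIC principles.

```python
import hashlib
import json
import time

# Hypothetical append-only audit log; one JSON record per line.
AUDIT_LOG_PATH = "ai_io_audit.jsonl"

def record_ai_interaction(model_id: str, inputs: str, outputs: str) -> None:
    """Append a timestamped record of one AI input/output pair.

    Digests are stored alongside the raw text so that a later audit can
    detect alteration of the preserved inputs and outputs.
    """
    record = {
        "timestamp": time.time(),
        "model_id": model_id,
        "input": inputs,
        "input_sha256": hashlib.sha256(inputs.encode("utf-8")).hexdigest(),
        "output": outputs,
        "output_sha256": hashlib.sha256(outputs.encode("utf-8")).hexdigest(),
    }
    with open(AUDIT_LOG_PATH, "a", encoding="utf-8") as f:
        f.write(json.dumps(record, ensure_ascii=False) + "\n")

# Example: log a single (hypothetical) interaction.
record_ai_interaction("demo-model-v1", "example input ...", "example output ...")
```

How much is logged, for how long, and with what access controls would depend on the field of use (e.g., autonomous driving versus low-stakes applications), which is exactly the question the principle leaves open for discussion.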

Published by Ministry of Internal Affairs and Communications (MIC), the Government of Japan in Draft AI Utilization Principles, Jul 17, 2018

2. RESPONSIBILITY MUST BE FULLY ACKNOWLEDGED WHEN CREATING AND USING AI

2.1. Risk-based approach. The degree of attention paid to ethical AI issues and the nature of the relevant actions of AI Actors should be proportional to the assessment of the level of risk posed by specific AI technologies and systems to the interests of individuals and society. Risk level assessment shall take into account both known and possible risks, considering the probability of threats as well as their possible scale in the short and long term. Decisions in the field of AI use that significantly affect society and the state should be accompanied by a scientifically verified, interdisciplinary forecast of socio-economic consequences and risks, and by an examination of possible changes in the paradigm of value and cultural development of the society. The development and use of a risk assessment methodology for AI systems are encouraged in pursuance of this Code.

2.2. Responsible attitude. AI Actors should treat responsibly:
• issues related to the influence of AI systems on society and citizens at every stage of the AI systems' life cycle, including privacy and the ethical, safe, and responsible use of personal data;
• the nature, degree, and extent of damage that may result from the use of AI technologies and systems;
• the selection and use of hardware and software utilized in the different life cycles of AI systems.
At the same time, the responsibility of AI Actors should correspond to the nature, degree, and extent of damage that may occur as a result of the use of AI technologies and systems. The role in the life cycle of the AI system, as well as the degree of possible and real influence of a particular AI Actor on causing damage and its extent, should also be taken into account.

2.3. Precautions. When the activities of AI Actors can lead to morally unacceptable consequences for individuals and society that can reasonably be predicted by the relevant AI Actor, the latter should take measures to prohibit or limit the occurrence of such consequences. AI Actors shall use the provisions of this Code, including the mechanisms specified in Section 2, to assess the moral unacceptability of such consequences and to discuss possible preventive measures.

2.4. No harm. AI Actors should not allow the use of AI technologies for the purpose of causing harm to human life and/or health, the property of citizens and legal entities, or the environment. Any use, including the design, development, testing, integration, or operation of an AI system capable of purposefully causing harm to the environment, human life and/or health, or the property of citizens and legal entities, is prohibited.

2.5. Identification of AI in communication with a human. AI Actors are encouraged to ensure that users are duly informed of their interactions with AI systems when this affects human rights and critical areas of people's lives, and to ensure that such interaction can be terminated at the request of the user.

2.6. Data security. AI Actors must comply with national legislation in the field of personal data and secrets protected by law when using AI systems; ensure the security and protection of personal data processed by AI systems or by AI Actors in order to develop and improve the AI systems; develop and integrate innovative methods to counter unauthorized access to personal data by third parties; and use high-quality and representative datasets obtained lawfully from reliable sources.

2.7. Information security. AI Actors should ensure the maximum possible protection against unauthorized interference by third parties in the operation of AI systems; integrate adequate information security technologies, including internal mechanisms designed to protect the AI system from unauthorized interventions and to inform users and developers about such interventions; and promote the informing of users about the rules of information security during the use of AI systems.

2.8. Voluntary certification and Code compliance. AI Actors may implement voluntary certification systems to assess the compliance of developed AI technologies with the standards established by national legislation and this Code. AI Actors may create voluntary certification and labeling systems for AI systems to indicate that these systems have passed voluntary certification procedures and conform to quality standards.

2.9. Control of the recursive self-improvement of AI systems. AI Actors are encouraged to cooperate in identifying and verifying information about the ways and forms of designing so-called universal ("general") AI systems, and in preventing the possible threats they carry. The issues concerning the use of "general" AI technologies should be under the control of the state.

Published by AI Alliance Russia in AI Ethics Code (revised version), Oct 21, 2022 (unconfirmed)

⑩ Transparency

In order to build social trust, efforts should be made, while taking into account possible conflicts with other principles, to improve the transparency and explainability of AI to a level suitable for the use cases of the AI system. When providing AI-powered products or services, the AI provider should inform users in advance about what the AI does and what risks may arise during its use.

Published by The Ministry of Science and ICT (MSIT) and the Korea Information Society Development Institute (KISDI) in National AI Ethical Guidelines, Dec 23, 2020