3. Principle of controllability

Developers should pay attention to the controllability of AI systems.

[Comment] In order to assess the risks related to the controllability of AI systems, developers are encouraged to make efforts to conduct verification and validation in advance. One conceivable method of risk assessment is to conduct experiments in a closed space, such as a laboratory or a sandbox in which security is ensured, at a stage before practical application in society. In addition, in order to ensure the controllability of AI systems, developers are encouraged to pay attention to whether supervision (such as monitoring or warnings) and countermeasures (such as system shutdown, cut-off from networks, or repairs) by humans or other trustworthy AI systems are effective, to the extent possible in light of the characteristics of the technologies to be adopted.

[Note] Verification and validation are methods for evaluating and controlling risks in advance. Generally, the former is used for confirming formal consistency, while the latter is used for confirming substantial validity. (See, e.g., The Future of Life Institute (FLI), Research Priorities for Robust and Beneficial Artificial Intelligence (2015).)

[Note] Examples of what to examine in the risk assessment are risks of reward hacking, in which AI systems formally achieve the goals assigned to them but substantially do not meet the developers' intents, and risks that AI systems work in ways the developers have not intended because their outputs and programs change in the course of utilization as they learn. For reward hacking, see, e.g., Dario Amodei, Chris Olah, Jacob Steinhardt, Paul Christiano, John Schulman & Dan Mané, Concrete Problems in AI Safety, arXiv:1606.06565 [cs.AI] (2016).
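As an informal illustration of the supervision and countermeasures mentioned in the comment above, a monitor can watch a system's outputs, warn when they drift outside an expected range, and shut the system down if they exceed a hard limit. The Python sketch below is purely illustrative and is not part of the principle; the class name, thresholds, and the toy "model" are assumptions.

```python
# Minimal, hypothetical sketch of supervision (monitoring/warnings) and a
# countermeasure (shutdown) around an AI system whose behaviour may drift.
# All names, thresholds, and the toy model are illustrative assumptions.

import random

class MonitoredAISystem:
    def __init__(self, warn_threshold=0.8, shutdown_threshold=0.95):
        self.warn_threshold = warn_threshold          # level at which a warning is raised
        self.shutdown_threshold = shutdown_threshold  # level at which the system is stopped
        self.running = True

    def model_output(self, x):
        # Stand-in for an AI system whose outputs may change as it "learns".
        return min(1.0, x + random.uniform(0.0, 0.3))

    def step(self, x):
        if not self.running:
            raise RuntimeError("System has been shut down by the supervisor.")
        risk_score = self.model_output(x)
        # Supervision: monitoring and warnings.
        if risk_score >= self.warn_threshold:
            print(f"WARNING: risk score {risk_score:.2f} exceeds {self.warn_threshold}")
        # Countermeasure: stop the system before harm can occur.
        if risk_score >= self.shutdown_threshold:
            self.running = False
            print("COUNTERMEASURE: system shut down and disconnected.")
        return risk_score

if __name__ == "__main__":
    system = MonitoredAISystem()
    for i in range(10):
        if not system.running:
            break
        system.step(x=i / 10)
```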
Principle: AI R&D Principles, Jul 28, 2017

Published by Ministry of Internal Affairs and Communications (MIC), the Government of Japan

Related Principles

1. Transparency and Explainability

Transparency refers to providing disclosure on when an AI system is being used and the involvement of an AI system in decision-making, what kind of data it uses, and its purpose. By disclosing to individuals that AI is used in the system, individuals will become aware and can make an informed choice of whether to use the AI-enabled system. Explainability is the ability to communicate the reasoning behind an AI system’s decision in a way that is understandable to a range of people, as it is not always clear how an AI system has arrived at a conclusion. This allows individuals to know the factors contributing to the AI system’s recommendation. In order to build public trust in AI, it is important to ensure that users are aware of the use of AI technology and understand how information from their interaction is used and how the AI system makes its decisions using the information provided.

In line with the principle of transparency, deployers have a responsibility to clearly disclose the implementation of an AI system to stakeholders and foster general awareness of the AI system being used. With the increasing use of AI in many businesses and industries, the public is becoming more aware of and interested in knowing when they are interacting with AI systems. Knowing when and how AI systems interact with users is also important in helping users discern the potential harm of interacting with an AI system that is not behaving as intended. In the past, AI algorithms have been found to discriminate against female job applicants and have failed to accurately recognise the faces of dark-skinned women. It is important for users to be aware of the expected behaviour of AI systems so they can make more informed decisions about the potential harm of interacting with them. An example of transparency in an AI-enabled e-commerce platform is informing users that their purchase history is used by the platform’s recommendation algorithm to identify similar products and display them on the users’ feeds.

In line with the principle of explainability, developers and deployers designing, developing, and deploying AI systems should also strive to foster general understanding among users of how such systems work, with simple and easy-to-understand explanations of how the AI system makes decisions. Understanding how AI systems work will help humans know when to trust their decisions. Explanations can have varying degrees of complexity, ranging from a simple text explanation of which factors most significantly affected the decision-making process to displaying a heatmap over the relevant text or over the area of an image that led to the system’s decision. For example, when an AI system is used to predict the likelihood of cardiac arrest in patients, explainability can be implemented by informing medical professionals of the most significant factors (e.g., age, blood pressure, etc.) that influenced the AI system’s decision so that they can subsequently make informed decisions on their own.

Where “black box” models are deployed, rendering it difficult, if not impossible, to provide explanations as to the workings of the AI system, outcome-based explanations, with a focus on explaining the impact of decision-making or results flowing from the AI system, may be relied on. Alternatively, deployers may also consider focusing on aspects relating to the quality of the AI system or preparing information that could build user confidence in the outcomes of an AI system’s processing behaviour.
Some of these measures are:
• Documenting the repeatability of results produced by the AI system. Some practices to demonstrate repeatability include conducting repeatability assessments to ensure deployments in live environments are repeatable and performing counterfactual fairness testing to ensure that the AI system’s decisions are the same in both the real world and in the counterfactual world. Repeatability refers to the ability of the system to consistently obtain the same results, given the same scenario. Repeatability often applies within the same environment, with the same data and the same computational conditions.
• Ensuring traceability by building an audit trail to document the AI system development and decision-making process, implementing a black box recorder that captures all input data streams, or storing data appropriately to avoid degradation and alteration.
• Facilitating auditability by keeping a comprehensive record of data provenance, procurement, pre-processing, lineage, storage, and security. Such information can also be centralised digitally in a process log to increase the capacity to cater the presentation of results to different tiers of stakeholders with different interests and levels of expertise. Deployers should, however, note that auditability does not necessarily entail making certain confidential information about business models or intellectual property related to the AI system publicly available. A risk-based approach can be taken towards identifying the subset of AI-enabled features in the AI system for which implementing auditability is necessary to align with regulatory requirements or industry practices.
• Using AI Model Cards, which are short documents accompanying trained machine learning models that disclose the context in which models are intended to be used, details of the performance evaluation procedures, and other relevant information (a minimal sketch follows below).

In cases where AI systems are procured directly from developers, deployers will have to work together with these developers to achieve transparency. More on this will be covered in later sections of the Guide.
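To make the last measure concrete, a model card can be represented as a small structured record that travels with the trained model. The following Python sketch is a minimal, hypothetical example; the field names, values, and schema are assumptions for illustration only and are not prescribed by the Guide.

```python
# Minimal, hypothetical sketch of an AI Model Card as a structured record.
# Field names and values are illustrative assumptions, not a prescribed schema.

from dataclasses import dataclass, field, asdict
import json

@dataclass
class ModelCard:
    model_name: str
    intended_use: str            # context in which the model is meant to be used
    out_of_scope_use: str        # uses the model was not designed or evaluated for
    evaluation_procedure: str    # how performance was measured
    metrics: dict = field(default_factory=dict)
    caveats: list = field(default_factory=list)

card = ModelCard(
    model_name="product-recommender-v1",
    intended_use="Ranking similar products for logged-in users of an e-commerce feed.",
    out_of_scope_use="Credit, employment, or other high-stakes decisions.",
    evaluation_procedure="Offline top-10 accuracy on a held-out purchase-history split.",
    metrics={"top10_accuracy": 0.82},
    caveats=["Performance was not evaluated for newly registered users."],
)

# The card can be serialised and shipped alongside the trained model artefact.
print(json.dumps(asdict(card), indent=2))
```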

Published by ASEAN in ASEAN Guide on AI Governance and Ethics, 2024

4. Principle of safety

Developers should take into consideration that AI systems will not harm the life, body, or property of users or third parties through actuators or other devices.

[Comment] AI systems subject to this principle are those that might harm the life, body, or property of users or third parties through actuators or other devices. It is encouraged that developers refer to relevant international standards and pay attention to the following, with particular consideration of the possibility that outputs or programs might change as a result of learning or other methods of AI systems:
● To make efforts to conduct verification and validation in advance in order to assess and mitigate the risks related to the safety of AI systems.
● To make efforts to implement measures, throughout the development stage of AI systems, to the extent possible in light of the characteristics of the technologies to be adopted, to contribute to intrinsic safety (reduction of essential risk factors such as the kinetic energy of actuators) and functional safety (mitigation of risks by the operation of additional control devices such as automatic braking) when AI systems work with actuators or other devices; and
● To make efforts to explain the designers’ intent of AI systems and the reasons for it to stakeholders such as users, when developing AI systems to be used for making judgments regarding the safety of life, body, or property of users and third parties (for example, judgments that prioritize the life, body, or property to be protected at the time of an accident of a robot equipped with AI).

Published by Ministry of Internal Affairs and Communications (MIC), the Government of Japan in AI R&D Principles, Jul 28, 2017

5. Principle of security

Developers should pay attention to the security of AI systems.

[Comment] In addition to respecting international guidelines on security such as the “OECD Guidelines for the Security of Information Systems and Networks,” it is encouraged that developers pay attention to the following, with consideration of the possibility that AI systems might change their outputs or programs as a result of learning or other methods:
● To pay attention, as necessary, to the reliability (that is, whether the operations are performed as intended and not steered by unauthorized third parties) and robustness (that is, tolerance to physical attacks and accidents) of AI systems, in addition to (a) confidentiality, (b) integrity, and (c) availability of information, which are usually required for ensuring the information security of AI systems.
● To make efforts to conduct verification and validation in advance in order to assess and control the risks related to the security of AI systems.
● To make efforts to take measures to maintain security, to the extent possible in light of the characteristics of the technologies to be adopted, throughout the process of the development of AI systems (“security by design”).

Published by Ministry of Internal Affairs and Communications (MIC), the Government of Japan in AI R&D Principles, Jul 28, 2017

9. Principle of transparency

AI service providers and business users should pay attention to the verifiability of the inputs and outputs of AI systems or AI services and to the explainability of their judgments.

Note: This principle is not intended to ask for the disclosure of algorithms, source code, or learning data. In interpreting this principle, the privacy of individuals and the trade secrets of enterprises are also taken into account.

[Main points to discuss]
A) Recording and preserving the inputs and outputs of AI. In order to ensure the verifiability of the inputs and outputs of AI, AI service providers and business users may be expected to record and preserve the inputs and outputs. In light of the characteristics of the technologies to be used and their usage, in what cases and to what extent are the inputs and outputs expected to be recorded and preserved? For example, in the case of using AI in fields where AI systems might harm life, body, or property, such as the field of autonomous driving, the inputs and outputs of AI may be expected to be recorded and preserved to the extent necessary for investigating the causes of accidents and preventing their recurrence.
B) Ensuring explainability. AI service providers and business users may be expected to ensure explainability of the judgments of AI. In light of the characteristics of the technologies to be used and their usage, in what cases and to what extent is explainability expected to be ensured? Especially in the case of using AI in fields where the judgments of AI might have significant influences on individual rights and interests, such as medical care, personnel evaluation, recruitment, and financing, explainability of the judgments of AI may be expected to be ensured. (For example, attention must be paid to the current situation in which deep learning offers high prediction accuracy but its judgments are difficult to explain.)
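As an informal illustration of point A), recording inputs and outputs could take the shape of an append-only log written alongside each model invocation. The Python sketch below is a hypothetical minimal example; the file path, field names, and toy model are assumptions, not a logging scheme prescribed by the principle.

```python
# Hypothetical minimal sketch of recording an AI system's inputs and outputs
# for later verifiability. File path, field names, and the toy model are
# illustrative assumptions.

import json
import time

LOG_PATH = "ai_io_log.jsonl"  # append-only log, one JSON record per invocation

def toy_model(features):
    # Stand-in for the actual AI system's inference call.
    return {"score": sum(features) / max(len(features), 1)}

def predict_and_log(features):
    output = toy_model(features)
    record = {
        "timestamp": time.time(),  # when the judgment was made
        "inputs": features,        # what the system received
        "outputs": output,         # what the system returned
    }
    with open(LOG_PATH, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
    return output

if __name__ == "__main__":
    print(predict_and_log([0.2, 0.5, 0.9]))
```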

Published by Ministry of Internal Affairs and Communications (MIC), the Government of Japan in Draft AI Utilization Principles, Jul 17, 2018

2. NEED FOR CONSCIOUS RESPONSIBILITY WHEN CREATING AND USING AI

2.1. Risk-based approach. The level of attention paid to ethical issues in AI and the nature of the relevant actions of AI Actors should be proportional to the assessment of the level of risk posed by specific technologies and AISs and the interests of individuals and society. Risk level assessment must take into account both known and possible risks; in this case, the level of probability of threats should be taken into account, as well as their possible scale in the short and long term. In the field of AI development, making decisions that are significant to society and the state should be accompanied by scientifically verified and interdisciplinary forecasting of socio-economic consequences and risks, as well as by the examination of possible changes in the value and cultural paradigm of the development of society, while taking into account national priorities. In pursuance of this Code, the development and use of an AIS risk assessment methodology is recommended.

2.2. Responsible attitude. AI Actors should have a responsible approach to the aspects of AIS that influence society and citizens at every stage of the AIS life cycle. These include privacy; the ethical, safe and responsible use of personal data; the nature, degree and amount of damage that may follow as a result of the use of the technology and AIS; and the selection and use of companion hardware and software. In this case, the responsibility of the AI Actors must correspond to the nature, degree and amount of damage that may occur as a result of the use of technologies and AIS, while taking into account the role of the AI Actor in the life cycle of the AIS, the degree of possible and real impact of a particular AI Actor on causing damage, and its size.

2.3. Precautions. When the activities of AI Actors can lead to morally unacceptable consequences for individuals and society, the occurrence of which the corresponding AI Actor can reasonably assume, measures should be taken to prevent or limit the occurrence of such consequences. To assess the moral acceptability of consequences and the possible measures to prevent them, Actors can use the provisions of this Code, including the mechanisms specified in Section 2.

2.4. No harm. AI Actors should not allow the use of AI technologies for the purpose of causing harm to human life, the environment, and/or the health or property of citizens and legal entities. Any application of an AIS capable of purposefully causing harm to the environment, human life or health, or the property of citizens and legal entities during any stage, including design, development, testing, implementation or operation, is unacceptable.

2.5. Identification of AI in communication with a human. AI Actors are encouraged to ensure that users are informed of their interactions with the AIS when it affects their rights and critical areas of their lives, and to ensure that such interactions can be terminated at the request of the user.

2.6. Data security. AI Actors must comply with the legislation of the Russian Federation in the field of personal data and secrets protected by law when using an AIS. Furthermore, they must ensure the protection of personal data processed by an AIS or by AI Actors in order to develop and improve the AIS, by developing and implementing innovative methods of controlling unauthorized access by third parties to personal data and by using high-quality and representative datasets from reliable sources, obtained without breaking the law.

2.7. Information security.
AI Actors should provide the maximum possible protection against unauthorized interference in the work of the AIS by third parties by introducing adequate information security technologies, including the use of internal mechanisms for protecting the AIS from unauthorized interventions and informing users and developers about such interventions. They must also inform users about the rules regarding information security when using the AIS.

2.8. Voluntary certification and Code compliance. AI Actors can implement voluntary certification of the compliance of the developed AI technologies with the standards established by the legislation of the Russian Federation and this Code. AI Actors can create voluntary certification and AIS labeling systems that indicate that these systems have passed voluntary certification procedures and confirm quality standards.

2.9. Control of the recursive self-improvement of AISs. AI Actors are encouraged to collaborate in the identification and verification of methods and forms of creating universal ("strong") AIS and in the prevention of the possible threats such AIS carry. The use of "strong" AI technologies should be under the control of the state.

Published by AI Alliance Russia in Artificial Intelligence Code of Ethics, Oct 26, 2021