5. Security

As specialists, members of the JSAI shall recognize the need for AI to be safe and acknowledge their responsibility in keeping AI under control. In the development and use of AI, members of the JSAI will always pay attention to safety, controllability, and required confidentiality while ensuring that users of AI are provided appropriate and sufficient information.
Principle: The Japanese Society for Artificial Intelligence Ethical Guidelines, Feb 28, 2017

Published by The Japanese Society for Artificial Intelligence (JSAI)

Related Principles

· (4) Security

Positive utilization of AI means that many social systems will be automated and their safety improved. On the other hand, within the scope of today's technologies, it is impossible for AI to respond appropriately to rare events or deliberate attacks. The use of AI therefore carries new security risks. Society should always be aware of the balance of benefits and risks, and should work to improve social safety and sustainability as a whole. Society must promote broad and deep research and development in AI (from immediate measures to deep understanding), such as the proper evaluation of risks in the utilization of AI and research to reduce those risks. Society must also pay attention to risk management, including cybersecurity awareness. Society should always pay attention to sustainability in the use of AI, and should not, in particular, be dependent on a single AI or a few specified AIs.

Published by Cabinet Office, Government of Japan in Social Principles of Human-centric AI, Dec 27, 2018

5. Principle of security

Developers should pay attention to the security of AI systems.

[Comment] In addition to respecting international guidelines on security such as the “OECD Guidelines for the Security of Information Systems and Networks,” developers are encouraged to pay attention to the following, with consideration of the possibility that AI systems might change their outputs or programs as a result of learning or other methods:
● To pay attention, as necessary, to the reliability (that is, whether the operations are performed as intended and not steered by unauthorized third parties) and robustness (that is, tolerance to physical attacks and accidents) of AI systems, in addition to (a) the confidentiality, (b) integrity, and (c) availability of information, which are usually required for ensuring the information security of AI systems.
● To make efforts to conduct verification and validation in advance in order to assess and control the risks related to the security of AI systems.
● To make efforts to take measures to maintain security, to the extent possible in light of the characteristics of the technologies adopted, throughout the process of developing AI systems (“security by design”).

Published by Ministry of Internal Affairs and Communications (MIC), the Government of Japan in AI R&D Principles, Jul 28, 2017

1. Principle of proper utilization

Users should make efforts to utilize AI systems or AI services in a proper scope and manner, under the proper assignment of roles between humans and AI systems, or among users.

[Main points to discuss]
A) Utilization in the proper scope and manner. On the basis of the provision of information and explanation from developers, etc., and with consideration of social contexts and circumstances, users may be expected to use AI in the proper scope and manner. In addition, users may be expected to recognize benefits and risks, understand proper uses, acquire necessary knowledge and skills, and so on before using AI, according to the characteristics, usage situations, etc. of AI. Furthermore, users may be expected to check regularly whether they use AI in an appropriate scope and manner.
B) Proper balance of benefits and risks of AI. AI service providers and business users may be expected to take into consideration the proper balance between benefits and risks of AI, including consideration of the active use of AI for productivity and work-efficiency improvements, after appropriately assessing the risks of AI.
C) Updates of AI software and inspections, repairs, etc. of AI. Through the process of utilization, users may be expected to make efforts to update AI software and perform inspections, repairs, etc. of AI in order to improve the functions of AI and to mitigate risks.
D) Human intervention. Regarding judgments made by AI, in cases where it is necessary and possible (e.g., medical care using AI), humans may be expected to make decisions as to whether to use the judgments of AI, how to use them, etc. In those cases, what can be considered as criteria for the necessity of human intervention? In the utilization of AI that operates through actuators, etc., where a shift to human operation is planned under certain conditions, what kinds of matters are expected to be paid attention to?
[Points of view as criteria (example)]
• The nature of the rights and interests of indirect users et al., and their intents, affected by the judgments of AI.
• The degree of reliability of the judgment of AI (compared with the reliability of human judgment).
• The allowable time necessary for human judgment.
• The abilities users are expected to possess.
E) Role assignments among users. With consideration of the capabilities and knowledge of AI that each user is expected to have, and the ease of implementing necessary measures, users may be expected to play such roles as seem appropriate and also to bear the corresponding responsibility.
F) Cooperation among stakeholders. Users and data providers may be expected to cooperate with stakeholders and to work on preventive or remedial measures (including information sharing, stopping and restoration of AI, elucidation of causes, measures to prevent recurrence, etc.) in accordance with the nature, conditions, etc. of damage caused by accidents, security breaches, privacy infringements, etc. that may occur in the future or have occurred through the use of AI.
What can reasonably be expected from a user's point of view to ensure the effectiveness of the above?

Published by Ministry of Internal Affairs and Communications (MIC), the Government of Japan in Draft AI Utilization Principles, Jul 17, 2018

· 2. NEED FOR CONSCIOUS RESPONSIBILITY WHEN CREATING AND USING AI

2.1. Risk-based approach. The level of attention to ethical issues in AI and the nature of the relevant actions of AI Actors should be proportional to the assessment of the level of risk posed by specific technologies and AISs to the interests of individuals and society. Risk-level assessment must take into account both known and possible risks; in this case, the probability of threats should be taken into account, as well as their possible scale in the short and long term. In the field of AI development, decisions that are significant to society and the state should be accompanied by scientifically verified, interdisciplinary forecasting of socio-economic consequences and risks, as well as by examination of possible changes in the value and cultural paradigm of the development of society, while taking into account national priorities. In pursuance of this Code, the development and use of an AIS risk-assessment methodology is recommended.

2.2. Responsible attitude. AI Actors should take a responsible approach to the aspects of AISs that influence society and citizens at every stage of the AIS life cycle. These include privacy; the ethical, safe and responsible use of personal data; the nature, degree and amount of damage that may follow from the use of the technology and AISs; and the selection and use of companion hardware and software. The responsibility of AI Actors must correspond to the nature, degree and amount of damage that may occur as a result of the use of technologies and AISs, taking into account the role of the AI Actor in the life cycle of the AIS, as well as the degree of possible and real impact of a particular AI Actor on causing damage, and its size.

2.3. Precautions. When the activities of AI Actors can lead to morally unacceptable consequences for individuals and society, the occurrence of which the corresponding AI Actor can reasonably foresee, measures should be taken to prevent or limit the occurrence of such consequences. To assess the moral acceptability of consequences and the possible measures to prevent them, AI Actors can use the provisions of this Code, including the mechanisms specified in Section 2.

2.4. No harm. AI Actors should not allow the use of AI technologies for the purpose of causing harm to human life, the environment, and/or the health or property of citizens and legal entities. Any application of an AIS capable of purposefully causing harm to the environment, human life or health, or the property of citizens and legal entities, during any stage (including design, development, testing, implementation or operation), is unacceptable.

2.5. Identification of AI in communication with a human. AI Actors are encouraged to ensure that users are informed of their interactions with an AIS when it affects their rights and critical areas of their lives, and to ensure that such interactions can be terminated at the request of the user.

2.6. Data security. AI Actors must comply with the legislation of the Russian Federation in the field of personal data and secrets protected by law when using an AIS. Furthermore, they must ensure the security and protection of personal data processed by an AIS or by AI Actors in order to develop and improve the AIS, by developing and implementing innovative methods of controlling unauthorized access by third parties to personal data and by using high-quality and representative datasets from reliable sources, obtained without breaking the law.

2.7. Information security. AI Actors should provide the maximum possible protection against unauthorized interference by third parties in the work of the AIS by introducing adequate information-security technologies, including the use of internal mechanisms for protecting the AIS from unauthorized interventions and informing users and developers about such interventions. They must also inform users about the rules regarding information security when using the AIS.

2.8. Voluntary certification and Code compliance. AI Actors can implement voluntary certification for the compliance of the developed AI technologies with the standards established by the legislation of the Russian Federation and this Code. AI Actors can create voluntary certification and AIS labeling systems that indicate that these systems have passed voluntary certification procedures and confirm quality standards.

2.9. Control of the recursive self-improvement of AISs. AI Actors are encouraged to collaborate in the identification and verification of methods and forms of creating universal ("strong") AISs and in the prevention of the possible threats that such AISs carry. The use of "strong" AI technologies should be under the control of the state.

Published by AI Alliance Russia in Artificial Intelligence Code of Ethics, Oct 26, 2021

· 2. RESPONSIBILITY MUST BE FULLY ACKNOWLEDGED WHEN CREATING AND USING AI

2.1. Risk-based approach. The degree of attention paid to ethical AI issues and the nature of the relevant actions of AI Actors should be proportional to the assessment of the level of risk posed by specific AI technologies and systems to the interests of individuals and society. Risk-level assessment shall take into account both known and possible risks, whereby the probability of threats, as well as their possible scale in the short and long term, shall be considered. Decisions in the field of AI use that significantly affect society and the state should be accompanied by a scientifically verified, interdisciplinary forecast of socio-economic consequences and risks, and by examination of possible changes in the paradigm of the value and cultural development of society. The development and use of an AI systems risk-assessment methodology are encouraged in pursuance of this Code.

2.2. Responsible attitude. AI Actors should responsibly treat:
• issues related to the influence of AI systems on society and citizens at every stage of the AI systems’ life cycle, i.a. on privacy and the ethical, safe and responsible use of personal data;
• the nature, degree and extent of damage that may result from the use of AI technologies and systems;
• the selection and use of hardware and software utilized in different life cycles of AI systems.
At the same time, the responsibility of AI Actors should correspond to the nature, degree and extent of damage that may occur as a result of the use of AI technologies and systems. The role in the life cycle of the AI system, as well as the degree of possible and real influence of a particular AI Actor on causing damage, and its extent, should also be taken into account.

2.3. Precautions. When the activities of AI Actors can lead to morally unacceptable consequences for individuals and society, which can be reasonably predicted by the relevant AI Actor, the latter should take measures to prohibit or limit the occurrence of such consequences. AI Actors shall use the provisions of this Code, including the mechanisms specified in Section 2, to assess the moral unacceptability of such consequences and discuss possible preventive measures.

2.4. No harm. AI Actors should not allow the use of AI technologies for the purpose of causing harm to human life and/or health, the property of citizens and legal entities, and the environment. Any use, including the design, development, testing, integration or operation of an AI system capable of purposefully causing harm to the environment, human life and/or health, or the property of citizens and legal entities, is prohibited.

2.5. Identification of AI in communication with a human. AI Actors are encouraged to ensure that users are duly informed of their interactions with AI systems when these affect human rights and critical areas of people’s lives, and to ensure that such interaction can be terminated at the request of the user.

2.6. Data security. AI Actors must comply with the national legislation in the field of personal data and secrets protected by law when using AI systems; ensure the security and protection of personal data processed by AI systems or by AI Actors in order to develop and improve the AI systems; develop and integrate innovative methods to counter unauthorized access to personal data by third parties; and use high-quality and representative datasets obtained from reliable sources without breaking the law.

2.7. Information security. AI Actors should ensure the maximum possible protection from unauthorized interference by third parties in the operation of AI systems; integrate adequate information-security technologies, i.a. using internal mechanisms designed to protect the AI system from unauthorized interventions and informing users and developers about such interventions; and promote informing users about the rules of information security during the use of AI systems.

2.8. Voluntary certification and Code compliance. AI Actors may implement voluntary certification systems to assess the compliance of developed AI technologies with the standards established by the national legislation and this Code. AI Actors may create voluntary certification and labeling systems for AI systems to indicate that these systems have passed voluntary certification procedures and confirm quality standards.

2.9. Control of the recursive self-improvement of AI systems. AI Actors are encouraged to cooperate in identifying and verifying information about the ways and forms of designing so-called universal ("general") AI systems and in preventing the possible threats they carry. Issues concerning the use of "general" AI technologies should be under the control of the state.

Published by AI Alliance Russia in AI Ethics Code (revised version), Oct 21, 2022 (unconfirmed)