5 Safety and Reliability

Organisations that develop, deploy or use AI systems, and any national laws that regulate such use, shall adopt design regimes and standards ensuring high safety and reliability of AI systems on the one hand, while limiting the exposure of developers and deployers on the other.
Principle: The Eight Principles of Responsible AI, May 23, 2019

Published by International Technology Law Association (ITechLaw)

Related Principles

3. Security and Safety

AI systems should be safe and sufficiently secure against malicious attacks. Safety refers to ensuring the safety of developers, deployers, and users of AI systems by conducting impact or risk assessments and ensuring that known risks have been identified and mitigated. A risk prevention approach should be adopted, and precautions should be put in place so that humans can intervene to prevent harm, or so that the system can safely disengage itself in the event an AI system makes unsafe decisions; autonomous vehicles that cause injury to pedestrians are an illustration of this. Ensuring that AI systems are safe is essential to fostering public trust in AI. The safety of the public and the users of AI systems should be of utmost priority in the decision-making process of AI systems, and risks should be assessed and mitigated to the best extent possible. Before deploying AI systems, deployers should conduct risk assessments and relevant testing or certification, and implement the appropriate level of human intervention to prevent harm when unsafe decisions take place. The risks, limitations, and safeguards of the use of AI should be made known to the user. For example, in AI-enabled autonomous vehicles, developers and deployers should put in place mechanisms for the human driver to easily resume manual driving whenever they wish.

Security refers to ensuring the cybersecurity of AI systems, which includes mechanisms against malicious attacks specific to AI, such as data poisoning, model inversion, the tampering of datasets, byzantine attacks in federated learning, as well as other attacks designed to reverse engineer personal data used to train the AI. Deployers of AI systems should work with developers to put in place technical security measures like robust authentication mechanisms and encryption. Just like any other software, deployers should also implement safeguards to protect AI systems against cyberattacks, data security attacks, and other digital security risks. These may include ensuring regular software updates to AI systems and proper access management for critical or sensitive systems. Deployers should also develop incident response plans to safeguard AI systems from the above attacks. It is also important for deployers to maintain a minimum list of security tests (e.g. vulnerability assessment and penetration testing) and other applicable security testing tools. Some other important considerations include:
a. Business continuity plan
b. Disaster recovery plan
c. Zero-day attacks
d. IoT devices

Published by ASEAN in ASEAN Guide on AI Governance and Ethics, 2024

7 Privacy

Organisations that develop, deploy or use AI systems and any national laws that regulate such use shall endeavour to ensure that AI systems are compliant with privacy norms and regulations, taking into account the unique characteristics of AI systems, and the evolution of standards on privacy.

Published by International Technology Law Association (ITechLaw) in The Eight Principles of Responsible AI, May 23, 2019

1. Principle of collaboration

Developers should pay attention to the interconnectivity and interoperability of AI systems.

[Comment] Developers should give consideration to the interconnectivity and interoperability between the AI systems that they have developed and other AI systems, etc., with consideration of the diversity of AI systems, so that: (a) the benefits of AI systems increase through the sound progress of AI networking; and (b) multiple developers' efforts to control the risks are well coordinated and operate effectively. For this, developers should pay attention to the following:
• To make efforts to cooperate in sharing relevant information that is effective in ensuring interconnectivity and interoperability.
• To make efforts to develop AI systems conforming to international standards, if any.
• To make efforts to address the standardization of data formats and the openness of interfaces and protocols, including application programming interfaces (APIs).
• To pay attention to the risks of unintended events resulting from the interconnection or interoperation between the AI systems that they have developed and other AI systems, etc.
• To make efforts to promote open and fair treatment of license agreements for, and the conditions of, intellectual property rights such as standard-essential patents that contribute to ensuring interconnectivity and interoperability between AI systems and other AI systems, etc., while taking into consideration the balance between protection and utilization with respect to intellectual property related to the development of AI.

[Note] Interoperability and interconnectivity in this context means that the AI systems which developers have developed can be connected to information and communication networks, and can thereby operate with other AI systems, etc. in a mutually and appropriately harmonized manner.

Published by Ministry of Internal Affairs and Communications (MIC), the Government of Japan in AI R&D Principles, Jul 28, 2017

4. Principle of safety

Developers should take into consideration that AI systems should not harm the life, body, or property of users or third parties through actuators or other devices.

[Comment] The AI systems subject to this principle are those that might harm the life, body, or property of users or third parties through actuators or other devices. Developers are encouraged to refer to relevant international standards and to pay attention to the following, with particular consideration of the possibility that outputs or programs might change as a result of learning or other methods of AI systems:
● To make efforts to conduct verification and validation in advance in order to assess and mitigate the risks related to the safety of the AI systems.
● To make efforts to implement measures, throughout the development stage of AI systems to the extent possible in light of the characteristics of the technologies to be adopted, that contribute to intrinsic safety (reduction of essential risk factors such as the kinetic energy of actuators) and functional safety (mitigation of risks through the operation of additional control devices such as automatic braking) when AI systems work with actuators or other devices.
● To make efforts to explain the designers' intent of AI systems and the reasons for it to stakeholders such as users, when developing AI systems to be used for making judgments regarding the safety of the life, body, or property of users and third parties (for example, judgments that prioritize which life, body, or property to protect at the time of an accident involving a robot equipped with AI).

Published by Ministry of Internal Affairs and Communications (MIC), the Government of Japan in AI R&D Principles, Jul 28, 2017

6. MAXIMUM TRANSPARENCY AND RELIABILITY OF INFORMATION CONCERNING THE LEVEL OF AI TECHNOLOGIES DEVELOPMENT, THEIR CAPABILITIES AND RISKS ARE CRUCIAL

6.1. Reliability of information about AI systems. AI Actors are encouraged to provide users of AI systems with reliable information about the AI systems, the most effective methods of their use, their harms and benefits, acceptable areas of use, and existing limitations of their use.

6.2. Awareness raising in the field of ethical AI application. AI Actors are encouraged to carry out activities aimed at increasing the level of trust and awareness of the citizens who use AI systems, and of society at large, regarding the technologies being developed, the specifics of the ethical use of AI systems, and other issues related to AI systems development, by all available means, inter alia by working on scientific and journalistic publications, organizing scientific and public conferences or seminars, as well as by adding provisions on ethical behavior to the rules of AI systems operation for users and/or operators, etc.

Published by AI Alliance Russia in AI Ethics Code (revised version), Oct 21, 2022 (unconfirmed)