Safety and security

27. Unwanted harms (safety risks), as well as vulnerabilities to attack (security risks), should be avoided and should be addressed, prevented and eliminated throughout the life cycle of AI systems to ensure human, environmental and ecosystem safety and security. Safe and secure AI will be enabled by the development of sustainable, privacy-protective data access frameworks that foster better training and validation of AI models utilizing quality data.
Principle: The Recommendation on the Ethics of Artificial Intelligence, Nov 24, 2021

Published by The United Nations Educational, Scientific and Cultural Organization (UNESCO)

Related Principles

3. Security and Safety

AI systems should be safe and sufficiently secure against malicious attacks. Safety refers to ensuring the safety of developers, deployers, and users of AI systems by conducting impact or risk assessments and ensuring that known risks have been identified and mitigated. A risk prevention approach should be adopted, and precautions should be put in place so that humans can intervene to prevent harm, or the system can safely disengage itself in the event an AI system makes unsafe decisions (autonomous vehicles that cause injury to pedestrians are an illustration of this). Ensuring that AI systems are safe is essential to fostering public trust in AI. Safety of the public and the users of AI systems should be of utmost priority in the decision making process of AI systems, and risks should be assessed and mitigated to the best extent possible. Before deploying AI systems, deployers should conduct risk assessments and relevant testing or certification and implement the appropriate level of human intervention to prevent harm when unsafe decisions take place. The risks, limitations, and safeguards of the use of AI should be made known to the user. For example, in AI enabled autonomous vehicles, developers and deployers should put in place mechanisms for the human driver to easily resume manual driving whenever they wish.

Security refers to ensuring the cybersecurity of AI systems, which includes mechanisms against malicious attacks specific to AI, such as data poisoning, model inversion, the tampering of datasets, byzantine attacks in federated learning, as well as other attacks designed to reverse engineer personal data used to train the AI. Deployers of AI systems should work with developers to put in place technical security measures like robust authentication mechanisms and encryption. Just like any other software, deployers should also implement safeguards to protect AI systems against cyberattacks, data security attacks, and other digital security risks.
These may include ensuring regular software updates to AI systems and proper access management for critical or sensitive systems. Deployers should also develop incident response plans to safeguard AI systems from the above attacks. It is also important for deployers to maintain a minimum list of security tests (e.g. vulnerability assessment and penetration testing) and other applicable security testing tools. Some other important considerations include:

a. Business continuity plan
b. Disaster recovery plan
c. Zero day attacks
d. IoT devices

Published by ASEAN in ASEAN Guide on AI Governance and Ethics, 2024

Safety and security

Unintended harm (safety risks) and vulnerabilities to attack (security risks) should be avoided and should be considered, prevented and eliminated throughout the life cycle of AI systems to ensure the safety and security of humans, the environment and ecosystems.

Published by Office of the Chief of Ministers, Undersecretary of Information Technologies in Recommendations for reliable artificial intelligence, June 2, 2023

Safety and Controllability

Safety and security of Biodiversity Conservation related AI applications and services should be ensured. These AI applications and services should follow the principles of prudence and precaution, should be adequately tested for accuracy and robustness, and should operate under meaningful human control. Negative impacts on biodiversity due to AI safety and security hazards should be avoided.

Published by International Research Center for AI Ethics and Governance, Institute of Automation, Chinese Academy of Sciences, World Animal Protection Beijing Representative Office and 7 other entities in Principles on Artificial Intelligence for Biodiversity Conservation, August 25, 2022

Safety and security

Safety and security risks should be identified, addressed and mitigated throughout the AI system lifecycle to prevent, where possible, and/or limit any potential or actual harm to humans, the environment and ecosystems. Safe and secure AI systems should be enabled through robust frameworks.

Published by United Nations System Chief Executives Board for Coordination in Principles for the Ethical Use of Artificial Intelligence in the United Nations System, Sept 20, 2022

9. Safety and Security

Agencies should promote the development of AI systems that are safe, secure, and operate as intended, and encourage the consideration of safety and security issues throughout the AI design, development, deployment, and operation process. Agencies should pay particular attention to the controls in place to ensure the confidentiality, integrity, and availability of the information processed, stored, and transmitted by AI systems. Agencies should give additional consideration to methods for guaranteeing systemic resilience, and for preventing bad actors from exploiting AI system weaknesses, including cybersecurity risks posed by AI operation, and adversarial use of AI against a regulated entity’s AI technology. When evaluating or introducing AI policies, agencies should be mindful of any potential safety and security risks, as well as the risk of possible malicious deployment and use of AI applications.

Published by The White House Office of Science and Technology Policy (OSTP), United States in Principles for the Stewardship of AI Applications, Nov 17, 2020