Safety & security

AI systems are built to prevent misuse and reduce the risk of being compromised.
Principle: Tieto’s AI ethics guidelines, Oct 17, 2018

Published by Tieto

Related Principles

Privacy protection and security

Throughout their lifecycle, AI systems should respect and uphold privacy rights and data protection, and ensure the security of data. This principle aims to ensure respect for privacy and data protection when using AI systems. This includes ensuring proper data governance and management for all data used and generated by the AI system throughout its lifecycle. For example, privacy can be maintained through appropriate anonymisation of data used by AI systems. Further, the connection between data and the inferences drawn from that data by AI systems should be sound and assessed in an ongoing manner. This principle also aims to ensure appropriate data and AI system security measures are in place. This includes the identification of potential security vulnerabilities and assurance of resilience to adversarial attacks. Security measures should account for unintended applications of AI systems and potential abuse risks, with appropriate mitigation measures.

Published by Department of Industry, Innovation and Science, Australian Government in AI Ethics Principles, Nov 7, 2019

5. Safety and Controllability

The transparency, interpretability, reliability, and controllability of AI systems should be improved continuously to make the systems more traceable, trustworthy, and easier to audit and monitor. AI safety at different levels of the systems should be ensured, AI robustness and anti-interference performance should be improved, and AI safety assessment and control capacities should be developed.

Published by National Governance Committee for the New Generation Artificial Intelligence, China in Governance Principles for the New Generation Artificial Intelligence--Developing Responsible Artificial Intelligence, Jun 17, 2019

11. Robustness and Security

AI systems should be safe and secure, and should not be vulnerable to tampering or to compromise of the data they are trained on.

Published by Personal Data Protection Commission (PDPC), Singapore in A compilation of existing AI ethical principles (Annex A), Jan 21, 2020

6. Safe and secure

Our solutions are built and tested to prevent possible misuse and reduce the risk of being compromised or causing harm.

Published by Telia Company AB in Telia Company Guiding Principles on trusted AI ethics, Jan 22, 2019

9. Safety and Security

Agencies should promote the development of AI systems that are safe, secure, and operate as intended, and encourage the consideration of safety and security issues throughout the AI design, development, deployment, and operation process. Agencies should pay particular attention to the controls in place to ensure the confidentiality, integrity, and availability of the information processed, stored, and transmitted by AI systems. Agencies should give additional consideration to methods for guaranteeing systemic resilience, and for preventing bad actors from exploiting AI system weaknesses, including cybersecurity risks posed by AI operation, and adversarial use of AI against a regulated entity’s AI technology. When evaluating or introducing AI policies, agencies should be mindful of any potential safety and security risks, as well as the risk of possible malicious deployment and use of AI applications.

Published by The White House Office of Science and Technology Policy (OSTP), United States in Principles for the Stewardship of AI Applications, Jan 13, 2020