5. Safety and Controllability

The transparency, interpretability, reliability, and controllability of AI systems should be improved continuously to make the systems more traceable, trustworthy, and easier to audit and monitor. AI safety at different levels of the systems should be ensured, AI robustness and anti-interference performance should be improved, and AI safety assessment and control capacities should be developed.
Principle: Governance Principles for the New Generation Artificial Intelligence--Developing Responsible Artificial Intelligence, Jun 17, 2019

Published by National Governance Committee for the New Generation Artificial Intelligence, China

Related Principles

Safety and Controllability

Safety and security of Biodiversity Conservation related AI applications and services should be ensured. These AI applications and services should follow the principles of prudence and precaution, should be adequately tested for accuracy and robustness, and should be under meaningful human control. Negative impacts on biodiversity due to AI safety and security hazards should be avoided.

Published by International Research Center for AI Ethics and Governance, Institute of Automation, Chinese Academy of Sciences, World Animal Protection Beijing Representative Office and 7 other entities in Principles on Artificial Intelligence for Biodiversity Conservation, August 25, 2022

· 1.2 Safety and Controllability

Technologists have a responsibility to ensure the safe design of AI systems. Autonomous AI agents must treat the safety of users and third parties as a paramount concern, and AI technologies should strive to reduce risks to humans. Furthermore, the development of autonomous AI systems must have safeguards to ensure controllability of the AI system by humans, tailored to the specific context in which a particular system operates.

Published by Information Technology Industry Council (ITI) in AI Policy Principles, Oct 24, 2017

· Build and Validate:

1. Privacy and security by design should be implemented while building the AI system. The security mechanisms should include the protection of various architectural dimensions of an AI model from malicious attacks. The structure and modules of the AI system should be protected from unauthorized modification or damage to any of its components.
2. The AI system should be secure to ensure and maintain the integrity of the information it processes. This ensures that the system remains continuously functional and accessible to authorized users. It is crucial that the system safeguards confidential and private information, even under hostile or adversarial conditions. Furthermore, appropriate measures should be in place to ensure that AI systems with automated decision making capabilities uphold the necessary data privacy and security standards.
3. The AI system should be tested to ensure that the combination of available data does not reveal sensitive data or break the anonymity of the observation.

· Deploy and Monitor:

1. After the deployment of the AI system, when its outcomes are realized, there must be continuous monitoring to ensure that the AI system is privacy preserving, safe, and secure. The privacy impact assessment and risk management assessment should be continuously revisited to ensure that societal and ethical considerations are regularly evaluated.
2. AI System Owners should be accountable for the design and implementation of AI systems in such a way as to ensure that personal information is protected throughout the life cycle of the AI system. The components of the AI system should be updated based on continuous monitoring and privacy impact assessment.

Published by SDAIA in AI Ethics Principles, Sept 14, 2022

· Deploy and Monitor:

1. Upon deployment of the AI system, performance metrics relating to the AI system's output, accuracy, and alignment to priorities and objectives, as well as its measured impact on individuals and communities, should be documented, available, and accessible to stakeholders of the AI technology.
2. Information on any system failures, data breaches, system breakdowns, etc. should be logged, and stakeholders should be informed about these instances, keeping the performance and execution of the AI system transparent. Periodic UI and UX testing should be conducted to avoid the risk of confusion, confirmation of biases, or cognitive fatigue of the AI system.

Published by SDAIA in AI Ethics Principles, Sept 14, 2022