Reliability & Safety

AI systems should perform reliably and safely.
Principle: Microsoft AI Principles, Jan 17, 2018 (unconfirmed)

Published by Microsoft

Related Principles

· Article 5: Secure, safe, and controllable.

Ensure that AI systems operate securely, safely, reliably, and controllably throughout their lifecycle. Evaluate system security, safety, and potential risks, and continuously improve system maturity, robustness, and anti-tampering capabilities. Ensure that the system can be supervised and promptly taken over by humans to avoid the negative effects of loss of system control.

Published by Artificial Intelligence Industry Alliance (AIIA), China in Joint Pledge on Artificial Intelligence Industry Self-Discipline (Draft for Comment), May 31, 2019

7. Robustness and Reliability

AI systems should be sufficiently robust to cope with errors during execution, unexpected or erroneous input, and stressful environmental conditions. They should also perform consistently. AI systems should, where possible, work reliably and produce consistent results across a range of inputs and situations. AI systems may have to operate in real-world, dynamic conditions where input signals and conditions change quickly. To prevent harm, AI systems need to be resilient to unexpected data inputs, not exhibit dangerous behaviour, and continue to perform according to their intended purpose. Notably, AI systems are not infallible, and deployers should ensure proper access control and protection of critical or sensitive systems and take action to prevent or mitigate negative outcomes arising from unreliable performance. Deployers should conduct rigorous testing before deployment to ensure robustness and consistent results across a range of situations and environments. Measures such as proper documentation of data sources, tracking of data processing steps, and data lineage can help with troubleshooting AI systems.

Published by ASEAN in ASEAN Guide on AI Governance and Ethics, 2024

Reliability and safety

Throughout their lifecycle, AI systems should reliably operate in accordance with their intended purpose. This includes ensuring AI systems are reliable, accurate and reproducible as appropriate. AI systems should not pose unreasonable safety risks, and should adopt safety measures that are proportionate to the magnitude of potential risks. AI systems should be monitored and tested to ensure they continue to meet their intended purpose, and any identified problems should be addressed with ongoing risk management as appropriate. Responsibility for ensuring that an AI system is robust and safe should be clearly and appropriately identified.

Published by Department of Industry, Innovation and Science, Australian Government in AI Ethics Principles, Nov 7, 2019

5. Safety and Controllability

The transparency, interpretability, reliability, and controllability of AI systems should be improved continuously to make the systems more traceable, trustworthy, and easier to audit and monitor. AI safety at different levels of the systems should be ensured, AI robustness and anti-interference performance should be improved, and AI safety assessment and control capacities should be developed.

Published by National Governance Committee for the New Generation Artificial Intelligence, China in Governance Principles for the New Generation Artificial Intelligence--Developing Responsible Artificial Intelligence, Jun 17, 2019

5. Reliability:

AI systems must be able to work reliably;

Published by The Pontifical Academy for Life, Microsoft, IBM, FAO, the Italian Government in Rome Call for AI Ethics, Feb 28, 2020