5. Reliability:

AI systems must be able to work reliably.
Principle: Rome Call for AI Ethics, Feb 28, 2020

Published by The Pontifical Academy for Life, Microsoft, IBM, FAO, and the Italian Government

Related Principles

7. Robustness and Reliability

AI systems should be sufficiently robust to cope with errors during execution, with unexpected or erroneous input, and with stressful environmental conditions. They should also perform consistently. AI systems should, where possible, work reliably and produce consistent results across a range of inputs and situations. AI systems may have to operate in real-world, dynamic conditions where input signals and conditions change quickly. To prevent harm, AI systems need to be resilient to unexpected data inputs, not exhibit dangerous behaviour, and continue to perform according to their intended purpose. Notably, AI systems are not infallible, and deployers should ensure proper access control and protection of critical or sensitive systems, and take action to prevent or mitigate negative outcomes that occur due to unreliable performance. Deployers should conduct rigorous testing before deployment to ensure robustness and consistent results across a range of situations and environments. Measures such as proper documentation of data sources, tracking of data processing steps, and data lineage can help with troubleshooting AI systems.

Published by ASEAN in ASEAN Guide on AI Governance and Ethics, 2024

5. Safety and Controllability

The transparency, interpretability, reliability, and controllability of AI systems should be improved continuously to make the systems more traceable, trustworthy, and easier to audit and monitor. AI safety at different levels of the systems should be ensured, AI robustness and anti-interference performance should be improved, and AI safety assessment and control capacities should be developed.

Published by National Governance Committee for the New Generation Artificial Intelligence, China in Governance Principles for the New Generation Artificial Intelligence--Developing Responsible Artificial Intelligence, Jun 17, 2019

· AI systems will be safe, secure and controllable by humans

1. Safety and security of the people, be they operators, end users or other parties, will be of paramount concern in the design of any AI system;
2. AI systems should be verifiably secure and controllable throughout their operational lifetime, to the extent permitted by technology;
3. The continued security and privacy of users should be considered when decommissioning AI systems;
4. AI systems that may directly impact people's lives in a significant way should receive commensurate care in their design; and
5. Such systems should be able to be overridden or their decisions reversed by designated people.

Published by Smart Dubai in Dubai's AI Principles, Jan 08, 2019

Fifth principle: Reliability

AI-enabled systems must be demonstrably reliable, robust and secure. The MOD's AI-enabled systems must be suitably reliable; they must fulfil their intended design and deployment criteria and perform as expected, within acceptable performance parameters. Those parameters must be regularly reviewed and tested for reliability to be assured on an ongoing basis, particularly as AI-enabled systems learn and evolve over time, or are deployed in new contexts. Given Defence's unique operational context and the challenges of the information environment, this principle also requires AI-enabled systems to be secure, with a robust approach to cybersecurity, data protection and privacy. MOD personnel working with or alongside AI-enabled systems can build trust in those systems by ensuring that they have a suitable level of understanding of the performance and parameters of those systems, as articulated in the principle of understanding.

Published by The Ministry of Defence (MOD), United Kingdom in Ethical Principles for AI in Defence, Jun 15, 2022