Ensure “Interpretability” of AI systems
Principle: Decisions made by an AI agent should be understandable, especially when those decisions have implications for public safety or could result in discriminatory practices.
Some systems with potentially severe implications for public safety should also have the functionality to provide information in the event of an accident.
Responsible Deployment
Principle: The capacity of an AI agent to act autonomously, and to adapt its behavior over time without human direction, calls for significant safety checks before deployment, and ongoing monitoring.
There may also be a need to incorporate human checks on new decision-making strategies in AI system design, especially where the risk to human life and safety is great.
Make safety a priority: Any deployment of an autonomous system should be extensively tested beforehand to ensure the AI agent’s safe interaction with its environment (digital or physical) and that it functions as intended.
Open Governance
Principle: The ability of various stakeholders — whether civil society, government, the private sector, academia, or the technical community — to inform and participate in the governance of AI is crucial for its safe deployment.