Reliability and safety
Responsibility should be clearly and appropriately identified for ensuring that an AI system is robust and safe.
Transparency and explainability
There should be transparency and responsible disclosure to ensure people know when they are being significantly impacted by an AI system, and can find out when an AI system is engaging with them.
This principle aims to ensure responsible disclosure when an AI system significantly impacts a person's life.
Achieving transparency in AI systems through responsible disclosure is important to each stakeholder group for the following reasons:
Responsible disclosures should be provided in a timely manner and include reasonable justifications for AI system outcomes.
Accountability
Those responsible for the different phases of the AI system lifecycle should be identifiable and accountable for the outcomes of the AI systems, and human oversight of AI systems should be enabled.
This principle aims to acknowledge the relevant organisations' and individuals’ responsibility for the outcomes of the AI systems that they design, develop, deploy and operate.
Mechanisms should be put in place to ensure responsibility and accountability for AI systems and their outcomes.