Transparency and explainability
There should be transparency and responsible disclosure to ensure people know when they are being significantly impacted by an AI system, and can find out when an AI system is engaging with them.
This principle aims to ensure responsible disclosure when an AI system is significantly affecting a person’s life. The threshold for ‘significant impact’ will depend on the context, impact and application of the AI system in question.
Achieving transparency in AI systems through responsible disclosure is important to each stakeholder group for the following reasons:
- for users, to understand what the system is doing and why
- for creators, including those undertaking the validation and certification of AI, to understand the system’s processes and input data
- for those deploying and operating the system, to understand its processes and input data
- for accident investigators, if accidents occur
- for regulators, in the context of investigations
- for those in the legal process, to inform evidence and decision-making
- for the public, to build confidence in the technology
Responsible disclosures should be provided in a timely manner and offer reasonable justifications for an AI system’s outcomes. This includes information that helps people understand those outcomes, such as the key factors used in decision-making.
This principle also aims to ensure that people can find out when an AI system is engaging with them (regardless of the level of impact), and can obtain a reasonable disclosure regarding that system.
Published by the Department of Industry, Innovation and Science, Australian Government, in AI Ethics Principles, 7 November 2019.