· 5. The Principle of Explicability: “Operate transparently”

Transparency is key to building and maintaining citizens' trust in the developers of AI systems and in AI systems themselves. Both technological and business model transparency matter from an ethical standpoint. Technological transparency implies that AI systems be auditable, comprehensible and intelligible to human beings at varying levels of comprehension and expertise. Business model transparency means that human beings are knowingly informed of the intentions of the developers and technology implementers of AI systems. Explicability is a precondition for achieving informed consent from individuals interacting with AI systems; to ensure that the principles of explicability and non-maleficence are achieved, informed consent should be sought. Explicability also requires that accountability measures be put in place. Individuals and groups may request evidence of the baseline parameters and instructions given as inputs for AI decision making (the discovery or prediction sought by an AI system, or the factors involved in the discovery or prediction made) from the organisations and developers of an AI system, the technology implementers, or another party in the supply chain.
Principle: Draft Ethics Guidelines for Trustworthy AI, Dec 18, 2018

Published by The European Commission’s High-Level Expert Group on Artificial Intelligence

Related Principles

Transparency and explainability

There should be transparency and responsible disclosure to ensure people know when they are being significantly impacted by an AI system, and can find out when an AI system is engaging with them. This principle aims to ensure responsible disclosure when an AI system is significantly impacting a person’s life. The definition of the threshold for ‘significant impact’ will depend on the context, impact and application of the AI system in question. Achieving transparency in AI systems through responsible disclosure is important to each stakeholder group for the following reasons: for users, to know what the system is doing and why; for creators, including those undertaking the validation and certification of AI, to understand the system’s processes and input data; for those deploying and operating the system, to understand processes and input data; for an accident investigator, if accidents occur; for regulators, in the context of investigations; for those in the legal process, to inform evidence and decision‐making; and for the public, to build confidence in the technology. Responsible disclosures should be provided in a timely manner, and provide reasonable justifications for AI system outcomes. This includes information that helps people understand outcomes, like key factors used in decision making. This principle also aims to ensure people have the ability to find out when an AI system is engaging with them (regardless of the level of impact), and are able to obtain a reasonable disclosure regarding the AI system.

Published by Department of Industry, Innovation and Science, Australian Government in AI Ethics Principles, Nov 7, 2019

· 1.3. Transparency and explainability

AI Actors should commit to transparency and responsible disclosure regarding AI systems. To this end, they should provide meaningful information, appropriate to the context, and consistent with the state of the art: i. to foster a general understanding of AI systems; ii. to make stakeholders aware of their interactions with AI systems, including in the workplace; iii. to enable those affected by an AI system to understand the outcome; and, iv. to enable those adversely affected by an AI system to challenge its outcome, based on plain and easy-to-understand information on the factors, and the logic that served as the basis for the prediction, recommendation or decision.

Published by G20 Ministerial Meeting on Trade and Digital Economy in G20 AI Principles, Jun 09, 2019

· 4. Governance of AI Autonomy (Human oversight)

The correct approach to assuring properties such as safety, accuracy, adaptability, privacy, explicability, compliance with the rule of law and ethical conformity heavily depends on the specific details of the AI system, its area of application, its level of impact on individuals, communities or society, and its level of autonomy. The level of autonomy results from the use case and the degree of sophistication needed for a task. All other things being equal, the greater the degree of autonomy given to an AI system, the more extensive the testing and the stricter the governance required. It must be ensured that AI systems continue to behave as intended when feedback signals become sparser. Depending on the area of application and/or the level of impact of the AI system on individuals, communities or society, different levels or instances of governance (including human oversight) will be necessary. This is relevant for a large number of AI applications, and more particularly for the use of AI to suggest or take decisions concerning individuals or communities (algorithmic decision support). Good governance of AI autonomy in this respect includes, for instance, more or earlier human intervention depending on the level of societal impact of the AI system. It also includes ensuring that a user of an AI system, particularly in a work or decision-making environment, is allowed to deviate from a path or decision chosen or recommended by the AI system.

Published by The European Commission’s High-Level Expert Group on Artificial Intelligence in Draft Ethics Guidelines for Trustworthy AI, Dec 18, 2018

· 1.3. Transparency and explainability

AI Actors should commit to transparency and responsible disclosure regarding AI systems. To this end, they should provide meaningful information, appropriate to the context, and consistent with the state of the art: i. to foster a general understanding of AI systems; ii. to make stakeholders aware of their interactions with AI systems, including in the workplace; iii. to enable those affected by an AI system to understand the outcome; and, iv. to enable those adversely affected by an AI system to challenge its outcome, based on plain and easy-to-understand information on the factors, and the logic that served as the basis for the prediction, recommendation or decision.

Published by The Organisation for Economic Co-operation and Development (OECD) in OECD Principles on Artificial Intelligence, May 22, 2019

1. Demand That AI Systems Are Transparent

A transparent artificial intelligence system is one in which it is possible to discover how, and why, the system made a decision, or in the case of a robot, acted the way it did. In particular: A. We stress that open source code is neither necessary nor sufficient for transparency – clarity cannot be obfuscated by complexity. B. For users, transparency is important because it builds trust in, and understanding of, the system, by providing a simple way for the user to understand what the system is doing and why. C. For validation and certification of an AI system, transparency is important because it exposes the system’s processes for scrutiny. D. If accidents occur, the AI will need to be transparent and accountable to an accident investigator, so the internal process that led to the accident can be understood. E. Workers must have the right to demand transparency in the decisions and outcomes of AI systems as well as in the underlying algorithms (see Principle 4 below). This includes the right to appeal decisions made by AI algorithms, and to have them reviewed by a human being. F. Workers must be consulted on AI systems’ implementation, development and deployment. G. Following an accident, judges, juries, lawyers, and expert witnesses involved in the trial process require transparency and accountability to inform evidence and decision making. The principle of transparency is a prerequisite for ascertaining that the remaining principles are observed. See Principle 2 below for an operational solution.

Published by UNI Global Union in Top 10 Principles For Ethical Artificial Intelligence, Dec 11, 2017