7. We believe that it is important for the operation of AI systems to be understandable and interpretable by people, for purposes of explaining the technology.

Principle: Partnership on AI: Tenets, Sep 28, 2016 (unconfirmed)

Published by Partnership on AI

Related Principles

Transparency and explainability

There should be transparency and responsible disclosure to ensure people know when they are being significantly impacted by an AI system, and can find out when an AI system is engaging with them. This principle aims to ensure responsible disclosure when an AI system is significantly impacting a person’s life. The definition of the threshold for ‘significant impact’ will depend on the context, impact and application of the AI system in question. Achieving transparency in AI systems through responsible disclosure is important to each stakeholder group for the following reasons:
· for users, to know what the system is doing and why
· for creators, including those undertaking the validation and certification of AI, to know the system’s processes and input data
· for those deploying and operating the system, to understand processes and input data
· for an accident investigator, if accidents occur
· for regulators, in the context of investigations
· for those in the legal process, to inform evidence and decision making
· for the public, to build confidence in the technology
Responsible disclosures should be provided in a timely manner, and provide reasonable justifications for AI system outcomes. This includes information that helps people understand outcomes, like key factors used in decision making. This principle also aims to ensure people have the ability to find out when an AI system is engaging with them (regardless of the level of impact), and are able to obtain a reasonable disclosure regarding the AI system.

Published by Department of Industry, Innovation and Science, Australian Government in AI Ethics Principles, Nov 7, 2019

IV. Transparency

The traceability of AI systems should be ensured; it is important to log and document both the decisions made by the systems and the entire process (including a description of data gathering and labelling, and a description of the algorithm used) that yielded those decisions. Linked to this, explainability of the algorithmic decision making process, adapted to the persons involved, should be provided to the extent possible. Ongoing research to develop explainability mechanisms should be pursued. In addition, explanations of the degree to which an AI system influences and shapes the organisational decision making process, the design choices of the system, and the rationale for deploying it should be available (hence ensuring not just data and system transparency, but also business model transparency). Finally, it is important to adequately communicate the AI system’s capabilities and limitations to the different stakeholders involved in a manner appropriate to the use case at hand. Moreover, AI systems should be identifiable as such, ensuring that users know they are interacting with an AI system and which persons are responsible for it.
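The traceability requirement above (logging decisions together with the inputs and process that produced them) can be sketched as an append-only audit log. This is a minimal illustration, not a prescribed implementation; all names here (`DecisionRecord`, `DecisionLog`, the credit-scoring example values) are hypothetical.

```python
import json
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

@dataclass
class DecisionRecord:
    """One traceability entry: what the system decided, from what inputs."""
    model_version: str   # identifies the algorithm/version used
    inputs: dict         # the input data behind the decision
    output: str          # the decision itself
    rationale: str       # human-readable explanation, adapted to the audience
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

class DecisionLog:
    """Append-only log of algorithmic decisions for later audit."""
    def __init__(self):
        self._records = []

    def record(self, rec: DecisionRecord) -> None:
        self._records.append(rec)

    def export(self) -> str:
        # Serialize the full trail so an investigator or regulator
        # can inspect every decision and its context.
        return json.dumps([asdict(r) for r in self._records], indent=2)

# Hypothetical usage: logging one decision of a credit-scoring system.
log = DecisionLog()
log.record(DecisionRecord(
    model_version="credit-scorer-1.3",
    inputs={"income": 42000, "tenure_months": 18},
    output="declined",
    rationale="Income below configured threshold for requested amount",
))
```

A real system would also persist the log to tamper-evident storage and document the data-gathering and labelling process alongside it, as the principle requires.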

Published by European Commission in Key requirements for trustworthy AI, Apr 8, 2019

· Build and Validate:

1. Transparency in AI is thought about from two perspectives: the first is the process behind it (the design and implementation practices that lead to an algorithmically supported outcome), and the second is its product (the content and justification of that outcome). Algorithms should be developed in a transparent way so that input transparency is evident and explainable to the end users of the AI system, providing evidence and information on the data used to reach the decisions that have been made.
2. Transparent and explainable algorithms ensure that stakeholders affected by AI systems, both individuals and communities, are fully informed when an outcome is produced by the AI system, by providing the opportunity to request explanatory information from the AI system owner. This enables the identification of the AI decision and its analysis, which facilitates its auditability as well as its explainability.
3. If the AI system is built by a third party, AI system owners should make sure that an AI ethics due diligence is carried out and that all documentation is accessible and traceable before procurement or sign off.

Published by SDAIA in AI Ethics Principles, Sept 14, 2022

2. Transparent and explainable AI

We will be explicit about the kind of personal and/or non-personal data the AI system uses, as well as about the purpose for which the data is used. When people directly interact with an AI system, we will be transparent with users that this is the case. When AI systems take, or support, decisions, we will take the technical and organizational measures required to guarantee a level of understanding adequate to the application area. In any case, if the decisions significantly affect people's lives, we will ensure we understand the logic behind the conclusions. This will also apply when we use third party technology.

Published by Telefónica in AI Principles of Telefónica, Oct 30, 2018

1. Demand That AI Systems Are Transparent

A transparent artificial intelligence system is one in which it is possible to discover how, and why, the system made a decision, or in the case of a robot, acted the way it did. In particular:
A. We stress that open source code is neither necessary nor sufficient for transparency; clarity cannot be obfuscated by complexity.
B. For users, transparency is important because it builds trust in, and understanding of, the system, by providing a simple way for the user to understand what the system is doing and why.
C. For validation and certification of an AI system, transparency is important because it exposes the system’s processes for scrutiny.
D. If accidents occur, the AI will need to be transparent and accountable to an accident investigator, so the internal process that led to the accident can be understood.
E. Workers must have the right to demand transparency in the decisions and outcomes of AI systems as well as the underlying algorithms (see principle 4 below). This includes the right to appeal decisions made by AI algorithms and to have them reviewed by a human being.
F. Workers must be consulted on AI systems’ implementation, development and deployment.
G. Following an accident, judges, juries, lawyers, and expert witnesses involved in the trial process require transparency and accountability to inform evidence and decision making.
The principle of transparency is a prerequisite for ascertaining that the remaining principles are observed. See Principle 2 below for an operational solution.

Published by UNI Global Union in Top 10 Principles For Ethical Artificial Intelligence, Dec 11, 2017