Transparency

To the greatest extent feasible, the technical basis of the particular decisions made by an A/IS should be discoverable.
Principle: Ethical Aspects of Autonomous and Intelligent Systems, Jun 24, 2019

Published by IEEE

Related Principles

Transparency Principle

The elements of the Transparency Principle can be found in several modern privacy laws, including the US Privacy Act, the EU Data Protection Directive, the GDPR, and the Council of Europe Convention 108. The aim of this principle is to enable independent accountability for automated decisions, with a primary emphasis on the right of the individual to know the basis of an adverse determination. In practical terms, it may not be possible for an individual to interpret the basis of a particular decision, but this does not obviate the need to ensure that such an explanation is possible.

Published by Center for AI and Digital Policy in Universal Guidelines for AI, Oct 2018

IV. Transparency

The traceability of AI systems should be ensured; it is important to log and document both the decisions made by the systems and the entire process (including a description of data gathering and labelling, and a description of the algorithm used) that yielded the decisions. Linked to this, explainability of the algorithmic decision-making process, adapted to the persons involved, should be provided to the extent possible. Ongoing research to develop explainability mechanisms should be pursued. In addition, explanations of the degree to which an AI system influences and shapes the organisational decision-making process, the design choices of the system, and the rationale for deploying it should be available (hence ensuring not just data and system transparency, but also business model transparency). Finally, it is important to adequately communicate the AI system's capabilities and limitations to the different stakeholders involved, in a manner appropriate to the use case at hand. Moreover, AI systems should be identifiable as such, ensuring that users know they are interacting with an AI system and which persons are responsible for it.

Published by European Commission in Key requirements for trustworthy AI, Apr 8, 2019
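
The logging and documentation called for above can be made concrete in very simple terms. The following is a minimal, hypothetical sketch, not taken from the European Commission text: it assumes a DecisionRecord structure and an append-only JSON Lines audit log, and merely illustrates the kind of information (model version, inputs, output, a reference to the documented training data, and a human-readable explanation) that would need to be captured for each decision to remain traceable.

```python
# Illustrative sketch only: one way to record the traceability information the
# requirement above describes (decision, inputs, model version, data lineage).
# All names here (DecisionRecord, log_decision, the example fields) are
# hypothetical, not part of any cited guideline or standard API.
import json
import uuid
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone


@dataclass
class DecisionRecord:
    """A single logged decision of an AI system, kept for later audit."""
    model_name: str
    model_version: str
    training_data_ref: str   # pointer to the documented dataset and labelling process
    inputs: dict              # the features the decision was based on
    output: str               # the decision or score produced
    explanation: str          # human-readable rationale, adapted to the audience
    record_id: str = field(default_factory=lambda: str(uuid.uuid4()))
    timestamp: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())


def log_decision(record: DecisionRecord, path: str = "decision_log.jsonl") -> None:
    """Append the record to an append-only JSON Lines log for traceability."""
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(record)) + "\n")


if __name__ == "__main__":
    log_decision(DecisionRecord(
        model_name="loan_screening",
        model_version="1.4.2",
        training_data_ref="datasets/applications_2024Q1.md",
        inputs={"income": 42000, "tenure_months": 18},
        output="refer_to_human_review",
        explanation="Income below policy threshold; flagged for manual check.",
    ))
```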

· Plan and Design:

1. When designing a transparent and trusted AI system, it is vital to ensure that stakeholders affected by AI systems are fully aware and informed of how outcomes are processed. They should further be given access to, and an explanation of, the rationale for decisions made by the AI technology in an understandable and contextual manner. Decisions should be traceable. AI system owners must define the level of transparency for different stakeholders based on data privacy, sensitivity, and the authorization of the stakeholders.
2. The AI system should be designed to include an information section in the platform that gives an overview of the AI model's decisions, as part of the overall transparency of the technology. Information sharing, as a sub-principle, should be adhered to with end users and stakeholders of the AI system upon request or openly to the public, depending on the nature of the AI system and target market. The model should establish a process mechanism to log and address issues and complaints that arise, so that they can be resolved in a transparent and explainable manner.

· Prepare Input Data:

1. The data sets and the processes that yield the AI system's decision should be documented to the best possible standard to allow for traceability and an increase in transparency.
2. The data sets should be assessed in the context of their accuracy, suitability, validity, and source. This has a direct effect on the training and implementation of these systems, since the criteria for the data's organization and structuring must be transparent and explainable in their acquisition and collection, adhering to data privacy regulations and intellectual property standards and controls.

Published by SDAIA in AI Ethics Principles, Sept 14, 2022
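
One common way to operationalise the documentation described under "Prepare Input Data" is a short, structured datasheet kept alongside every dataset. The sketch below is a hypothetical illustration under that reading: the DatasetDatasheet fields and the readiness_issues checks are assumptions chosen for the example, not terminology from the SDAIA principles.

```python
# Illustrative sketch only: a datasheet-style record for documenting a training
# dataset, loosely following the "Prepare Input Data" points above. Field names
# and the validation rules are assumptions made for this example.
from dataclasses import dataclass, field


@dataclass
class DatasetDatasheet:
    """Minimal documentation kept with a dataset to support traceability."""
    name: str
    source: str                     # where the data was acquired and under what terms
    collection_method: str          # how it was gathered and labelled
    licence: str                    # intellectual property / usage constraints
    contains_personal_data: bool    # triggers data protection review if True
    privacy_review_done: bool = False
    known_limitations: list[str] = field(default_factory=list)


def readiness_issues(sheet: DatasetDatasheet) -> list[str]:
    """Return documentation gaps that should block use of the dataset."""
    issues = []
    if not sheet.source:
        issues.append("source of the data is undocumented")
    if not sheet.collection_method:
        issues.append("data gathering and labelling process is undocumented")
    if sheet.contains_personal_data and not sheet.privacy_review_done:
        issues.append("personal data present but no recorded privacy review")
    return issues


if __name__ == "__main__":
    sheet = DatasetDatasheet(
        name="loan_applications_2024Q1",
        source="internal CRM export, contractual basis",
        collection_method="exported monthly, labelled by credit officers",
        licence="internal use only",
        contains_personal_data=True,
    )
    for issue in readiness_issues(sheet):
        print("blocker:", issue)
```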

1. Right to Transparency.

All individuals have the right to know the basis of an AI decision that concerns them. This includes access to the factors, the logic, and techniques that produced the outcome. [Explanatory Memorandum] The elements of the Transparency Principle can be found in several modern privacy laws, including the US Privacy Act, the EU Data Protection Directive, the GDPR, and the Council of Europe Convention 108. The aim of this principle is to enable independent accountability for automated decisions, with a primary emphasis on the right of the individual to know the basis of an adverse determination. In practical terms, it may not be possible for an individual to interpret the basis of a particular decision, but this does not obviate the need to ensure that such an explanation is possible.

Published by The Public Voice coalition, established by Electronic Privacy Information Center (EPIC) in Universal Guidelines for Artificial Intelligence, Oct 23, 2018

1. Demand That AI Systems Are Transparent

A transparent artificial intelligence system is one in which it is possible to discover how, and why, the system made a decision, or in the case of a robot, acted the way it did. In particular:
A. We stress that open source code is neither necessary nor sufficient for transparency; clarity cannot be obfuscated by complexity.
B. For users, transparency is important because it builds trust in, and understanding of, the system, by providing a simple way for the user to understand what the system is doing and why.
C. For validation and certification of an AI system, transparency is important because it exposes the system's processes for scrutiny.
D. If accidents occur, the AI will need to be transparent and accountable to an accident investigator, so that the internal process that led to the accident can be understood.
E. Workers must have the right to demand transparency in the decisions and outcomes of AI systems, as well as in the underlying algorithms (see principle 4 below). This includes the right to appeal decisions made by AI algorithms and to have them reviewed by a human being.
F. Workers must be consulted on AI systems' implementation, development and deployment.
G. Following an accident, judges, juries, lawyers, and expert witnesses involved in the trial process require transparency and accountability to inform evidence and decision making.
The principle of transparency is a prerequisite for ascertaining that the remaining principles are observed. See Principle 2 below for an operational solution.

Published by UNI Global Union in Top 10 Principles For Ethical Artificial Intelligence, Dec 11, 2017