4. Principle 4 — Transparency

Issue: How can we ensure that A/IS are transparent?

[Candidate Recommendation] Develop new standards* that describe measurable, testable levels of transparency, so that systems can be objectively assessed and levels of compliance determined. For designers, such standards will provide a guide for self-assessing transparency during development and suggest mechanisms for improving transparency. (The mechanisms by which transparency is provided will vary significantly, for instance: 1) for users of care or domestic robots, a "why did you do that?" button which, when pressed, causes the robot to explain the action it just took; 2) for validation or certification agencies, the algorithms underlying the A/IS and how they have been verified; and 3) for accident investigators, secure storage of sensor and internal state data, comparable to a flight data recorder or "black box".)

*Note that IEEE Standards Working Group P7001™ has been set up in response to this recommendation.
Principle: Ethically Aligned Design, General Principles (v1: Dec 13, 2016; v2: Dec 12, 2017)

Published by The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems
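
The mechanisms listed in the recommendation above lend themselves to very different implementations. Purely as an illustration, the Python sketch below shows how a "why did you do that?" query and a flight-recorder-style decision log might sit alongside a simple controller; all names (DecisionRecord, ExplainableController) are invented for this example and it is not an implementation of IEEE P7001.

```python
# Illustrative sketch only: a hypothetical "why did you do that?" interface and a
# flight-recorder-style log around a trivial robot controller.
import json
import time
from dataclasses import dataclass, field, asdict


@dataclass
class DecisionRecord:
    timestamp: float
    sensor_snapshot: dict   # raw inputs available at decision time
    action: str             # action the system chose
    rationale: str          # human-readable reason for the choice


@dataclass
class ExplainableController:
    log: list = field(default_factory=list)

    def decide(self, sensors: dict) -> str:
        # Trivial rule standing in for the real control policy.
        action = "stop" if sensors.get("obstacle_distance_m", 99) < 0.5 else "advance"
        rationale = "obstacle closer than 0.5 m" if action == "stop" else "path judged clear"
        self.log.append(DecisionRecord(time.time(), sensors, action, rationale))
        return action

    def why_did_you_do_that(self) -> str:
        # The "button": replay the rationale of the most recent action.
        last = self.log[-1]
        return f"I chose '{last.action}' because {last.rationale}."

    def export_black_box(self, path: str) -> None:
        # Durable record for accident investigators, analogous to a flight data recorder.
        with open(path, "w") as f:
            json.dump([asdict(r) for r in self.log], f, indent=2)


if __name__ == "__main__":
    robot = ExplainableController()
    robot.decide({"obstacle_distance_m": 0.3})
    print(robot.why_did_you_do_that())
    robot.export_black_box("black_box.json")
```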

Related Principles

Transparency and explainability

There should be transparency and responsible disclosure to ensure people know when they are being significantly impacted by an AI system, and can find out when an AI system is engaging with them. This principle aims to ensure responsible disclosure when an AI system is significantly impacting on a person's life. The definition of the threshold for 'significant impact' will depend on the context, impact and application of the AI system in question. Achieving transparency in AI systems through responsible disclosure is important to each stakeholder group for the following reasons:

· for users, what the system is doing and why
· for creators, including those undertaking the validation and certification of AI, the systems' processes and input data
· for those deploying and operating the system, to understand processes and input data
· for an accident investigator, if accidents occur
· for regulators in the context of investigations
· for those in the legal process, to inform evidence and decision-making
· for the public, to build confidence in the technology

Responsible disclosures should be provided in a timely manner, and provide reasonable justifications for AI systems' outcomes. This includes information that helps people understand outcomes, like key factors used in decision making. This principle also aims to ensure people have the ability to find out when an AI system is engaging with them (regardless of the level of impact), and are able to obtain a reasonable disclosure regarding the AI system.

Published by Department of Industry, Innovation and Science, Australian Government in AI Ethics Principles, Nov 7, 2019
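
As one hedged illustration of what disclosing "key factors used in decision making" could look like in practice, the sketch below assumes a simple linear scoring model; the feature names, weights and threshold are all hypothetical and stand in for whatever model a real system uses.

```python
# Minimal sketch of a "key factors" disclosure for a single automated decision,
# assuming a toy linear scoring model (all names and numbers invented).
loan_model = {"income_per_10k": 0.6, "existing_debt_per_10k": -0.8, "years_employed": 0.3}
APPROVAL_THRESHOLD = 1.0


def disclose_decision(applicant: dict) -> dict:
    # Each factor's contribution is weight * value; the sign shows whether it
    # pushed the outcome toward approval or refusal.
    contributions = {k: w * applicant.get(k, 0.0) for k, w in loan_model.items()}
    score = sum(contributions.values())
    key_factors = sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True)
    return {
        "decision": "approved" if score >= APPROVAL_THRESHOLD else "refused",
        "key_factors": key_factors,   # what a timely disclosure could surface
    }


print(disclose_decision({"income_per_10k": 5.0, "existing_debt_per_10k": 4.0, "years_employed": 2.0}))
```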

· Transparency

As AI increasingly changes the nature of work, workers, customers and vendors need to have information about how AI systems operate so that they can understand how decisions are made. Their involvement will help to identify potential bias, errors and unintended outcomes. Transparency is not necessarily nor only a question of open source code. While in some circumstances open source code will be helpful, what is more important are clear, complete and testable explanations of what the system is doing and why. Intellectual property, and sometimes even cyber security, is rewarded by a lack of transparency. Innovation generally, including in algorithms, is a value that should be encouraged. How, then, are these competing values to be balanced? One possibility is to require algorithmic verifiability rather than full algorithmic disclosure. Algorithmic verifiability would require companies to disclose not the actual code driving the algorithm but information allowing the effect of their algorithms to be independently assessed. In the absence of transparency regarding their algorithms’ purpose and actual effect, it is impossible to ensure that competition, labour, workplace safety, privacy and liability laws are being upheld. When accidents occur, the AI and related data will need to be transparent and accountable to an accident investigator, so that the process that led to the accident can be understood.

Published by Centre for International Governance Innovation (CIGI), Canada in Toward a G20 Framework for Artificial Intelligence in the Workplace, Jul 19, 2018
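
One way to read "algorithmic verifiability" is as black-box auditing: the operator discloses an evaluation procedure and outcome data rather than the code itself, so an independent party can assess the algorithm's effect. The sketch below is a minimal, hypothetical illustration of that idea; the predict callable, the fields and the evaluation set are all invented.

```python
# Sketch of algorithmic verifiability: an auditor assesses the *effect* of a
# decision system from its outputs alone, without access to code or model weights.
from typing import Callable


def audit_outcome_rates(predict: Callable[[dict], bool], cases: list, group_key: str) -> dict:
    """Return the favourable-outcome rate per group, computed purely from
    observed decisions on an agreed evaluation set."""
    totals, favourable = {}, {}
    for case in cases:
        group = case[group_key]
        totals[group] = totals.get(group, 0) + 1
        favourable[group] = favourable.get(group, 0) + int(predict(case))
    return {g: favourable[g] / totals[g] for g in totals}


# The audited system is treated as an opaque black box.
opaque_system = lambda case: case["score"] > 0.5

evaluation_set = [
    {"score": 0.7, "region": "A"},
    {"score": 0.4, "region": "A"},
    {"score": 0.6, "region": "B"},
    {"score": 0.3, "region": "B"},
]
print(audit_outcome_rates(opaque_system, evaluation_set, "region"))
```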

· 8. Robustness

Trustworthy AI requires that algorithms are secure, reliable as well as robust enough to deal with errors or inconsistencies during the design, development, execution, deployment and use phase of the AI system, and to adequately cope with erroneous outcomes.

Reliability & Reproducibility. Trustworthiness requires that the accuracy of results can be confirmed and reproduced by independent evaluation. However, the complexity, non-determinism and opacity of many AI systems, together with sensitivity to training/model building conditions, can make it difficult to reproduce results. Currently there is an increased awareness within the AI research community that reproducibility is a critical requirement in the field. Reproducibility is essential to guarantee that results are consistent across different situations, computational frameworks and input data. The lack of reproducibility can lead to unintended discrimination in AI decisions.

Accuracy. Accuracy pertains to an AI's confidence and ability to correctly classify information into the correct categories, or its ability to make correct predictions, recommendations, or decisions based on data or models. An explicit and well-formed development and evaluation process can support, mitigate and correct unintended risks.

Resilience to Attack. AI systems, like all software systems, can include vulnerabilities that can allow them to be exploited by adversaries. Hacking is an important case of intentional harm, by which the system will purposefully follow a different course of action than its original purpose. If an AI system is attacked, the data as well as system behaviour can be changed, leading the system to make different decisions, or causing the system to shut down altogether. Systems and/or data can also become corrupted, by malicious intention or by exposure to unexpected situations. Poor governance, by which it becomes possible to intentionally or unintentionally tamper with the data, or grant access to the algorithms to unauthorised entities, can also result in discrimination, erroneous decisions, or even physical harm.

Fallback plan. A secure AI has safeguards that enable a fallback plan in case of problems with the AI system. In some cases this can mean that the AI system switches from a statistical to a rule-based procedure; in other cases it means that the system asks for a human operator before continuing the action.

Published by The European Commission’s High-Level Expert Group on Artificial Intelligence in Draft Ethics Guidelines for Trustworthy AI, Dec 18, 2018
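
The fallback behaviour described in the Robustness principle above can be made concrete with a small wrapper. The Python sketch below (thresholds and functions all hypothetical) switches to a rule-based procedure, or refers the case to a human operator, whenever the statistical model's confidence falls below an agreed floor.

```python
# Illustrative fallback plan: if the statistical model is not confident, fall back
# to a conservative rule-based procedure or defer to a human operator.
CONFIDENCE_FLOOR = 0.8


def statistical_model(features: dict):
    # Stand-in for a learned classifier returning (decision, confidence).
    score = features.get("signal", 0.0)
    return ("grant" if score > 0.5 else "deny"), abs(score - 0.5) * 2


def rule_based_procedure(features: dict) -> str:
    # Deterministic, auditable rules used when the model cannot be trusted.
    return "grant" if features.get("meets_hard_criteria", False) else "deny"


def decide_with_fallback(features: dict, allow_human_referral: bool = True) -> str:
    decision, confidence = statistical_model(features)
    if confidence >= CONFIDENCE_FLOOR:
        return decision
    if allow_human_referral:
        return "refer_to_human_operator"   # system pauses before continuing the action
    return rule_based_procedure(features)  # switch to the rule-based procedure


print(decide_with_fallback({"signal": 0.55}))   # low confidence -> human referral
print(decide_with_fallback({"signal": 0.98}))   # confident -> model decision
print(decide_with_fallback({"signal": 0.55, "meets_hard_criteria": True}, allow_human_referral=False))
```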

Responsible Deployment

Principle: The capacity of an AI agent to act autonomously, and to adapt its behavior over time without human direction, calls for significant safety checks before deployment, and ongoing monitoring.

Recommendations:

Humans must be in control: Any autonomous system must allow for a human to interrupt an activity or shut down the system (an "off switch"). There may also be a need to incorporate human checks on new decision-making strategies in AI system design, especially where the risk to human life and safety is great.

Make safety a priority: Any deployment of an autonomous system should be extensively tested beforehand to ensure the AI agent's safe interaction with its environment (digital or physical) and that it functions as intended. Autonomous systems should be monitored while in operation, and updated or corrected as needed.

Privacy is key: AI systems must be data responsible. They should use only what they need and delete it when it is no longer needed ("data minimization"). They should encrypt data in transit and at rest, and restrict access to authorized persons ("access control"). AI systems should only collect, use, share and store data in accordance with privacy and personal data laws and best practices.

Think before you act: Careful thought should be given to the instructions and data provided to AI systems. AI systems should not be trained with data that is biased, inaccurate, incomplete or misleading.

If they are connected, they must be secured: AI systems that are connected to the Internet should be secured not only for their protection, but also to protect the Internet from malfunctioning or malware-infected AI systems that could become the next generation of botnets. High standards of device, system and network security should be applied.

Responsible disclosure: Security researchers acting in good faith should be able to responsibly test the security of AI systems without fear of prosecution or other legal action. At the same time, researchers and others who discover security vulnerabilities or other design flaws should responsibly disclose their findings to those who are in the best position to fix the problem.

Published by Internet Society, "Artificial Intelligence and Machine Learning: Policy Paper" in Guiding Principles and Recommendations, Apr 18, 2017
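
"Humans must be in control" is ultimately an architectural requirement. As a purely illustrative sketch, the Python below shows the simplest form of an interruptible autonomous loop; threading.Event stands in for whatever off-switch channel a real deployment would expose, and the task body is a stub.

```python
# Minimal sketch of the "off switch" recommendation: an autonomous loop that a
# human can interrupt at any step.
import threading
import time


def autonomous_loop(stop_signal: threading.Event, max_steps: int = 100) -> None:
    for step in range(max_steps):
        if stop_signal.is_set():
            print(f"Human interrupt received at step {step}; halting safely.")
            return
        # Placeholder for one bounded unit of autonomous work.
        time.sleep(0.01)
    print("Task completed without interruption.")


if __name__ == "__main__":
    stop = threading.Event()
    worker = threading.Thread(target=autonomous_loop, args=(stop,))
    worker.start()
    time.sleep(0.05)   # a human decides to intervene
    stop.set()         # the "off switch"
    worker.join()
```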

1. Demand That AI Systems Are Transparent

A transparent artificial intelligence system is one in which it is possible to discover how, and why, the system made a decision, or in the case of a robot, acted the way it did. In particular:

A. We stress that open source code is neither necessary nor sufficient for transparency – clarity cannot be obfuscated by complexity.

B. For users, transparency is important because it builds trust in, and understanding of, the system, by providing a simple way for the user to understand what the system is doing and why.

C. For validation and certification of an AI system, transparency is important because it exposes the system's processes for scrutiny.

D. If accidents occur, the AI will need to be transparent and accountable to an accident investigator, so the internal process that led to the accident can be understood.

E. Workers must have the right to demand transparency in the decisions and outcomes of AI systems as well as the underlying algorithms (see principle 4 below). This includes the right to appeal decisions made by AI algorithms, and to have them reviewed by a human being.

F. Workers must be consulted on AI systems' implementation, development and deployment.

G. Following an accident, judges, juries, lawyers, and expert witnesses involved in the trial process require transparency and accountability to inform evidence and decision making.

The principle of transparency is a prerequisite for ascertaining that the remaining principles are observed. See Principle 2 below for an operational solution.

Published by UNI Global Union in Top 10 Principles For Ethical Artificial Intelligence, Dec 11, 2017