Disclosure

Companies should clearly disclose to users what data is being collected and how it is being used.
Principle: Seeking Ground Rules for A.I.: The Recommendations, Mar 1, 2019

Published by New Work Summit, hosted by The New York Times

Related Principles

Transparency

As AI increasingly changes the nature of work, workers, customers and vendors need information about how AI systems operate so that they can understand how decisions are made. Their involvement will help to identify potential bias, errors and unintended outcomes.

Transparency is not necessarily, nor only, a question of open source code. While in some circumstances open source code will be helpful, what is more important are clear, complete and testable explanations of what the system is doing and why. Intellectual property, and sometimes even cyber security, is rewarded by a lack of transparency. Innovation generally, including in algorithms, is a value that should be encouraged. How, then, are these competing values to be balanced?

One possibility is to require algorithmic verifiability rather than full algorithmic disclosure. Algorithmic verifiability would require companies to disclose not the actual code driving the algorithm but information allowing the effect of their algorithms to be independently assessed. In the absence of transparency regarding their algorithms’ purpose and actual effect, it is impossible to ensure that competition, labour, workplace safety, privacy and liability laws are being upheld. When accidents occur, the AI and related data will need to be transparent and accountable to an accident investigator, so that the process that led to the accident can be understood.

Published by Centre for International Governance Innovation (CIGI), Canada in Toward a G20 Framework for Artificial Intelligence in the Workplace, Jul 19, 2018

2. An A.I. system must clearly disclose that it is not human.

Published by Oren Etzioni, CEO of Allen Institute for Artificial Intelligence in Three Rules for Artificial Intelligence Systems, Sep 1, 2017

Transparency

Traditionally, many organisations that have developed and now use AI algorithms do not allow public scrutiny: the underlying programming (the source code) is proprietary and kept from public view. Being transparent and opening up source material in computer science is an important step. It helps the public better understand how AI works, which improves trust and prevents unjustified fears.

Published by Institute of Business Ethics (IBE) in IBE interactive framework of fundamental values and principles for the use of Artificial Intelligence (AI) in business, Jan 11, 2018

4. Principle 4 — Transparency

Issue: How can we ensure that A/IS are transparent?

[Candidate Recommendation] Develop new standards* that describe measurable, testable levels of transparency, so that systems can be objectively assessed and levels of compliance determined. For designers, such standards will provide a guide for self-assessing transparency during development and suggest mechanisms for improving transparency. (The mechanisms by which transparency is provided will vary significantly, for instance: 1) for users of care or domestic robots, a "why did you do that?" button which, when pressed, causes the robot to explain the action it just took; 2) for validation or certification agencies, the algorithms underlying the A/IS and how they have been verified; and 3) for accident investigators, secure storage of sensor and internal state data, comparable to a flight data recorder or black box.)

*Note that IEEE Standards Working Group P7001™ has been set up in response to this recommendation.

Published by The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems in Ethically Aligned Design (v2): General Principles, (v1) Dec 13, 2016. (v2) Dec 12, 2017

6. Auditability

Models, algorithms, data, and decisions should be recorded so that they can be audited in cases where harm is suspected.

Published by ACM US Public Policy Council (USACM) in Principles for Algorithmic Transparency and Accountability, Jan 12, 2017