10. Responsibility, accountability and transparency

a. Build trust by ensuring that designers and operators are responsible and accountable for their systems, applications and algorithms, and that such systems, applications and algorithms operate in a transparent and fair manner.
b. Make available externally visible and impartial avenues of redress for adverse individual or societal effects of an algorithmic decision system, and designate a person or office responsible for the timely remedy of such issues.
c. Incorporate downstream measures and processes for users or stakeholders to verify how and when AI technology is being applied.
d. Keep detailed records of design processes and decision making (a sketch of such a decision log follows below).
Principle: A compilation of existing AI ethical principles (Annex A), Jan 21, 2020

Published by Personal Data Protection Commission (PDPC), Singapore
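Items c and d above call for verifiable records of how and when an automated system was applied, and item b designates a contact for redress. As an illustration only (the PDPC compilation prescribes no particular mechanism), the Python sketch below shows a minimal append-only decision log capturing those elements; all names, fields, and values are hypothetical.

```python
import hashlib
import json
from datetime import datetime, timezone

# Hypothetical append-only log of automated decisions: stakeholders can
# verify how and when the system was applied (items c and d), and a
# designated contact for redress is recorded with each entry (item b).
DECISION_LOG: list[dict] = []

def log_decision(system: str, version: str, inputs: dict,
                 outcome: str, redress_contact: str) -> dict:
    """Append one decision record; raw inputs are hashed, not stored."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system": system,
        "version": version,
        "inputs_sha256": hashlib.sha256(
            json.dumps(inputs, sort_keys=True).encode()).hexdigest(),
        "outcome": outcome,
        "redress_contact": redress_contact,  # person/office for timely remedy
    }
    DECISION_LOG.append(entry)
    return entry

# Example: recording a hypothetical loan-screening decision
log_decision(system="loan-screening", version="2.3.1",
             inputs={"applicant_id": "A-104", "score": 612},
             outcome="declined", redress_contact="appeals@example.org")
```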

Related Principles

Accountability

Those responsible for the different phases of the AI system lifecycle should be identifiable and accountable for the outcomes of the AI systems, and human oversight of AI systems should be enabled. This principle aims to acknowledge the relevant organisations' and individuals' responsibility for the outcomes of the AI systems that they design, develop, deploy and operate. The application of legal principles regarding accountability for AI systems is still developing. Mechanisms should be put in place to ensure responsibility and accountability for AI systems and their outcomes, both before and after their design, development, deployment and operation. The organisation or individual accountable for a decision should be identifiable as necessary, and must consider the appropriate level of human control or oversight for the particular AI system or use case. AI systems that have a significant impact on an individual's rights should be accountable to external review; this includes providing timely, accurate and complete information for the purposes of independent oversight bodies.

Published by Department of Industry, Innovation and Science, Australian Government in AI Ethics Principles, Nov 7, 2019

(Preamble)

Automated decision making algorithms are now used throughout industry and government, underpinning many processes from dynamic pricing to employment practices to criminal sentencing. Given that such algorithmically informed decisions have the potential for significant societal impact, the goal of this document is to help developers and product managers design and implement algorithmic systems in publicly accountable ways. Accountability in this context includes an obligation to report, explain, or justify algorithmic decision making as well as mitigate any negative social impacts or potential harms. We begin by outlining five equally important guiding principles that follow from this premise: Algorithms and the data that drive them are designed and created by people. There is always a human ultimately responsible for decisions made or informed by an algorithm. "The algorithm did it" is not an acceptable excuse if algorithmic systems make mistakes or have undesired consequences, including from machine learning processes.

Published by Fairness, Accountability, and Transparency in Machine Learning (FAT/ML) in Principles for Accountable Algorithms, Jul 22, 2016 (unconfirmed)

3. Principle 3 — Accountability

Issue: How can we assure that designers, manufacturers, owners, and operators of A/IS are responsible and accountable?

[Candidate Recommendations] To best address issues of responsibility and accountability:
1. Legislatures/courts should clarify issues of responsibility, culpability, liability, and accountability for A/IS where possible during development and deployment (so that manufacturers and users understand their rights and obligations).
2. Designers and developers of A/IS should remain aware of, and take into account when relevant, the diversity of existing cultural norms among the groups of users of these A/IS.
3. Multi-stakeholder ecosystems should be developed to help create norms (which can mature into best practices and laws) where they do not exist because A/IS-oriented technology and its impacts are too new (including representatives of civil society, law enforcement, insurers, manufacturers, engineers, lawyers, etc.).
4. Systems for registration and record keeping should be created so that it is always possible to find out who is legally responsible for a particular A/IS. Manufacturers/operators/owners of A/IS should register key, high-level parameters (a sketch of such a record follows below), including:
• Intended use
• Training data/training environment (if applicable)
• Sensors/real world data sources
• Algorithms
• Process graphs
• Model features (at various levels)
• User interfaces
• Actuators/outputs
• Optimization goal/loss function/reward function

Published by The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems in Ethically Aligned Design (v2): General Principles, (v1) Dec 13, 2016; (v2) Dec 12, 2017
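Item 4 of the IEEE recommendations above enumerates the parameters a manufacturer/operator/owner would register. As an illustration only, the sketch below renders that list as a Python data structure; the field names track the IEEE bullets, while the class name, types, and example values are hypothetical.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class AISRegistrationRecord:
    """Hypothetical registration record for an autonomous/intelligent
    system (A/IS), with fields following the IEEE parameter list."""
    responsible_party: str                     # who is legally responsible
    intended_use: str
    training_data: Optional[str] = None        # training data/environment, if applicable
    sensors: list[str] = field(default_factory=list)        # real world data sources
    algorithms: list[str] = field(default_factory=list)
    process_graphs: list[str] = field(default_factory=list)
    model_features: list[str] = field(default_factory=list)  # at various levels
    user_interfaces: list[str] = field(default_factory=list)
    actuators_outputs: list[str] = field(default_factory=list)
    optimization_goal: Optional[str] = None    # loss function/reward function

# Example entry for a hypothetical credit-scoring system
record = AISRegistrationRecord(
    responsible_party="Acme Lending Ltd. (hypothetical)",
    intended_use="Consumer credit risk scoring",
    training_data="Historical repayment data, 2015-2019",
    algorithms=["gradient-boosted decision trees"],
    optimization_goal="log-loss on repayment outcome",
)
```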

Ensure “Interpretability” of AI systems

Principle: Decisions made by an AI agent should be possible to understand, especially if those decisions have implications for public safety or result in discriminatory practices.

Recommendations:
Ensure Human Interpretability of Algorithmic Decisions: AI systems must be designed with the minimum requirement that the designer can account for an AI agent's behaviors. Some systems with potentially severe implications for public safety should also have the functionality to provide information in the event of an accident.
Empower Users: Providers of services that utilize AI need to incorporate the ability for the user to request and receive basic explanations as to why a decision was made (a minimal sketch follows below).

Published by Internet Society, "Artificial Intelligence and Machine Learning: Policy Paper" in Guiding Principles and Recommendations, Apr 18, 2017
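The "Empower Users" recommendation implies a concrete capability: a function or endpoint through which a user can request a basic explanation of a decision. The Internet Society paper prescribes no implementation, so the sketch below is only one hedged illustration: it assumes a simple linear scoring model and reports the factors that moved the score most; all names and values are hypothetical, and real deployments would need explanation methods appropriate to their model class.

```python
def explain_decision(weights: dict[str, float],
                     inputs: dict[str, float],
                     top_n: int = 3) -> str:
    """Basic, user-facing explanation for a decision by a hypothetical
    linear scoring model: score = sum(weights[f] * inputs[f])."""
    # Contribution of each feature to the final score
    contributions = {f: w * inputs.get(f, 0.0) for f, w in weights.items()}
    # Features that moved the score hardest, in either direction
    ranked = sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True)
    lines = [f"- {name}: {'raised' if c > 0 else 'lowered'} your score by {abs(c):.2f}"
             for name, c in ranked[:top_n]]
    return "The main factors in this decision were:\n" + "\n".join(lines)

# Example: a hypothetical loan decision
print(explain_decision(
    weights={"income": 0.4, "debt_ratio": -0.8, "years_employed": 0.2},
    inputs={"income": 3.0, "debt_ratio": 2.5, "years_employed": 4.0},
))
```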

3. Scientific Integrity and Information Quality

The government's regulatory and non-regulatory approaches to AI applications should leverage scientific and technical information and processes. Agencies should hold information, whether produced by the government or acquired by the government from third parties, that is likely to have a clear and substantial influence on important public policy or private sector decisions (including those made by consumers) to a high standard of quality, transparency, and compliance. Consistent with the principles of scientific integrity in the rulemaking and guidance processes, agencies should develop regulatory approaches to AI in a manner that both informs policy decisions and fosters public trust in AI. Best practices include transparently articulating the strengths, weaknesses, intended optimizations or outcomes, bias mitigation, and appropriate uses of the AI application's results. Agencies should also be mindful that, for AI applications to produce predictable, reliable, and optimized outcomes, the data used to train the AI system must be of sufficient quality for the intended use.

Published by The White House Office of Science and Technology Policy (OSTP), United States in Principles for the Stewardship of AI Applications, Jan 13, 2020