Accountability

There should be a common set of standards by which companies are held accountable for the use and impact of their A.I. technology.
Principle: Seeking Ground Rules for A.I.: The Recommendations, Mar 1, 2019

Published by New Work Summit, hosted by The New York Times

Related Principles

Accountability

Those responsible for the different phases of the AI system lifecycle should be identifiable and accountable for the outcomes of the AI systems, and human oversight of AI systems should be enabled. This principle aims to acknowledge the relevant organisations' and individuals' responsibility for the outcomes of the AI systems that they design, develop, deploy and operate. The application of legal principles regarding accountability for AI systems is still developing. Mechanisms should be put in place to ensure responsibility and accountability for AI systems and their outcomes, both before and after their design, development, deployment and operation. The organisation and individual accountable for the decision should be identifiable as necessary. They must consider the appropriate level of human control or oversight for the particular AI system or use case. AI systems that have a significant impact on an individual's rights should be subject to external review; this includes providing timely, accurate, and complete information for the purposes of independent oversight bodies.

Published by Department of Industry, Innovation and Science, Australian Government in AI Ethics Principles, Nov 7, 2019

VII. Accountability

Mechanisms should be put in place to ensure responsibility and accountability for AI systems and their outcomes, both before and after their implementation. Auditability of AI systems is key in this regard, as the assessment of AI systems by internal and external auditors, and the availability of such evaluation reports, strongly contributes to the trustworthiness of the technology. External auditability should especially be ensured in applications affecting fundamental rights, including safety-critical applications. Potential negative impacts of AI systems should be identified, assessed, documented and minimised. The use of impact assessments facilitates this process. These assessments should be proportionate to the extent of the risks that the AI systems pose. Trade-offs between the requirements, which are often unavoidable, should be addressed in a rational and methodological manner, and should be accounted for. Finally, when unjust adverse impact occurs, accessible mechanisms should be foreseen that ensure adequate redress.

Published by European Commission in Key requirements for trustworthy AI, Apr 8, 2019

5 DEMOCRATIC PARTICIPATION PRINCIPLE

AIS must meet intelligibility, justifiability, and accessibility criteria, and must be subjected to democratic scrutiny, debate, and control.
1) AIS processes that make decisions affecting a person’s life, quality of life, or reputation must be intelligible to their creators.
2) The decisions made by AIS affecting a person’s life, quality of life, or reputation should always be justifiable in a language that is understood by the people who use them or who are subjected to the consequences of their use. Justification consists in making transparent the most important factors and parameters shaping the decision, and should take the same form as the justification we would demand of a human making the same kind of decision.
3) The code for algorithms, whether public or private, must always be accessible to the relevant public authorities and stakeholders for verification and control purposes.
4) The discovery of AIS operating errors, unexpected or undesirable effects, security breaches, and data leaks must imperatively be reported to the relevant public authorities, stakeholders, and those affected by the situation.
5) In accordance with the transparency requirement for public decisions, the code for decision making algorithms used by public authorities must be accessible to all, with the exception of algorithms that present a high risk of serious danger if misused.
6) For public AIS that have a significant impact on the life of citizens, citizens should have the opportunity and skills to deliberate on the social parameters of these AIS, their objectives, and the limits of their use.
7) We must at all times be able to verify that AIS are doing what they were programmed for and what they are used for.
8) Any person using a service should know if a decision concerning them or affecting them was made by an AIS.
9) Any user of a service employing chatbots should be able to easily identify whether they are interacting with an AIS or a real person.
10) Artificial intelligence research should remain open and accessible to all.

Published by University of Montreal in The Montreal Declaration for a Responsible Development of Artificial Intelligence, Dec 4, 2018

(Preamble)

New developments in Artificial Intelligence are transforming the world, from science and industry to government administration and finance. The rise of AI decision making also implicates fundamental rights of fairness, accountability, and transparency. Modern data analysis produces significant outcomes that have real-life consequences for people in employment, housing, credit, commerce, and criminal sentencing. Many of these techniques are entirely opaque, leaving individuals unaware whether the decisions were accurate, fair, or even about them. We propose these Universal Guidelines to inform and improve the design and use of AI. The Guidelines are intended to maximize the benefits of AI, to minimize the risk, and to ensure the protection of human rights. These Guidelines should be incorporated into ethical standards, adopted in national law and international agreements, and built into the design of systems. We state clearly that the primary responsibility for AI systems must reside with those institutions that fund, develop, and deploy these systems.

Published by The Public Voice coalition, established by Electronic Privacy Information Center (EPIC) in Universal Guidelines for Artificial Intelligence, Oct 23, 2018

3. Accountability

Institutions should be held responsible for decisions made by the algorithms that they use, even if it is not feasible to explain in detail how the algorithms produce their results.

Published by ACM US Public Policy Council (USACM) in Principles for Algorithmic Transparency and Accountability, Jan 12, 2017