3. Organizations are accountable for the actions of AI systems, and should build systems that are auditable;

Principle: Seven principles on the use of AI systems in government, Jun 28, 2018 (unconfirmed)

Published by The Treasury Board Secretariat of Canada (TBS)

Related Principles

6. Accountability and Integrity

There needs to be human accountability and control in the design, development, and deployment of AI systems. Deployers should be accountable for decisions made by AI systems and for compliance with applicable laws and respect for AI ethics and principles. AI actors should act with integrity throughout the AI system lifecycle when designing, developing, and deploying AI systems. Deployers of AI systems should ensure the proper functioning of AI systems and their compliance with applicable laws, internal AI governance policies, and ethical principles. In the event of a malfunction or misuse of an AI system that results in negative outcomes, responsible individuals should act with integrity and implement mitigating actions to prevent similar incidents from happening in the future. To facilitate the allocation of responsibilities, organisations should adopt clear reporting structures for internal governance, setting out clearly the different roles and responsibilities of those involved in the AI system lifecycle. AI systems should also be designed, developed, and deployed with integrity: any errors or unethical outcomes should at minimum be documented and corrected to prevent harm to users upon deployment.

Published by ASEAN in ASEAN Guide on AI Governance and Ethics, 2024

Accountability

Those responsible for the different phases of the AI system lifecycle should be identifiable and accountable for the outcomes of the AI systems, and human oversight of AI systems should be enabled. This principle aims to acknowledge the relevant organisations' and individuals' responsibility for the outcomes of the AI systems that they design, develop, deploy, and operate. The application of legal principles regarding accountability for AI systems is still developing. Mechanisms should be put in place to ensure responsibility and accountability for AI systems and their outcomes, both before and after their design, development, deployment, and operation. The organisation and individual accountable for a decision should be identifiable as necessary, and they must consider the appropriate level of human control or oversight for the particular AI system or use case. AI systems that have a significant impact on an individual's rights should be subject to external review, including the provision of timely, accurate, and complete information to independent oversight bodies.

Published by Department of Industry, Innovation and Science, Australian Government in AI Ethics Principles, Nov 7, 2019

· We will make AI systems accountable

1. Accountability for the outcomes of an AI system lies not with the system itself but is apportioned between those who design, develop, and deploy it.
2. Developers should make efforts to mitigate the risks inherent in the systems they design.
3. AI systems should have built-in appeals procedures whereby users can challenge significant decisions.
4. AI systems should be developed by diverse teams which include experts in the area in which the system will be deployed.

Published by Smart Dubai in Dubai's AI Principles, Jan 08, 2019

· We will make AI systems transparent

1. Developers should build systems whose failures can be traced and diagnosed.
2. People should be told when significant decisions about them are being made by AI.
3. Within the limits of privacy and the preservation of intellectual property, those who deploy AI systems should be transparent about the data and algorithms they use.

Published by Smart Dubai in Dubai's AI Principles, Jan 08, 2019