· 1.5. Accountability

AI actors should be accountable for the proper functioning of AI systems and for the respect of the above principles, based on their roles, the context, and consistent with the state of art.
Principle: OECD Principles on Artificial Intelligence, May 22, 2019

Published by The Organisation for Economic Co-operation and Development (OECD)

Related Principles

Accountability

Those responsible for the different phases of the AI system lifecycle should be identifiable and accountable for the outcomes of the AI systems, and human oversight of AI systems should be enabled. This principle aims to acknowledge the relevant organisations' and individuals' responsibility for the outcomes of the AI systems that they design, develop, deploy and operate. The application of legal principles regarding accountability for AI systems is still developing. Mechanisms should be put in place to ensure responsibility and accountability for AI systems and their outcomes, both before and after their design, development, deployment and operation. The organisation and individual accountable for the decision should be identifiable as necessary. They must consider the appropriate level of human control or oversight for the particular AI system or use case. AI systems that have a significant impact on an individual's rights should be accountable to external review; this includes providing timely, accurate, and complete information for the purposes of independent oversight bodies.

Published by Department of Industry, Innovation and Science, Australian Government in AI Ethics Principles, Nov 7, 2019

· 1.5. Accountability

AI actors should be accountable for the proper functioning of AI systems and for the respect of the above principles, based on their roles, the context, and consistent with the state of art.

Published by G20 Ministerial Meeting on Trade and Digital Economy in G20 AI Principles, Jun 09, 2019

1. Accountability

Ensure that AI actors are responsible and accountable for the proper functioning of AI systems and for the respect of AI ethics and principles, based on their roles, the context, and consistency with the state of art.

Published by Personal Data Protection Commission (PDPC), Singapore in A compilation of existing AI ethical principles (Annex A), Jan 21, 2020

· 3. HUMANS ARE ALWAYS RESPONSIBLE FOR THE CONSEQUENCES OF THE APPLICATION OF AN AIS

3.1. Supervision. AI Actors should provide comprehensive human supervision of any AIS to the extent and manner depending on the purpose of the AIS, including, for example, recording significant human decisions at all stages of the AIS life cycle or making provisions for the registration of the work of the AIS. They should also ensure the transparency of AIS use, including the possibility of cancellation by a person and (or) the prevention of making socially and legally significant decisions and actions by the AIS at any stage in its life cycle, where reasonably applicable.

3.2. Responsibility. AI Actors should not allow the transfer of rights of responsible moral choice to the AIS or delegate responsibility for the consequences of the AIS's decision making. A person (an individual or legal entity recognized as the subject of responsibility in accordance with the legislation in force of the Russian Federation) must always be responsible for the consequences of the work of the AIS. AI Actors are encouraged to take all measures to determine the responsibilities of specific participants in the life cycle of the AIS, taking into account each participant's role and the specifics of each stage.

Published by AI Alliance Russia in Artificial Intelligence Code of Ethics, Oct 26, 2021

· 3. HUMANS ARE ALWAYS RESPONSIBLE FOR THE CONSEQUENCES OF AI SYSTEMS APPLICATION

3.1. Supervision. AI Actors should ensure comprehensive human supervision of any AI system in the scope and order depending on the purpose of this AI system, i.a., for instance, record significant human decisions at all stages of the AI systems' life cycle or make registration records of the operation of AI systems. AI Actors should also ensure transparency of AI systems use, the opportunity of cancellation by a person and (or) prevention of socially and legally significant decisions and actions of AI systems at any stage of their life cycle where it is reasonably applicable.

3.2. Responsibility. AI Actors should not allow the transfer of the right to responsible moral choice to AI systems or delegate the responsibility for the consequences of decision making to AI systems. A person (an individual or legal entity recognized as the subject of responsibility in accordance with the existing national legislation) must always be responsible for all consequences caused by the operation of AI systems. AI Actors are encouraged to take all measures to determine the responsibility of specific participants in the life cycle of AI systems, taking into account each participant's role and the specifics of each stage.

Published by AI Alliance Russia in AI Ethics Code (revised version), Oct 21, 2022 (unconfirmed)