3. Manufacturers and operators of AI shall be accountable.

Accountability means the ability to assign responsibility for the effects caused by AI or its operators.
Principle: Principles for the Governance of AI, Oct 3, 2017 (unconfirmed)

Published by The Future Society, Science, Law and Society (SLS) Initiative

Related Principles

1. Principle of proper utilization

Users should make efforts to utilize AI systems or AI services in a proper scope and manner, under the proper assignment of roles between humans and AI systems, or among users.

[Main points to discuss]

A) Utilization in the proper scope and manner
On the basis of the information and explanations provided by developers, etc., and with consideration of social contexts and circumstances, users may be expected to use AI in the proper scope and manner. In addition, users may be expected to recognize benefits and risks, understand proper uses, and acquire the necessary knowledge and skills before using AI, according to the characteristics, usage situations, etc. of the AI. Furthermore, users may be expected to check regularly whether they are using AI in an appropriate scope and manner.

B) Proper balance of benefits and risks of AI
AI service providers and business users may be expected to take into consideration the proper balance between the benefits and risks of AI, including consideration of the active use of AI for productivity and work-efficiency improvements, after appropriately assessing the risks of AI.

C) Updates of AI software and inspections, repairs, etc. of AI
Through the process of utilization, users may be expected to make efforts to update AI software and perform inspections, repairs, etc. of AI in order to improve its functioning and to mitigate risks.

D) Human intervention
Regarding judgments made by AI, in cases where it is necessary and possible (e.g., medical care using AI), humans may be expected to decide whether and how to use those judgments. In such cases, what can be considered as criteria for the necessity of human intervention? In the utilization of AI that operates through actuators, etc., where a shift to human operation is planned under certain conditions, what matters should be paid attention to?

[Points of view as criteria (examples)]
• The nature of the rights and interests of indirect users, et al., and their intents, as affected by the judgments of AI.
• The degree of reliability of the AI's judgment (compared with the reliability of human judgment).
• The time allowable for human judgment.
• The abilities users are expected to possess.

E) Role assignments among users
With consideration of the capabilities and knowledge of AI that each user is expected to have, and the ease of implementing necessary measures, users may be expected to play the roles that seem appropriate and also to bear the corresponding responsibility.

F) Cooperation among stakeholders
Users and data providers may be expected to cooperate with stakeholders and to work on preventive or remedial measures (including information sharing, stopping and restoring AI, elucidating causes, preventing recurrence, etc.) in accordance with the nature, conditions, etc. of damages caused by accidents, security breaches, privacy infringements, etc. that may occur, or have occurred, through the use of AI. What can reasonably be expected from a user's point of view to ensure the effectiveness of the above?

Published by Ministry of Internal Affairs and Communications (MIC), the Government of Japan in Draft AI Utilization Principles, Jul 17, 2018

3. Principle of collaboration

AI service providers, business users, and data providers should pay attention to the collaboration of AI systems or AI services. Users should take into consideration that risks might occur, and even be amplified, when AI systems are networked.

[Main points to discuss]

A) Attention to the interconnectivity and interoperability of AI systems
AI network service providers may be expected to pay attention to the interconnectivity and interoperability of AI, with consideration of the characteristics of the AI to be used and its usage, in order to promote the benefits of AI through the sound progress of AI networking.

B) Addressing the standardization of data formats, protocols, etc.
AI service providers and business users may be expected to address the standardization of data formats, protocols, etc. in order to promote cooperation among AI systems and between AI systems and other systems. Data providers may likewise be expected to address the standardization of data formats.

C) Attention to problems caused or amplified by AI networking
Although collaboration among AI systems is expected to promote benefits, users may be expected to pay attention to the possibility that risks (e.g., the risk of loss of control through interconnecting or collaborating their AI systems with other AI systems via the Internet or other networks) might be caused or amplified by AI networking.

[Problems (examples) concerning risks that might be realized or amplified by AI networking]
• Risks that trouble in one AI system spreads to the entire system.
• Risks of failures in cooperation and adjustment between AI systems.
• Risks of failures in verifying the judgments and decision making of AI (risks of being unable to analyze the interactions between AI systems because those interactions become complicated).
• Risks that the influence of a small number of AI systems becomes too strong (risks of enterprises and individuals suffering disadvantage from the judgments of a few AI systems).
• Risks of the infringement of privacy as a result of information sharing across fields and the concentration of information in one specific AI.
• Risks of unexpected actions of AI.

Published by Ministry of Internal Affairs and Communications (MIC), the Government of Japan in Draft AI Utilization Principles, Jul 17, 2018

10. Principle of accountability

AI service providers and business users should make efforts to fulfill their accountability to stakeholders, including consumer users and indirect users.

[Main points to discuss]

A) Efforts to fulfill accountability
In light of the characteristics of the AI to be used and its purpose, etc., AI service providers and business users may be expected to make efforts to fulfill appropriate accountability to consumer users, indirect users, and third parties affected by the use of AI, in order to gain sufficient trust in AI from people and society.

B) Notification and publication of a usage policy on AI systems or AI services
AI service providers and business users may be expected to notify or announce their usage policy on AI (the fact that they provide AI services, the scope and manner of proper AI utilization, the risks associated with utilization, and the establishment of a consultation desk) in order to enable consumer users and indirect users to properly recognize the usage of AI. In light of the characteristics of the technologies to be used and their usage, the discussion should focus on the cases in which the usage policy is expected to be notified or announced, as well as on what content is expected to be included in it.

Published by Ministry of Internal Affairs and Communications (MIC), the Government of Japan in Draft AI Utilization Principles, Jul 17, 2018

3. HUMANS ARE ALWAYS RESPONSIBLE FOR THE CONSEQUENCES OF AI SYSTEMS APPLICATION

3.1. Supervision. AI Actors should ensure comprehensive human supervision of any AI system, in a scope and order depending on the purpose of that AI system, for instance by recording significant human decisions at all stages of the AI system's life cycle or by keeping registration records of the operation of AI systems. AI Actors should also ensure the transparency of AI systems' use and, where reasonably applicable, the opportunity for a person to cancel and/or prevent socially and legally significant decisions and actions of AI systems at any stage of their life cycle.

3.2. Responsibility. AI Actors should not allow the transfer of the right to responsible moral choice to AI systems, or delegate responsibility for the consequences of decision making to AI systems. A person (an individual or legal entity recognized as the subject of responsibility in accordance with the applicable national legislation) must always be responsible for all consequences caused by the operation of AI systems. AI Actors are encouraged to take all measures to determine the responsibility of the specific participants in the life cycle of AI systems, taking into account each participant's role and the specifics of each stage.

Published by AI Alliance Russia in AI Ethics Code (revised version), Oct 21, 2022 (unconfirmed)

Second principle: Responsibility

Human responsibility for AI-enabled systems must be clearly established, ensuring accountability for their outcomes, with clearly defined means by which human control is exercised throughout their lifecycles.

The increased speed, complexity and automation of AI-enabled systems may complicate our understanding of pre-existing concepts of human control, responsibility and accountability. This may occur through the sorting and filtering of information presented to decision makers, the automation of previously human-led processes, or processes by which AI-enabled systems learn and evolve after their initial deployment. Nevertheless, as unique moral agents, humans must always be responsible for the ethical use of AI in Defence.

Human responsibility for the use of AI-enabled systems in Defence must be underpinned by a clear and consistent articulation of the means by which human control is exercised, and the nature and limitations of that control. While the level of human control will vary according to the context and capabilities of each AI-enabled system, the ability to exercise human judgement over their outcomes is essential.

Irrespective of the use case, responsibility for each element of an AI-enabled system, and an articulation of risk ownership, must be clearly defined from development, through deployment (including redeployment in new contexts), to decommissioning. This includes cases where systems are complex amalgamations of AI and non-AI components from multiple different suppliers. In this way, certain aspects of responsibility may reach beyond the team deploying a particular system to other functions within the MOD, or beyond, to the third parties that build or integrate AI-enabled systems for Defence.

Collectively, these articulations of human control, responsibility and risk ownership must enable clear accountability for the outcomes of any AI-enabled system in Defence. There must be no deployment or use without clear lines of responsibility and accountability, and these should not be accepted by the designated duty holder unless they are satisfied that they can exercise control commensurate with the various risks.

Published by The Ministry of Defence (MOD), United Kingdom in Ethical Principles for AI in Defence, Jun 15, 2022