3. Responsibility:

those who design and deploy the use of AI must proceed with responsibility and transparency;
Principle: Rome Call for AI Ethics, Feb 28, 2020

Published by The Pontifical Academy for Life, Microsoft, IBM, FAO, and the Italian Government

Related Principles


Those responsible for the different phases of the AI system lifecycle should be identifiable and accountable for the outcomes of the AI systems, and human oversight of AI systems should be enabled. This principle aims to acknowledge the responsibility of the relevant organisations and individuals for the outcomes of the AI systems that they design, develop, deploy and operate. The application of legal principles regarding accountability for AI systems is still developing. Mechanisms should be put in place to ensure responsibility and accountability for AI systems and their outcomes, both before and after their design, development, deployment and operation. The organisation and individual accountable for a decision should be identifiable where necessary, and they must consider the appropriate level of human control or oversight for the particular AI system or use case. AI systems that have a significant impact on an individual's rights should be subject to external review; this includes providing timely, accurate and complete information for the purposes of independent oversight bodies.

Published by Department of Industry, Innovation and Science, Australian Government in AI Ethics Principles, Nov 7, 2019

9) Responsibility

Designers and builders of advanced AI systems are stakeholders in the moral implications of their use, misuse, and actions, with a responsibility and opportunity to shape those implications.

Published by Future of Life Institute (FLI), Beneficial AI 2017 in Asilomar AI Principles, Jan 3-8, 2017

1.1 Responsible Design and Deployment

We recognize our responsibility to integrate principles into the design of AI technologies, beyond compliance with existing laws. While the potential benefits to people and society are substantial, AI researchers, subject matter experts, and stakeholders should and do spend a great deal of time working to ensure the responsible design and deployment of AI systems. Highly autonomous AI systems must be designed consistent with international conventions that preserve human dignity, rights, and freedoms. As an industry, it is our responsibility to recognize the potential for use and misuse, the implications of such actions, and our responsibility and opportunity to take steps to avoid the reasonably predictable misuse of this technology by committing to ethics by design.

Published by Information Technology Industry Council (ITI) in AI Policy Principles, Oct 24, 2017

5. Security

As specialists, members of the JSAI shall recognize the need for AI to be safe and acknowledge their responsibility in keeping AI under control. In the development and use of AI, members of the JSAI will always pay attention to safety, controllability, and required confidentiality while ensuring that users of AI are provided appropriate and sufficient information.

Published by The Japanese Society for Artificial Intelligence (JSAI) in The Japanese Society for Artificial Intelligence Ethical Guidelines, Feb 28, 2017

9. Principle of accountability

Developers should make efforts to fulfill their accountability to stakeholders, including AI systems' users.

[Comment] Developers are expected to fulfill their accountability for the AI systems they have developed in order to gain users' trust in AI systems. Specifically, developers are encouraged to make efforts to provide users with information that can help their choice and utilization of AI systems. In addition, in order to improve the acceptance of AI systems by society, including users, it is also encouraged that, taking into account the R&D principles (1) to (8) set forth in the Guidelines, developers make efforts: (a) to provide users et al. with both information and explanations about the technical characteristics of the AI systems they have developed; and (b) to gain the active involvement of stakeholders (such as their feedback), for example by hearing various views through dialogues with diverse stakeholders. Moreover, it is advisable that developers make efforts to share information and cooperate with providers et al. who offer services using the AI systems they have developed.

Published by Ministry of Internal Affairs and Communications (MIC), the Government of Japan in AI R&D Principles, Jul 28, 2017