9. Principle of accountability

Developers should make efforts to fulfill their accountability to stakeholders, including AI systems’ users. [Comment] Developers are expected to fulfill their accountability for the AI systems they have developed in order to gain users’ trust in AI systems. Specifically, developers are encouraged to make efforts to provide users with information that can inform their choice and utilization of AI systems. In addition, in order to improve the acceptance of AI systems by society, including users, developers are also encouraged, taking into account the R&D principles (1) to (8) set forth in the Guidelines, to make efforts: (a) to provide users et al. with both information and explanations about the technical characteristics of the AI systems they have developed; and (b) to gain the active involvement of stakeholders (such as their feedback), for example by hearing various views through dialogues with diverse stakeholders. Moreover, it is advisable that developers make efforts to share information and cooperate with providers et al. who offer services using the AI systems that the developers themselves have developed.
Published by Ministry of Internal Affairs and Communications (MIC), the Government of Japan in AI R&D Principles, Jul 28, 2017

Related Principles

2. Transparency

For cognitive systems to fulfill their world-changing potential, it is vital that people have confidence in their recommendations, judgments and uses. Therefore, the IBM company will make clear:
• When and for what purposes AI is being applied in the cognitive solutions we develop and deploy.
• The major sources of data and expertise that inform the insights of cognitive solutions, as well as the methods used to train those systems and solutions.
• The principle that clients own their own business models and intellectual property and that they can use AI and cognitive systems to enhance the advantages they have built, often through years of experience. We will work with our clients to protect their data and insights, and will encourage our clients, partners and industry colleagues to adopt similar practices.

Published by IBM in Principles for the Cognitive Era, Jan 17, 2017

Ensure “Interpretability” of AI systems

Principle: It should be possible to understand decisions made by an AI agent, especially if those decisions have implications for public safety or result in discriminatory practices.
Recommendations:
• Ensure Human Interpretability of Algorithmic Decisions: AI systems must be designed with the minimum requirement that the designer can account for an AI agent’s behaviors. Some systems with potentially severe implications for public safety should also have the functionality to provide information in the event of an accident.
• Empower Users: Providers of services that utilize AI need to incorporate the ability for the user to request and receive basic explanations as to why a decision was made.

Published by Internet Society, "Artificial Intelligence and Machine Learning: Policy Paper" in Guiding Principles and Recommendations, Apr 18, 2017

1. Principle of collaboration

Developers should pay attention to the interconnectivity and interoperability of AI systems. [Comment] Developers should give consideration to the interconnectivity and interoperability between the AI systems they have developed and other AI systems, etc., with consideration of the diversity of AI systems, so that: (a) the benefits of AI systems increase through the sound progress of AI networking; and (b) multiple developers’ efforts to control risks are well coordinated and operate effectively. For this, developers should pay attention to the following:
• To make efforts to cooperate in sharing relevant information which is effective in ensuring interconnectivity and interoperability.
• To make efforts to develop AI systems conforming to international standards, if any.
• To make efforts to address the standardization of data formats and the openness of interfaces and protocols, including application programming interfaces (APIs).
• To pay attention to the risks of unintended events resulting from the interconnection or interoperation between the AI systems they have developed and other AI systems, etc.
• To make efforts to promote open and fair treatment of license agreements and their conditions for intellectual property rights, such as standard essential patents, that contribute to ensuring interconnectivity and interoperability between AI systems and other AI systems, etc., while taking into consideration the balance between the protection and the utilization of intellectual property related to the development of AI.
[Note] Interoperability and interconnectivity in this context mean that AI systems developed by developers can be connected to information and communication networks, and can thereby operate with other AI systems, etc. in mutually and appropriately harmonized manners.

Published by Ministry of Internal Affairs and Communications (MIC), the Government of Japan in AI R&D Principles, Jul 28, 2017

4. Principle of safety

Developers should take into consideration that AI systems must not harm the life, body, or property of users or third parties through actuators or other devices. [Comment] The AI systems subject to this principle are those that might harm the life, body, or property of users or third parties through actuators or other devices. Developers are encouraged to refer to relevant international standards and pay attention to the following, with particular consideration of the possibility that outputs or programs might change as a result of learning or other methods of AI systems:
• To make efforts to conduct verification and validation in advance in order to assess and mitigate the risks related to the safety of the AI systems.
• To make efforts to implement measures, throughout the development stage of AI systems to the extent possible in light of the characteristics of the technologies adopted, that contribute to intrinsic safety (reduction of essential risk factors such as the kinetic energy of actuators) and functional safety (mitigation of risks by the operation of additional control devices such as automatic braking) when AI systems work with actuators or other devices.
• To make efforts to explain the designers’ intent of AI systems and the reasons for it to stakeholders such as users, when developing AI systems to be used for making judgments regarding the safety of the life, body, or property of users and third parties (for example, judgments that prioritize the life, body, or property to be protected at the time of an accident involving a robot equipped with AI).

Published by Ministry of Internal Affairs and Communications (MIC), the Government of Japan in AI R&D Principles, Jul 28, 2017

2. Stakeholder Engagement

In order to solve the challenges arising from the use of AI while striving for better AI utilization, Sony will seriously consider the interests and concerns of various stakeholders, including its customers and creators, and proactively advance a dialogue with related industries, organizations, academic communities and more. For this purpose, Sony will construct appropriate channels for ensuring that the content and results of these discussions are provided to officers and employees involved in the corresponding businesses, including researchers and developers, as well as for ensuring further engagement with its various stakeholders.

Published by Sony Group in Sony Group AI Ethics Guidelines, Sep 25, 2018