3. provide meaningful explanations about AI decision making, while also offering opportunities to review results and challenge these decisions

Principle: Responsible use of artificial intelligence (AI): Our guiding principles, 2019 (unconfirmed)

Published by Government of Canada

Related Principles

3. Explainability

We strive to develop ML solutions that are explainable and direct. Our ML data discovery and data usage models are designed with understanding as a key attribute, measured against an expressed desired outcome. For example, if the ML model is to provide an employee with specific learning or training recommendations, we actively measure both the selection of those recommendations and the outcome or results of the learning module for that individual. In turn, we provide supporting information to outline the effectiveness of those recommendations. ADP is also committed to providing individuals with the right to question an automated decision, and to require a human review of the decision.

Published by ADP in ADP: Ethics in Artificial Intelligence, 2018 (unconfirmed)

Ensure “Interpretability” of AI systems

Principle: Decisions made by an AI agent should be possible to understand, especially if those decisions have implications for public safety, or result in discriminatory practices.

Recommendations:
● Ensure Human Interpretability of Algorithmic Decisions: AI systems must be designed with the minimum requirement that the designer can account for an AI agent’s behaviors. Some systems with potentially severe implications for public safety should also have the functionality to provide information in the event of an accident.
● Empower Users: Providers of services that utilize AI need to incorporate the ability for the user to request and receive basic explanations as to why a decision was made.

Published by Internet Society, "Artificial Intelligence and Machine Learning: Policy Paper" in Guiding Principles and Recommendations, Apr 18, 2017

8. Principle of user assistance

Developers should take into consideration that AI systems will support users and make it possible to give them opportunities for choice in an appropriate manner.

[Comment] In order to support users of AI systems, it is recommended that developers pay attention to the following:
● To make efforts to make available interfaces that provide, in a timely and appropriate manner, information that can help users’ decisions and that are easy for them to use.
● To make efforts to give consideration to making available functions that provide users with opportunities for choice in a timely and appropriate manner (e.g., default settings, easy-to-understand options, feedback, emergency warnings, handling of errors, etc.).
● To make efforts to take measures, such as universal design, to make AI systems easier to use for socially vulnerable people.
In addition, it is recommended that developers make efforts to provide users with appropriate information, considering the possibility that the outputs or programs of AI systems may change as a result of learning or other methods.

Published by Ministry of Internal Affairs and Communications (MIC), the Government of Japan in AI R&D Principles, Jul 28, 2017

2) Humanistic approach

Humanistic approach: Artificial intelligence should empower users to make their own decisions. We are committed to providing transparent, understandable decision interpretations and interactive tools, allowing users to join, monitor, or be involved in the decision-making process.

Published by Youth Work Committee of Shanghai Computer Society in Chinese Young Scientists’ Declaration on the Governance and Innovation of Artificial Intelligence, Aug 29, 2019

6. Pursuit of Transparency

During the planning and design stages for its products and services that utilize AI, Sony will strive to introduce methods of capturing the reasoning behind the decisions made by AI utilized in said products and services. Additionally, it will endeavor to provide intelligible explanations and information to customers about the possible impact of using these products and services.

Published by Sony Group in Sony Group AI Ethics Guidelines, Sep 25, 2018