Objectives for AI Applications

We will assess AI applications in view of the following objectives. We believe that AI should:
Published by Google in Artificial Intelligence at Google: Our Principles, Jun 7, 2018

Related Principles

6. We set the framework.

Our AI solutions are developed and enhanced on the basis of deep analysis and evaluation. They are transparent, auditable, fair, and fully documented. We consciously initiate the AI’s development for the best possible outcome. The essential paradigm for our AI systems’ impact analysis is “privacy and security by design”. This is accompanied, for example, by risk and opportunity scenarios or reliable disaster scenarios. We take great care in the initial algorithms of our own AI solutions to prevent so-called “black boxes” and to make sure that our systems do not unintentionally harm the users.

Published by Deutsche Telekom in Deutsche Telekom’s guidelines for artificial intelligence, May 11, 2018

(Preamble)

We reaffirm that the use of AI must take place within the context of the existing DoD ethical framework. Building on this foundation, we propose the following principles, which are more specific to AI, and note that they apply to both combat and non-combat systems. AI is a rapidly developing field, and no organization that currently develops or fields AI systems or espouses AI ethics principles can claim to have solved all the challenges embedded in the following principles. However, the Department should set the goal that its use of AI systems is:

Published by Defense Innovation Board (DIB), Department of Defense (DoD), United States in AI Ethics Principles for DoD, Oct 31, 2019

1. Principle of proper utilization

Users should make efforts to utilize AI systems or AI services in a proper scope and manner, under the proper assignment of roles between humans and AI systems, or among users.

[Main points to discuss]

A) Utilization in the proper scope and manner
On the basis of the information and explanations provided by developers, etc., and with consideration of social contexts and circumstances, users may be expected to use AI in the proper scope and manner. In addition, users may be expected to recognize benefits and risks, understand proper uses, and acquire the necessary knowledge and skills before using AI, according to the characteristics, usage situations, etc. of the AI. Furthermore, users may be expected to check regularly whether they use AI in an appropriate scope and manner.

B) Proper balance of benefits and risks of AI
AI service providers and business users may be expected to take into consideration the proper balance between the benefits and risks of AI, including consideration of the active use of AI for improvements in productivity and work efficiency, after appropriately assessing the risks of AI.

C) Updates of AI software and inspections, repairs, etc. of AI
Through the process of utilization, users may be expected to make efforts to update AI software and to perform inspections, repairs, etc. of AI in order to improve its functions and to mitigate risks.

D) Human intervention
Regarding judgments made by AI, in cases where it is necessary and possible (e.g., medical care using AI), humans may be expected to decide whether to use the judgments of AI, how to use them, etc. In those cases, what can be considered as criteria for the necessity of human intervention? In the utilization of AI that operates through actuators, etc., where a shift to human operation is planned under certain conditions, what kinds of matters are expected to be paid attention to?

[Points of view as criteria (example)]
• The nature of the rights and interests of indirect users, et al., and their intents, affected by the judgments of AI
• The degree of reliability of the judgment of AI (compared with the reliability of human judgment)
• The allowable time necessary for human judgment
• The abilities users are expected to possess

E) Role assignments among users
With consideration of the capabilities and knowledge of AI that each user is expected to have, and the ease of implementing necessary measures, users may be expected to play the roles that seem appropriate and also to bear the corresponding responsibility.

F) Cooperation among stakeholders
Users and data providers may be expected to cooperate with stakeholders and to work on preventive or remedial measures (including information sharing, stopping and restoring AI, elucidating causes, measures to prevent recurrence, etc.) in accordance with the nature, conditions, etc. of damages caused by accidents, security breaches, privacy infringements, etc. that may occur or have occurred through the use of AI. What can reasonably be expected from a user's point of view to ensure the effectiveness of the above?

Published by Ministry of Internal Affairs and Communications (MIC), the Government of Japan in Draft AI Utilization Principles, Jul 17, 2018

1. People-first approach

We will use data and AI responsibly and for the good of our customers. We will clearly define the objectives guiding our use of AI and refine them as necessary in response to changes in data, technical possibilities, and the working environment.

Published by OP Financial Group in OP Financial Group’s ethical guidelines for artificial intelligence, 2018 (unconfirmed)

2. Transparent and explainable AI

We will be explicit about the kinds of personal and/or non-personal data the AI system uses, as well as about the purposes the data is used for. When people directly interact with an AI system, we will make it transparent to users that this is the case. When AI systems take, or support, decisions, we will take the technical and organizational measures required to guarantee a level of understanding adequate to the application area. In any case, if the decisions significantly affect people's lives, we will ensure we understand the logic behind the conclusions. This will also apply when we use third-party technology.

Published by Telefónica in AI Principles of Telefónica, Oct 30, 2018