2. AI developers are responsible for their projects and must take into account the impact that each project may have on society. We will work to avoid potentially harmful or abusive applications. As we develop and implement AI technologies, we will evaluate probable uses in light of the following factors: purpose, nature, and impact.

Principle: Declaration of Ethics for the Development and Use of Artificial Intelligence (unofficial translation), Feb 8, 2019 (unconfirmed)

Published by IA Latam

Related Principles

· (4) Security

Positive utilization of AI means that many social systems will be automated and the safety of those systems will be improved. On the other hand, within the scope of today's technologies, it is impossible for AI to respond appropriately to rare events or deliberate attacks, so the use of AI introduces new security risks. Society should always be aware of the balance of benefits and risks, and should work to improve social safety and sustainability as a whole. Society must promote broad and deep research and development in AI (from immediate measures to deep understanding), such as the proper evaluation of risks in the utilization of AI and research to reduce those risks. Society must also pay attention to risk management, including cybersecurity awareness. Society should always pay attention to sustainability in the use of AI. In particular, society should not become dependent on a single AI system or a small number of specified AI systems.

Published by Cabinet Office, Government of Japan in Social Principles of Human-centric AI (Draft), Dec 27, 2018

· 1. Be socially beneficial.

The expanded reach of new technologies increasingly touches society as a whole. Advances in AI will have transformative impacts in a wide range of fields, including healthcare, security, energy, transportation, manufacturing, and entertainment. As we consider potential development and uses of AI technologies, we will take into account a broad range of social and economic factors, and will proceed where we believe that the overall likely benefits substantially exceed the foreseeable risks and downsides. AI also enhances our ability to understand the meaning of content at scale. We will strive to make high-quality and accurate information readily available using AI, while continuing to respect cultural, social, and legal norms in the countries where we operate. And we will continue to thoughtfully evaluate when to make our technologies available on a non-commercial basis.

Published by Google in Artificial Intelligence at Google: Our Principles, Jun 7, 2018

· 7. Be made available for uses that accord with these principles.

Many technologies have multiple uses. We will work to limit potentially harmful or abusive applications. As we develop and deploy AI technologies, we will evaluate likely uses in light of the following factors:

● Primary purpose and use: the primary purpose and likely use of a technology and application, including how closely the solution is related to or adaptable to a harmful use
● Nature and uniqueness: whether we are making available technology that is unique or more generally available
● Scale: whether the use of this technology will have significant impact
● Nature of Google’s involvement: whether we are providing general purpose tools, integrating tools for customers, or developing custom solutions

Published by Google in Artificial Intelligence at Google: Our Principles, Jun 7, 2018

AI Applications We Will Not Pursue

In addition to the above objectives, we will not design or deploy AI in the following application areas:

1. Technologies that cause or are likely to cause overall harm. Where there is a material risk of harm, we will proceed only where we believe that the benefits substantially outweigh the risks, and will incorporate appropriate safety constraints.
2. Weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people.
3. Technologies that gather or use information for surveillance violating internationally accepted norms.
4. Technologies whose purpose contravenes widely accepted principles of international law and human rights.

As our experience in this space deepens, this list may evolve.

Published by Google in Artificial Intelligence at Google: Our Principles, Jun 7, 2018

4. Principle of safety

Developers should take into consideration that AI systems will not harm the life, body, or property of users or third parties through actuators or other devices.

[Comment] The AI systems subject to this principle are those that might harm the life, body, or property of users or third parties through actuators or other devices. Developers are encouraged to refer to relevant international standards and to pay attention to the following, with particular consideration of the possibility that outputs or programs might change as a result of learning or other methods of AI systems:

● To make efforts to conduct verification and validation in advance in order to assess and mitigate the risks related to the safety of the AI systems.
● To make efforts to implement measures, throughout the development stage of AI systems to the extent possible in light of the characteristics of the technologies to be adopted, to contribute to intrinsic safety (reduction of essential risk factors such as the kinetic energy of actuators) and functional safety (mitigation of risks by the operation of additional control devices such as automatic braking) when AI systems work with actuators or other devices.
● To make efforts to explain the designers’ intent of AI systems and the reasons for it to stakeholders such as users, when developing AI systems to be used for making judgments regarding the safety of life, body, or property of users and third parties (for example, judgments that prioritize which lives, bodies, or property to protect at the time of an accident involving a robot equipped with AI).

Published by Ministry of Internal Affairs and Communications (MIC), the Government of Japan in AI R&D Principles, Jul 28, 2017