(d) The United States must foster public trust and confidence in AI technologies and protect civil liberties, privacy, and American values in their application in order to fully realize the potential of AI technologies for the American people.

Principle: Five guiding principles of the American AI Initiative, Feb 11, 2019

Published by The White House, United States

Related Principles

· 6. Respect for (& Enhancement of) Human Autonomy

AI systems should be designed not only to uphold rights, values, and principles, but also to protect citizens in all their diversity from governmental and private abuses made possible by AI technology: ensuring a fair distribution of the benefits created by AI technologies, protecting and enhancing a plurality of human values, and enhancing the self-determination and autonomy of individual users and communities. AI products and services, possibly through "extreme" personalisation approaches, may steer individual choice through potentially manipulative "nudging". At the same time, people are increasingly willing, and expected, to delegate decisions and actions to machines (e.g. recommender systems, search engines, navigation systems, virtual coaches, and personal assistants). Systems tasked with helping the user must provide explicit support for the user to promote his or her own preferences and to set the limits of system intervention, ensuring that the overall wellbeing of the user, as explicitly defined by the user him or herself, is central to system functionality.

Published by The European Commission’s High-Level Expert Group on Artificial Intelligence in Draft Ethics Guidelines for Trustworthy AI, Dec 18, 2018

2. Artificial intelligence should operate on principles of intelligibility and fairness.

Companies and organisations need to improve the intelligibility of their AI systems. Without this, regulators may need to step in and prohibit the use of opaque technology in significant and sensitive areas of life and society. To ensure that our use of AI does not inadvertently prejudice the treatment of particular groups in society, we call for the Government to incentivise the development of new approaches to the auditing of datasets used in AI, and to encourage greater diversity in the training and recruitment of AI specialists.

Published by House of Lords, Select Committee on Artificial Intelligence in AI Code, Apr 16, 2018

7. Principle of ethics

Developers should respect human dignity and individual autonomy in the R&D of AI systems. [Comment] It is encouraged that, when developing AI systems that link with the human brain and body, developers pay particular consideration to respecting human dignity and individual autonomy, in light of discussions on bioethics, etc. It is also encouraged that, to the extent possible given the characteristics of the technologies to be adopted, developers make efforts to take the measures necessary to avoid unfair discrimination resulting from prejudice included in the learning data of AI systems. It is advisable that developers take precautions to ensure that AI systems do not unduly infringe the value of humanity, based on international human rights law and international humanitarian law.

Published by Ministry of Internal Affairs and Communications (MIC), the Government of Japan in AI R&D Principles, Jul 28, 2017

3. Human centric AI

AI should be at the service of society and generate tangible benefits for people. AI systems should always stay under human control and be driven by value-based considerations. Telefónica is conscious of the fact that the implementation of AI in our products and services should in no way lead to a negative impact on human rights or on the achievement of the UN's Sustainable Development Goals. We are concerned about the potential use of AI for the creation or spreading of fake news, for technology addiction, and for the reinforcement of societal bias in algorithms in general. We commit to working towards avoiding these tendencies to the extent it is within our realm of control.

Published by Telefónica in AI Principles of Telefónica, Oct 30, 2018

1. Public Trust in AI

AI is expected to have a positive impact across sectors of social and economic life, including employment, transportation, education, finance, healthcare, personal security, and manufacturing. At the same time, AI applications could pose risks to privacy, individual rights, autonomy, and civil liberties that must be carefully assessed and appropriately addressed. AI's continued adoption and acceptance will depend significantly on public trust and validation. It is therefore important that the government's regulatory and non-regulatory approaches to AI promote reliable, robust, and trustworthy AI applications, which will contribute to public trust in AI. The appropriate regulatory or non-regulatory response to privacy and other risks must necessarily depend on the nature of the risk presented and the appropriate mitigations.

Published by The White House Office of Science and Technology Policy (OSTP), United States in Principles for the Stewardship of AI Applications, Jan 13, 2020