Trust

Organizations should have internal processes to self-regulate the misuse of AI, such as a chief ethics officer or an ethics board.
Principle: Seeking Ground Rules for A.I.: The Recommendations, Mar 1, 2019

Published by New Work Summit, hosted by The New York Times

Related Principles

· Accountability

People and corporations who design and deploy AI systems must be accountable for how their systems are designed and operated. The development of AI must be responsible, safe and useful. AI must maintain the legal status of tools, and legal persons need to retain control over, and responsibility for, these tools at all times. Workers, job applicants and ex-workers must also have the “right of explanation” when AI systems are used in human resource procedures, such as recruitment, promotion or dismissal. They should also be able to appeal decisions by AI and have them reviewed by a human.

Published by Centre for International Governance Innovation (CIGI), Canada in Toward a G20 Framework for Artificial Intelligence in the Workplace, Jul 19, 2018

• Require Accountability for Ethical Design and Implementation

The social implications of computing have grown and will continue to expand as more people have access to implementations of AI. Public policy should work to identify and mitigate discrimination caused by the use of AI and encourage designing in protections against these harms.

[Recommendations]

• Standing for “Accountable Artificial Intelligence”: Governments, industry and academia should apply the Information Accountability Foundation’s principles to AI. Organizations implementing AI solutions should be able to demonstrate to regulators that they have the right processes, policies and resources in place to meet those principles.

• Transparent decisions: Governments should determine which AI implementations require algorithm explainability to mitigate discrimination and harm to individuals.

Published by Intel in AI public policy principles, Oct 18, 2017

6. Shared Responsibility

AI developers, users and other related stakeholders should have a high sense of social responsibility and self-discipline, and should strictly abide by laws, regulations, ethical principles, technical standards and social norms. AI accountability mechanisms should be established to clarify the responsibilities of researchers, developers, users, and relevant parties. Users of AI products and services and other stakeholders should be informed of the potential risks and impacts in advance. Using AI for illegal activities should be strictly prohibited.

Published by National Governance Committee for the New Generation Artificial Intelligence, China in Governance Principles for the New Generation Artificial Intelligence--Developing Responsible Artificial Intelligence, Jun 17, 2019

4. We strive for transparency and integrity in all that we do

Our systems are held to specific standards in accordance with their level of technical ability and intended usage. Their input, capabilities, intended purpose, and limitations will be communicated clearly to our customers, and we provide means for oversight and control by customers and users. They are, and will always remain, in control of the deployment of our products. We actively support industry collaboration and will conduct research to further system transparency. We operate with integrity through our code of business conduct, our internal AI Ethics Steering Committee, and our external AI Ethics Advisory Panel.

Published by SAP in SAP's Guiding Principles for Artificial Intelligence, Sep 18, 2018

(Preamble)

New developments in Artificial Intelligence are transforming the world, from science and industry to government administration and finance. The rise of AI decision making also implicates fundamental rights of fairness, accountability, and transparency. Modern data analysis produces significant outcomes that have real-life consequences for people in employment, housing, credit, commerce, and criminal sentencing. Many of these techniques are entirely opaque, leaving individuals unaware whether the decisions were accurate, fair, or even about them. We propose these Universal Guidelines to inform and improve the design and use of AI. The Guidelines are intended to maximize the benefits of AI, to minimize the risk, and to ensure the protection of human rights. These Guidelines should be incorporated into ethical standards, adopted in national law and international agreements, and built into the design of systems. We state clearly that the primary responsibility for AI systems must reside with those institutions that fund, develop, and deploy these systems.

Published by The Public Voice coalition, established by Electronic Privacy Information Center (EPIC) in Universal Guidelines for Artificial Intelligence, Oct 23, 2018