2. be transparent about how and when we are using AI, starting with a clear user need and public benefit

Principle: Responsible use of artificial intelligence (AI): Our guiding principles, 2019 (unconfirmed)

Published by Government of Canada

Related Principles

4. We are transparent.

We never hide it when the customer’s counterpart is an AI, and we are transparent about how we use customer data. As Deutsche Telekom, we always have the customer’s trust in mind – trust is what we stand for. We act openly toward our customers. It is obvious to our customers when they are interacting with an AI. In addition, we make clear how, and to what extent, they can choose how their personal data is further processed.

Published by Deutsche Telekom in Deutsche Telekom’s guidelines for artificial intelligence, May 11, 2018

· 2) Research Funding

Investments in AI should be accompanied by funding for research on ensuring its beneficial use, including thorny questions in computer science, economics, law, ethics, and social studies, such as: How can we make future AI systems highly robust, so that they do what we want without malfunctioning or getting hacked? How can we grow our prosperity through automation while maintaining people’s resources and purpose? How can we update our legal systems to be more fair and efficient, to keep pace with AI, and to manage the risks associated with AI? What set of values should AI be aligned with, and what legal and ethical status should it have?

Published by Future of Life Institute (FLI), Beneficial AI 2017 in Asilomar AI Principles, Jan 3-8, 2017

L Learning

To maximise the potential of AI, people need to learn how it works and the most efficient and effective ways to use it. Employees and other stakeholders need to be empowered to take personal responsibility for the consequences of their use of AI, and they need to be provided with the skills to do so.

Published by Institute of Business Ethics (IBE) in IBE interactive framework of fundamental values and principles for the use of Artificial Intelligence (AI) in business, Jan 11, 2018

3. New technology, including AI systems, must be transparent and explainable

For the public to trust AI, it must be transparent. Technology companies must be clear about who trains their AI systems, what data was used in that training and, most importantly, what went into their algorithm’s recommendations. If we are to use AI to help make important decisions, it must be explainable.

Published by IBM in Principles for Trust and Transparency, May 30, 2018

1. Transparent and explainable

There must be transparent use and responsible disclosure around data enhanced technology like AI, automated decisions and machine learning systems to ensure that people understand outcomes and can discuss, challenge and improve them. This includes being open about how and why these technologies are being used. When automation has been used to make or assist with decisions, a meaningful explanation should be made available. The explanation should be meaningful to the person requesting it. It should include relevant information about what the decision was, how the decision was made, and the consequences.

Why it matters: Transparent use is the key principle that helps enable other principles while building trust and confidence in government use of data enhanced technologies. It also encourages a dialogue between those using the technology and those who are affected by it. Meaningful explanations are important because they help people understand and potentially challenge outcomes. This helps ensure decisions are rendered fairly. It also helps identify and reverse adverse impacts on historically disadvantaged groups. For more on this, please consult the Transparency Guidelines.

Published by Government of Ontario, Canada in Principles for Ethical Use of AI [Beta], Sept 14, 2023