Notice and Explanation:

You should know when an automated system is being used and understand how and why it contributes to outcomes that impact you.
Principle: Blueprint for an AI Bill of Rights: A Vision for Protecting Our Civil Rights in the Algorithmic Age, Oct 4, 2022

Published by OSTP

Related Principles

· Transparency

As AI increasingly changes the nature of work, workers, customers and vendors need information about how AI systems operate so that they can understand how decisions are made. Their involvement will help to identify potential bias, errors and unintended outcomes.

Transparency is not necessarily, nor only, a question of open source code. While in some circumstances open source code will be helpful, what matters more are clear, complete and testable explanations of what the system is doing and why. Intellectual property, and sometimes even cyber security, is rewarded by a lack of transparency. Innovation generally, including in algorithms, is a value that should be encouraged. How, then, are these competing values to be balanced?

One possibility is to require algorithmic verifiability rather than full algorithmic disclosure. Algorithmic verifiability would require companies to disclose not the actual code driving the algorithm but information allowing the effect of their algorithms to be independently assessed. In the absence of transparency regarding their algorithms' purpose and actual effect, it is impossible to ensure that competition, labour, workplace safety, privacy and liability laws are being upheld. When accidents occur, the AI and related data will need to be transparent and accountable to an accident investigator, so that the process that led to the accident can be understood.

Published by Centre for International Governance Innovation (CIGI), Canada in Toward a G20 Framework for Artificial Intelligence in the Workplace, Jul 19, 2018

· Integrity

As the saying has it, integrity is doing the right thing even when nobody's watching. In the context of AI, we should ensure that it is used only for its intended purpose, even when there is no means to enforce this. When designing or selling an AI system, it is important to ensure that the end user will respect the agreed uses of the technology.

Published by Institute of Business Ethics (IBE) in IBE interactive framework of fundamental values and principles for the use of Artificial Intelligence (AI) in business, Jan 11, 2018

· 2. A.I. must be transparent

We should be aware of how the technology works and what its rules are. We want not just intelligent machines but intelligible machines. Not artificial intelligence but symbiotic intelligence. The tech will know things about humans, but the humans must know about the machines. People should have an understanding of how the technology sees and analyzes the world. Ethics and design go hand in hand.

Published by Satya Nadella, CEO of Microsoft in 10 AI rules, Jun 28, 2016

1. Transparent and explainable

There must be transparent use and responsible disclosure around data enhanced technology like AI, automated decisions and machine learning systems to ensure that people understand outcomes and can discuss, challenge and improve them. This includes being open about how and why these technologies are being used.

When automation has been used to make or assist with decisions, a meaningful explanation should be made available. The explanation should be meaningful to the person requesting it. It should include relevant information about what the decision was, how the decision was made, and the consequences.

Why it matters

Transparent use is the key principle that helps enable other principles while building trust and confidence in government use of data enhanced technologies. It also encourages a dialogue between those using the technology and those who are affected by it. Meaningful explanations are important because they help people understand and potentially challenge outcomes. This helps ensure decisions are rendered fairly. It also helps identify and reverse adverse impacts on historically disadvantaged groups. For more on this, please consult the Transparency Guidelines.

Published by Government of Ontario, Canada in Principles for Ethical Use of AI [Beta], Sept 14, 2023

3.1 Explainability and verifiability

One of the basic characteristics of human consciousness is that it perceives the environment and seeks answers to questions, i.e. explanations of why and how something is or is not. That trait influenced the evolution of man and the development of science, and therefore of artificial intelligence. Man's need to understand and to make things clear found its foothold in this principle.

Clarity in the context of these Guidelines means that all processes (development, testing, commissioning, system monitoring and shutdown) must be transparent. The purpose and capabilities of the artificial intelligence system itself must be explainable, especially the decisions (recommendations) it makes, to the extent that this is expedient, to all who are affected by the System (directly or indirectly). If certain results of the System's work cannot be explained, the System must be marked as one with a "black box" model.

Verifiability is a complementary element of this principle, which ensures that the System can be checked in all processes, i.e. during its entire life cycle. Verifiability includes the actions and procedures for checking artificial intelligence systems during testing and implementation, as well as checking the short-term and long-term impact that such a system has on humans.

Published by Republic of Serbia in ETHICAL GUIDELINES FOR THE DEVELOPMENT, APPLICATION AND USE OF RELIABLE AND RESPONSIBLE ARTIFICIAL INTELLIGENCE, February 2023