Transparent and Accountable.

We will provide appropriate transparency to the public and our customers regarding our AI methods, applications, and uses within the bounds of security, technology, and releasability by law and policy, and consistent with the Principles of Intelligence Transparency for the IC. We will develop and employ mechanisms to identify responsibilities and provide accountability for the use of AI and its outcomes.
Principle: Principles of Artificial Intelligence Ethics for the Intelligence Community, Jul 23, 2020

Published by Intelligence Community (IC), United States

Related Principles

Accountability

Those responsible for the different phases of the AI system lifecycle should be identifiable and accountable for the outcomes of the AI systems, and human oversight of AI systems should be enabled. This principle aims to acknowledge the relevant organisations' and individuals' responsibility for the outcomes of the AI systems that they design, develop, deploy and operate. The application of legal principles regarding accountability for AI systems is still developing. Mechanisms should be put in place to ensure responsibility and accountability for AI systems and their outcomes, both before and after their design, development, deployment and operation. The organisation and individual accountable for a decision should be identifiable as necessary, and they must consider the appropriate level of human control or oversight for the particular AI system or use case. AI systems that have a significant impact on an individual's rights should be subject to external review, including the provision of timely, accurate, and complete information to independent oversight bodies.

Published by Department of Industry, Innovation and Science, Australian Government in AI Ethics Principles, Nov 7, 2019

Design for human control, accountability, and intended use

Humans should have ultimate control of our technology, and we strive to prevent unintended use of our products. Our user experience enforces accountability, responsible use, and transparency of consequences. We build protections into our products to detect and avoid unintended system behaviors. We achieve this through modern software engineering and rigorous testing of our entire systems, including their constituent data and AI products, in isolation and in concert. Additionally, we rely on ongoing user research to help ensure that our products function as expected and can be appropriately disabled when necessary. Accountability is enforced by providing customers with insight into the provenance of data sources, methodologies, and design processes in easily understood and transparent language. Effective governance — of data, models, and software — is foundational to the ethical and accountable deployment of AI.

Published by Rebellion Defense in AI Ethical Principles, January 2023

4. We strive for transparency and integrity in all that we do

Our systems are held to specific standards in accordance with their level of technical ability and intended usage. Their input, capabilities, intended purpose, and limitations will be communicated clearly to our customers, and we provide means for oversight and control by customers and users. They are, and will always remain, in control of the deployment of our products. We actively support industry collaboration and will conduct research to further system transparency. We operate with integrity through our code of business conduct, our internal AI Ethics Steering Committee, and our external AI Ethics Advisory Panel.

Published by SAP in SAP's Guiding Principles for Artificial Intelligence, Sep 18, 2018

3. Scientific Integrity and Information Quality

The government’s regulatory and non-regulatory approaches to AI applications should leverage scientific and technical information and processes. Agencies should hold information, whether produced by the government or acquired by the government from third parties, that is likely to have a clear and substantial influence on important public policy or private sector decisions (including those made by consumers) to a high standard of quality, transparency, and compliance. Consistent with the principles of scientific integrity in the rulemaking and guidance processes, agencies should develop regulatory approaches to AI in a manner that both informs policy decisions and fosters public trust in AI. Best practices include transparently articulating the strengths, weaknesses, intended optimizations or outcomes, bias mitigation, and appropriate uses of the AI application’s results. Agencies should also be mindful that, for AI applications to produce predictable, reliable, and optimized outcomes, data used to train the AI system must be of sufficient quality for the intended use.

Published by The White House Office of Science and Technology Policy (OSTP), United States in Principles for the Stewardship of AI Applications, Nov 17, 2020
