7. Validation and Testing

Institutions should use rigorous methods to validate their models and document those methods and results. In particular, they should routinely perform tests to assess and determine whether the model generates discriminatory harm. Institutions are encouraged to make the results of such tests public.
Published by ACM US Public Policy Council (USACM) in Principles for Algorithmic Transparency and Accountability, Jan 12, 2017
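
One minimal sketch of the kind of routine test this principle calls for, assuming a binary classifier and a labeled protected group (all names and the 0.8 threshold are illustrative conventions, not part of the ACM principle): it computes the disparate impact ratio, the rate of favorable outcomes for a protected group divided by the rate for the reference group, and flags the model when the ratio falls below the threshold.

```python
# Hypothetical discrimination test: disparate impact ratio with a
# "four-fifths rule" threshold (0.8), a common convention in fairness audits.

def favorable_rate(outcomes):
    """Fraction of decisions that are favorable (encoded as 1)."""
    return sum(outcomes) / len(outcomes)

def disparate_impact(protected_outcomes, reference_outcomes):
    """Ratio of favorable-outcome rates between the two groups."""
    return favorable_rate(protected_outcomes) / favorable_rate(reference_outcomes)

def check_model(protected_outcomes, reference_outcomes, threshold=0.8):
    ratio = disparate_impact(protected_outcomes, reference_outcomes)
    return {"disparate_impact": round(ratio, 3), "passes": ratio >= threshold}

# Example: model decisions (1 = favorable) for two groups of applicants.
print(check_model([1, 0, 0, 1, 0], [1, 1, 0, 1, 1]))
# {'disparate_impact': 0.5, 'passes': False}
```

Publishing a record like this for each model release would also satisfy the principle's encouragement to make test results public.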

Related Principles

10. Transparency

Transparency concerns the reduction of information asymmetry. Explainability – as a form of transparency – entails the capability to describe, inspect and reproduce the mechanisms through which AI systems make decisions and learn to adapt to their environments, as well as the provenance and dynamics of the data that is used and created by the system. Being explicit and open about choices and decisions concerning data sources, development processes, and stakeholders should be required from all models that use human data or affect human beings or can have other morally significant impact.

Published by The European Commission’s High-Level Expert Group on Artificial Intelligence in Draft Ethics Guidelines for Trustworthy AI, Dec 18, 2018
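
The "provenance and dynamics of the data" clause is concrete enough to sketch: a training run could emit a machine-readable record of where the data came from and how it was transformed, so the pipeline can later be described, inspected, and reproduced. The field names below are illustrative assumptions, not part of the HLEG guidelines.

```python
# Hypothetical data-provenance record emitted at training time.

import json
import hashlib
from datetime import datetime, timezone

def provenance_record(source_uri, raw_bytes, transforms, model_version):
    return {
        "source_uri": source_uri,                         # where the data originated
        "sha256": hashlib.sha256(raw_bytes).hexdigest(),  # fingerprint for reproducibility
        "transforms": transforms,                         # ordered preprocessing steps
        "model_version": model_version,
        "recorded_at": datetime.now(timezone.utc).isoformat(),
    }

record = provenance_record(
    source_uri="s3://example-bucket/loans-2024.csv",
    raw_bytes=b"applicant_id,income,decision\n...",
    transforms=["drop_null_rows", "normalize_income", "one_hot_region"],
    model_version="credit-scorer-1.4.2",
)
print(json.dumps(record, indent=2))
```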

2. Transparency

For cognitive systems to fulfill their world-changing potential, it is vital that people have confidence in their recommendations, judgments and uses. Therefore, the IBM company will make clear:
· When and for what purposes AI is being applied in the cognitive solutions we develop and deploy.
· The major sources of data and expertise that inform the insights of cognitive solutions, as well as the methods used to train those systems and solutions.
· The principle that clients own their own business models and intellectual property and that they can use AI and cognitive systems to enhance the advantages they have built, often through years of experience.
We will work with our clients to protect their data and insights, and will encourage our clients, partners and industry colleagues to adopt similar practices.

Published by IBM in Principles for the Cognitive Era, Jan 17, 2017
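
The three disclosures could be captured as a machine-readable record attached to each deployed solution. The sketch below is purely illustrative; the class and field names are hypothetical, not an IBM specification.

```python
# Hypothetical transparency-disclosure record for a deployed solution.

from dataclasses import dataclass, field, asdict

@dataclass
class CognitiveSolutionDisclosure:
    purpose: str                                           # when and for what purposes AI is applied
    data_sources: list = field(default_factory=list)      # major sources of data and expertise
    training_methods: list = field(default_factory=list)  # methods used to train the system
    client_owns_ip: bool = True                            # clients retain business models and IP

disclosure = CognitiveSolutionDisclosure(
    purpose="Triage incoming support tickets by urgency",
    data_sources=["client ticket archive", "public product documentation"],
    training_methods=["supervised fine-tuning", "human review of edge cases"],
)
print(asdict(disclosure))
```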

4. Principle 4 — Transparency

Issue: How can we ensure that A/IS are transparent? [Candidate Recommendation] Develop new standards* that describe measurable, testable levels of transparency, so that systems can be objectively assessed and levels of compliance determined. For designers, such standards will provide a guide for self-assessing transparency during development and suggest mechanisms for improving transparency. (The mechanisms by which transparency is provided will vary significantly, for instance: 1) for users of care or domestic robots, a "why did you do that?" button which, when pressed, causes the robot to explain the action it just took; 2) for validation or certification agencies, the algorithms underlying the A/IS and how they have been verified; and 3) for accident investigators, secure storage of sensor and internal state data, comparable to a flight data recorder or black box.) *Note that IEEE Standards Working Group P7001™ has been set up in response to this recommendation.

Published by The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems in Ethically Aligned Design (v2): General Principles, Dec 12, 2017 (v1: Dec 13, 2016)
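
Two of the mechanisms named above, the "why did you do that?" button and the flight-recorder-style log, lend themselves to a small sketch. Everything here is an assumed interface for illustration; nothing comes from IEEE P7001 itself.

```python
# Hypothetical decision recorder: an append-only log playing the role of a
# "flight data recorder", plus a why-did-you-do-that query that explains the
# most recent action.

from datetime import datetime, timezone

class DecisionRecorder:
    def __init__(self):
        self._log = []  # append-only record of sensor state and chosen actions

    def record(self, sensor_state, action, reason):
        self._log.append({
            "time": datetime.now(timezone.utc).isoformat(),
            "sensor_state": sensor_state,
            "action": action,
            "reason": reason,
        })

    def why_did_you_do_that(self):
        """The 'button': explain the action the system just took."""
        if not self._log:
            return "No actions have been taken yet."
        last = self._log[-1]
        return f"At {last['time']} I chose '{last['action']}' because {last['reason']}."

    def export_for_investigators(self):
        """Full log, for accident investigation or certification audits."""
        return list(self._log)

recorder = DecisionRecorder()
recorder.record({"obstacle_distance_m": 0.4}, "stop", "an obstacle was within 0.5 m")
print(recorder.why_did_you_do_that())
```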

3. Scientific Integrity and Information Quality

The government’s regulatory and non-regulatory approaches to AI applications should leverage scientific and technical information and processes. Agencies should hold information, whether produced by the government or acquired by the government from third parties, that is likely to have a clear and substantial influence on important public policy or private sector decisions (including those made by consumers) to a high standard of quality, transparency, and compliance. Consistent with the principles of scientific integrity in the rulemaking and guidance processes, agencies should develop regulatory approaches to AI in a manner that both informs policy decisions and fosters public trust in AI. Best practices include transparently articulating the strengths, weaknesses, intended optimizations or outcomes, bias mitigation, and appropriate uses of the AI application’s results. Agencies should also be mindful that, for AI applications to produce predictable, reliable, and optimized outcomes, data used to train the AI system must be of sufficient quality for the intended use.

Published by The White House Office of Science and Technology Policy (OSTP), United States in Principles for the Stewardship of AI Applications, Jan 13, 2020
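
The closing requirement, that training data "be of sufficient quality for the intended use", can be operationalized as a pre-training gate that rejects a dataset failing basic completeness checks. The specific checks and thresholds below are illustrative assumptions, not OSTP requirements.

```python
# Hypothetical data-quality gate run before training.

def validate_training_data(rows, required_fields, max_missing_fraction=0.05):
    """Return (ok, issues) for a list of dict-shaped records."""
    issues = []
    if not rows:
        return False, ["dataset is empty"]
    for fld in required_fields:
        missing = sum(1 for r in rows if r.get(fld) is None)
        if missing / len(rows) > max_missing_fraction:
            issues.append(f"field '{fld}': {missing}/{len(rows)} values missing")
    return not issues, issues

rows = [
    {"income": 52000, "age": 34},
    {"income": None,  "age": 41},
    {"income": 61000, "age": None},
]
ok, issues = validate_training_data(rows, ["income", "age"])
print(ok, issues)
# False ["field 'income': 1/3 values missing", "field 'age': 1/3 values missing"]
```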

4. Risk Assessment and Management

Regulatory and non-regulatory approaches to AI should be based on a consistent application of risk assessment and risk management across various agencies and various technologies. It is not necessary to mitigate every foreseeable risk; in fact, a foundational principle of regulatory policy is that all activities involve tradeoffs. Instead, a risk-based approach should be used to determine which risks are acceptable and which risks present the possibility of unacceptable harm, or harm that has expected costs greater than expected benefits. Agencies should be transparent about their evaluations of risk and re-evaluate their assumptions and conclusions at appropriate intervals so as to foster accountability. Correspondingly, the magnitude and nature of the consequences should an AI tool fail, or for that matter succeed, can help inform the level and type of regulatory effort that is appropriate to identify and mitigate risks. Specifically, agencies should follow the direction in Executive Order 12866, “Regulatory Planning and Review,” to consider the degree and nature of the risks posed by various activities within their jurisdiction. Such an approach will, where appropriate, avoid hazard-based and unnecessarily precautionary approaches to regulation that could unjustifiably inhibit innovation.

Published by The White House Office of Science and Technology Policy (OSTP), United States in Principles for the Stewardship of AI Applications, Jan 13, 2020
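
The expected-cost test the memorandum describes is simple arithmetic: a risk presents unacceptable harm when its expected cost (probability of failure times harm magnitude) exceeds the expected benefit. The sketch below works one such comparison; the numbers and names are hypothetical.

```python
# Hypothetical risk-based triage: expected cost versus expected benefit.

def expected_cost(p_failure, harm):
    return p_failure * harm

def classify_risk(p_failure, harm, expected_benefit):
    cost = expected_cost(p_failure, harm)
    return {
        "expected_cost": cost,
        "expected_benefit": expected_benefit,
        "acceptable": cost <= expected_benefit,
    }

# A tool with a 2% failure rate causing $500k of harm per failure,
# against $25k of expected benefit per deployment period:
print(classify_risk(p_failure=0.02, harm=500_000, expected_benefit=25_000))
# {'expected_cost': 10000.0, 'expected_benefit': 25000, 'acceptable': True}
```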