· Test and validation

Ensure AI systems go through rigorous testing and validation to achieve reasonable expectations of performance.
Principle: "ARCC": An Ethical Framework for Artificial Intelligence, Sep 18, 2018

Published by Tencent Research Institute

Related Principles

· 3. Be built and tested for safety.

We will continue to develop and apply strong safety and security practices to avoid unintended results that create risks of harm. We will design our AI systems to be appropriately cautious, and seek to develop them in accordance with best practices in AI safety research. In appropriate cases, we will test AI technologies in constrained environments and monitor their operation after deployment.

Published by Google in Artificial Intelligence at Google: Our Principles, Jun 7, 2018

· 1.3 Robust and Representative Data

To promote the responsible use of data and ensure its integrity at every stage, industry has a responsibility to understand the parameters and characteristics of the data, to demonstrate the recognition of potentially harmful bias, and to test for potential bias before and throughout the deployment of AI systems. AI systems need to leverage large datasets, and the availability of robust and representative data for building and improving AI and machine learning systems is of utmost importance.

Published by Information Technology Industry Council (ITI) in AI Policy Principles, Oct 24, 2017

· 5. We uphold quality and safety standards

As with any of our products, our AI software is subject to our quality assurance process, which we continuously adapt when necessary. Our AI software undergoes thorough testing under real-world scenarios to validate that it is fit for purpose and that the product specifications are met. We work closely with our customers and users to uphold and further improve our systems’ quality, safety, reliability, and security.

Published by SAP in SAP's Guiding Principles for Artificial Intelligence, Sep 18, 2018

· 1) Accountability:

Artificial intelligence should be auditable and traceable. We are committed to establishing test standards, deployment processes, and specifications, ensuring that algorithms are verifiable, and gradually improving the accountability and supervision mechanisms for artificial intelligence systems.

Published by Youth Work Committee of Shanghai Computer Society in Chinese Young Scientists’ Declaration on the Governance and Innovation of Artificial Intelligence, Aug 29, 2019

· 7. Validation and Testing

Institutions should use rigorous methods to validate their models and document those methods and results. In particular, they should routinely perform tests to assess and determine whether the model generates discriminatory harm. Institutions are encouraged to make the results of such tests public.

Published by ACM US Public Policy Council (USACM) in Principles for Algorithmic Transparency and Accountability, Jan 12, 2017