· Article 6: Transparent and explainable.

Continuously improve the transparency of artificial intelligence systems. System decision-making processes, data structures, and the intent of system developers and technology implementers should be capable of being accurately described, monitored, and reproduced, so as to realize explainability, predictability, traceability, and verifiability for algorithmic logic, system decisions, and action outcomes.
Principle: Joint Pledge on Artificial Intelligence Industry Self-Discipline (Draft for Comment), May 31, 2019

Published by Artificial Intelligence Industry Alliance (AIIA), China

Related Principles

3. The transparency and intelligibility of artificial intelligence systems should be improved, with the objective of effective implementation, in particular by:

a. investing in public and private scientific research on explainable artificial intelligence,
b. promoting transparency, intelligibility and reachability, for instance through the development of innovative ways of communication, taking into account the different levels of transparency and information required for each relevant audience,
c. making organizations’ practices more transparent, notably by promoting algorithmic transparency and the auditability of systems, while ensuring meaningfulness of the information provided,
d. guaranteeing the right to informational self-determination, notably by ensuring that individuals are always informed appropriately when they are interacting directly with an artificial intelligence system or when they provide personal data to be processed by such systems,
e. providing adequate information on the purpose and effects of artificial intelligence systems in order to verify continuous alignment with the expectations of individuals and to enable overall human control over such systems.

Published by 40th International Conference of Data Protection and Privacy Commissioners (ICDPPC) in Declaration On Ethics And Data Protection In Artificial Intelligence, Oct 23, 2018

Chapter 3. The Norms of Research and Development

10. Strengthen the awareness of self-discipline. Strengthen self-discipline in activities related to AI research and development, actively integrate AI ethics into every phase of technology research and development, consciously carry out self-censorship, strengthen self-management, and do not engage in AI research and development that violates ethics and morality.
11. Improve data quality. In the phases of data collection, storage, use, processing, transmission, provision, disclosure, etc., strictly abide by data-related laws, standards and norms. Improve the completeness, timeliness, consistency, normativeness and accuracy of data.
12. Enhance safety, security and transparency. In the phases of algorithm design, implementation, and application, etc., improve transparency, interpretability, understandability, reliability, and controllability, enhance the resilience, adaptability, and anti-interference ability of AI systems, and gradually realize verifiable, auditable, supervisable, traceable, predictable and trustworthy AI.
13. Avoid bias and discrimination. During the process of data collection and algorithm development, strengthen ethics review, fully consider the diversity of demands, avoid potential data and algorithmic bias, and strive to achieve inclusivity, fairness and non-discrimination of AI systems.

Published by National Governance Committee for the New Generation Artificial Intelligence, China in Ethical Norms for the New Generation Artificial Intelligence, Sep 25, 2021

10. Responsibility, accountability and transparency

a. Build trust by ensuring that designers and operators are responsible and accountable for their systems, applications and algorithms, and that such systems, applications and algorithms operate in a transparent and fair manner.
b. Make available externally visible and impartial avenues of redress for adverse individual or societal effects of an algorithmic decision system, and designate a role to a person or office responsible for the timely remedy of such issues.
c. Incorporate downstream measures and processes for users or stakeholders to verify how and when AI technology is being applied.
d. Keep detailed records of design processes and decision making.

Published by Personal Data Protection Commission (PDPC), Singapore in A compilation of existing AI ethical principles (Annex A), Jan 21, 2020

· Build and Validate:

1. To develop a sound and functional AI system that is both reliable and safe, the AI system’s technical construct should be accompanied by a comprehensive methodology to test the quality of the predictive, data-based systems and models according to standard policies and protocols.
2. To ensure the technical robustness of an AI system, rigorous testing, validation, and re-assessment, as well as the integration of adequate mechanisms of oversight and control into its development, are required. System integration test sign-off should be done with relevant stakeholders to minimize risks and liability.
3. Automated AI systems involving scenarios where decisions are understood to have an impact that is irreversible or difficult to reverse, or that may involve life-and-death decisions, should trigger human oversight and final determination. Furthermore, AI systems should not be used for social scoring or mass surveillance purposes.

Published by SDAIA in AI Ethics Principles, Sep 14, 2022

· 1) Accountability:

Artificial intelligence should be auditable and traceable. We are committed to confirming test standards, deployment processes and specifications, ensuring that algorithms are verifiable, and gradually improving the accountability and supervision mechanisms of artificial intelligence systems.

Published by Youth Work Committee of Shanghai Computer Society in Chinese Young Scientists’ Declaration on the Governance and Innovation of Artificial Intelligence, Aug 29, 2019