· 1) Accountability:

Artificial intelligence should be auditable and traceable. We are committed to establishing testing standards, deployment processes, and specifications; ensuring that algorithms are verifiable; and gradually improving the accountability and supervision mechanisms of artificial intelligence systems.
Principle: Chinese Young Scientists’ Declaration on the Governance and Innovation of Artificial Intelligence, Aug 29, 2019

Published by Youth Work Committee of Shanghai Computer Society

Related Principles

· Article 6: Transparent and explainable.

Continuously improve the transparency of artificial intelligence systems. System decision-making processes, data structures, and the intentions of system developers and technology implementers should be capable of accurate description, monitoring, and reproduction, realizing explainability, predictability, traceability, and verifiability for algorithmic logic, system decisions, and action outcomes.

Published by Artificial Intelligence Industry Alliance (AIIA), China in Joint Pledge on Artificial Intelligence Industry Self-Discipline (Draft for Comment), May 31, 2019

3. The transparency and intelligibility of artificial intelligence systems should be improved, with the objective of effective implementation, in particular by:

a. investing in public and private scientific research on explainable artificial intelligence,
b. promoting transparency, intelligibility and reachability, for instance through the development of innovative ways of communication, taking into account the different levels of transparency and information required for each relevant audience,
c. making organizations’ practices more transparent, notably by promoting algorithmic transparency and the auditability of systems, while ensuring meaningfulness of the information provided,
d. guaranteeing the right to informational self-determination, notably by ensuring that individuals are always informed appropriately when they are interacting directly with an artificial intelligence system or when they provide personal data to be processed by such systems, and
e. providing adequate information on the purpose and effects of artificial intelligence systems in order to verify continuous alignment with the expectations of individuals and to enable overall human control over such systems.

Published by 40th International Conference of Data Protection and Privacy Commissioners (ICDPPC) in Declaration on Ethics and Data Protection in Artificial Intelligence, Oct 23, 2018

Design for human control, accountability, and intended use

Humans should have ultimate control of our technology, and we strive to prevent unintended use of our products. Our user experience enforces accountability, responsible use, and transparency of consequences. We build protections into our products to detect and avoid unintended system behaviors. We achieve this through modern software engineering and rigorous testing on our entire systems including their constituent data and AI products, in isolation and in concert. Additionally, we rely on ongoing user research to help ensure that our products function as expected and can be appropriately disabled when necessary. Accountability is enforced by providing customers with insight into the provenance of data sources, methodologies, and design processes in easily understood and transparent language. Effective governance — of data, models, and software — is foundational to the ethical and accountable deployment of AI.

Published by Rebellion Defense in AI Ethical Principles, January 2023

3. Clear responsibility

The development of artificial intelligence should establish a complete framework of safety responsibility. We need to innovate laws, regulations, and ethical norms for the application of artificial intelligence, and clarify mechanisms for identifying and apportioning safety responsibility for artificial intelligence.

Published by Shanghai Advisory Committee of Experts on Artificial Intelligence Industry Security in Shanghai Initiative for the Safe Development of Artificial Intelligence, Aug 30, 2019

· Build and Validate:

1. To develop a sound and functional AI system that is both reliable and safe, the AI system’s technical construct should be accompanied by a comprehensive methodology for testing the quality of predictive, data-based systems and models according to standard policies and protocols.
2. To ensure the technical robustness of an AI system, rigorous testing, validation, and re-assessment are required, as well as the integration of adequate oversight and control mechanisms into its development. System integration test sign-off should be completed with relevant stakeholders to minimize risks and liability.
3. Automated AI systems involved in scenarios where decisions are understood to have an impact that is irreversible or difficult to reverse, or that may involve life-and-death outcomes, should trigger human oversight and final determination. Furthermore, AI systems should not be used for social scoring or mass surveillance purposes.

Published by SDAIA in AI Ethics Principles, Sept 14, 2022