5. Algorithmic justice

The development of artificial intelligence should avoid harm to the public caused by algorithm design, and must make algorithmic intent and interpretability clear in order to overcome the unfair influence caused by algorithm design and data collection.
Principle: Shanghai Initiative for the Safe Development of Artificial Intelligence, Aug 30, 2019

Published by Shanghai Advisory Committee of Experts on Artificial Intelligence Industry Security
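The "unfair influence" this principle targets can be checked empirically. As a minimal sketch, and not anything the Initiative itself prescribes, the demographic parity gap below measures whether an algorithm's positive outcomes are spread evenly across groups; the function and the example data are hypothetical:

```python
from collections import defaultdict

def demographic_parity_gap(predictions, groups):
    """Return the largest difference in positive-prediction rates
    between any two groups (0.0 means perfectly even rates)."""
    totals = defaultdict(int)
    positives = defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += int(pred == 1)
    rates = [positives[g] / totals[g] for g in totals]
    return max(rates) - min(rates)

# Example: a hiring model's decisions for applicants from two groups.
preds  = [1, 0, 1, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(demographic_parity_gap(preds, groups))  # 0.75 - 0.25 = 0.5
```

A gap near zero is not proof of fairness, but a large gap is exactly the kind of measurable "unfair influence" the principle asks developers to surface and explain.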

Related Principles

· 8. Robustness

Trustworthy AI requires that algorithms are secure, reliable, and robust enough to deal with errors or inconsistencies during the design, development, execution, deployment, and use phases of the AI system, and to cope adequately with erroneous outcomes.

Reliability & Reproducibility. Trustworthiness requires that the accuracy of results can be confirmed and reproduced by independent evaluation. However, the complexity, non-determinism, and opacity of many AI systems, together with their sensitivity to training and model-building conditions, can make it difficult to reproduce results. There is now increased awareness within the AI research community that reproducibility is a critical requirement in the field. Reproducibility is essential to guarantee that results are consistent across different situations, computational frameworks, and input data. A lack of reproducibility can lead to unintended discrimination in AI decisions.

Accuracy. Accuracy pertains to an AI's confidence in, and ability to, classify information into the correct categories, or its ability to make correct predictions, recommendations, or decisions based on data or models. An explicit and well-formed development and evaluation process can support, mitigate, and correct unintended risks.

Resilience to Attack. AI systems, like all software systems, can include vulnerabilities that allow them to be exploited by adversaries. Hacking is an important case of intentional harm, by which the system will purposefully follow a different course of action than its original purpose. If an AI system is attacked, both the data and the system's behaviour can be changed, leading the system to make different decisions or causing it to shut down altogether. Systems and/or data can also become corrupted, by malicious intent or by exposure to unexpected situations. Poor governance, by which it becomes possible to tamper with the data intentionally or unintentionally, or to grant unauthorised entities access to the algorithms, can also result in discrimination, erroneous decisions, or even physical harm.

Fall-back Plan. A secure AI has safeguards that enable a fall-back plan in case of problems with the AI system. In some cases this means that the AI system switches from a statistical to a rule-based procedure; in other cases it means that the system asks a human operator before continuing the action.

Published by The European Commission’s High-Level Expert Group on Artificial Intelligence in Draft Ethics Guidelines for Trustworthy AI, Dec 18, 2018
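The fall-back plan described in the robustness principle above, switching from a statistical model to a rule-based procedure or deferring to a human operator, can be sketched as a simple wrapper. This is an illustrative outline only; the confidence threshold and the helper functions are assumptions, not part of the guidelines:

```python
CONFIDENCE_THRESHOLD = 0.8  # assumed cut-off; not specified by the guidelines

def rule_based_decision(request):
    """Deterministic, auditable rules used when the model is unsure.
    Returns None when no rule covers the request."""
    amount = request.get("amount")
    if amount is None:
        return None
    return "deny" if amount > 10_000 else "approve"

def ask_human_operator(request):
    """Last resort: hand the case to a person before acting."""
    return f"escalated to operator: {request!r}"

def decide(model, request):
    label, confidence = model(request)
    if confidence >= CONFIDENCE_THRESHOLD:
        return label                         # normal statistical path
    ruled = rule_based_decision(request)     # fall-back 1: rule-based procedure
    if ruled is not None:
        return ruled
    return ask_human_operator(request)       # fall-back 2: human operator

# Example with a stubbed model that is unsure about this request:
print(decide(lambda r: ("approve", 0.55), {"amount": 25_000}))  # -> "deny"
```

The point of the layering is that each fall-back is more conservative and more auditable than the one before it, which is what the guidelines mean by a safeguard.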

5. The autonomous power to hurt, destroy or deceive human beings should never be vested in artificial intelligence.

There is a significant risk that well-intentioned AI research will be misused in ways which harm people. AI researchers and developers must consider the ethical implications of their work. The Cabinet Office's final Cyber Security & Technology Strategy must explicitly consider the risks of AI with respect to cyber security, and the Government should conduct further research into how to protect data sets from attempts at data sabotage. The Government and Ofcom must, as a matter of urgency, commission research into the possible impact of AI on conventional and social media outlets, and investigate measures which might counteract the use of AI to mislead or distort public opinion.

Published by House of Lords, Select Committee on Artificial Intelligence in AI Code, Apr 16, 2018

4. As part of an overall “ethics by design” approach, artificial intelligence systems should be designed and developed responsibly, by applying the principles of privacy by default and privacy by design, in particular by:

a. implementing technical and organizational measures and procedures, proportional to the type of system that is developed, to ensure that data subjects' privacy and personal data are respected, both when determining the means of the processing and at the moment of data processing,

b. assessing and documenting the expected impacts on individuals and society at the beginning of an artificial intelligence project and for relevant developments during its entire life cycle, and

c. identifying specific requirements for ethical and fair use of the systems and for respecting human rights as part of the development and operations of any artificial intelligence system,

Published by 40th International Conference of Data Protection and Privacy Commissioners (ICDPPC) in Declaration on Ethics and Data Protection in Artificial Intelligence, Oct 23, 2018
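Point (b) above asks for expected impacts to be assessed and documented at the start of a project and kept current across its life cycle. A minimal, hypothetical record structure (the Declaration prescribes no format; all field names and example values are illustrative) might look like:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ImpactAssessment:
    """One documented assessment of expected impacts on individuals
    and society, in the spirit of point (b) above."""
    project: str
    assessed_on: date
    expected_impacts: list[str]
    mitigations: list[str]
    revisions: list[str] = field(default_factory=list)  # life-cycle updates

    def revise(self, note: str) -> None:
        """Append a dated note when a relevant development occurs."""
        self.revisions.append(f"{date.today().isoformat()}: {note}")

# Example: an initial assessment, later revised when the model is retrained.
ia = ImpactAssessment(
    project="loan-scoring-v1",
    assessed_on=date(2018, 10, 23),
    expected_impacts=["possible disparate error rates across age groups"],
    mitigations=["stratified evaluation before each release"],
)
ia.revise("retrained on new data; re-ran stratified evaluation")
```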

6. Transparent regulation

The development of artificial intelligence should avoid the security risks created by technological "black boxes", and it is necessary to ensure consistency between target functions and technologies through the establishment of reviewable, traceable, and trustworthy regulatory mechanisms.

Published by Shanghai Advisory Committee of Experts on Artificial Intelligence Industry Security in Shanghai Initiative for the Safe Development of Artificial Intelligence, Aug 30, 2019
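One way to make a regulatory mechanism "reviewable" and "traceable" is to have every algorithmic decision leave a tamper-evident record. The hash-chained log below is a hypothetical illustration of that idea; the Initiative names no specific technique:

```python
import hashlib
import json

def append_entry(log, decision):
    """Append a decision record whose hash chains to the previous entry,
    so any later edit to the log breaks the chain and is detectable."""
    prev_hash = log[-1]["hash"] if log else "0" * 64
    body = json.dumps({"decision": decision, "prev": prev_hash}, sort_keys=True)
    log.append({"decision": decision, "prev": prev_hash,
                "hash": hashlib.sha256(body.encode()).hexdigest()})

def verify(log):
    """Recompute every hash; True only if no entry was altered."""
    prev_hash = "0" * 64
    for entry in log:
        body = json.dumps({"decision": entry["decision"], "prev": prev_hash},
                          sort_keys=True)
        if entry["prev"] != prev_hash or \
           entry["hash"] != hashlib.sha256(body.encode()).hexdigest():
            return False
        prev_hash = entry["hash"]
    return True

log = []
append_entry(log, {"input_id": 17, "output": "approve"})
append_entry(log, {"input_id": 18, "output": "deny"})
print(verify(log))                     # True
log[0]["decision"]["output"] = "deny"  # tamper with history...
print(verify(log))                     # ...and verification fails: False
```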

5. Data Provenance

A description of the way in which the training data was collected should be maintained by the builders of the algorithms, accompanied by an exploration of the potential biases induced by the human or algorithmic data gathering process. Public scrutiny of the data provides maximum opportunity for corrections. However, concerns over privacy, protecting trade secrets, or revelation of analytics that might allow malicious actors to game the system can justify restricting access to qualified and authorized individuals.

Published by ACM US Public Policy Council (USACM) in Principles for Algorithmic Transparency and Accountability, Jan 12, 2017
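In practice, the description USACM asks builders to maintain can be a lightweight, machine-readable record kept alongside the model. A hypothetical sketch, with illustrative field names and values not prescribed by the principle:

```python
import json

# Hypothetical provenance record stored next to the trained model;
# the field names and contents are illustrative only.
provenance = {
    "dataset": "support-tickets-2016",
    "collected_by": "web-form export, Jan-Dec 2016",
    "collection_method": "opt-in submissions from logged-in users",
    "known_biases": [
        "over-represents users who file tickets in English",
        "excludes customers who phoned support instead",
    ],
    "access_policy": "qualified auditors only (trade-secret concerns)",
}

with open("provenance.json", "w") as fh:
    json.dump(provenance, fh, indent=2)
```

Even a record this small supports both halves of the principle: it documents the gathering process and names the biases that process is likely to induce, while the access policy reflects the restrictions the principle allows.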