3. The transparency and intelligibility of artificial intelligence systems should be improved, with the objective of effective implementation, in particular by:

a. investing in public and private scientific research on explainable artificial intelligence,
b. promoting transparency, intelligibility and reachability, for instance through the development of innovative ways of communication, taking into account the different levels of transparency and information required for each relevant audience,
c. making organizations’ practices more transparent, notably by promoting algorithmic transparency and the auditability of systems, while ensuring meaningfulness of the information provided,
d. guaranteeing the right to informational self-determination, notably by ensuring that individuals are always informed appropriately when they are interacting directly with an artificial intelligence system or when they provide personal data to be processed by such systems, and
e. providing adequate information on the purpose and effects of artificial intelligence systems in order to verify continuous alignment with the expectations of individuals and to enable overall human control over such systems.
Principle: Declaration On Ethics And Data Protection In Artificial Intelligence, Oct 23, 2018

Published by 40th International Conference of Data Protection and Privacy Commissioners (ICDPPC)

Related Principles

· 5. The Principle of Explicability: “Operate transparently”

Transparency is key to building and maintaining citizens’ trust in the developers of AI systems and in AI systems themselves. Both technological and business model transparency matter from an ethical standpoint. Technological transparency implies that AI systems be auditable, comprehensible and intelligible by human beings at varying levels of comprehension and expertise. Business model transparency means that human beings are knowingly informed of the intentions of the developers and technology implementers of AI systems. Explicability is a precondition for achieving informed consent from individuals interacting with AI systems; to ensure that the principles of explicability and non-maleficence are upheld, informed consent should be sought. Explicability also requires that accountability measures be put in place. Individuals and groups may request evidence of the baseline parameters and instructions given as inputs for AI decision making (the discovery or prediction sought by an AI system, or the factors involved in the discovery or prediction made) from the organisations and developers of an AI system, the technology implementers, or another party in the supply chain.

Published by The European Commission’s High-Level Expert Group on Artificial Intelligence in Draft Ethics Guidelines for Trustworthy AI, Dec 18, 2018

· 6. Respect for (& Enhancement of) Human Autonomy

AI systems should be designed not only to uphold rights, values and principles, but also to protect citizens in all their diversity from governmental and private abuses made possible by AI technology, to ensure a fair distribution of the benefits created by AI technologies, to protect and enhance a plurality of human values, and to enhance the self-determination and autonomy of individual users and communities. AI products and services, possibly through "extreme" personalisation approaches, may steer individual choice by potentially manipulative "nudging". At the same time, people are increasingly willing and expected to delegate decisions and actions to machines (e.g. recommender systems, search engines, navigation systems, virtual coaches and personal assistants). Systems that are tasked to help the user must provide explicit support to the user to promote his or her own preferences and to set the limits for system intervention, ensuring that the overall wellbeing of the user, as explicitly defined by the user him or herself, is central to system functionality.

Published by The European Commission’s High-Level Expert Group on Artificial Intelligence in Draft Ethics Guidelines for Trustworthy AI, Dec 18, 2018

2. Continued attention and vigilance, as well as accountability, for the potential effects and consequences of artificial intelligence systems should be ensured, in particular by:

a. promoting accountability of all relevant stakeholders to individuals, supervisory authorities and other third parties as appropriate, including through the realization of audit, continuous monitoring and impact assessment of artificial intelligence systems, and periodic review of oversight mechanisms,
b. fostering collective and joint responsibility, involving the whole chain of actors and stakeholders, for example with the development of collaborative standards and the sharing of best practices,
c. investing in awareness raising, education, research and training in order to ensure a good level of information on and understanding of artificial intelligence and its potential effects in society, and
d. establishing demonstrable governance processes for all relevant actors, such as relying on trusted third parties or the setting up of independent ethics committees.

Published by 40th International Conference of Data Protection and Privacy Commissioners (ICDPPC) in Declaration On Ethics And Data Protection In Artificial Intelligence, Oct 23, 2018

4. As part of an overall “ethics by design” approach, artificial intelligence systems should be designed and developed responsibly, by applying the principles of privacy by default and privacy by design, in particular by:

a. implementing technical and organizational measures and procedures – proportional to the type of system that is developed – to ensure that data subjects’ privacy and personal data are respected, both when determining the means of the processing and at the moment of data processing,
b. assessing and documenting the expected impacts on individuals and society at the beginning of an artificial intelligence project and for relevant developments during its entire life cycle, and
c. identifying specific requirements for ethical and fair use of the systems and for respecting human rights as part of the development and operations of any artificial intelligence system.

Published by 40th International Conference of Data Protection and Privacy Commissioners (ICDPPC) in Declaration On Ethics And Data Protection In Artificial Intelligence, Oct 23, 2018

3. Scientific Integrity and Information Quality

The government’s regulatory and non-regulatory approaches to AI applications should leverage scientific and technical information and processes. Agencies should hold information, whether produced by the government or acquired by the government from third parties, that is likely to have a clear and substantial influence on important public policy or private sector decisions (including those made by consumers) to a high standard of quality, transparency, and compliance. Consistent with the principles of scientific integrity in the rulemaking and guidance processes, agencies should develop regulatory approaches to AI in a manner that both informs policy decisions and fosters public trust in AI. Best practices include transparently articulating the strengths, weaknesses, intended optimizations or outcomes, bias mitigation, and appropriate uses of the AI application’s results. Agencies should also be mindful that, for AI applications to produce predictable, reliable, and optimized outcomes, the data used to train the AI system must be of sufficient quality for the intended use.

Published by The White House Office of Science and Technology Policy (OSTP), United States in Principles for the Stewardship of AI Applications, Jan 13, 2020