2. Continued attention and vigilance, as well as accountability, for the potential effects and consequences of artificial intelligence systems should be ensured, in particular by:

a. promoting accountability of all relevant stakeholders to individuals, supervisory authorities and other third parties as appropriate, including through audits, continuous monitoring and impact assessments of artificial intelligence systems, and periodic review of oversight mechanisms;
b. fostering collective and joint responsibility, involving the whole chain of actors and stakeholders, for example through the development of collaborative standards and the sharing of best practices;
c. investing in awareness raising, education, research and training in order to ensure a good level of information on and understanding of artificial intelligence and its potential effects in society; and
d. establishing demonstrable governance processes for all relevant actors, such as relying on trusted third parties or setting up independent ethics committees.
Principle: Declaration On Ethics And Data Protection In Artificial Intelligence, Oct 23, 2018

Published by 40th International Conference of Data Protection and Privacy Commissioners (ICDPPC)

Related Principles

I. Human agency and oversight

AI systems should support individuals in making better, more informed choices in accordance with their goals. They should act as enablers of a flourishing and equitable society by supporting human agency and fundamental rights, and should not decrease, limit or misguide human autonomy. The overall wellbeing of the user should be central to the system's functionality. Human oversight helps ensure that an AI system does not undermine human autonomy or cause other adverse effects. Depending on the specific AI based system and its application area, the appropriate degree of control measures, including the adaptability, accuracy and explainability of AI based systems, should be ensured. Oversight may be achieved through governance mechanisms such as a human in the loop, human on the loop, or human in command approach. It must be ensured that public authorities have the ability to exercise their oversight powers in line with their mandates. All other things being equal, the less oversight a human can exercise over an AI system, the more extensive testing and the stricter governance it requires.

Published by European Commission in Key requirements for trustworthy AI, Apr 8, 2019

3. The transparency and intelligibility of artificial intelligence systems should be improved, with the objective of effective implementation, in particular by:

a. investing in public and private scientific research on explainable artificial intelligence;
b. promoting transparency, intelligibility and reachability, for instance through the development of innovative ways of communication, taking into account the different levels of transparency and information required for each relevant audience;
c. making organizations’ practices more transparent, notably by promoting algorithmic transparency and the auditability of systems, while ensuring the meaningfulness of the information provided;
d. guaranteeing the right to informational self-determination, notably by ensuring that individuals are always informed appropriately when they are interacting directly with an artificial intelligence system or when they provide personal data to be processed by such systems; and
e. providing adequate information on the purpose and effects of artificial intelligence systems in order to verify continuous alignment with the expectations of individuals and to enable overall human control over such systems.

Published by 40th International Conference of Data Protection and Privacy Commissioners (ICDPPC) in Declaration On Ethics And Data Protection In Artificial Intelligence, Oct 23, 2018

4. As part of an overall “ethics by design” approach, artificial intelligence systems should be designed and developed responsibly, by applying the principles of privacy by default and privacy by design, in particular by:

a. implementing technical and organizational measures and procedures – proportional to the type of system that is developed – to ensure that data subjects’ privacy and personal data are respected, both when determining the means of the processing and at the moment of data processing;
b. assessing and documenting the expected impacts on individuals and society at the beginning of an artificial intelligence project, and for relevant developments throughout its entire life cycle; and
c. identifying specific requirements for the ethical and fair use of the systems, and for respecting human rights, as part of the development and operations of any artificial intelligence system.

Published by 40th International Conference of Data Protection and Privacy Commissioners (ICDPPC) in Declaration On Ethics And Data Protection In Artificial Intelligence, Oct 23, 2018

(Conclusion)

Taking into consideration the principles above, the 40th International Conference of Data Protection and Privacy Commissioners calls for common governance principles on artificial intelligence to be established, fostering concerted international efforts in this field, in order to ensure that its development and use take place in accordance with ethics and human values, and respect human dignity. These common governance principles must be able to tackle the challenges raised by the rapid evolution of artificial intelligence technologies, on the basis of a multi-stakeholder approach, in order to address all cross-sectoral issues at stake. This effort must take place at the international level, since the development of artificial intelligence is a cross-border phenomenon and may affect all humanity. The Conference should be involved in this international effort, working with and supporting general and sectoral authorities in other fields such as competition, market and consumer regulation.

Published by 40th International Conference of Data Protection and Privacy Commissioners (ICDPPC) in Declaration On Ethics And Data Protection In Artificial Intelligence, Oct 23, 2018

2. Public Participation

Public participation, especially in those instances where AI uses information about individuals, will improve agency accountability and regulatory outcomes, as well as increase public trust and confidence. Agencies should provide ample opportunities for the public to provide information and participate in all stages of the rulemaking process, to the extent feasible and consistent with legal requirements (including legal constraints on participation in certain situations, for example, national security, preventing an imminent threat, or responding to emergencies). Agencies are also encouraged, to the extent practicable, to inform the public and promote awareness and widespread availability of standards and the creation of other informative documents.

Published by The White House Office of Science and Technology Policy (OSTP), United States in Principles for the Stewardship of AI Applications, Jan 13, 2020