7. Respect for Privacy

Privacy and data protection must be guaranteed at all stages of the life cycle of the AI system. This includes all data provided by the user, but also all information generated about the user over the course of his or her interactions with the AI system (e.g. outputs that the AI system generated for specific users, how users responded to particular recommendations, etc.). Digital records of human behaviour can reveal highly sensitive data, not only in terms of preferences, but also regarding sexual orientation, age, gender, and religious and political views. The person in control of such information could use this to his or her advantage. Organisations must be mindful of how data is used and how it might impact users, and ensure full compliance with the GDPR as well as other applicable regulations dealing with privacy and data protection.
Principle: Draft Ethics Guidelines for Trustworthy AI, Dec 18, 2018

Published by The European Commission’s High-Level Expert Group on Artificial Intelligence

Related Principles

1. Transparency and Explainability

Transparency refers to providing disclosure on when an AI system is being used and the involvement of an AI system in decision making, what kind of data it uses, and its purpose. By disclosing to individuals that AI is used in the system, individuals will become aware and can make an informed choice of whether to use the AI-enabled system. Explainability is the ability to communicate the reasoning behind an AI system’s decision in a way that is understandable to a range of people, as it is not always clear how an AI system has arrived at a conclusion. This allows individuals to know the factors contributing to the AI system’s recommendation. In order to build public trust in AI, it is important to ensure that users are aware of the use of AI technology and understand how information from their interaction is used and how the AI system makes its decisions using the information provided. In line with the principle of transparency, deployers have a responsibility to clearly disclose the implementation of an AI system to stakeholders and foster general awareness of the AI system being used. With the increasing use of AI in many businesses and industries, the public is becoming more aware of and interested in knowing when they are interacting with AI systems. Knowing when and how AI systems interact with users is also important in helping users discern the potential harm of interacting with an AI system that is not behaving as intended. In the past, AI algorithms have been found to discriminate against female job applicants and have failed to accurately recognise the faces of dark-skinned women. It is important for users to be aware of the expected behaviour of the AI systems so they can make more informed decisions about the potential harm of interacting with AI systems.
An example of transparency in an AI-enabled e-commerce platform is informing users that their purchase history is used by the platform’s recommendation algorithm to identify similar products and display them on the users’ feeds. In line with the principle of explainability, developers and deployers designing, developing, and deploying AI systems should also strive to foster general understanding among users of how such systems work, with simple and easy-to-understand explanations of how the AI system makes decisions. Understanding how AI systems work will help humans know when to trust their decisions. Explanations can have varying degrees of complexity, ranging from a simple text explanation of which factors most significantly affected the decision-making process to displaying a heatmap over the relevant text or over the area of an image that led to the system’s decision. For example, when an AI system is used to predict the likelihood of cardiac arrest in patients, explainability can be implemented by informing medical professionals of the most significant factors (e.g., age, blood pressure, etc.) that influenced the AI system’s decision so that they can subsequently make informed decisions on their own. Where “black box” models are deployed, rendering it difficult, if not impossible, to provide explanations of the workings of the AI system, outcome-based explanations, with a focus on explaining the impact of decision-making or results flowing from the AI system, may be relied on. Alternatively, deployers may also consider focusing on aspects relating to the quality of the AI system or preparing information that could build user confidence in the outcomes of an AI system’s processing behaviour. Some of these measures are:
• Documenting the repeatability of results produced by the AI system. Some practices to demonstrate repeatability include conducting repeatability assessments to ensure deployments in live environments are repeatable and performing counterfactual fairness testing to ensure that the AI system’s decisions are the same in both the real world and in the counterfactual world. Repeatability refers to the ability of the system to consistently obtain the same results, given the same scenario. Repeatability often applies within the same environment, with the same data and the same computational conditions.
• Ensuring traceability by building an audit trail to document the AI system development and decision-making process, implementing a black box recorder that captures all input data streams, or storing data appropriately to avoid degradation and alteration.
• Facilitating auditability by keeping a comprehensive record of data provenance, procurement, preprocessing, lineage, storage, and security. Such information can also be centralised digitally in a process log to increase capacity to tailor the presentation of results to different tiers of stakeholders with different interests and levels of expertise. Deployers should, however, note that auditability does not necessarily entail making certain confidential information about business models or intellectual property related to the AI system publicly available. A risk-based approach can be taken towards identifying the subset of AI-enabled features in the AI system for which implemented auditability is necessary to align with regulatory requirements or industry practices.
• Using AI Model Cards, which are short documents accompanying trained machine learning models that disclose the context in which the models are intended to be used, details of the performance evaluation procedures, and other relevant information.
In cases where AI systems are procured directly from developers, deployers will have to work together with these developers to achieve transparency.
More on this will be covered in later sections of the Guide.
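The AI Model Card measure described above can be illustrated with a minimal sketch. This is not a standard schema; the field names, the example model, and its metrics are assumptions chosen for illustration only.

```python
from dataclasses import dataclass, field

@dataclass
class ModelCard:
    """Minimal model-card sketch; fields are illustrative, not a standard schema."""
    model_name: str
    intended_use: str
    out_of_scope_uses: list
    evaluation_procedure: str
    metrics: dict
    caveats: str = ""

    def render(self) -> str:
        """Produce a short human-readable summary for stakeholders."""
        lines = [
            f"Model Card: {self.model_name}",
            f"Intended use: {self.intended_use}",
            "Out-of-scope uses: " + "; ".join(self.out_of_scope_uses),
            f"Evaluation: {self.evaluation_procedure}",
            "Metrics: " + ", ".join(f"{k}={v}" for k, v in self.metrics.items()),
        ]
        if self.caveats:
            lines.append(f"Caveats: {self.caveats}")
        return "\n".join(lines)

# Hypothetical example: a recommender like the e-commerce case above.
card = ModelCard(
    model_name="product-recommender-v2",
    intended_use="Ranking similar products from purchase history",
    out_of_scope_uses=["credit decisions", "employment screening"],
    evaluation_procedure="Offline precision@10 on a held-out month of data",
    metrics={"precision@10": 0.41},
    caveats="Not evaluated for users with fewer than 5 purchases.",
)
print(card.render())
```

Even a short card like this records the context of intended use and the evaluation procedure, which is the core disclosure the measure asks for.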

Published by ASEAN in ASEAN Guide on AI Governance and Ethics, 2021

(3) Privacy

In a society premised on AI, it is possible to estimate each person’s political position, economic situation, hobbies and preferences, etc. with high accuracy from data on the data subject’s personal behavior. This means that, when utilizing AI, more careful treatment of personal data is necessary than when simply utilizing personal information. To ensure that people do not suffer disadvantages from unexpected sharing or utilization of personal data, through the internet for instance, each stakeholder must handle personal data based on the following principles. Companies or governments should not infringe on an individual’s freedom, dignity, and equality when utilizing personal data with AI technologies. AI that uses personal data should have a mechanism that ensures accuracy and legitimacy and enables the person himself or herself to be substantially involved in the management of his or her privacy data. As a result, when using AI, people can provide personal data without concern and effectively benefit from the data they provide. Personal data must be properly protected according to its importance and sensitivity. Personal data ranges from data whose unjust use would be likely to greatly affect the rights and benefits of individuals (typically thought and creed, medical history, criminal record, etc.) to data that is semi-public in social life. Taking this into consideration, we have to pay enough attention to the balance between the use and protection of personal data, based on the common understanding of society and the cultural background.

Published by Cabinet Office, Government of Japan in Social Principles of Human-centric AI, Dec 27, 2018

III. Privacy and Data Governance

Privacy and data protection must be guaranteed at all stages of the AI system’s life cycle. Digital records of human behaviour may allow AI systems to infer not only individuals’ preferences, age and gender but also their sexual orientation and religious or political views. To allow individuals to trust the data processing, it must be ensured that they have full control over their own data, and that data concerning them will not be used to harm or discriminate against them. In addition to safeguarding privacy and personal data, requirements must be fulfilled to ensure high-quality AI systems. The quality of the data sets used is paramount to the performance of AI systems. When data is gathered, it may reflect socially constructed biases, or contain inaccuracies, errors and mistakes. This needs to be addressed prior to training an AI system with any given data set. In addition, the integrity of the data must be ensured. Processes and data sets used must be tested and documented at each step, such as planning, training, testing and deployment. This should also apply to AI systems that were not developed in-house but acquired elsewhere. Finally, access to data must be adequately governed and controlled.

Published by European Commission in Key requirements for trustworthy AI, Apr 8, 2019

(h) Data protection and privacy

In an age of ubiquitous and massive collection of data through digital communication technologies, the right to protection of personal information and the right to respect for privacy are crucially challenged. Both physical AI robots as part of the Internet of Things, as well as AI softbots that operate via the World Wide Web must comply with data protection regulations and not collect and spread data or be run on sets of data for whose use and dissemination no informed consent has been given. ‘Autonomous’ systems must not interfere with the right to private life which comprises the right to be free from technologies that influence personal development and opinions, the right to establish and develop relationships with other human beings, and the right to be free from surveillance. Also in this regard, exact criteria should be defined and mechanisms established that ensure ethical development and ethically correct application of ‘autonomous’ systems. In light of concerns with regard to the implications of ‘autonomous’ systems on private life and privacy, consideration may be given to the ongoing debate about the introduction of two new rights: the right to meaningful human contact and the right to not be profiled, measured, analysed, coached or nudged.

Published by European Group on Ethics in Science and New Technologies, European Commission in Ethical principles and democratic prerequisites, Mar 9, 2018

Plan and Design:

1. The planning and design of the AI system and its associated algorithm must be configured and modelled in a manner such that there is respect for the protection of the privacy of individuals, personal data is not misused or exploited, and the decision criteria of the automated technology are not based on personally identifying characteristics or information.
2. The use of personal information should be limited only to that which is necessary for the proper functioning of the system. The design of AI systems resulting in the profiling of individuals or communities may only occur if approved by the Chief Compliance and Ethics Officer or Compliance Officer, or in compliance with a code of ethics and conduct developed by a national regulatory authority for the specific sector or industry.
3. The security and protection blueprint of the AI system, including the data to be processed and the algorithm to be used, should be aligned with best practices so as to withstand cyberattacks and data breach attempts.
4. Privacy and security legal frameworks and standards should be followed and customized for the particular use case or organization.
5. An important aspect of privacy and security is data architecture; consequently, data classification and profiling should be planned to define the levels of protection and usage of personal data.
6. Security mechanisms for de-identification should be planned for the sensitive or personal data in the system. Furthermore, read, write, and update actions should be authorized for the relevant groups.
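The de-identification and group-based authorization measures in the list above can be sketched as follows. This is a minimal illustration under stated assumptions: the keyed pseudonymisation scheme, the role table, and all names (`pseudonymise`, `PERMISSIONS`, `authorised`) are hypothetical choices, not prescribed by the principles.

```python
import hashlib
import hmac

# Assumption: in production this key would live in a secrets vault, not in code.
SECRET_KEY = b"example-key-stored-in-a-vault"

def pseudonymise(value: str) -> str:
    """Replace a direct identifier with a keyed, non-reversible token.

    Using HMAC-SHA-256 keeps tokens consistent across records (so joins
    still work) while preventing reversal without the key.
    """
    return hmac.new(SECRET_KEY, value.encode(), hashlib.sha256).hexdigest()[:16]

# Hypothetical role table: which actions each group may perform on the data.
PERMISSIONS = {
    "analyst": {"read"},
    "data_engineer": {"read", "write", "update"},
}

def authorised(group: str, action: str) -> bool:
    """Check a read/write/update action against the group's permissions."""
    return action in PERMISSIONS.get(group, set())

# Example: strip the direct identifier before the record leaves the system.
record = {"national_id": "1234567890", "age_band": "30-39"}
safe_record = {**record, "national_id": pseudonymise(record["national_id"])}
```

The design choice here is pseudonymisation rather than deletion: the token still links records belonging to the same person, which supports the system's proper functioning while keeping the raw identifier out of downstream use.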

Published by SDAIA in AI Ethics Principles, Sept 14, 2022