3. Explainability

We strive to develop ML solutions that are explainable and direct. Our ML data discovery and data usage models are designed with understanding as a key attribute, measured against an expressed desired outcome. For example, if the ML model is to provide an employee with specific learning or training recommendations, we actively measure both the selection of those recommendations and the outcome or results of the learning module for that individual. In turn, we provide supporting information to outline the effectiveness of the recommendation. ADP is also committed to providing individuals with the right to question an automated decision, and to require a human review of the decision.
Principle: ADP: Ethics in Artificial Intelligence, 2018 (unconfirmed)

Published by ADP

Related Principles

1. Transparency and Explainability

Transparency refers to providing disclosure on when an AI system is being used and the involvement of an AI system in decision-making, what kind of data it uses, and its purpose. By disclosing to individuals that AI is used in the system, individuals will become aware and can make an informed choice of whether to use the AI-enabled system. Explainability is the ability to communicate the reasoning behind an AI system's decision in a way that is understandable to a range of people, as it is not always clear how an AI system has arrived at a conclusion. This allows individuals to know the factors contributing to the AI system's recommendation. In order to build public trust in AI, it is important to ensure that users are aware of the use of AI technology and understand how information from their interaction is used and how the AI system makes its decisions using the information provided.

In line with the principle of transparency, deployers have a responsibility to clearly disclose the implementation of an AI system to stakeholders and foster general awareness of the AI system being used. With the increasing use of AI in many businesses and industries, the public is becoming more aware and interested in knowing when they are interacting with AI systems. Knowing when and how AI systems interact with users is also important in helping users discern the potential harm of interacting with an AI system that is not behaving as intended. In the past, AI algorithms have been found to discriminate against female job applicants and have failed to accurately recognise the faces of dark-skinned women. It is important for users to be aware of the expected behaviour of AI systems so they can make more informed decisions about the potential harm of interacting with them. An example of transparency in an AI-enabled e-commerce platform is informing users that their purchase history is used by the platform's recommendation algorithm to identify similar products and display them on the users' feeds.

In line with the principle of explainability, developers and deployers designing, developing, and deploying AI systems should also strive to foster general understanding among users of how such systems work, with simple and easy-to-understand explanations of how the AI system makes decisions. Understanding how AI systems work will help humans know when to trust their decisions. Explanations can have varying degrees of complexity, ranging from a simple text explanation of which factors more significantly affected the decision-making process to displaying a heatmap over the relevant text or on the area of an image that led to the system's decision. For example, when an AI system is used to predict the likelihood of cardiac arrest in patients, explainability can be implemented by informing medical professionals of the most significant factors (e.g., age, blood pressure, etc.) that influenced the AI system's decision so that they can subsequently make informed decisions on their own.

Where "black box" models are deployed, rendering it difficult, if not impossible, to provide explanations as to the workings of the AI system, outcome-based explanations, with a focus on explaining the impact of decision-making or results flowing from the AI system, may be relied on. Alternatively, deployers may also consider focusing on aspects relating to the quality of the AI system or preparing information that could build user confidence in the outcomes of an AI system's processing behaviour.
Some of these measures are:
• Documenting the repeatability of results produced by the AI system. Repeatability refers to the ability of the system to consistently obtain the same results given the same scenario; it often applies within the same environment, with the same data and the same computational conditions. Practices to demonstrate repeatability include conducting repeatability assessments to ensure deployments in live environments are repeatable, and performing counterfactual fairness testing to ensure that the AI system's decisions are the same in both the real world and in the counterfactual world.
• Ensuring traceability by building an audit trail to document the AI system development and decision-making process, implementing a black box recorder that captures all input data streams, or storing data appropriately to avoid degradation and alteration.
• Facilitating auditability by keeping a comprehensive record of data provenance, procurement, preprocessing, lineage, storage, and security. Such information can also be centralised digitally in a process log to increase the capacity to tailor the presentation of results to different tiers of stakeholders with different interests and levels of expertise. Deployers should, however, note that auditability does not necessarily entail making certain confidential information about business models or intellectual property related to the AI system publicly available. A risk-based approach can be taken towards identifying the subset of AI-enabled features in the AI system for which implementing auditability is necessary to align with regulatory requirements or industry practices.
• Using AI Model Cards, which are short documents accompanying trained machine learning models that disclose the context in which models are intended to be used, details of the performance evaluation procedures, and other relevant information.
In cases where AI systems are procured directly from developers, deployers will have to work together with these developers to achieve transparency. More on this will be covered in later sections of the Guide.
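To make the cardiac-arrest example above concrete, the following minimal sketch shows one way a deployer might surface the most significant factors behind an individual prediction. It assumes a simple linear model trained with scikit-learn; the feature names, synthetic data, and contribution ranking are illustrative assumptions, not part of the published principle.

```python
# Minimal sketch (illustrative only): surfacing the most significant factors
# behind a single prediction, in the spirit of the cardiac-arrest example above.
import numpy as np
from sklearn.linear_model import LogisticRegression

feature_names = ["age", "systolic_bp", "cholesterol", "heart_rate"]  # assumed features

# Synthetic stand-in data; a real deployment would use the documented data sets.
rng = np.random.default_rng(seed=42)  # fixed seed also supports repeatability
X = rng.normal(size=(500, len(feature_names)))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=500) > 0).astype(int)

model = LogisticRegression().fit(X, y)

def explain(sample: np.ndarray) -> list:
    """Rank features by their contribution (coefficient x value) to the decision score."""
    contributions = model.coef_[0] * sample
    return sorted(zip(feature_names, contributions), key=lambda kv: abs(kv[1]), reverse=True)

patient = X[0]
print(f"Predicted risk: {model.predict_proba(patient.reshape(1, -1))[0, 1]:.2f}")
for name, value in explain(patient):
    print(f"  {name:<12} contribution {value:+.3f}")
```

For non-linear or black-box models, the same kind of report could be backed by model-agnostic attribution methods (for example, permutation importance or SHAP-style values), consistent with the outcome-based explanations described above.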

Published by ASEAN in ASEAN Guide on AI Governance and Ethics, 2024

Transparency and explainability

There should be transparency and responsible disclosure to ensure people know when they are being significantly impacted by an AI system, and can find out when an AI system is engaging with them. This principle aims to ensure responsible disclosure when an AI system is significantly impacting on a person's life. The definition of the threshold for 'significant impact' will depend on the context, impact and application of the AI system in question.

Achieving transparency in AI systems through responsible disclosure is important to each stakeholder group for the following reasons:
• for users, what the system is doing and why
• for creators, including those undertaking the validation and certification of AI, the system's processes and input data
• for those deploying and operating the system, to understand processes and input data
• for an accident investigator, if accidents occur
• for regulators, in the context of investigations
• for those in the legal process, to inform evidence and decision-making
• for the public, to build confidence in the technology

Responsible disclosures should be provided in a timely manner, and provide reasonable justifications for AI system outcomes. This includes information that helps people understand outcomes, like key factors used in decision-making. This principle also aims to ensure people have the ability to find out when an AI system is engaging with them (regardless of the level of impact), and are able to obtain a reasonable disclosure regarding the AI system.

Published by Department of Industry, Innovation and Science, Australian Government in AI Ethics Principles, Nov 7, 2019

· Prepare Input Data:

1. Following the best practice of responsible data acquisition, handling, classification, and management must be a priority to ensure that results and outcomes align with the AI system's set goals and objectives. Sound data quality and procurement begin by ensuring the integrity of the data source and the accuracy of the data in representing all observations, to avoid systematically disadvantaging under-represented groups or advantaging over-represented groups. The quantity and quality of the data sets should be sufficient and accurate to serve the purpose of the system. The sample size of the data collected or procured has a significant impact on the accuracy and fairness of the outputs of a trained model.
2. Sensitive personal data attributes, which are defined in the plan and design phase, should not be included in the model data, so as not to reinforce the existing bias associated with them. Proxies of the sensitive features should also be analyzed and excluded from the input data (see the sketch after this list). In some cases this may not be possible due to the accuracy or objective of the AI system; in that case, a justification for using the sensitive personal data attributes or their proxies should be provided.
3. Causality-based feature selection should be ensured. Selected features should be verified with business owners and non-technical teams.
4. Automated decision support technologies present major risks of bias and unwanted application at the deployment phase, so it is critical to set out mechanisms to prevent harmful and discriminatory results at this phase.
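As a rough illustration of points 1 and 2 above, the sketch below drops the sensitive attributes defined at the plan and design phase and flags numeric features that correlate strongly with them as candidate proxies. The column names, correlation threshold, and use of pandas are assumptions made for illustration, not requirements of the principle.

```python
# Minimal sketch (assumed column names and threshold): excluding sensitive
# attributes and flagging likely proxies before the data is used for training.
import pandas as pd

SENSITIVE_ATTRIBUTES = ["gender", "ethnicity"]   # defined in the plan and design phase
PROXY_CORRELATION_THRESHOLD = 0.6                # project-specific cut-off (assumed)

def prepare_input_data(df: pd.DataFrame) -> pd.DataFrame:
    """Drop sensitive attributes and report features that may act as their proxies."""
    features = df.drop(columns=SENSITIVE_ATTRIBUTES)
    for col in SENSITIVE_ATTRIBUTES:
        encoded = df[col].astype("category").cat.codes      # numeric encoding for correlation
        corr = features.select_dtypes("number").corrwith(encoded).abs()
        proxies = corr[corr > PROXY_CORRELATION_THRESHOLD].index.tolist()
        if proxies:
            # If a proxy must be kept for accuracy or the system's objective,
            # the principle requires a documented justification.
            print(f"Possible proxies for '{col}': {proxies}")
    return features
```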

Published by SDAIA in AI Ethics Principles, Sept 14, 2022

· Plan and Design:

1. When designing a transparent and trusted AI system, it is vital to ensure that stakeholders affected by AI systems are fully aware and informed of how outcomes are processed. They should further be given access to and an explanation of the rationale for decisions made by the AI technology in an understandable and contextual manner. Decisions should be traceable. AI system owners must define the level of transparency for different stakeholders on the technology based on data privacy, sensitivity, and the authorization of the stakeholders.
2. The AI system should be designed to include an information section in the platform that gives an overview of the AI model's decisions, as part of the overall transparency application of the technology. Information sharing, as a sub-principle, should be adhered to with end users and stakeholders of the AI system upon request or open to the public, depending on the nature of the AI system and target market. The model should establish a process mechanism to log and address issues and complaints that arise, so that they can be resolved in a transparent and explainable manner (a sketch of such a decision log follows below).

Prepare Input Data:
1. The data sets and the processes that yield the AI system's decisions should be documented to the best possible standard to allow for traceability and an increase in transparency.
2. The data sets should be assessed in the context of their accuracy, suitability, validity, and source. This has a direct effect on the training and implementation of these systems, since the criteria for the data's organization and structuring must be transparent and explainable in their acquisition and collection, adhering to data privacy regulations and intellectual property standards and controls.
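One possible shape for the logging mechanism mentioned in point 2 of Plan and Design is sketched below: an append-only record per automated decision that can later support traceability, explanation on request, and complaint resolution. The field names and JSON-lines format are illustrative assumptions, not part of the published principle.

```python
# Minimal sketch (illustrative field names): an append-only decision log that
# supports traceability, explanation on request, and complaint handling.
import json
import time
import uuid

LOG_PATH = "decision_log.jsonl"  # assumed location; production systems need tamper-evident storage

def log_decision(model_version: str, inputs: dict, output, explanation: str) -> str:
    """Append one traceable record per automated decision and return its identifier."""
    record = {
        "decision_id": str(uuid.uuid4()),
        "timestamp": time.time(),
        "model_version": model_version,
        "inputs": inputs,              # or a reference to the stored, documented data set
        "output": output,
        "explanation": explanation,    # rationale shared with stakeholders on request
    }
    with open(LOG_PATH, "a", encoding="utf-8") as fh:
        fh.write(json.dumps(record, default=str) + "\n")
    return record["decision_id"]
```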

Published by SDAIA in AI Ethics Principles, Sept 14, 2022

3 Ensure transparency, explainability and intelligibility

AI should be intelligible or understandable to developers, users and regulators. Two broad approaches to ensuring intelligibility are improving the transparency and explainability of AI technology.

Transparency requires that sufficient information (described below) be published or documented before the design and deployment of an AI technology. Such information should facilitate meaningful public consultation and debate on how the AI technology is designed and how it should be used. Such information should continue to be published and documented regularly and in a timely manner after an AI technology is approved for use. Transparency will improve system quality and protect patient and public health safety. For instance, system evaluators require transparency in order to identify errors, and government regulators rely on transparency to conduct proper, effective oversight. It must be possible to audit an AI technology, including if something goes wrong. Transparency should include accurate information about the assumptions and limitations of the technology, operating protocols, the properties of the data (including methods of data collection, processing and labelling) and development of the algorithmic model.

AI technologies should be explainable to the extent possible and according to the capacity of those to whom the explanation is directed. Data protection laws already create specific obligations of explainability for automated decision-making. Those who might request or require an explanation should be well informed, and the educational information must be tailored to each population, including, for example, marginalized populations. Many AI technologies are complex, and the complexity might frustrate both the explainer and the person receiving the explanation. There is a possible trade-off between full explainability of an algorithm (at the cost of accuracy) and improved accuracy (at the cost of explainability).

All algorithms should be tested rigorously in the settings in which the technology will be used in order to ensure that it meets standards of safety and efficacy. The examination and validation should include the assumptions, operational protocols, data properties and output decisions of the AI technology. Tests and evaluations should be regular, transparent and of sufficient breadth to cover differences in the performance of the algorithm according to race, ethnicity, gender, age and other relevant human characteristics. There should be robust, independent oversight of such tests and evaluation to ensure that they are conducted safely and effectively. Health care institutions, health systems and public health agencies should regularly publish information about how decisions have been made for adoption of an AI technology and how the technology will be evaluated periodically, its uses, its known limitations and the role of decision-making, which can facilitate external auditing and oversight.
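As a minimal sketch of the subgroup testing this principle calls for, the code below reports a performance metric separately for each demographic group so that gaps across race, ethnicity, gender, age or other characteristics become visible. The column names, metric, and toy records are illustrative assumptions only; real evaluations would use held-out data from the settings where the technology will be deployed.

```python
# Minimal sketch (illustrative data): per-subgroup performance reporting.
import pandas as pd
from sklearn.metrics import accuracy_score

def subgroup_report(results: pd.DataFrame, group_col: str) -> pd.Series:
    """Accuracy per subgroup; large gaps warrant investigation before deployment."""
    return results.groupby(group_col).apply(
        lambda g: accuracy_score(g["y_true"], g["y_pred"])
    )

# Toy records only; a real evaluation uses held-out data from the deployment setting.
results = pd.DataFrame({
    "y_true":   [1, 0, 1, 1, 0, 1],
    "y_pred":   [1, 0, 0, 1, 0, 1],
    "age_band": ["<40", "<40", "<40", "40+", "40+", "40+"],
})
print(subgroup_report(results, "age_band"))
```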

Published by World Health Organization (WHO) in Key ethical principles for use of artificial intelligence for health, Jun 28, 2021