The document "ASEAN Guide on AI Governance and Ethics" mentions the topic "transparency" in the following places:

    1. Transparency and Explainability

    Transparency and Explainability

    1. Transparency and Explainability

    Transparency and explainability

    1. Transparency and Explainability

    Transparency refers to providing disclosure on when an AI system is being used, the involvement of an AI system in decision-making, what kind of data it uses, and its purpose.

    1. Transparency and Explainability

    Explainability is the ability to communicate the reasoning behind an AI system’s decision in a way that is understandable to a range of people, as it is not always clear how an AI system has arrived at a conclusion.

    1. Transparency and Explainability

    In line with the principle of transparency, deployers have a responsibility to clearly disclose the implementation of an AI system to stakeholders and foster general awareness of the AI system being used.

    1. Transparency and Explainability

    An example of transparency in an AI-enabled e-commerce platform is informing users that their purchase history is used by the platform’s recommendation algorithm to identify similar products and display them on the users’ feeds.

    1. Transparency and Explainability

    In line with the principle of explainability, developers and deployers designing, developing, and deploying AI systems should also strive to foster general understanding among users of how such systems work, with simple and easy-to-understand explanations of how the AI system makes decisions.

    1. Transparency and Explainability

    For example, when an AI system is used to predict the likelihood of cardiac arrest in patients, explainability can be implemented by informing medical professionals of the most significant factors (e.g., age, blood pressure) that contributed to the prediction.
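
    A minimal sketch of how such factor-level explainability might be surfaced, assuming a linear risk model trained with scikit-learn; the feature names, the toy data, and the explain() helper are hypothetical and only illustrate the idea of reporting the most significant factors alongside a prediction.

```python
# Hypothetical illustration: report the inputs that contributed most to a
# cardiac-risk prediction so clinicians can see why the score is what it is.
import numpy as np
from sklearn.linear_model import LogisticRegression

FEATURES = ["age", "systolic_bp", "cholesterol", "heart_rate"]  # assumed inputs

# Toy data standing in for a real clinical dataset.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, len(FEATURES)))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=200) > 0).astype(int)

model = LogisticRegression().fit(X, y)

def explain(sample, top_k=3):
    """Rank features by |coefficient * value|, a simple per-prediction
    attribution that works for linear models."""
    contributions = model.coef_[0] * sample
    order = np.argsort(-np.abs(contributions))[:top_k]
    return [(FEATURES[i], round(float(contributions[i]), 3)) for i in order]

patient = X[0]
print("Predicted risk:", model.predict_proba(patient.reshape(1, -1))[0, 1])
print("Most significant factors:", explain(patient))
```

    For genuinely opaque models, model-agnostic tools such as permutation importance can play a similar role, which connects to the outcome-based explanations discussed in the excerpt that follows.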

    1. Transparency and Explainability

    Where “black box” models are deployed, rendering it difficult, if not impossible, to provide explanations as to the workings of the AI system, outcome-based explanations, which focus on explaining the impact of decision-making or results flowing from the AI system, may be relied on.

    1. Transparency and Explainability

    • Ensuring traceability by building an audit trail to document the AI system development and decision-making process, implementing a black box recorder that captures all input data streams, or storing data appropriately to avoid degradation and alteration.
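
    A minimal audit-trail sketch, assuming a JSON-lines log file and a decorator wrapped around the scoring function; the file name, the record_call decorator, and the score_applicant example are hypothetical and not prescribed by the Guide.

```python
# Sketch only: append every model invocation to an append-only audit trail so
# that inputs, outputs, and timestamps remain traceable after the fact.
import functools
import hashlib
import json
import time

AUDIT_LOG = "ai_audit_trail.jsonl"  # assumed log location

def record_call(func):
    """Log the inputs, output, and timestamp of each call to the audit trail."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        result = func(*args, **kwargs)
        entry = {
            "ts": time.time(),
            "function": func.__name__,
            "inputs": {"args": repr(args), "kwargs": repr(kwargs)},
            "output": repr(result),
        }
        # Hash the entry so later alteration of the record can be detected.
        entry["sha256"] = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()
        ).hexdigest()
        with open(AUDIT_LOG, "a", encoding="utf-8") as fh:
            fh.write(json.dumps(entry) + "\n")
        return result
    return wrapper

@record_call
def score_applicant(income: float, tenure_years: int) -> float:
    # Placeholder decision logic standing in for the real AI system.
    return min(1.0, 0.4 + income / 200_000 + 0.05 * tenure_years)

score_applicant(85_000, 3)
```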

    1. Transparency and Explainability

    • Facilitating auditability by keeping a comprehensive record of data provenance, procurement, preprocessing, lineage, storage, and security.
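
    A sketch of what such a data-provenance record could look like in code, assuming a simple dataclass; the field names follow the bullet above (procurement, preprocessing, lineage, storage, security), but the schema itself is an illustration, not an ASEAN-mandated format.

```python
# Illustrative provenance record; the schema is an assumption for demonstration.
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json

@dataclass
class DatasetProvenance:
    dataset_id: str
    source: str                          # where the data was procured
    collected_at: str                    # ISO timestamp of procurement
    preprocessing_steps: list = field(default_factory=list)
    derived_from: list = field(default_factory=list)   # lineage: parent dataset ids
    storage_location: str = ""
    security_controls: list = field(default_factory=list)

record = DatasetProvenance(
    dataset_id="claims-2024-q1",
    source="internal claims warehouse",
    collected_at=datetime.now(timezone.utc).isoformat(),
    preprocessing_steps=["deduplicated", "PII removed", "amounts normalised"],
    derived_from=["claims-raw-2024"],
    storage_location="s3://example-bucket/claims-2024-q1",  # placeholder location
    security_controls=["encryption at rest", "role-based access"],
)
print(json.dumps(asdict(record), indent=2))
```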

    1. Transparency and Explainability

    Deployers should, however, note that auditability does not necessarily entail making certain confidential information about business models or intellectual property related to the AI system publicly available.

    1. Transparency and Explainability

    A risk-based approach can be taken towards identifying the subset of AI-enabled features in the AI system for which implementing auditability is necessary to align with regulatory requirements or industry practices.

    1. Transparency and Explainability

    In cases where AI systems are procured directly from developers, deployers will have to work together with these developers to achieve transparency.

    5. Privacy and Data Governance

    Organisations should be transparent about their data collection practices, including the types of data collected, how the data is used, and who has access to it.