The document "AI Ethics Principles" mentions the topic "explainable" in the following places:

    Principle 6 – Transparency & Explainability

    The transparency and explainability principle is crucial for building and maintaining trust in AI systems and technologies.

    AI systems must be built with a high level of clarity and explainability, as well as features to track the stages of automated decision-making, particularly where decisions may have detrimental effects on data subjects.

    It follows that the data, algorithms, capabilities, processes, and purpose of an AI system must be transparent, clearly communicated, and explainable to those who are directly and indirectly affected.

    The degree to which a system must be traceable, auditable, transparent, and explainable depends on the context and purpose of the AI system and on the severity of the outcomes the technology may produce.

    · Plan and Design:

    The model should establish a process for logging and addressing issues and complaints as they arise, so that they can be resolved in a transparent and explainable manner.
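The logging process described above can be sketched in code. This is a minimal illustration only, assuming no specific tooling is mandated by the principles; all class and field names (`Complaint`, `ComplaintLog`, `resolution`) are hypothetical:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class Complaint:
    """One logged issue or complaint about an AI system's behaviour."""
    complaint_id: int
    description: str
    received_at: str
    status: str = "open"      # "open" until resolved
    resolution: str = ""      # explanation recorded when resolved

class ComplaintLog:
    """Records complaints and their resolutions so the handling
    process stays transparent and reviewable."""

    def __init__(self):
        self._complaints = []

    def log(self, description: str) -> Complaint:
        complaint = Complaint(
            complaint_id=len(self._complaints) + 1,
            description=description,
            received_at=datetime.now(timezone.utc).isoformat(),
        )
        self._complaints.append(complaint)
        return complaint

    def resolve(self, complaint_id: int, resolution: str) -> None:
        complaint = self._complaints[complaint_id - 1]
        complaint.status = "resolved"
        complaint.resolution = resolution  # explanation kept on record

    def open_complaints(self) -> list:
        return [c for c in self._complaints if c.status == "open"]
```

The key design point is that the resolution text is stored alongside the complaint rather than discarded, so an auditor can later see both the issue and the explanation given.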

    This has a direct effect on the training and implementation of these systems, since the criteria for the data's organization and structuring must be transparent and explainable at the point of acquisition and collection, adhering to data privacy regulations and intellectual property standards and controls.

    · Build and Validate:

    Algorithms should be developed transparently, so that their inputs are evident and explainable to the end users of the AI system, and so that evidence and information about the data used to reach decisions can be provided.

    Transparent and explainable algorithms ensure that stakeholders affected by AI systems, both individuals and communities, are fully informed when the AI system processes an outcome, and have the opportunity to request explanatory information from the AI system owner.

    This enables an AI decision to be identified and analyzed, which facilitates both its auditability and its explainability.
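Decision identification and stage tracking of this kind can be sketched as follows. This is an illustrative sketch only, not a prescribed implementation; the names (`DecisionTrail`, `record`, `explain`) and the stage labels are assumptions:

```python
import uuid

class DecisionTrail:
    """Assigns each automated decision an identifier and keeps an
    ordered record of the stages it passed through, so the decision
    can later be identified, analyzed, and audited."""

    def __init__(self):
        self.decision_id = uuid.uuid4().hex  # identifies this decision
        self.stages = []                     # ordered trail of stages

    def record(self, stage: str, detail: str) -> None:
        """Append one stage of the decision-making process."""
        self.stages.append({"stage": stage, "detail": detail})

    def explain(self) -> str:
        """Produce a human-readable account of the decision, suitable
        for answering a stakeholder's request for explanation."""
        lines = [f"Decision {self.decision_id}:"]
        lines += [
            f"  {i + 1}. {s['stage']}: {s['detail']}"
            for i, s in enumerate(self.stages)
        ]
        return "\n".join(lines)

# Example trail for one hypothetical decision:
trail = DecisionTrail()
trail.record("input", "application features validated")
trail.record("model", "risk score computed by model v1.2")
trail.record("decision", "application approved (score below threshold)")
```

Because every stage is recorded against a single decision identifier, an auditor can reconstruct how a given outcome was reached, and the same trail can back the explanatory information offered to affected stakeholders.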