I. Human agency and oversight
Depending on the specific AI-based system and its application area, appropriate degrees of control measures, including the adaptability, accuracy and explainability of the system, should be ensured.
IV. Transparency
The traceability of AI systems should be ensured: it is important to log and document both the decisions made by the system and the entire process that yielded those decisions, including a description of data gathering and labelling and a description of the algorithm used.
Linked to this, explainability of the algorithmic decision-making process, adapted to the persons involved, should be provided to the extent possible.
Ongoing research to develop explainability mechanisms should be pursued.
In addition, explanations of the degree to which an AI system influences and shapes the organisational decision-making process, the design choices of the system, and the rationale for deploying it should be available (thus ensuring not just data and system transparency, but also business model transparency).
VII. Accountability
Auditability of AI systems is key in this regard: the assessment of AI systems by internal and external auditors, and the availability of such evaluation reports, strongly contributes to the trustworthiness of the technology.
External auditability should especially be ensured in applications affecting fundamental rights, including safety-critical applications.