Article 6: Transparent and explainable.
Continuously improve the transparency of artificial intelligence systems. System decision-making processes, data structures, and the intentions of system developers and technology implementers should be capable of accurate description, monitoring, and reproduction; and algorithmic logic, system decisions, and action outcomes should achieve explainability, predictability, traceability, and verifiability (可解释、可预测、可追溯和可验证).
Accuracy means that companies need to ensure that the AI systems they use produce correct, precise, and reliable results. Those results must be free from biases and systematic errors arising, for example, from unfair sampling of a population, or from an estimation process that does not give accurate results.
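The point about unfair sampling can be made concrete with a small sketch. The population, incomes, and sample sizes below are purely illustrative assumptions; the example only shows how a sampling frame that systematically excludes part of the population yields a biased estimate that more data cannot fix, while a fair random sample does not.

```python
import random

random.seed(0)

# Hypothetical population: 1000 incomes with a high-income minority.
population = [30_000] * 900 + [200_000] * 100
true_mean = sum(population) / len(population)  # 47_000

# Fair sampling: every member is equally likely to be drawn.
fair_sample = random.sample(population, 200)
fair_estimate = sum(fair_sample) / len(fair_sample)

# Unfair sampling: the frame systematically omits the high-income
# minority, so the estimator is biased regardless of sample size.
biased_frame = [x for x in population if x < 100_000]
biased_sample = random.sample(biased_frame, 200)
biased_estimate = sum(biased_sample) / len(biased_sample)

print(true_mean, fair_estimate, biased_estimate)
```

Here the biased estimate settles at 30,000 no matter how many records are drawn, because the error comes from the frame, not the sample size.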
Interpretable and explainable AI will be essential for business and the public to understand, trust, and effectively manage 'intelligent' machines. Organisations that design and use algorithms need to take care to produce models that are as simple as possible, and to explain how complex machines reach their outputs.
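One way to read "as simple as possible" is a model whose output decomposes exactly into named, auditable parts. The sketch below is a hypothetical linear scorer; the feature names, weights, and threshold are illustrative assumptions, not anyone's actual lending model. Its explanation is exact rather than approximate: the decision literally equals the sum of the listed contributions.

```python
# Illustrative weights for a hand-specified linear scorer.
WEIGHTS = {"income": 0.5, "debt": -0.8, "years_employed": 0.3}
BIAS = -1.0
THRESHOLD = 0.0

def contributions(applicant):
    # Each feature's contribution = weight * feature value.
    return {name: WEIGHTS[name] * applicant[name] for name in WEIGHTS}

def decide_and_explain(applicant):
    parts = contributions(applicant)
    total = BIAS + sum(parts.values())
    decision = "approve" if total > THRESHOLD else "decline"
    # The per-feature breakdown is the full explanation of the decision.
    return decision, parts

decision, why = decide_and_explain(
    {"income": 4.0, "debt": 1.5, "years_employed": 2.0}
)
print(decision, why)
```

A model this simple sacrifices fitting power, but every recommendation it makes can be described, monitored, and reproduced, which is the trade-off the principle asks organisations to weigh.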
3. New technology, including AI systems, must be transparent and explainable
For the public to trust AI, it must be transparent. Technology companies must be clear about who trains their AI systems, what data was used in that training and, most importantly, what went into their algorithm’s recommendations. If we are to use AI to help make important decisions, it must be explainable.
Decisions made by AI should be explainable, transparent, and fair.