Linking Artificial Intelligence Principles
Article 6: Transparent and explainable.
Regarding system decision-making processes, data structures, and the intent of system developers and technological implementers: these should be capable of accurate description, monitoring, and reproduction; and explainability, predictability, traceability, and verifiability (可解释、可预测、可追溯和可验证) should be realized for algorithmic logic, system decisions, and action outcomes.
Enhance the measurability of ethical principles such as security and controllability, transparency and explainability, privacy protection, and diversity and inclusiveness; and simultaneously build corresponding assessment capabilities.
This may include, but is not limited to: making the system as fair as possible, reducing possible discrimination and biases, improving its transparency, explainability, and predictability, and making the system more traceable, auditable, and accountable.
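The measurability such principles call for can start from very simple metrics. Below is a minimal sketch of one fairness check, the demographic parity difference, assuming binary decisions and a two-group protected attribute; the function name, group labels, and data are all illustrative, not drawn from any of the quoted documents:

```python
# Minimal sketch of one measurable fairness check: the difference in
# positive-decision rates between two groups ("demographic parity gap").
# All names and data here are illustrative.

def demographic_parity_gap(decisions, groups):
    """Absolute difference in positive-decision rates between groups 'a' and 'b'."""
    rate = {}
    for g in ("a", "b"):
        outcomes = [d for d, grp in zip(decisions, groups) if grp == g]
        rate[g] = sum(outcomes) / len(outcomes)
    return abs(rate["a"] - rate["b"])

# Toy data: group "a" receives positive decisions 3/4 of the time,
# group "b" only 1/4 of the time, so the gap is 0.5.
decisions = [1, 0, 1, 1, 0, 1, 0, 0]
groups    = ["a", "a", "a", "a", "b", "b", "b", "b"]
gap = demographic_parity_gap(decisions, groups)
print("demographic parity gap:", gap)
```

A single number like this does not capture fairness in full, but it is exactly the kind of concrete, auditable quantity that "assessment capabilities" can be built around.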
Depending on the specific AI-based system and its application area, appropriate degrees of control measures, including the adaptability, accuracy, and explainability of AI-based systems, should be ensured.
Linked to this, explainability of the algorithmic decision-making process, adapted to the persons involved, should be provided to the extent possible.
Ongoing research to develop explainability mechanisms should be pursued.
Transparency and explainability
Be transparent and produce explainable outputs
explainability – as a form of transparency – entails the capability to describe, inspect and reproduce the mechanisms through which AI systems make decisions and learn to adapt to their environments, as well as the provenance and dynamics of the data that is used and created by the system.
Interpretable and explainable AI will be essential for business and the public to understand, trust and effectively manage 'intelligent' machines.
Organisations that design and use algorithms need to take care to produce models that are as simple as possible, so that they can explain how complex machines work.
Being able to explain the functionalities of a technology they appear to be in control of is essential to building trust with employees, customers, and all stakeholders.
New technology, including AI systems, must be transparent and explainable
If we are to use AI to help make important decisions, it must be explainable.
(The mechanisms by which transparency is provided will vary significantly: for instance, 1) for users of care or domestic robots, a "why did you do that?" button which, when pressed, causes the robot to explain the action it just took; 2) for validation or certification agencies, the algorithms underlying the A/IS and how they have been verified; and 3) for accident investigators, secure storage of sensor and internal state data, comparable to a flight data recorder or "black box".)
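The "flight data recorder" and "why did you do that?" button described above can be sketched as a single logging component. This is a minimal illustration under assumed names (the class, methods, and fields are all hypothetical), not a production design:

```python
import json
import time

# Sketch of a robot "flight data recorder": each action is logged with
# its sensor context and a human-readable reason, so a "why did you do
# that?" query can be answered from the most recent entry, and the full
# log can be serialized for accident investigators. All names are
# illustrative assumptions.

class ActionRecorder:
    def __init__(self):
        self._log = []

    def record(self, action, sensor_state, reason):
        self._log.append({
            "t": time.time(),
            "action": action,
            "sensors": sensor_state,
            "reason": reason,
        })

    def why(self):
        """Answer a 'why did you do that?' query for the most recent action."""
        if not self._log:
            return "No actions recorded yet."
        last = self._log[-1]
        return f"I performed '{last['action']}' because {last['reason']}."

    def dump(self):
        """Secure, tamper-evident storage would wrap this; here we just serialize."""
        return json.dumps(self._log)

rec = ActionRecorder()
rec.record("stop", {"proximity_cm": 12}, "an obstacle was detected 12 cm ahead")
print(rec.why())
```

The same log serves both audiences named in the quote: `why()` produces the user-facing explanation, while `dump()` feeds the investigator-facing record.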
• Transparent decisions: Governments should determine which AI implementations require algorithm explainability to mitigate discrimination and harm to individuals.
As specialists, members of the JSAI will not assert false or unclear claims and are obliged to explain the technical limitations or problems in AI systems truthfully and in a scientifically sound manner.
Developers should pay attention to the verifiability of the inputs and outputs of AI systems and the explainability of their judgments.
It is desirable that developers pay attention to the verifiability of the inputs and outputs of AI systems as well as the explainability of the judgment of AI systems within a reasonable scope in light of the characteristics of the technologies to be adopted and their use, so as to obtain the understanding and trust of the society including users of AI systems.
● To make efforts to explain the designers’ intent behind AI systems and the reasons for it to stakeholders such as users, when developing AI systems to be used for making judgments regarding the safety of the life, body, or property of users and third parties (for example, judgments that prioritize which life, body, or property to protect at the time of an accident involving a robot equipped with AI).
AI service providers and business users should pay attention to the verifiability of the inputs and outputs of AI systems or AI services and the explainability of their judgments.
B) Ensuring explainability
AI service providers and business users may be expected to ensure the explainability of the judgments of AI.
In light of the characteristics of the technologies to be used and their usage, in what cases and to what extent is explainability expected to be ensured?
Especially when using AI in fields where its judgments might have significant influence on individual rights and interests, such as medical care, personnel evaluation and recruitment, and financing, explainability of the judgments of AI may be expected to be ensured.
(For example, we have to pay attention to the current situation in which deep learning achieves high prediction accuracy, but its judgments are difficult to explain.)
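The tension noted above (high accuracy, hard-to-explain judgments) is often addressed with post-hoc techniques. A minimal sketch of one such technique, local sensitivity analysis, is shown below; the opaque model is a hypothetical stand-in, and all names are assumptions rather than anything from the quoted guidelines:

```python
import math

# Hedged sketch of a post-hoc explanation technique: finite-difference
# sensitivity analysis, which reports how strongly each input influences
# an opaque model's output near a given point. The "model" below is a
# simple stand-in for an accurate but hard-to-explain predictor.

def black_box(x0, x1):
    # Stand-in for an opaque, accurate predictor (not a real deep network).
    return math.tanh(3.0 * x0) + 0.1 * x1 ** 2

def sensitivities(f, point, eps=1e-4):
    """Finite-difference sensitivity of f to each input at `point`."""
    base = f(*point)
    grads = []
    for i in range(len(point)):
        bumped = list(point)
        bumped[i] += eps
        grads.append((f(*bumped) - base) / eps)
    return grads

grads = sensitivities(black_box, (0.2, 0.5))
print("per-input sensitivities:", grads)  # input 0 dominates locally
```

Such local attributions do not make the model itself transparent, but they give users and auditors a concrete, checkable account of which inputs drove a particular judgment.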
Transparency and explainability
We believe that it is important for the operation of AI systems to be understandable and interpretable by people, for purposes of explaining the technology.
Decisions made by AI should be explainable, transparent, and fair.
We will make AI systems as explainable as technically possible
Decisions and methodologies of AI systems which have a significant effect on individuals should be explainable to them, to the extent permitted by available technology
Transparent and explainable AI
Promote algorithmic transparency and algorithmic audit, to achieve understandable and explainable AI systems
The data provided by the black box could also assist robots in explaining their actions in language human users can understand, fostering better relationships and improving the user experience.
Institutions should be held responsible for decisions made by the algorithms that they use, even if it is not feasible to explain in detail how the algorithms produce their results.