As AI increasingly changes the nature of work, workers, customers, and vendors need information about how AI systems operate so that they can understand how decisions are made. Their involvement will help to identify potential bias, errors, and unintended outcomes. Transparency is not necessarily, nor only, a question of open-source code. While in some circumstances open-source code will be helpful, what matters more are clear, complete, and testable explanations of what the system is doing and why. Intellectual property, and sometimes even cyber security, can be served by a lack of transparency. Innovation generally, including in algorithms, is a value that should be encouraged. How, then, are these competing values to be balanced? One possibility is to require algorithmic verifiability rather than full algorithmic disclosure. Algorithmic verifiability would require companies to disclose not the actual code driving the algorithm but information allowing the effect of their algorithms to be independently assessed. In the absence of transparency regarding their algorithms’ purpose and actual effect, it is impossible to ensure that competition, labour, workplace-safety, privacy, and liability laws are being upheld. When accidents occur, the AI and related data will need to be transparent and accountable to an accident investigator, so that the process that led to the accident can be understood.
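Algorithmic verifiability of the kind described above can be illustrated with a minimal sketch: an auditor who cannot see a model's code can still query it as an opaque function and compare its outcomes across groups. The model, the applicant fields, and the 0.8 flagging threshold (the "four-fifths rule" used in some fairness audits) are illustrative assumptions, not part of any principle quoted here.

```python
# Sketch of a black-box audit: the auditor never inspects the model's
# code, only its input/output behaviour on test applicants.

def selection_rate(model, applicants):
    """Fraction of applicants the opaque model approves."""
    decisions = [model(a) for a in applicants]
    return sum(decisions) / len(decisions)

def disparate_impact_ratio(model, group_a, group_b):
    """Ratio of the lower group selection rate to the higher one."""
    ra = selection_rate(model, group_a)
    rb = selection_rate(model, group_b)
    return min(ra, rb) / max(ra, rb)

# Stand-in for a proprietary model the auditor cannot inspect.
def opaque_model(applicant):
    return applicant["score"] >= 600

group_a = [{"score": s} for s in (580, 610, 650, 700)]
group_b = [{"score": s} for s in (590, 595, 620, 660)]

ratio = disparate_impact_ratio(opaque_model, group_a, group_b)
print(f"disparate impact ratio: {ratio:.2f}")  # flag if below ~0.8
```

A real audit would use far larger, representative samples and statistical tests, but the point stands: the *effect* of an algorithm can be independently assessed without disclosing the algorithm itself.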
A society premised on AI must not create disparities, divisions, or socially vulnerable people. Policy makers and managers of the enterprises involved in AI must therefore have an accurate understanding of AI, knowledge of its proper use in society, and an understanding of AI ethics, taking into account the complexity of AI and the possibility that it can be misused intentionally. AI users should understand the outline of AI and be educated to use it properly, because AI is much more complicated than conventional tools. Conversely, from the viewpoint of AI’s contribution to society, it is important for the developers of AI to learn about the social sciences, business models, and ethics, including normative awareness, as well as a wide range of the liberal arts.
From the above point of view, it is necessary to establish an educational environment that provides AI literacy to every person equally, in accordance with the following principles.
In order to eliminate the disparity between people with a good knowledge of AI technology and those without, opportunities for education such as AI literacy should be widely provided in early childhood education and in primary and secondary education. Opportunities to learn about AI should also be provided for elderly people as well as for the working generation.
Our society needs an education scheme through which anyone can learn AI, mathematics, and data science, beyond the traditional boundary between the humanities and the sciences. Literacy education should cover the following: 1) data used by AI are often contaminated by bias; 2) AI can easily generate unwanted bias in its use; and 3) the issues of impartiality, fairness, and privacy protection that are inherent in the actual use of AI.
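The first literacy point above, bias contaminating training data, can be made concrete with a short sketch. The check below computes how often a positive label occurs for each group in a training set; a large gap between groups is one warning sign that the labels themselves encode bias. The field names and the tiny dataset are illustrative assumptions.

```python
# Sketch: detecting one simple form of data bias -- positive labels
# distributed unevenly across groups in a training set.

from collections import defaultdict

def positive_label_rate_by_group(records, group_key="group", label_key="label"):
    """Map each group to its fraction of positive (label == 1) records."""
    counts = defaultdict(lambda: [0, 0])  # group -> [positives, total]
    for r in records:
        counts[r[group_key]][0] += r[label_key]
        counts[r[group_key]][1] += 1
    return {g: pos / total for g, (pos, total) in counts.items()}

training_data = [
    {"group": "A", "label": 1}, {"group": "A", "label": 1},
    {"group": "A", "label": 0}, {"group": "B", "label": 1},
    {"group": "B", "label": 0}, {"group": "B", "label": 0},
]

rates = positive_label_rate_by_group(training_data)
print(rates)  # a large gap between groups hints at label bias
```

A check like this is deliberately elementary: it is the kind of exercise AI literacy education could use to show that bias is measurable, not merely an abstract concern.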
In a society in which AI is widely used, the educational environment is expected to change from the current unilateral, uniform teaching style to one that matches the interests and skill level of each individual. Society will therefore probably share the view that the education system must keep changing toward this style, regardless of past successes of the existing system. In education it is especially important to avoid dropouts. To this end, it is desirable to introduce an interactive educational environment that fully utilizes AI technologies and allows students to work together and feel a sense of accomplishment.
In order to develop such an educational environment, it is desirable that companies and citizens take the initiative themselves, so as not to burden administrations and schools (teachers).
The traceability of AI systems should be ensured: it is important to log and document both the decisions made by the systems and the entire process that yielded them (including a description of data gathering and labelling, and a description of the algorithm used). Linked to this, explainability of the algorithmic decision-making process, adapted to the persons involved, should be provided to the extent possible. Ongoing research to develop explainability mechanisms should be pursued. In addition, explanations of the degree to which an AI system influences and shapes the organisational decision-making process, the design choices of the system, and the rationale for deploying it should be available (thereby ensuring not just data and system transparency, but also business-model transparency).
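One minimal way to realise the decision logging described above is to write each decision as a structured record containing the inputs, the model version, the output, a short rationale, and a timestamp, so that an investigator can later reconstruct the process. This is a sketch under assumed names (the JSON-lines format, the `credit-model-1.3` version string, and the field names are all illustrative, not a mandated standard).

```python
# Sketch: structured, append-only decision logging for traceability.
# Each decision becomes one JSON line that an auditor can replay later.

import io
import json
from datetime import datetime, timezone

def log_decision(logfile, model_version, inputs, decision, rationale):
    """Append one decision record as a JSON line and return it."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "inputs": inputs,
        "decision": decision,
        "rationale": rationale,  # e.g. top features, or the rule fired
    }
    logfile.write(json.dumps(record) + "\n")
    return record

# Illustrative use with an in-memory buffer standing in for a log file.
buf = io.StringIO()
rec = log_decision(buf, "credit-model-1.3", {"income": 42000},
                   "approved", "income above configured threshold")
```

Recording the model version alongside the inputs matters: without it, the logged decision cannot be tied back to the exact system (and training process) that produced it.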
Finally, it is important to adequately communicate the AI system’s capabilities and limitations to the different stakeholders involved in a manner appropriate to the use case at hand. Moreover, AI systems should be identifiable as such, ensuring that users know they are interacting with an AI system and which persons are responsible for it.
2. Principle of transparency
Published by: Ministry of Internal Affairs and Communications (MIC), the Government of Japan in AI R&D Principles
Developers should pay attention to the verifiability of the inputs and outputs of AI systems and to the explainability of their judgments.
AI systems subject to this principle are those that might affect the life, body, freedom, privacy, or property of users or third parties.
It is desirable that developers pay attention to the verifiability of the inputs and outputs of AI systems as well as the explainability of the judgment of AI systems within a reasonable scope in light of the characteristics of the technologies to be adopted and their use, so as to obtain the understanding and trust of the society including users of AI systems.
Note that this principle is not intended to ask developers to disclose algorithms, source codes, or learning data. In interpreting this principle, consideration to privacy and trade secrets is also required.
1. Demand That AI Systems Are Transparent
A transparent artificial intelligence system is one in which it is possible to discover how, and why, the system made a decision, or in the case of a robot, acted the way it did.
A. We stress that open source code is neither necessary nor sufficient for transparency – clarity cannot be obfuscated by complexity.
B. For users, transparency is important because it builds trust in, and understanding of, the system, by providing a simple way for the user to understand what the system is doing and why.
C. For validation and certification of an AI system, transparency is important because it exposes the system’s processes for scrutiny.
D. If accidents occur, the AI will need to be transparent and accountable to an accident investigator, so the internal process that led to the accident can be understood.
E. Workers must have the right to demand transparency in the decisions and outcomes of AI systems as well as in the underlying algorithms (see principle 4 below). This includes the right to appeal decisions made by AI algorithms and to have them reviewed by a human being.
F. Workers must be consulted on AI systems’ implementation, development and deployment.
G. Following an accident, judges, juries, lawyers, and expert witnesses involved in the trial process require transparency and accountability to inform evidence and decision making.
The principle of transparency is a prerequisite for ascertaining that the remaining principles are observed.
See Principle 2 below for operational solution.