II. Technical robustness and safety
Trustworthy AI requires algorithms to be secure, reliable and robust enough to deal with errors or inconsistencies during all life cycle phases of the AI system, and to adequately cope with erroneous outcomes. AI systems need to be reliable and secure enough to be resilient against both overt attacks and more subtle attempts to manipulate data or the algorithms themselves, and they must ensure a fallback plan in case of problems. Their decisions must be accurate, or at least correctly reflect their level of accuracy, and their outcomes should be reproducible.
In addition, AI systems should integrate safety and security-by-design mechanisms to ensure that they are verifiably safe at every step, taking to heart the physical and mental safety of all concerned. This includes the minimisation and, where possible, the reversibility of unintended consequences or errors in the system’s operation. Processes to clarify and assess potential risks associated with the use of AI systems, across various application areas, should be put in place.
4. Principle of safety
Published by: Ministry of Internal Affairs and Communications (MIC), Government of Japan, in the AI R&D Principles
Developers should take into consideration that AI systems must not harm the life, body, or property of users or third parties through actuators or other devices.
The AI systems subject to this principle are those that might harm the life, body, or property of users or third parties through actuators or other devices.
It is encouraged that developers refer to relevant international standards and pay attention to the following, with particular consideration of the possibility that the outputs or programs of AI systems might change as a result of learning or other methods:
● To make efforts to conduct verification and validation in advance in order to assess and mitigate the risks related to the safety of the AI systems.
● To make efforts, throughout the development stage of AI systems and to the extent possible in light of the characteristics of the technologies adopted, to implement measures that contribute to intrinsic safety (reduction of essential risk factors such as the kinetic energy of actuators) and functional safety (mitigation of risks through the operation of additional control devices such as automatic braking) when AI systems work with actuators or other devices.
● To make efforts to explain to stakeholders such as users the designers’ intent behind AI systems and the reasons for it, when developing AI systems to be used for making judgments regarding the safety of the life, body, or property of users and third parties (for example, judgments about which life, body, or property to prioritize for protection at the time of an accident involving a robot equipped with AI).
5 DEMOCRATIC PARTICIPATION PRINCIPLE
AIS must meet intelligibility, justifiability, and accessibility criteria, and must be subjected to democratic scrutiny, debate, and control.
1) AIS processes that make decisions affecting a person’s life, quality of life, or reputation must be intelligible to their creators.
2) The decisions made by AIS affecting a person’s life, quality of life, or reputation should always be justifiable in a language that is understood by the people who use them or who are subjected to the consequences of their use. Justification consists in making transparent the most important factors and parameters shaping the decision, and should take the same form as the justification we would demand of a human making the same kind of decision.
3) The code for algorithms, whether public or private, must always be accessible to the relevant public authorities and stakeholders for verification and control purposes.
4) The discovery of AIS operating errors, unexpected or undesirable effects, security breaches, and data leaks must imperatively be reported to the relevant public authorities, stakeholders, and those affected by the situation.
5) In accordance with the transparency requirement for public decisions, the code for decision-making algorithms used by public authorities must be accessible to all, with the exception of algorithms that present a high risk of serious danger if misused.
6) For public AIS that have a significant impact on the life of citizens, citizens should have the opportunity and skills to deliberate on the social parameters of these AIS, their objectives, and the limits of their use.
7) We must at all times be able to verify that AIS are doing what they were programmed to do and what they are used for.
8) Any person using a service should know if a decision concerning them or affecting them was made by an AIS.
9) Any user of a service employing chatbots should be able to easily identify whether they are interacting with an AIS or a real person.
10) Artificial intelligence research should remain open and accessible to all.
We will make AI systems accountable
1. Accountability for the outcomes of an AI system lies not with the system itself but is apportioned among those who design, develop and deploy it
2. Developers should make efforts to mitigate the risks inherent in the systems they design
3. AI systems should have built-in appeals procedures whereby users can challenge significant decisions
4. AI systems should be developed by diverse teams which include experts in the area in which the system will be deployed
We will make AI systems transparent
1. Developers should build systems whose failures can be traced and diagnosed
2. People should be told when significant decisions about them are being made by AI
3. Within the limits of privacy and the preservation of intellectual property, those who deploy AI systems should be transparent about the data and algorithms they use