(g) Security, safety, bodily and mental integrity

Safety and security of ‘autonomous’ systems materialise in three forms: (1) external safety for their environment and users, (2) reliability and internal robustness, e.g. against hacking, and (3) emotional safety with respect to human-machine interaction. All dimensions of safety must be taken into account by AI developers and strictly tested before release in order to ensure that ‘autonomous’ systems do not infringe on the human right to bodily and mental integrity and to a safe and secure environment. Special attention should be paid to persons who find themselves in a vulnerable position, as well as to the potential dual use and weaponisation of AI, e.g. in cybersecurity, finance, infrastructure and armed conflict.
Published by European Group on Ethics in Science and New Technologies, European Commission in Ethical principles and democratic prerequisites, Mar 9, 2018

Related Principles

(4) Security

Positive utilization of AI means that many social systems will be automated and their safety improved. On the other hand, within the scope of today's technologies, it is impossible for AI to respond appropriately to rare events or deliberate attacks, so the use of AI carries new security risks. Society should always be aware of the balance of benefits and risks, and should work to improve social safety and sustainability as a whole. Society must promote broad and deep research and development in AI, from immediate countermeasures to deeper understanding, including the proper evaluation of risks in the utilization of AI and research into reducing those risks. Society must also pay attention to risk management, including cybersecurity awareness. Society should always pay attention to sustainability in the use of AI and, in particular, should not become dependent on a single AI system or on a few specified AI systems.

Published by Cabinet Office, Government of Japan in Social Principles of Human-centric AI (Draft), Dec 27, 2018
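
The draft's caution against depending on a single AI system can be illustrated with a simple redundancy pattern: query several independently built models and act only on their agreement. The sketch below is a minimal illustration under assumed names; the model stubs, the majority-vote rule, and the quorum threshold are illustrative choices, not part of the published principle.

```python
from collections import Counter
from typing import Callable, Optional, Sequence

def redundant_decision(
    models: Sequence[Callable[[str], str]],
    query: str,
    quorum: float = 0.5,
) -> Optional[str]:
    """Ask several independent models and act only on a clear majority.

    Returns the majority answer, or None (defer to a human) when the
    models disagree too much -- the failure mode the principle warns
    about if society relied on one system alone.
    """
    votes = Counter(model(query) for model in models)
    answer, count = votes.most_common(1)[0]
    if count / len(models) > quorum:
        return answer
    return None  # no quorum: escalate rather than trust any one system

# Illustrative stand-ins for independently developed classifiers.
model_a = lambda q: "approve"
model_b = lambda q: "approve"
model_c = lambda q: "reject"

print(redundant_decision([model_a, model_b, model_c], "case #42"))  # approve (2 of 3)
```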

II. Technical robustness and safety

Trustworthy AI requires algorithms to be secure, reliable and robust enough to deal with errors or inconsistencies during all life cycle phases of the AI system, and to adequately cope with erroneous outcomes. AI systems need to be reliable, secure enough to be resilient against both overt attacks and more subtle attempts to manipulate data or the algorithms themselves, and they must ensure a fallback plan in case of problems. Their decisions must be accurate, or at least correctly reflect their level of accuracy, and their outcomes should be reproducible. In addition, AI systems should integrate safety and security-by-design mechanisms to ensure that they are verifiably safe at every step, taking to heart the physical and mental safety of all concerned. This includes the minimisation, and where possible the reversibility, of unintended consequences or errors in the system’s operation. Processes to clarify and assess the potential risks associated with the use of AI systems, across various application areas, should be put in place.

Published by European Commission in Key requirements for trustworthy AI, Apr 8, 2019
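
Two of the requirements above, a fallback plan and decisions that "correctly reflect their level of accuracy", map naturally onto a confidence-thresholded wrapper: act on the model's output only when its reported confidence is high enough, and otherwise take a safe, reversible path. This is one possible reading sketched under assumptions; the threshold value, the human-review fallback, and the calibrated-confidence model are illustrative, not prescribed by the Commission's text.

```python
from dataclasses import dataclass

@dataclass
class Decision:
    label: str
    confidence: float  # assumed calibrated: 0.9 means right ~90% of the time
    fallback_used: bool

def decide_with_fallback(model, features, threshold: float = 0.9) -> Decision:
    """Return the model's decision only when it is confident enough;
    otherwise invoke the fallback (here: defer to human review).

    The confidence is carried along with the label so that downstream
    consumers see the decision's stated level of accuracy.
    """
    label, confidence = model(features)
    if confidence >= threshold:
        return Decision(label, confidence, fallback_used=False)
    # Fallback path: a reversible, human-reviewable outcome.
    return Decision("refer_to_human", confidence, fallback_used=True)

# Illustrative model stub returning (label, calibrated confidence).
stub_model = lambda x: ("grant_access", 0.72)

print(decide_with_fallback(stub_model, {"badge_id": 1001}))
# Decision(label='refer_to_human', confidence=0.72, fallback_used=True)
```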

Responsible Deployment

Principle: The capacity of an AI agent to act autonomously, and to adapt its behavior over time without human direction, calls for significant safety checks before deployment, and ongoing monitoring.

Recommendations:

Humans must be in control: Any autonomous system must allow for a human to interrupt an activity or shut down the system (an “off switch”); a minimal sketch of such an interrupt follows this entry. There may also be a need to incorporate human checks on new decision-making strategies in AI system design, especially where the risk to human life and safety is great.

Make safety a priority: Any deployment of an autonomous system should be extensively tested beforehand to ensure the AI agent’s safe interaction with its environment (digital or physical) and that it functions as intended. Autonomous systems should be monitored while in operation, and updated or corrected as needed.

Privacy is key: AI systems must be data responsible. They should use only what they need and delete it when it is no longer needed (“data minimization”). They should encrypt data in transit and at rest, and restrict access to authorized persons (“access control”). AI systems should only collect, use, share and store data in accordance with privacy and personal data laws and best practices.

Think before you act: Careful thought should be given to the instructions and data provided to AI systems. AI systems should not be trained with data that is biased, inaccurate, incomplete or misleading.

If they are connected, they must be secured: AI systems that are connected to the Internet should be secured not only for their own protection, but also to protect the Internet from malfunctioning or malware-infected AI systems that could become the next generation of botnets. High standards of device, system and network security should be applied.

Responsible disclosure: Security researchers acting in good faith should be able to responsibly test the security of AI systems without fear of prosecution or other legal action. At the same time, researchers and others who discover security vulnerabilities or other design flaws should responsibly disclose their findings to those who are in the best position to fix the problem.

Published by Internet Society, "Artificial Intelligence and Machine Learning: Policy Paper" in Guiding Principles and Recommendations, Apr 18, 2017
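
The “off switch” recommendation above is, at its core, a control-flow guarantee: every iteration of an autonomous loop must check an interrupt that a human can set at any time. Below is a minimal sketch assuming a threading-based agent; the stop-event design is one common implementation pattern, not the paper's prescribed mechanism, and the class and method names are illustrative.

```python
import threading

class InterruptibleAgent:
    """Autonomous loop that honours a human-controlled 'off switch'.

    The stop event is checked on every iteration, so a human can halt
    the agent between any two actions -- the 'humans must be in
    control' recommendation in miniature.
    """

    def __init__(self):
        self._stop = threading.Event()  # the off switch

    def shutdown(self):
        """Called by a human operator (button, API call, signal handler)."""
        self._stop.set()

    def run(self):
        while not self._stop.is_set():
            self.act_once()
            # Bounded wait so a shutdown takes effect within one second.
            self._stop.wait(timeout=1.0)
        print("agent halted by operator")

    def act_once(self):
        print("taking one autonomous step")

agent = InterruptibleAgent()
threading.Timer(3.0, agent.shutdown).start()  # the human presses the off switch
agent.run()  # prints ~3 steps, then halts
```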

8 PRUDENCE PRINCIPLE

Every person involved in AI development must exercise caution by anticipating, as far as possible, the adverse consequences of AIS use and by taking appropriate measures to avoid them.

1) It is necessary to develop mechanisms that consider the potential for dual use (beneficial and harmful) of AI research and AIS development, whether public or private, in order to limit harmful uses.

2) When the misuse of an AIS endangers public health or safety and has a high probability of occurrence, it is prudent to restrict open access to and public dissemination of its algorithm.

3) Before being placed on the market, and whether they are offered for charge or for free, AIS must meet strict reliability, security, and integrity requirements and be subjected to tests that do not put people’s lives in danger, harm their quality of life, or negatively impact their reputation or psychological integrity. These tests must be open to the relevant public authorities and stakeholders.

4) The development of AIS must preempt the risks of user data misuse and protect the integrity and confidentiality of personal data.

5) The errors and flaws discovered in AIS and SAAD should be publicly shared, on a global scale, by public institutions and businesses in sectors that pose a significant danger to personal integrity and social organization.

Published by University of Montreal in The Montreal Declaration for a Responsible Development of Artificial Intelligence, Dec 4, 2018

4. Adopt a Human In Command Approach

An absolute precondition is that the development of AI must be responsible, safe and useful, where machines maintain the legal status of tools, and legal persons retain control over, and responsibility for, these machines at all times. This entails that AI systems should be designed and operated to comply with existing law, including privacy law. Workers should have the right to access, manage and control the data AI systems generate, given said systems’ power to analyse and utilize that data (See principle 1 in “Top 10 principles for workers’ data privacy and protection”). Workers must also have the ‘right of explanation’ when AI systems are used in human resource procedures, such as recruitment, promotion or dismissal.

Published by UNI Global Union in Top 10 Principles For Ethical Artificial Intelligence, Dec 11, 2017