(g) Security, safety, bodily and mental integrity

Publisher: European Group on Ethics in Science and New Technologies, European Commission

Safety and security of ‘autonomous’ systems materialise in three forms: (1) external safety for their environment and users, (2) reliability and internal robustness, e.g. against hacking, and (3) emotional safety with respect to human–machine interaction. AI developers must take all dimensions of safety into account and test them rigorously before release, in order to ensure that ‘autonomous’ systems do not infringe on the human right to bodily and mental integrity and to a safe and secure environment. Special attention should be paid to persons in vulnerable positions, as well as to the potential dual use and weaponisation of AI, e.g. in cybersecurity, finance, infrastructure and armed conflict.