The document "Draft Ethics Guidelines for Trustworthy AI" mentions the topic "security" in the following places:
2. The Principle of Non-maleficence: "Do no Harm"
By design, AI systems should protect the dignity, integrity, liberty, privacy, safety, and security of human beings in society and at work.
Trustworthy AI requires that algorithms are secure, reliable, and robust enough to deal with errors or inconsistencies during the design, development, execution, deployment, and use phases of the AI system, and to cope adequately with erroneous outcomes.
A secure AI system has safeguards that enable a fallback plan in case of problems with the system.