Key requirements for trustworthy AI

Principle: Key requirements for trustworthy AI, Apr 8, 2019

Published by European Commission

Related Principles

Draft Ethics Guidelines for Trustworthy AI

Published by The European Commission’s High-Level Expert Group on Artificial Intelligence in Draft Ethics Guidelines for Trustworthy AI, Dec 18, 2018

Requirements of Trustworthy AI

Published by The European Commission’s High-Level Expert Group on Artificial Intelligence in Draft Ethics Guidelines for Trustworthy AI, Dec 18, 2018

Statement 4

Reaching adequate safety levels for advanced AI will also require immense research progress. Advanced AI systems must be demonstrably aligned with their designer's intent, as well as appropriate norms and values. They must also be robust against both malicious actors and rare failure modes. Sufficient human control needs to be ensured for these systems. Concerted effort by the global research community in both AI and other disciplines is essential; we need a global network of dedicated AI safety research and governance institutions. We call on leading AI developers to make a minimum spending commitment of one third of their AI R&D on AI safety and for government agencies to fund academic and non-profit AI safety and governance research in at least the same proportion.

Published by IDAIS (International Dialogues on AI Safety) in IDAIS-Oxford, Oct 31, 2023

10 key requirements

Published by The Ministry of Science and ICT (MSIT) and the Korea Information Society Development Institute (KISDI) in National AI Ethical Guidelines (draft), Nov 27, 2020

Requirements for child-centred AI

Published by United Nations Children's Fund (UNICEF) and the Ministry of Foreign Affairs of Finland in Requirements for child-centred AI, Sep 16, 2020