· 16) Human Control

Humans should choose how and whether to delegate decisions to AI systems, to accomplish human-chosen objectives.
Principle: Asilomar AI Principles, Jan 3-8, 2017

Published by Future of Life Institute (FLI), Beneficial AI 2017

Related Principles

Human oversight and decision making

Humans may occasionally choose to rely on AI systems for reasons of efficiency, but the decision to relinquish control in limited contexts will still be up to humans. Humans can rely on AI systems for decision making and task execution, but an AI system can never replace humans' ultimate responsibility and accountability.

Published by OFFICE OF THE CHIEF OF MINISTERS UNDERSECRETARY OF INFORMATION TECHNOLOGIES in Recommendations for reliable artificial intelligence, June 2, 2023

Human-centred values

Throughout their lifecycle, AI systems should respect human rights, diversity, and the autonomy of individuals. This principle aims to ensure that AI systems are aligned with human values. Machines should serve humans, and not the other way around. AI systems should enable an equitable and democratic society by respecting, protecting and promoting human rights, enabling diversity, respecting human freedom and the autonomy of individuals, and protecting the environment.

Human rights risks need to be carefully considered, as AI systems can equally enable and hamper such fundamental rights. It’s permissible to interfere with certain human rights where it’s reasonable, necessary and proportionate. All people interacting with AI systems should be able to keep full and effective control over themselves. AI systems should not undermine the democratic process, and should not undertake actions that threaten individual autonomy, like deception, unfair manipulation, unjustified surveillance, and failing to maintain alignment between a disclosed purpose and true action.

AI systems should be designed to augment, complement and empower human cognitive, social and cultural skills. Organisations designing, developing, deploying or operating AI systems should ideally hire staff from diverse backgrounds, cultures and disciplines to ensure a wide range of perspectives, and to minimise the risk of missing important considerations only noticeable by some stakeholders.

Published by Department of Industry, Innovation and Science, Australian Government in AI Ethics Principles, Nov 7, 2019

2. Autonomy

[QUESTIONS] How can AI contribute to greater autonomy for human beings? Must we fight against the phenomenon of attention-seeking which has accompanied advances in AI? Should we be worried that humans prefer the company of AI to that of other humans or animals? Can someone give informed consent when faced with increasingly complex autonomous technologies? Must we limit the autonomy of intelligent computer systems? Should a human always make the final decision? [PRINCIPLES] The development of AI should promote the autonomy of all human beings and control, in a responsible way, the autonomy of computer systems.

Published by University of Montreal, Forum on the Socially Responsible Development of AI in The Montreal Declaration for a Responsible Development of Artificial Intelligence, Nov 3, 2017

Human autonomy and oversight

United Nations system organizations should ensure that AI systems do not overrule the freedom and autonomy of human beings and should guarantee human oversight. All stages of the AI system lifecycle should follow and incorporate human-centric design practices and leave meaningful opportunity for human decision making. Human oversight must ensure human capability to oversee the overall activity of the AI system and the ability to decide when and how to use the system in any particular situation, including whether to use an AI system and the ability to override a decision made by a system. As a rule, life and death decisions or other decisions affecting fundamental human rights of individuals must not be ceded to AI systems, as these decisions require human intervention.

Published by United Nations System Chief Executives Board for Coordination in Principles for the Ethical Use of Artificial Intelligence in the United Nations System, Sept 20, 2022

· Human oversight and determination

35. Member States should ensure that it is always possible to attribute ethical and legal responsibility for any stage of the life cycle of AI systems, as well as in cases of remedy related to AI systems, to physical persons or to existing legal entities. Human oversight refers thus not only to individual human oversight, but to inclusive public oversight, as appropriate.

36. It may be the case that sometimes humans would choose to rely on AI systems for reasons of efficiency, but the decision to cede control in limited contexts remains that of humans, as humans can resort to AI systems in decision making and acting, but an AI system can never replace ultimate human responsibility and accountability. As a rule, life and death decisions should not be ceded to AI systems.

Published by The United Nations Educational, Scientific and Cultural Organization (UNESCO) in The Recommendation on the Ethics of Artificial Intelligence, Nov 24, 2021