· Power Seeking

No AI system should take actions to unduly increase its power and influence.
Principle: IDAIS-Beijing, May 10, 2024

Published by IDAIS (International Dialogues on AI Safety)

Related Principles

· Use Wisely and Properly

Users of AI systems should have the necessary knowledge and ability to make the system operate according to its design, and have sufficient understanding of the potential impacts to avoid possible misuse and abuse, so as to maximize its benefits and minimize the risks.

Published by Beijing Academy of Artificial Intelligence (BAAI); Peking University; Tsinghua University; Institute of Automation, Chinese Academy of Sciences; Institute of Computing Technology, Chinese Academy of Sciences; Artificial Intelligence Industry Innovation Strategy Alliance (AITISA); etc. in Beijing AI Principles, May 25, 2019

I. Human agency and oversight

AI systems should support individuals in making better, more informed choices in accordance with their goals. They should act as enablers of a flourishing and equitable society by supporting human agency and fundamental rights, and should not decrease, limit or misguide human autonomy. The overall wellbeing of the user should be central to the system's functionality. Human oversight helps ensure that an AI system does not undermine human autonomy or cause other adverse effects. Depending on the specific AI-based system and its application area, appropriate degrees of control measures, including the adaptability, accuracy and explainability of AI-based systems, should be ensured. Oversight may be achieved through governance mechanisms such as ensuring a human-in-the-loop, human-on-the-loop, or human-in-command approach. It must be ensured that public authorities have the ability to exercise their oversight powers in line with their mandates. All other things being equal, the less oversight a human can exercise over an AI system, the more extensive testing and stricter governance are required.

Published by European Commission in Key requirements for trustworthy AI, Apr 8, 2019

· 17) Non-subversion

The power conferred by control of highly advanced AI systems should respect and improve, rather than subvert, the social and civic processes on which the health of society depends.

Published by Future of Life Institute (FLI), Beneficial AI 2017 in Asilomar AI Principles, Jan 3-8, 2017

Principle 7 – Accountability & Responsibility

The accountability and responsibility principle holds designers, vendors, procurers, developers, owners and assessors of AI systems, and the technology itself, ethically responsible and liable for decisions and actions that may result in potential risk and negative effects on individuals and communities. Human oversight, governance, and proper management should be demonstrated across the entire AI System Lifecycle to ensure that proper mechanisms are in place to avoid harm and misuse of this technology. AI systems should never lead to people being deceived or unjustifiably impaired in their freedom of choice. The designers, developers, and people who implement the AI system should be identifiable and assume responsibility and accountability for any potential damage the technology has on individuals or communities, even if the adverse impact is unintended. The liable parties should take the necessary preventive actions as well as set a risk assessment and mitigation strategy to minimize the harm due to the AI system. The accountability and responsibility principle is closely related to the fairness principle. The parties responsible for the AI system should ensure that the fairness of the system is maintained and sustained through control mechanisms. All parties involved in the AI System Lifecycle should consider and act on these values in their decisions and execution.

Published by SDAIA in AI Ethics Principles, Sept 14, 2022

Human autonomy and oversight

United Nations system organizations should ensure that AI systems do not overrule the freedom and autonomy of human beings and should guarantee human oversight. All stages of the AI system lifecycle should follow and incorporate human-centric design practices and leave meaningful opportunity for human decision-making. Human oversight must ensure human capability to oversee the overall activity of the AI system and the ability to decide when and how to use the system in any particular situation, including whether to use an AI system and the ability to override a decision made by a system. As a rule, life-and-death decisions or other decisions affecting the fundamental human rights of individuals must not be ceded to AI systems, as these decisions require human intervention.

Published by United Nations System Chief Executives Board for Coordination in Principles for the Ethical Use of Artificial Intelligence in the United Nations System, Sept 20, 2022