5. Operators of AI systems shall have appropriate competencies.

When our health, our rights, our lives or our liberty depend on hybrid intelligence, such systems should be designed, executed and measured by professionals with the requisite expertise.
Principle: Principles for the Governance of AI, Oct 3, 2017 (unconfirmed)

Published by The Future Society, Science, Law and Society (SLS) Initiative

Related Principles

2. We care.

We act in tune with our company values. Our systems and solutions must be subordinate to human-defined rules and laws. Therefore, in addition to our technical requirements, our systems and solutions have to obey the rules and laws that we as Deutsche Telekom, our employees – and human beings as such – follow. AI systems have to meet the same high technical requirements as any other IT system of ours, such as security, robustness, etc. But since AI will be (and already is) a great part of our everyday lives, even guiding us in several areas, AI systems and their usage also have to comply with our company values (Deutsche Telekom’s Guiding Principles and Code of Conduct), ethical values, and societal conventions. We have to make sure of that.

Published by Deutsche Telekom in Deutsche Telekom’s guidelines for artificial intelligence, May 11, 2018

6. We set the framework.

Our AI solutions are developed and enhanced on the grounds of deep analysis and evaluation. They are transparent, auditable, fair, and fully documented. We consciously initiate the AI’s development for the best possible outcome. The essential paradigm for our AI systems’ impact analysis is “privacy and security by design”. This is accompanied, for example, by risk and opportunity scenarios or disaster scenarios. We take great care in the initial algorithm of our own AI solutions to prevent so-called “black boxes” and to make sure that our systems shall not unintentionally harm the users.

Published by Deutsche Telekom in Deutsche Telekom’s guidelines for artificial intelligence, May 11, 2018

1. Purpose

The purpose of AI and cognitive systems developed and applied by the IBM company is to augment human intelligence. Our technology, products, services and policies will be designed to enhance and extend human capability, expertise and potential. Our position is based not only on principle but also on science. Cognitive systems will not realistically attain consciousness or independent agency. Rather, they will increasingly be embedded in the processes, systems, products and services by which business and society function – all of which will and should remain within human control.

Published by IBM in Principles for the Cognitive Era, Jan 17, 2017

2. Transparency

For cognitive systems to fulfill their world-changing potential, it is vital that people have confidence in their recommendations, judgments and uses. Therefore, the IBM company will make clear:

- When and for what purposes AI is being applied in the cognitive solutions we develop and deploy.
- The major sources of data and expertise that inform the insights of cognitive solutions, as well as the methods used to train those systems and solutions.
- The principle that clients own their own business models and intellectual property and that they can use AI and cognitive systems to enhance the advantages they have built, often through years of experience.

We will work with our clients to protect their data and insights, and will encourage our clients, partners and industry colleagues to adopt similar practices.

Published by IBM in Principles for the Cognitive Era, Jan 17, 2017

Third principle: Understanding

AI-enabled systems, and their outputs, must be appropriately understood by relevant individuals, with mechanisms to enable this understanding made an explicit part of system design.

Effective and ethical decision making in Defence, from the frontline of combat to back office operations, is always underpinned by appropriate understanding of context by those making decisions. Defence personnel must have an appropriate, context-specific understanding of the AI-enabled systems they operate and work alongside. This level of understanding will naturally differ depending on the knowledge required to act ethically in a given role and with a given system. It may include an understanding of the general characteristics, benefits and limitations of AI systems. It may require knowledge of a system’s purposes and correct environment for use, including scenarios where a system should not be deployed or used. It may also demand an understanding of system performance and potential fail states. Our people must be suitably trained and competent to operate or understand these tools.

To enable this understanding, we must be able to verify that our AI-enabled systems work as intended. While the ‘black box’ nature of some machine learning systems means that they are difficult to fully explain, we must be able to audit either the systems or their outputs to a level that satisfies those who are duly and formally responsible and accountable. Mechanisms to interpret and understand our systems must be a crucial and explicit part of system design across the entire lifecycle.

This requirement for context-specific understanding based on technically understandable systems must also reach beyond the MOD, to commercial suppliers, allied forces and civilians. Whilst absolute transparency as to the workings of each AI-enabled system is neither desirable nor practicable, public consent and collaboration depend on context-specific shared understanding. What our systems do, how we intend to use them, and our processes for ensuring beneficial outcomes result from their use should be as transparent as possible, within the necessary constraints of the national security context.

Published by The Ministry of Defence (MOD), United Kingdom in Ethical Principles for AI in Defence, Jun 15, 2022