2. Long Term Safety

We are committed to doing the research required to make AGI safe, and to driving the broad adoption of such research across the AI community. We are concerned about late-stage AGI development becoming a competitive race without time for adequate safety precautions. Therefore, if a value-aligned, safety-conscious project comes close to building AGI before we do, we commit to stop competing with and start assisting this project. We will work out specifics in case-by-case agreements, but a typical triggering condition might be “a better-than-even chance of success in the next two years.”
Principle: OpenAI Charter, Apr 9, 2018

Published by OpenAI

Related Principles

Preamble

Two of Deutsche Telekom’s most important goals are to remain a trusted companion and to enhance customer experience. As one of the leading ICT companies in Europe, we see it as our responsibility to foster the development of “intelligent technologies”. At least as important, these technologies, such as AI, must follow predefined ethical rules. Defining a corresponding ethical framework first requires a common understanding of what AI means. Today there are several definitions of AI, beginning with the very first by John McCarthy (1956): “Every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it.” In line with other companies and main players in the field of AI, we at DT think of AI as the imitation of human intelligence processes by machines, especially computer systems. These processes include learning, reasoning, and self-correction. After several decades, Artificial Intelligence has become one of the most intriguing topics of today and of the future. It has become widely available and is discussed not only among experts but increasingly in public and in politics. AI has started to influence business (as a source of new market opportunities as well as a driver of efficiency), society (e.g. the broad discussion about autonomously driving vehicles, or AI as “job machine” vs. “job killer”) and the life of each individual (AI has already found its way into the living room, e.g. with voice-controlled digital assistants such as smart speakers). But the use of AI and its possibilities confront us not only with fast-developing technologies but also with the fact that our ethical roadmaps, based on human-to-human interactions, might not be sufficient in this new era of technological influence. New questions arise, and situations emerge that were previously unimaginable in our daily lives. We as DT also want to develop and make use of AI.
This technology can bring many benefits by improving customer experience or simplicity. We are already in the game, e.g. with several AI-related projects running. With these comes an increased digital responsibility on our side to ensure that AI is utilized in an ethical manner. So we as DT have to give answers to our customers, shareholders and stakeholders. The following Digital Ethics guidelines state how we as Deutsche Telekom want to build the future with AI. For us, technology serves one main purpose: it must support people. Thus AI is in any case supposed to extend and complement human abilities rather than lessen them. Remark: The impact of AI on jobs at DT (whether as a benefit for value creation in the sense of job enrichment and enlargement, or in the sense of efficiency) is, however, not the focus of these guidelines.

Published by Deutsche Telekom in Deutsche Telekom’s guidelines for artificial intelligence, May 11, 2018

9. We share and enlighten.

We acknowledge the transformative power of AI for our society. We will support people and society in preparing for this future world. We live our digital responsibility by sharing our knowledge and pointing out the opportunities of the new technology without neglecting its risks. We will engage with our customers, other companies, policy makers, education institutions and all other stakeholders to ensure we understand their concerns and needs and can set up the right safeguards. We will engage in AI and ethics education, thereby preparing ourselves, our colleagues and our fellow human beings for the new tasks ahead. Many tasks that are executed by humans today will be automated in the future, leading to a shift in the demand for skills. Jobs will be reshaped rather than replaced by AI. While this seems certain, only a minority knows what exactly AI technology is capable of achieving. Prejudice and superficial knowledge lead either to demonization of progress or to blind acceptance, both calling for educational work. We as Deutsche Telekom feel responsible for enlightening people and helping society deal with the digital shift, so that appropriate new skills can be developed and new jobs can be taken up. And we start from within: by enabling our colleagues and employees. But we are aware that this task cannot be solved by one company alone. Therefore we will engage in partnerships with other companies and offer our know-how to policy makers and education providers to jointly tackle the challenges ahead.

Published by Deutsche Telekom in Deutsche Telekom’s guidelines for artificial intelligence, May 11, 2018

(Preamble)

AI has arrived, and it is a technology that has the potential to change the world as we know it. As it develops, it has the ability to generate enormous benefits and improve the quality of life of humanity. At the same time, AI opens the way to risky situations and raises questions about its use and its effects. At this point, it appears necessary to incorporate the ethical dimension to illuminate the development of AI and to distinguish between its correct and incorrect use. At IA Latam, we have collaboratively created ethical criteria of self-adherence that help and guide all of us to follow the best possible path, always keeping as our guiding star a better planet for the new generations. As technologies advance, the ethical ramifications will become more relevant, and the conversation will no longer be based on “do we comply” but rather on “we are doing the right thing and getting better”. For this reason, at IA Latam we present below our first Declaration of Ethical Principles for Latin American AI, which we hope will be a starting point and a great help for all.

Published by IA Latam in Declaration Of Ethics For The Development And Use Of Artificial Intelligence (unofficial translation), Feb 8, 2019 (unconfirmed)

3. Principle of controllability

Developers should pay attention to the controllability of AI systems.

[Comment] In order to assess the risks related to the controllability of AI systems, developers are encouraged to conduct verification and validation in advance. One conceivable method of risk assessment is to conduct experiments in a closed space, such as a laboratory or a sandbox in which security is ensured, at a stage before practical application in society. In addition, in order to ensure the controllability of AI systems, developers are encouraged to pay attention to whether supervision (such as monitoring or warnings) and countermeasures (such as system shutdown, cutting off from networks, or repairs) by humans or other trustworthy AI systems are effective, to the extent possible in light of the characteristics of the technologies adopted.

[Note] Verification and validation are methods for evaluating and controlling risks in advance. Generally, the former is used for confirming formal consistency, while the latter is used for confirming substantial validity. (See, e.g., The Future of Life Institute (FLI), Research Priorities for Robust and Beneficial Artificial Intelligence (2015).)

[Note] Examples of what to examine in the risk assessment are the risk of reward hacking, in which AI systems formally achieve the goals assigned but do not substantially meet the developer's intent, and the risk that AI systems work in ways the developers have not intended because their outputs and programs change as they learn during utilization. For reward hacking, see, e.g., Dario Amodei, Chris Olah, Jacob Steinhardt, Paul Christiano, John Schulman & Dan Mané, Concrete Problems in AI Safety, arXiv:1606.06565 [cs.AI] (2016).

Published by Ministry of Internal Affairs and Communications (MIC), the Government of Japan in AI R&D Principles, Jul 28, 2017

2. Transparent and explainable AI

We will be explicit about the kind of personal and/or non-personal data the AI system uses, as well as about the purpose the data is used for. When people directly interact with an AI system, we will be transparent with the users that this is the case. When AI systems take, or support, decisions, we will take the technical and organizational measures required to guarantee a level of understanding adequate to the application area. In any case, if the decisions significantly affect people's lives, we will ensure we understand the logic behind the conclusions. This will also apply when we use third-party technology.

Published by Telefónica in AI Principles of Telefónica, Oct 30, 2018