Two of Deutsche Telekom’s most important goals are to remain a trusted companion and to enhance the customer experience.
As one of the leading ICT companies in Europe, we see it as our responsibility to foster the development of “intelligent technologies”. At least equally important, these technologies, such as AI, must follow predefined ethical rules.
To define a corresponding ethical framework, we first need a common understanding of what AI means. Today there are several definitions of AI, such as the very first one by John McCarthy (1956): “Every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it.” In line with other companies and major players in the field of AI, we at DT think of AI as the imitation of human intelligence processes by machines, especially computer systems. These processes include learning, reasoning, and self-correction.
After several decades, Artificial Intelligence has become one of the most intriguing topics of today – and of the future. It has become widely available and is discussed not only among experts but increasingly also in public and in politics. AI has started to influence business (opening new market opportunities as well as driving efficiency), society (e.g. the broad discussion about autonomously driving vehicles, or AI as “job machine” vs. “job killer”) and the life of each individual (AI has already found its way into the living room, e.g. with voice-controlled digital assistants such as smart speakers).
But the use of AI and its possibilities confront us not only with fast-developing technologies but also with the fact that our ethical roadmaps, based on human-to-human interaction, might not be sufficient in this new era of technological influence. New questions arise, and situations emerge that were previously unimaginable in our daily lives.
We as DT also want to develop and make use of AI. This technology can bring many benefits by improving customer experience or simplicity. We are already in the game, e.g. with several AI-related projects running. With these comes an increased digital responsibility on our side to ensure that AI is used in an ethical manner. So we as DT have to give answers to our customers, shareholders and other stakeholders.
The following Digital Ethics guidelines state how we as Deutsche Telekom want to build the future with AI. For us, technology serves one main purpose: it must support people. AI is therefore always meant to extend and complement human abilities rather than diminish them.
Remark: The impact of AI on jobs at DT – whether as a benefit and a driver of value creation in the sense of job enrichment and enlargement, or in the sense of efficiency gains – is not the focus of these guidelines.
1. Be socially beneficial.
The expanded reach of new technologies increasingly touches society as a whole. Advances in AI will have transformative impacts in a wide range of fields, including healthcare, security, energy, transportation, manufacturing, and entertainment. As we consider potential development and uses of AI technologies, we will take into account a broad range of social and economic factors, and will proceed where we believe that the overall likely benefits substantially exceed the foreseeable risks and downsides.
AI also enhances our ability to understand the meaning of content at scale. We will strive to make high-quality and accurate information readily available using AI, while continuing to respect cultural, social, and legal norms in the countries where we operate. And we will continue to thoughtfully evaluate when to make our technologies available on a non-commercial basis.
AI Applications We Will Not Pursue
In addition to the above objectives, we will not design or deploy AI in the following application areas:
1. Technologies that cause or are likely to cause overall harm. Where there is a material risk of harm, we will proceed only where we believe that the benefits substantially outweigh the risks, and will incorporate appropriate safety constraints.
2. Weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people.
3. Technologies that gather or use information for surveillance in violation of internationally accepted norms.
4. Technologies whose purpose contravenes widely accepted principles of international law and human rights.
As our experience in this space deepens, this list may evolve.
For cognitive systems to fulfill their world-changing potential, it is vital that people have confidence in their recommendations, judgments and uses. Therefore, IBM will make clear:
When and for what purposes AI is being applied in the cognitive solutions we develop and deploy.
The major sources of data and expertise that inform the insights of cognitive solutions, as well as the methods used to train those systems and solutions.
The principle that clients own their own business models and intellectual property and that they can use AI and cognitive systems to enhance the advantages they have built, often through years of experience. We will work with our clients to protect their data and insights, and will encourage our clients, partners and industry colleagues to adopt similar practices.
2.2 Flexible Regulatory Approach
Published by: Information Technology Industry Council (ITI) in AI Policy Principles
We encourage governments to evaluate existing policy tools and to use caution before adopting new laws, regulations, or taxes that may inadvertently or unnecessarily impede the responsible development and use of AI. As applications of AI technologies vary widely, overregulation can inadvertently reduce the number of technologies created and offered in the marketplace, particularly by startups and smaller businesses. We encourage policymakers to recognize the importance of sector-specific approaches as needed; one regulatory approach will not fit all AI applications. We stand ready to work with policymakers and regulators to address legitimate concerns where they occur.