9. Continuous review and dialogue

We acknowledge that the area of AI is complex and developing fast. We want to apply relevant industry best practice and participate in dialogue with other stakeholders on the further development of AI and AI ethics, and of these Guiding Principles.
Principle: Telia Company Guiding Principles on trusted AI ethics, Jan 22, 2019

Published by Telia Company AB

Related Principles

Preamble

Two of Deutsche Telekom’s most important goals are to remain a trusted companion and to enhance customer experience. As one of the leading ICT companies in Europe, we see it as our responsibility to foster the development of “intelligent technologies”. At least as important, these technologies, such as AI, must follow predefined ethical rules. To define a corresponding ethical framework, we first need a common understanding of what AI means. Today there are several definitions of AI, starting with the very first one by John McCarthy (1956): “Every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it.” In line with other companies and main players in the field of AI, we at DT think of AI as the imitation of human intelligence processes by machines, especially computer systems. These processes include learning, reasoning, and self-correction. After several decades, Artificial Intelligence has become one of the most intriguing topics of today and of the future. It has become widely available and is discussed not only among experts but increasingly in public and in politics. AI has started to influence business (opening new market opportunities as well as driving efficiency), society (e.g. the broad discussion about autonomously driving vehicles, or AI as “job machine” vs. “job killer”) and the life of each individual (AI has already found its way into the living room, e.g. with voice-controlled digital assistants such as smart speakers). But the use of AI and its possibilities confront us not only with fast-developing technologies but also with the fact that our ethical roadmaps, based on human-to-human interactions, might not be sufficient in this new era of technological influence. New questions arise, and situations emerge that were previously unimaginable in our daily lives. We as DT also want to develop and make use of AI. This technology can bring many benefits by improving customer experience or simplicity. We are already in the game, e.g. having several AI-related projects running. With these comes an increased digital responsibility on our side to ensure that AI is used in an ethical manner. So we as DT have to give answers to our customers, shareholders and stakeholders. The following Digital Ethics guidelines state how we as Deutsche Telekom want to build the future with AI. For us, technology serves one main purpose: it must act in support of people. Thus AI is in any case supposed to extend and complement human abilities rather than lessen them. Remark: The impact of AI on jobs at DT, whether as a benefit and a source of value creation in the sense of job enrichment and enlargement, or in the sense of efficiency, is however not the focus of these guidelines.

Published by Deutsche Telekom in Deutsche Telekom’s guidelines for artificial intelligence, May 11, 2018

· 2.5. International co-operation for trustworthy AI

a) Governments, including developing countries and with stakeholders, should actively cooperate to advance these principles and to progress on responsible stewardship of trustworthy AI. b) Governments should work together in the OECD and other global and regional fora to foster the sharing of AI knowledge, as appropriate. They should encourage international, cross-sectoral and open multi-stakeholder initiatives to garner long-term expertise on AI. c) Governments should promote the development of multi-stakeholder, consensus-driven global technical standards for interoperable and trustworthy AI. d) Governments should also encourage the development, and their own use, of internationally comparable metrics to measure AI research, development and deployment, and gather the evidence base to assess progress in the implementation of these principles.

Published by G20 Ministerial Meeting on Trade and Digital Economy in G20 AI Principles, Jun 09, 2019

9. Principle of accountability

Developers should make efforts to fulfill their accountability to stakeholders, including AI systems’ users. [Comment] Developers are expected to fulfill their accountability for AI systems they have developed to gain users’ trust in AI systems. Specifically, it is encouraged that developers make efforts to provide users with the information that can help their choice and utilization of AI systems. In addition, in order to improve the acceptance of AI systems by the society including users, it is also encouraged that, taking into account the R&D principles (1) to (8) set forth in the Guidelines, developers make efforts: (a) to provide users et al. with both information and explanations about the technical characteristics of the AI systems they have developed; and (b) to gain active involvement of stakeholders (such as their feedback) in such manners as to hear various views through dialogues with diverse stakeholders. Moreover, it is advisable that developers make efforts to share the information and cooperate with providers et al. who offer services with the AI systems they have developed on their own.

Published by Ministry of Internal Affairs and Communications (MIC), the Government of Japan in AI R&D Principles, Jul 28, 2017

· 2.5. International co-operation for trustworthy AI

a) Governments, including developing countries and with stakeholders, should actively cooperate to advance these principles and to progress on responsible stewardship of trustworthy AI. b) Governments should work together in the OECD and other global and regional fora to foster the sharing of AI knowledge, as appropriate. They should encourage international, cross-sectoral and open multi-stakeholder initiatives to garner long-term expertise on AI. c) Governments should promote the development of multi-stakeholder, consensus-driven global technical standards for interoperable and trustworthy AI. d) Governments should also encourage the development, and their own use, of internationally comparable metrics to measure AI research, development and deployment, and gather the evidence base to assess progress in the implementation of these principles.

Published by The Organisation for Economic Co-operation and Development (OECD) in OECD Principles on Artificial Intelligence, May 22, 2019

1. We are driven by our values

We recognize that, like with any technology, there is scope for AI to be used in ways that are not aligned with these guiding principles and the operational guidelines we are developing. In developing AI software we will remain true to our Human Rights Commitment Statement, the UN Guiding Principles on Business and Human Rights, laws, and widely accepted international norms. Wherever necessary, our AI Ethics Steering Committee will serve to advise our teams on how specific use cases are affected by these guiding principles. Where there is a conflict with our principles, we will endeavor to prevent the inappropriate use of our technology.

Published by SAP in SAP's Guiding Principles for Artificial Intelligence, Sep 18, 2018