To help ensure that AI extends and complements human abilities rather than lessening or restricting them, Telia Company provides the following Guiding Principles to its operations and employees for the proactive design, implementation, testing, use and follow-up of AI.
Principle: Telia Company Guiding Principles on trusted AI ethics, Jan 22, 2019

Published by Telia Company AB

Related Principles

· (1) Human centric

Utilization of AI should not infringe upon fundamental human rights guaranteed by the Constitution and international norms. AI should be developed, utilized, and implemented in society to expand people's abilities and to pursue the diverse concepts of happiness of diverse people. In an AI-utilizing society, it is desirable to implement appropriate mechanisms of literacy education and promotion of proper use, so that people do not become over-dependent on AI and so that AI is not exploited to improperly manipulate human decisions. AI can expand human abilities and creativity not only by replacing part of human tasks but also by assisting humans as an advanced instrument. When using AI, people must judge and decide for themselves how to use it. The appropriate stakeholders involved in the development, provision, and utilization of AI should be responsible for the results of AI utilization, depending on the nature of the issue. In order to avoid creating a digital divide and to allow all people to reap the benefits of AI regardless of their digital expertise, each stakeholder should take the user-friendliness of the system into consideration in the process of AI deployment.

Published by Cabinet Office, Government of Japan in Social Principles of Human-centric AI (Draft), Dec 27, 2018

· 4. Governance of AI Autonomy (Human oversight)

The correct approach to assuring properties such as safety, accuracy, adaptability, privacy, explicability, compliance with the rule of law and ethical conformity heavily depends on the specific details of the AI system, its area of application, its level of impact on individuals, communities or society, and its level of autonomy. The level of autonomy results from the use case and the degree of sophistication needed for a task. All other things being equal, the greater the degree of autonomy given to an AI system, the more extensive the testing and the stricter the governance required. It must be ensured that AI systems continue to behave as intended when feedback signals become sparser. Depending on the area of application and/or the level of impact on individuals, communities or society, different levels or instances of governance (incl. human oversight) will be necessary. This is relevant for a large number of AI applications, and more particularly for the use of AI to suggest or take decisions concerning individuals or communities (algorithmic decision support). Good governance of AI autonomy in this respect includes, for instance, more or earlier human intervention depending on the level of societal impact of the AI system. It also includes the requirement that a user of an AI system, particularly in a work or decision-making environment, be allowed to deviate from a path or decision chosen or recommended by the AI system.

Published by The European Commission’s High-Level Expert Group on Artificial Intelligence in Draft Ethics Guidelines for Trustworthy AI, Dec 18, 2018

9. Principle of accountability

Developers should make efforts to fulfill their accountability to stakeholders, including AI systems' users. [Comment] Developers are expected to fulfill their accountability for the AI systems they have developed, in order to gain users' trust in AI systems. Specifically, developers are encouraged to make efforts to provide users with information that can help them choose and utilize AI systems. In addition, to improve the acceptance of AI systems by society, including users, developers are also encouraged, taking into account the R&D principles (1) to (8) set forth in the Guidelines, to make efforts: (a) to provide users and others with both information and explanations about the technical characteristics of the AI systems they have developed; and (b) to gain the active involvement of stakeholders (such as their feedback), for example by hearing various views through dialogues with diverse stakeholders. Moreover, it is advisable that developers make efforts to share information and cooperate with providers and others who offer services based on the AI systems they have developed.

Published by Ministry of Internal Affairs and Communications (MIC), the Government of Japan in AI R&D Principles, Jul 28, 2017

6. Pursuit of Transparency

During the planning and design stages for its products and services that utilize AI, Sony will strive to introduce methods of capturing the reasoning behind the decisions made by AI utilized in said products and services. Additionally, it will endeavor to provide intelligible explanations and information to customers about the possible impact of using these products and services.

Published by Sony Group in Sony Group AI Ethics Guidelines, Sep 25, 2018

2. Human centric

AI is used to simplify and enhance our customers' lives. Employees' issues are recognized and respected. We acknowledge the advantages of a cooperative and complementary model of human-machine interaction and seek to use this in a sustainable way. Our preference and intention are for AI to extend and complement human abilities rather than lessen or restrict them.

Published by Telia Company AB in Telia Company Guiding Principles on trusted AI ethics, Jan 22, 2019