(Preamble)

To help ensure that AI extends and complements human abilities rather than lessening or restricting them, Telia Company provides the following Guiding Principles to its operations and employees for the proactive design, implementation, testing, use and follow-up of AI.
Published by Telia Company AB in Telia Company Guiding Principles on trusted AI ethics, Jan 22, 2019

Related Principles

4. Human centricity

AI systems should respect human-centred values and pursue benefits for human society, including human beings’ well-being, nutrition, happiness, etc. It is key to ensure that people benefit from AI design, development, and deployment while being protected from potential harms. AI systems should be used to promote human well-being and ensure benefit for all. Especially in instances where AI systems are used to make decisions about humans or aid them, it is imperative that these systems are designed with human benefit in mind and do not take advantage of vulnerable individuals. Human centricity should be incorporated throughout the AI system lifecycle, from design to development and deployment. Actions must be taken to understand the way users interact with the AI system, how it is perceived, and whether there are any negative outcomes arising from its outputs. One example of how deployers can do this is to test the AI system with a small group of internal users from varied backgrounds and demographics and incorporate their feedback into the AI system.

AI systems should not be used for malicious purposes or to sway or deceive users into making decisions that are not beneficial to them or society. In this regard, developers and deployers (if developing or designing in-house) should also ensure that dark patterns are avoided. Dark patterns refer to the use of certain design techniques to manipulate users and trick them into making decisions that they would otherwise not have made. An example of a dark pattern is the use of default options that do not consider the end user’s interests, such as defaults for data sharing and for tracking of the user’s other online activities.

As an extension of human centricity as a principle, it is also important to ensure that the adoption of AI systems and their deployment at scale do not unduly disrupt labour and job prospects without proper assessment.
Deployers are encouraged to undertake impact assessments to ensure a systematic, stakeholder-based review, and to consider how jobs can be redesigned to incorporate the use of AI. The Personal Data Protection Commission of Singapore’s (PDPC) Guide on Job Redesign in the Age of AI provides useful guidance to assist organisations in considering the impact of AI on their employees, and how work tasks can be redesigned to help employees embrace AI and move towards higher-value tasks.

Published by ASEAN in ASEAN Guide on AI Governance and Ethics, 2024

· (1) Human centric

Utilization of AI should not infringe upon fundamental human rights that are guaranteed by the Constitution and international norms. AI should be developed, utilized, and implemented in society to expand the abilities of people and to pursue the diverse concepts of happiness of diverse people. In a society that utilizes AI, it is desirable to implement appropriate mechanisms of literacy education and promotion of proper use, so that people do not become over-dependent on AI and so that AI is not exploited to improperly manipulate human decisions. AI can expand human abilities and creativity not only by replacing part of human tasks but also by assisting humans as an advanced instrument. When using AI, people must judge and decide for themselves how to use it. Appropriate stakeholders involved in the development, provision, and utilization of AI should be responsible for the results of AI utilization, depending on the nature of the issue. In order to avoid creating a digital divide and to allow all people to reap the benefits of AI regardless of their digital expertise, each stakeholder should take the user-friendliness of the system into consideration in the process of AI deployment.

Published by Cabinet Office, Government of Japan in Social Principles of Human-centric AI, Dec 27, 2018

2. Human centric

AI is used to simplify and enhance our customers’ lives. Employees’ issues are recognized and respected. We acknowledge the advantages of a cooperative and complementary model of human-machine interactions and seek to use this in a sustainable way. Our preference and intention are for AI to extend and complement human abilities rather than lessen or restrict them.

Published by Telia Company AB in Telia Company Guiding Principles on trusted AI ethics, Jan 22, 2019

Preamble: Our intent for the ethical use of AI in Defence

The MOD is committed to developing and deploying AI-enabled systems responsibly, in ways that build trust and consensus, setting international standards for the ethical use of AI for Defence. The MOD will develop and deploy AI-enabled systems for purposes that are demonstrably beneficial: driving operational improvements, supporting the Defence Purpose, and upholding human rights and democratic values. The MOD’s existing obligations under UK law and international law, including, as applicable, international humanitarian law (IHL) and international human rights law, act as a foundation for Defence’s development, deployment and operation of AI-enabled systems. These ethical principles do not affect or supersede existing legal obligations. Instead, they set out an ethical framework which will guide Defence’s approach to adopting AI, in line with rigorous existing codes of conduct and regulations. These principles are applicable across the full spectrum of use cases for AI in Defence, from battlespace to back office, and across the entire lifecycle of these systems.

Published by The Ministry of Defence (MOD), United Kingdom in Ethical Principles for AI in Defence, Jun 15, 2022

(Preamble)

The Principles of Artificial Intelligence (AI) Ethics for the Intelligence Community (IC) are intended to guide personnel on whether and how to develop and use AI, to include machine learning, in furtherance of the IC’s mission. These Principles supplement the Principles of Professional Ethics for the IC and do not modify or supersede applicable laws, executive orders, or policies. Instead, they articulate the general norms that IC elements should follow in applying those authorities and requirements. To assist with the implementation of these Principles, the IC has also created an AI Ethics Framework to guide personnel who are determining whether and how to procure, design, build, use, protect, consume, and manage AI and other advanced analytics. The Intelligence Community commits to the design, development, and use of AI with the following principles:

Published by Intelligence Community (IC), United States in Principles of Artificial Intelligence Ethics for the Intelligence Community, Jul 23, 2020