(Conclusion)

We have left some of the terms above purposefully under-specified to allow these principles to be broadly applicable. Applying these principles well should include understanding them within a specific context. We also suggest that these issues be revisited and discussed throughout the design, implementation, and release phases of development. Two important principles were purposefully left off this list because they are well covered elsewhere: privacy and the impact of human experimentation. We encourage you to incorporate those issues into your overall assessment of algorithmic accountability as well.
Principle: Principles for Accountable Algorithms, Jul 22, 2016 (unconfirmed)

Published by Fairness, Accountability, and Transparency in Machine Learning (FAT/ML)

Related Principles

4. Human centricity

AI systems should respect human-centred values and pursue benefits for human society, including human beings’ well-being, nutrition, happiness, etc. It is key to ensure that people benefit from AI design, development, and deployment while being protected from potential harms. AI systems should be used to promote human well-being and ensure benefit for all. Especially in instances where AI systems are used to make decisions about humans or aid them, it is imperative that these systems are designed with human benefit in mind and do not take advantage of vulnerable individuals.

Human centricity should be incorporated throughout the AI system lifecycle, from design to development and deployment. Actions must be taken to understand the way users interact with the AI system, how it is perceived, and whether there are any negative outcomes arising from its outputs. One example of how deployers can do this is to test the AI system with a small group of internal users from varied backgrounds and demographics and incorporate their feedback into the AI system.

AI systems should not be used for malicious purposes or to sway or deceive users into making decisions that are not beneficial to them or society. In this regard, developers and deployers (if developing or designing in-house) should also ensure that dark patterns are avoided. Dark patterns refer to the use of certain design techniques to manipulate users and trick them into making decisions that they would otherwise not have made. An example of a dark pattern is the use of default options that do not consider the end user’s interests, such as for data sharing and tracking of the user’s other online activities.

As an extension of human centricity as a principle, it is also important to ensure that the adoption of AI systems and their deployment at scale do not unduly disrupt labour and job prospects without proper assessment. Deployers are encouraged to carry out impact assessments to ensure a systematic, stakeholder-based review and to consider how jobs can be redesigned to incorporate the use of AI. The Personal Data Protection Commission of Singapore’s (PDPC) Guide on Job Redesign in the Age of AI provides useful guidance to assist organisations in considering the impact of AI on their employees, and how work tasks can be redesigned to help employees embrace AI and move towards higher-value tasks.

Published by ASEAN in ASEAN Guide on AI Governance and Ethics, 2024

Preamble

Two of Deutsche Telekom’s most important goals are to remain a trusted companion and to enhance customer experience. As one of the leading ICT companies in Europe, we see it as our responsibility to foster the development of “intelligent technologies”. At least equally important, these technologies, such as AI, must follow predefined ethical rules.

To define a corresponding ethical framework, we first need a common understanding of what AI means. Today there are several definitions of AI, such as the very first one by John McCarthy (1956): “Every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it.” In line with other companies and main players in the field of AI, we at DT think of AI as the imitation of human intelligence processes by machines, especially computer systems. These processes include learning, reasoning, and self-correction.

After several decades, Artificial Intelligence has become one of the most intriguing topics of today – and the future. It has become widely available and is discussed not only among experts but increasingly also in public, politics, etc. AI has started to influence business (as a source of new market opportunities as well as an efficiency driver), society (e.g. the broad discussion about autonomously driving vehicles or AI as “job machine” vs. “job killer”) and the life of each individual (AI has already found its way into the living room, e.g. with voice-steered digital assistants like smart speakers). But the use of AI and its possibilities confront us not only with fast-developing technologies but also with the fact that our ethical roadmaps, based on human-human interactions, might not be sufficient in this new era of technological influence. New questions arise, and situations emerge that were previously not imaginable in our daily lives.

We as DT also want to develop and make use of AI. This technology can bring many benefits by improving customer experience or simplicity. We are already in the game, e.g. having several AI-related projects running. With these comes an increase of digital responsibility on our side to ensure that AI is utilized in an ethical manner. So we as DT have to give answers to our customers, shareholders and stakeholders.

The following Digital Ethics guidelines state how we as Deutsche Telekom want to build the future with AI. For us, technology serves one main purpose: it must act in a supporting role. Thus AI is in any case supposed to extend and complement human abilities rather than lessen them. Remark: The impact of AI on DT jobs, be it as a benefit and for value creation in the sense of job enrichment and enlargement, or in the sense of efficiency, is however not the focus of these guidelines.

Published by Deutsche Telekom in Deutsche Telekom’s guidelines for artificial intelligence, May 11, 2018

1. We are driven by our values

We recognize that, as with any technology, there is scope for AI to be used in ways that are not aligned with these guiding principles and the operational guidelines we are developing. In developing AI software we will remain true to our Human Rights Commitment Statement, the UN Guiding Principles on Business and Human Rights, laws, and widely accepted international norms. Wherever necessary, our AI Ethics Steering Committee will serve to advise our teams on how specific use cases are affected by these guiding principles. Where there is a conflict with our principles, we will endeavor to prevent the inappropriate use of our technology.

Published by SAP in SAP's Guiding Principles for Artificial Intelligence, Sep 18, 2018

(Preamble)

New developments in Artificial Intelligence are transforming the world, from science and industry to government administration and finance. The rise of AI decision-making also implicates fundamental rights of fairness, accountability, and transparency. Modern data analysis produces significant outcomes that have real-life consequences for people in employment, housing, credit, commerce, and criminal sentencing. Many of these techniques are entirely opaque, leaving individuals unaware whether the decisions were accurate, fair, or even about them. We propose these Universal Guidelines to inform and improve the design and use of AI. The Guidelines are intended to maximize the benefits of AI, to minimize the risk, and to ensure the protection of human rights. These Guidelines should be incorporated into ethical standards, adopted in national law and international agreements, and built into the design of systems. We state clearly that the primary responsibility for AI systems must reside with those institutions that fund, develop, and deploy these systems.

Published by The Public Voice coalition, established by Electronic Privacy Information Center (EPIC) in Universal Guidelines for Artificial Intelligence, Oct 23, 2018

Third principle: Understanding

AI-enabled systems, and their outputs, must be appropriately understood by relevant individuals, with mechanisms to enable this understanding made an explicit part of system design.

Effective and ethical decision-making in Defence, from the frontline of combat to back-office operations, is always underpinned by appropriate understanding of context by those making decisions. Defence personnel must have an appropriate, context-specific understanding of the AI-enabled systems they operate and work alongside. This level of understanding will naturally differ depending on the knowledge required to act ethically in a given role and with a given system. It may include an understanding of the general characteristics, benefits and limitations of AI systems. It may require knowledge of a system’s purposes and correct environment for use, including scenarios where a system should not be deployed or used. It may also demand an understanding of system performance and potential fail states. Our people must be suitably trained and competent to operate or understand these tools.

To enable this understanding, we must be able to verify that our AI-enabled systems work as intended. While the ‘black box’ nature of some machine learning systems means that they are difficult to fully explain, we must be able to audit either the systems or their outputs to a level that satisfies those who are duly and formally responsible and accountable. Mechanisms to interpret and understand our systems must be a crucial and explicit part of system design across the entire lifecycle.

This requirement for context-specific understanding based on technically understandable systems must also reach beyond the MOD, to commercial suppliers, allied forces and civilians. Whilst absolute transparency as to the workings of each AI-enabled system is neither desirable nor practicable, public consent and collaboration depend on context-specific shared understanding. What our systems do, how we intend to use them, and our processes for ensuring beneficial outcomes result from their use should be as transparent as possible, within the necessary constraints of the national security context.

Published by The Ministry of Defence (MOD), United Kingdom in Ethical Principles for AI in Defence, Jun 15, 2022