Democracy

Digital technologies are of systemic relevance to the flourishing of democracy. They make it possible to shape new forms of political participation, but they also foster the emergence of threats such as manipulation and radicalisation.
Principle: Opinion of the Data Ethics Commission: General ethical and legal principles, Oct 10, 2019

Published by Data Ethics Commission, Germany

Related Principles

Preamble

Two of Deutsche Telekom’s most important goals are to remain a trusted companion and to enhance customer experience. As one of the leading ICT companies in Europe, we see it as our responsibility to foster the development of “intelligent technologies”. At least equally important, these technologies, such as AI, must follow predefined ethical rules. To define a corresponding ethical framework, we first need a common understanding of what AI means. Today there are several definitions of AI, such as the very first one by John McCarthy (1956): “Every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it.” In line with other companies and main players in the field of AI, we at DT think of AI as the imitation of human intelligence processes by machines, especially computer systems. These processes include learning, reasoning, and self-correction. After several decades, Artificial Intelligence has become one of the most intriguing topics of today – and of the future. It has become widely available and is discussed not only among experts but increasingly in public, in politics and beyond. AI has started to influence business (as a source of new market opportunities as well as an efficiency driver), society (e.g. the broad discussion about autonomously driving vehicles or AI as “job machine” vs. “job killer”) and the life of each individual (AI has already found its way into the living room, e.g. with voice-controlled digital assistants such as smart speakers). But the use of AI and its possibilities confront us not only with fast-developing technologies but also with the fact that our ethical roadmaps, based on human-to-human interactions, might not be sufficient in this new era of technological influence. New questions arise, and situations emerge that were previously unimaginable in our daily lives. We as DT also want to develop and make use of AI. This technology can bring many benefits, such as improved customer experience or greater simplicity. We are already in the game, e.g. with several AI-related projects running. With these comes an increased digital responsibility on our side to ensure that AI is used in an ethical manner. So we as DT have to give answers to our customers, shareholders and stakeholders. The following Digital Ethics guidelines state how we as Deutsche Telekom want to build the future with AI. For us, technology serves one main purpose: it must act in a supporting role. Thus AI is in any case supposed to extend and complement human abilities rather than lessen them. Remark: The impact of AI on jobs at DT, whether as a benefit and a source of value creation in the sense of job enrichment and enlargement, or in the sense of efficiency, is not the focus of these guidelines.

Published by Deutsche Telekom in Deutsche Telekom’s guidelines for artificial intelligence, May 11, 2018

(e) Democracy

Key decisions on the regulation of AI development and application should be the result of democratic debate and public engagement. A spirit of global cooperation and public dialogue on the issue will ensure that they are taken in an inclusive, informed, and farsighted manner. The right to receive education or access information on new technologies and their ethical implications will help ensure that everyone understands risks and opportunities and is empowered to participate in the decision-making processes that crucially shape our future. The principles of human dignity and autonomy centrally involve the human right to self-determination through the means of democracy. Of key importance to our democratic political systems are value pluralism, diversity and the accommodation of a variety of conceptions of the good life of citizens. They must not be jeopardised, subverted or equalised by new technologies that inhibit or influence political decision making and infringe on the freedom of expression and the right to receive and impart information without interference. Digital technologies should rather be used to harness collective intelligence and to support and improve the civic processes on which our democratic societies depend.

Published by European Group on Ethics in Science and New Technologies, European Commission in Ethical principles and democratic prerequisites, Mar 9, 2018

PREAMBLE

For the first time in human history, it is possible to create autonomous systems capable of performing complex tasks of which natural intelligence alone was thought capable: processing large quantities of information, calculating and predicting, learning and adapting responses to changing situations, and recognizing and classifying objects. Given the immaterial nature of these tasks, and by analogy with human intelligence, we designate these wide-ranging systems under the general name of artificial intelligence. Artificial intelligence constitutes a major form of scientific and technological progress, which can generate considerable social benefits by improving living conditions and health, facilitating justice, creating wealth, bolstering public safety, and mitigating the impact of human activities on the environment and the climate. Intelligent machines are not limited to performing better calculations than human beings; they can also interact with sentient beings, keep them company and take care of them. However, the development of artificial intelligence does pose major ethical challenges and social risks. Indeed, intelligent machines can restrict the choices of individuals and groups, lower living standards, disrupt the organization of labor and the job market, influence politics, clash with fundamental rights, exacerbate social and economic inequalities, and affect ecosystems, the climate and the environment. Although scientific progress, and living in a society, always carry a risk, it is up to the citizens to determine the moral and political ends that give meaning to the risks encountered in an uncertain world. The lower the risks of its deployment, the greater the benefits of artificial intelligence will be. The first danger of artificial intelligence development consists in giving the illusion that we can master the future through calculations. Reducing society to a series of numbers and ruling it through algorithmic procedures is an old pipe dream that still drives human ambitions. But when it comes to human affairs, tomorrow rarely resembles today, and numbers cannot determine what has moral value, nor what is socially desirable. The principles of the current declaration are like points on a moral compass that will help guide the development of artificial intelligence towards morally and socially desirable ends. They also offer an ethical framework that promotes internationally recognized human rights in the fields affected by the rollout of artificial intelligence. Taken as a whole, the principles articulated lay the foundation for cultivating social trust towards artificially intelligent systems. The principles of the current declaration rest on the common belief that human beings seek to grow as social beings endowed with sensations, thoughts and feelings, and strive to fulfill their potential by freely exercising their emotional, moral and intellectual capacities. It is incumbent on the various public and private stakeholders and policymakers at the local, national and international level to ensure that the development and deployment of artificial intelligence are compatible with the protection of fundamental human capacities and goals, and contribute toward their fuller realization. With this goal in mind, one must interpret the proposed principles in a coherent manner, while taking into account the specific social, cultural, political and legal contexts of their application.

Published by University of Montreal in The Montreal Declaration for a Responsible Development of Artificial Intelligence, Dec 4, 2018

7. We engage with the wider societal challenges of AI

While we have control, to a large extent, over the preceding areas, there are numerous emerging challenges that require a much broader discourse across industries, disciplines, borders, and cultural, philosophical, and religious traditions. These include, but are not limited to, questions concerning: economic impact, such as how industry and society can collaborate to prepare students and workers for an AI economy, and how society may need to adapt its means of economic redistribution, social safety, and economic development; social impact, such as the value and meaning of work for people and the potential role of AI software as social companions and caretakers; and normative questions around how AI should confront ethical dilemmas and which applications of AI, specifically with regard to security and safety, should be considered permissible. We look forward to making SAP one of many active voices in these debates by engaging with our AI Ethics Advisory Panel and a wide range of partnerships and initiatives.

Published by SAP in SAP's Guiding Principles for Artificial Intelligence, Sep 18, 2018

3.4 Fairness

The principle of fairness refers to the protection of rights and integrity against discrimination, especially discrimination against particularly sensitive categories (for example, persons with disabilities). Due to its versatility, the term itself has different interpretations in numerous spheres of social life. For example, in health care, the principle of fairness implies the prohibition of discrimination based on race, sex, gender, sexual orientation and gender identity, age, nationality, social origin, religion, political or other conviction, financial status, culture, language, health status, type of illness, mental or physical disability, as well as other personal characteristics that may be the cause of discrimination. Likewise, artificial intelligence systems must prevent discrimination when in use. The principle of fairness has both a substantive and a procedural dimension. The substantive dimension includes protection against unjustified bias, discrimination and stigmatization. Artificial intelligence systems should provide equal opportunities to all persons in terms of access to education, goods, services and technologies, and should prevent the deception of persons using artificial intelligence systems when decisions are made. The procedural dimension of fairness includes the ability to challenge decisions resulting from the operation of an artificial intelligence system, and effective legal protection against such decisions and against the persons responsible for the operation of the system. In order to fulfill this condition, clearly defined responsibilities must exist, and the decision-making process must be explained, clear and transparent. This reduces the possibility of misunderstanding or incomplete understanding of the purpose and goals of using these systems, that is, the potential denial of freedom of choice when choosing which system to use. The fair use of artificial intelligence systems can lead to an increase in fairness in society as a whole, as well as to a reduction of the differences that exist between individuals in terms of social, economic and educational status.

Published by Republic of Serbia in ETHICAL GUIDELINES FOR THE DEVELOPMENT, APPLICATION AND USE OF RELIABLE AND RESPONSIBLE ARTIFICIAL INTELLIGENCE, February 2023