3. Reward AI for ‘showing its workings’

Any AI system learning from bad examples could end up behaving in socially inappropriate ways – we have to remember that most AI today has no real understanding of what it is saying. Only broad listening and learning from diverse data sets will solve this. One approach is to build a reward mechanism into AI training: reinforcement learning measures should be based not just on what AI or robots do to achieve an outcome, but also on how well they align with human values in accomplishing that result.
Principle: The Ethics of Code: Developing AI for Business with Five Core Principles, Jun 27, 2017

Published by Sage

Related Principles

· (2) Education

In a society premised on AI, we have to avoid creating disparities, divisions, or socially vulnerable groups. Therefore, policy makers and managers of enterprises involved in AI must have an accurate understanding of AI, knowledge of its proper use in society, and AI ethics, taking into account the complexity of AI and the possibility that it can be misused intentionally. AI users should understand the outline of AI and be educated to use it properly, because AI is much more complicated than conventional tools already in use. On the other hand, from the viewpoint of AI's contributions to society, it is important for developers of AI to learn about the social sciences, business models, and ethics, including normative awareness and a wide range of the liberal arts, not to mention the bias that AI may generate. From this point of view, it is necessary to establish an educational environment that provides AI literacy, according to the following principles, equally to every person. To close the gap between people with a good knowledge of AI technology and those without it, opportunities for education such as AI literacy should be widely provided in early childhood education and in primary and secondary education. Opportunities to learn about AI should be provided for elderly people as well as the working generation. Our society needs an education scheme through which anyone can learn AI, mathematics, and data science, beyond the boundaries of the humanities and the sciences. Literacy education covers the following: 1) data used by AI are often contaminated by bias, 2) AI can easily generate unwanted bias in its use, and 3) issues of impartiality, fairness, and privacy protection are inherent in the actual use of AI.
In a society in which AI is widely used, the educational environment is expected to change from the current unilateral, uniform teaching style to one that matches the interests and skill level of each individual. Society will therefore probably share the view that the education system must keep changing toward this style, regardless of past successes of the existing system. In education it is especially important to avoid dropouts. For this, it is desirable to introduce an interactive educational environment that fully utilizes AI technologies and allows students to work together and feel a sense of accomplishment. To develop such an educational environment, it is desirable that companies and citizens work on their own initiative, rather than placing the burden on administrations and schools (teachers).

Published by Cabinet Office, Government of Japan in Social Principles of Human-centric AI, Dec 27, 2018

Preamble

Two of Deutsche Telekom’s most important goals are to remain a trusted companion and to enhance the customer experience. As one of the leading ICT companies in Europe, we see it as our responsibility to foster the development of “intelligent technologies”. At least equally important, these technologies, such as AI, must follow predefined ethical rules. To define a corresponding ethical framework, we first need a common understanding of what AI means. Today there are several definitions of AI, such as the very first one by John McCarthy (1956): “Every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it.” In line with other companies and main players in the field of AI, we at DT think of AI as the imitation of human intelligence processes by machines, especially computer systems. These processes include learning, reasoning, and self-correction. After several decades, artificial intelligence has become one of the most intriguing topics of today – and of the future. It has become widely available and is discussed not only among experts but increasingly in public, in politics, and elsewhere. AI has started to influence business (as a source of new market opportunities as well as an efficiency driver), society (e.g. the broad discussion about autonomously driving vehicles, or AI as “job machine” vs. “job killer”), and the life of each individual (AI has already found its way into the living room, e.g. with voice-controlled digital assistants such as smart speakers). But the use of AI and its possibilities confront us not only with fast-developing technologies but also with the fact that our ethical roadmaps, based on human-to-human interactions, might not be sufficient in this new era of technological influence. New questions arise, and situations emerge that were previously unimaginable in our daily lives. We as DT also want to develop and make use of AI.
This technology can bring many benefits by improving customer experience or simplicity. We are already in the game, e.g. with several AI-related projects running. With these comes an increased digital responsibility on our side to ensure that AI is used in an ethical manner. So we as DT have to give answers to our customers, shareholders, and stakeholders. The following Digital Ethics guidelines state how we as Deutsche Telekom want to build the future with AI. For us, technology serves one main purpose: it must act in support of people. Thus AI is in any case supposed to extend and complement human abilities rather than lessen them. Remark: the impact of AI on jobs at DT – whether as a benefit and for value creation in the sense of job enrichment and enlargement, or in the sense of efficiency gains – is not the focus of these guidelines.

Published by Deutsche Telekom in Deutsche Telekom’s guidelines for artificial intelligence, May 11, 2018

8. We foster the cooperative model.

We believe that human and machine intelligence are complementary, each bringing its own strengths to the table. While we believe in a people-first approach to human–machine collaboration, we recognize that humans can benefit from the strengths of AI to unlock a potential that neither human nor machine can unlock alone. We recognize the widespread fear that AI-enabled machines will outsmart human intelligence. We at Deutsche Telekom think differently. We know and believe in human strengths such as inspiration, intuition, sense-making, and empathy. But we also recognize the strengths of AI, such as data recall, processing speed, and analysis. By combining both, AI systems will help humans make better decisions and accomplish objectives more effectively and efficiently.

Published by Deutsche Telekom in Deutsche Telekom’s guidelines for artificial intelligence, May 11, 2018

(Preamble)

AI has arrived, and it is a technology with the potential to change the world as we know it. It is a technology that, as it develops, can generate enormous benefits and improve the quality of life of humanity. At the same time, AI opens the way to risky situations and raises questions about its use and its effects. At this point, it appears necessary to incorporate the ethical dimension to guide the development of AI and to distinguish its correct from its incorrect use. At IA Latam, we collaboratively build ethical criteria of self-adherence that help and guide all of us along the best possible path, always keeping a better planet for the new generations as our north star. As technologies advance, the ethical ramifications will become more relevant, and the conversation will no longer be about merely “complying” but rather about “doing the right thing and getting better”. For this reason, at IA LATAM we present below our first Declaration of Ethical Principles for Latin American AI, which we hope will be a starting point and a great help for all.

Published by IA Latam in Declaration Of Ethics For The Development And Use Of Artificial Intelligence (unofficial translation), Feb 8, 2019 (unconfirmed)

2. Transparent and explainable AI

We will be explicit about the kinds of personal and/or non-personal data our AI systems use, as well as about the purposes for which the data are used. When people interact directly with an AI system, we will make it transparent to users that this is the case. When AI systems take or support decisions, we will take the technical and organizational measures required to guarantee a level of understanding adequate to the application area. In any case, if the decisions significantly affect people's lives, we will ensure we understand the logic behind the conclusions. This also applies when we use third-party technology.

Published by Telefónica in AI Principles of Telefónica, Oct 30, 2018