5. Principle 5 — A/IS Technology Misuse and Awareness of It

Issue: How can we extend the benefits and minimize the risks of A/IS technology being misused?

Candidate Recommendations: Raise public awareness around the issues of potential A/IS technology misuse in an informed and measured way by:
1. Providing ethics education and security awareness that sensitizes society to the potential risks of misuse of A/IS (e.g., by providing “data privacy” warnings that some smart devices will collect their users’ personal data).
2. Delivering this education in scalable and effective ways, beginning with those having the greatest credibility and impact, and in ways that minimize generalized (i.e., nonproductive) fear about A/IS (e.g., via credible research institutions or think tanks, and via social media such as Facebook or YouTube).
3. Educating government, lawmakers, and enforcement agencies on these issues so that citizens can work collaboratively with them to avoid fear or confusion (e.g., in the same way police officers have given public safety lectures in schools for years, in the near future they could provide workshops on safe A/IS).
Principle: Ethically Aligned Design: General Principles (v1: Dec 13, 2016; v2: Dec 12, 2017)

Published by The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems

Related Principles

· (2) Education

In a society premised on AI, we must avoid creating disparities, divisions, or socially vulnerable groups. Policy makers and managers of the enterprises involved in AI must therefore have an accurate understanding of AI, knowledge of its proper use in society, and AI ethics, taking into account the complexity of AI and the possibility that it can be misused intentionally. AI users should understand the outline of AI and be educated to utilize it properly, because AI is far more complicated than conventional tools already in use. Conversely, from the viewpoint of AI’s contributions to society, it is important for the developers of AI to learn about the social sciences, business models, and ethics, including normative awareness, as well as a wide range of the liberal arts, not to mention the foundations that may be generated by AI. From this point of view, it is necessary to establish an educational environment that provides AI literacy, equally to every person, according to the following principles. To close the gap between people having a good knowledge of AI technology and those lacking it, opportunities for education such as AI literacy should be widely provided in early childhood education and in primary and secondary education. Opportunities to learn about AI should be provided for the elderly as well as for the working generation. Our society needs an education scheme by which anyone can learn AI, mathematics, and data science, beyond the traditional boundary between the humanities and the sciences. Literacy education should cover the following: 1) data used by AI is often contaminated by bias; 2) AI can easily generate unwanted bias in its use; and 3) the issues of impartiality, fairness, and privacy protection that are inherent to the actual use of AI.
In a society in which AI is widely used, the educational environment is expected to shift from the current unilateral, uniform teaching style to one that matches the interests and skill level of each individual. Society will therefore likely need to accept that the education system must change continually toward this style of education, regardless of the past successes of the existing system. In education, it is especially important to prevent dropouts. To that end, it is desirable to introduce an interactive educational environment that fully utilizes AI technologies and allows students to work together and feel a sense of accomplishment. To develop such an educational environment, it is desirable that companies and citizens take the initiative themselves, so as not to burden administrations and schools (teachers).

Published by Cabinet Office, Government of Japan in Social Principles of Human-centric AI (Draft), Dec 27, 2018

3. Artificial intelligence should not be used to diminish the data rights or privacy of individuals, families or communities.

Many of the hopes and the fears presently associated with AI are out of step with reality. The public and policymakers alike have a responsibility to understand the capabilities and limitations of this technology as it becomes an increasing part of our daily lives. This will require an awareness of when and where this technology is being deployed. Access to large quantities of data is one of the factors fuelling the current AI boom. The ways in which data is gathered and accessed need to be reconsidered, so that innovative companies, big and small, have fair and reasonable access to data, while citizens and consumers can also protect their privacy and personal agency in this changing world. Large companies which have control over vast quantities of data must be prevented from becoming overly powerful within this landscape. We call on the Government, with the Competition and Markets Authority, to review proactively the use and potential monopolisation of data by big technology companies operating in the UK.

Published by House of Lords, Select Committee on Artificial Intelligence in AI Code, Apr 16, 2018

Public Empowerment

Principle: The public’s ability to understand AI enabled services, and how they work, is key to ensuring trust in the technology.

Recommendations:
1. “Algorithmic Literacy” must be a basic skill: whether it is the curating of information in social media platforms or self driving cars, users need to be aware of, and have a basic understanding of, the role of algorithms and autonomous decision making. Such skills will also be important in shaping societal norms around the use of the technology, for example, identifying decisions that may not be suitable to delegate to an AI.
2. Provide the public with information: while full transparency around a service’s machine learning techniques and training data is generally not advisable due to the security risk, the public should be provided with enough information to make it possible for people to question its outcomes.

Published by Internet Society, "Artificial Intelligence and Machine Learning: Policy Paper" in Guiding Principles and Recommendations, Apr 18, 2017

5 DEMOCRATIC PARTICIPATION PRINCIPLE

AIS must meet intelligibility, justifiability, and accessibility criteria, and must be subjected to democratic scrutiny, debate, and control.
1) AIS processes that make decisions affecting a person’s life, quality of life, or reputation must be intelligible to their creators.
2) The decisions made by AIS affecting a person’s life, quality of life, or reputation should always be justifiable in a language that is understood by the people who use them or who are subjected to the consequences of their use. Justification consists in making transparent the most important factors and parameters shaping the decision, and should take the same form as the justification we would demand of a human making the same kind of decision.
3) The code for algorithms, whether public or private, must always be accessible to the relevant public authorities and stakeholders for verification and control purposes.
4) The discovery of AIS operating errors, unexpected or undesirable effects, security breaches, and data leaks must imperatively be reported to the relevant public authorities, stakeholders, and those affected by the situation.
5) In accordance with the transparency requirement for public decisions, the code for decision making algorithms used by public authorities must be accessible to all, with the exception of algorithms that present a high risk of serious danger if misused.
6) For public AIS that have a significant impact on the life of citizens, citizens should have the opportunity and skills to deliberate on the social parameters of these AIS, their objectives, and the limits of their use.
7) We must at all times be able to verify that AIS are doing what they were programmed for and what they are used for.
8) Any person using a service should know if a decision concerning them or affecting them was made by an AIS.
9) Any user of a service employing chatbots should be able to easily identify whether they are interacting with an AIS or a real person.
10) Artificial intelligence research should remain open and accessible to all.

Published by University of Montreal in The Montreal Declaration for a Responsible Development of Artificial Intelligence, Dec 4, 2018

8 PRUDENCE PRINCIPLE

Every person involved in AI development must exercise caution by anticipating, as far as possible, the adverse consequences of AIS use and by taking appropriate measures to avoid them.
1) It is necessary to develop mechanisms that consider the potential for the double use (beneficial and harmful) of AI research and AIS development (whether public or private) in order to limit harmful uses.
2) When the misuse of an AIS endangers public health or safety and has a high probability of occurrence, it is prudent to restrict open access to, and public dissemination of, its algorithm.
3) Before being placed on the market, and whether offered for a charge or for free, AIS must meet strict reliability, security, and integrity requirements and be subjected to tests that do not put people’s lives in danger, harm their quality of life, or negatively impact their reputation or psychological integrity. These tests must be open to the relevant public authorities and stakeholders.
4) The development of AIS must preempt the risks of user data misuse and protect the integrity and confidentiality of personal data.
5) The errors and flaws discovered in AIS and SAAD should be publicly shared, on a global scale, by public institutions and businesses in sectors that pose a significant danger to personal integrity and social organization.

Published by University of Montreal in The Montreal Declaration for a Responsible Development of Artificial Intelligence, Dec 4, 2018