6. Democracy

[QUESTIONS] How should AI research and its applications, at the institutional level, be controlled? In what areas would this be most pertinent? Who should decide, and according to which modalities, the norms and moral values determining this control? Who should establish ethical guidelines for self-driving cars? Should ethical labeling that respects certain standards be developed for AI, websites and businesses? [PRINCIPLES] The development of AI should promote informed participation in public life, cooperation and democratic debate.
Principle: The Montreal Declaration for a Responsible Development of Artificial Intelligence, Nov 3, 2017

Published by University of Montreal, Forum on the Socially Responsible Development of AI

Related Principles

· (6) Fairness, Accountability, and Transparency

Under the "AI Ready society", when AI is used, fair and transparent decision making and accountability for its results should be appropriately ensured, and trust in the technology should be secured, so that people using AI are not discriminated against on the grounds of their background or treated unjustly in a way that offends human dignity. Under the AI design concept, all people must be treated fairly, without unjustified discrimination on the grounds of diverse backgrounds such as race, sex, nationality, age, political beliefs, religion, etc. Appropriate explanations should be provided, according to the situation in which AI is used, covering matters such as the fact that AI is being used, the method of obtaining and using the data fed to the AI, and the mechanisms that ensure the appropriateness of the AI's results. So that people can understand and judge AI proposals, there should be appropriate opportunities for open dialogue on the use, adoption and operation of AI, as needed. To uphold the above viewpoints and to utilize AI safely in society, a mechanism must be established to secure trust in AI and in the data it uses.

Published by Cabinet Office, Government of Japan in Social Principles of Human-centric AI (Draft), Dec 27, 2018

(c) Responsibility

The principle of responsibility must be fundamental to AI research and application. ‘Autonomous’ systems should only be developed and used in ways that serve the global social and environmental good, as determined by the outcomes of deliberative democratic processes. This implies that they should be designed so that their effects align with a plurality of fundamental human values and rights. As the potential misuse of ‘autonomous’ technologies poses a major challenge, risk awareness and a precautionary approach are crucial. Applications of AI and robotics should not pose unacceptable risks of harm to human beings, and should not compromise human freedom and autonomy by illegitimately and surreptitiously reducing citizens’ options and knowledge. Instead, their development and use should be geared towards augmenting individuals’ access to knowledge and opportunities. Research, design and development of AI, robotics and ‘autonomous’ systems should be guided by an authentic concern for research ethics, the social accountability of developers, and global academic cooperation, in order to protect fundamental rights and values and to design technologies that support them rather than detract from them.

Published by European Group on Ethics in Science and New Technologies, European Commission in Ethical principles and democratic prerequisites, Mar 9, 2018

(e) Democracy

Key decisions on the regulation of AI development and application should be the result of democratic debate and public engagement. A spirit of global cooperation and public dialogue on the issue will ensure that these decisions are taken in an inclusive, informed, and farsighted manner. The right to receive education or to access information on new technologies and their ethical implications will help ensure that everyone understands the risks and opportunities and is empowered to participate in the decisional processes that crucially shape our future. The principles of human dignity and autonomy centrally involve the human right to self-determination through the means of democracy. Of key importance to our democratic political systems are value pluralism, diversity and the accommodation of a variety of conceptions of the good life of citizens. These must not be jeopardised, subverted or equalised by new technologies that inhibit or influence political decision making and infringe on the freedom of expression and the right to receive and impart information without interference. Digital technologies should rather be used to harness collective intelligence and to support and improve the civic processes on which our democratic societies depend.

Published by European Group on Ethics in Science and New Technologies, European Commission in Ethical principles and democratic prerequisites, Mar 9, 2018

Public Empowerment

Principle: The public’s ability to understand AI-enabled services, and how they work, is key to ensuring trust in the technology. Recommendations: “Algorithmic Literacy” must be a basic skill: whether it is the curating of information in social media platforms or self-driving cars, users need to be aware of, and have a basic understanding of, the role of algorithms and autonomous decision making. Such skills will also be important in shaping societal norms around the use of the technology, for example in identifying decisions that may not be suitable to delegate to an AI. Provide the public with information: while full transparency around a service’s machine learning techniques and training data is generally not advisable due to the security risk, the public should be provided with enough information to make it possible for people to question its outcomes.

Published by Internet Society, "Artificial Intelligence and Machine Learning: Policy Paper" in Guiding Principles and Recommendations, Apr 18, 2017

7. Principle of ethics

Developers should respect human dignity and individual autonomy in the R&D of AI systems. [Comment] It is encouraged that, when developing AI systems that link with the human brain and body, developers pay particular consideration to respecting human dignity and individual autonomy, in light of discussions on bioethics, etc. It is also encouraged that, to the extent possible given the characteristics of the technologies adopted, developers take the measures necessary to avoid causing unfair discrimination resulting from prejudice included in the learning data of the AI systems. It is advisable that developers take precautions to ensure that AI systems do not unduly infringe on the value of humanity, in light of international human rights law and international humanitarian law.

Published by Ministry of Internal Affairs and Communications (MIC), the Government of Japan in AI R&D Principles, Jul 28, 2017