(Conclusion)

Taking into consideration the principles above, the 40th International Conference of Data Protection and Privacy Commissioners calls for common governance principles on artificial intelligence to be established, fostering concerted international efforts in this field, in order to ensure that its development and use take place in accordance with ethics and human values, and respect human dignity. These common governance principles must be able to tackle the challenges raised by the rapid evolution of artificial intelligence technologies, on the basis of a multi-stakeholder approach in order to address all cross-sectoral issues at stake. They must take place at an international level, since the development of artificial intelligence is a cross-border phenomenon and may affect all humanity. The Conference should be involved in this international effort, working with and supporting general and sectoral authorities in other fields such as competition, market and consumer regulation.
Principle: Declaration on Ethics and Data Protection in Artificial Intelligence, Oct 23, 2018

Published by 40th International Conference of Data Protection and Privacy Commissioners (ICDPPC)

Related Principles

1. Artificial intelligence should be developed for the common good and benefit of humanity.

The UK must seek to actively shape AI's development and utilisation, or risk passively acquiescing to its many likely consequences. A shared ethical AI framework is needed to give clarity as to how AI can best be used to benefit individuals and society. By establishing these principles, the UK can lead by example in the international community. We recommend that the Government convene a global summit of governments, academia and industry to establish international norms for the design, development, regulation and deployment of artificial intelligence. The prejudices of the past must not be unwittingly built into automated systems, and such systems must be carefully designed from the beginning, with input from as diverse a group of people as possible.

Published by House of Lords, Select Committee on Artificial Intelligence in AI Code, Apr 16, 2018

7. Principle of ethics

Developers should respect human dignity and individual autonomy in R&D of AI systems. [Comment] It is encouraged that, when developing AI systems that link with the human brain and body, developers pay particularly due consideration to respecting human dignity and individual autonomy, in light of discussions on bioethics, etc. It is also encouraged that, to the extent possible in light of the characteristics of the technologies to be adopted, developers make efforts to take necessary measures so as not to cause unfair discrimination resulting from prejudice included in the learning data of the AI systems. It is advisable that developers take precautions to ensure that AI systems do not unduly infringe the value of humanity, based on the International Human Rights Law and the International Humanitarian Law.

Published by Ministry of Internal Affairs and Communications (MIC), the Government of Japan in AI R&D Principles, Jul 28, 2017

PREAMBLE

For the first time in human history, it is possible to create autonomous systems capable of performing complex tasks of which natural intelligence alone was thought capable: processing large quantities of information, calculating and predicting, learning and adapting responses to changing situations, and recognizing and classifying objects. Given the immaterial nature of these tasks, and by analogy with human intelligence, we designate these wide-ranging systems under the general name of artificial intelligence.

Artificial intelligence constitutes a major form of scientific and technological progress, which can generate considerable social benefits by improving living conditions and health, facilitating justice, creating wealth, bolstering public safety, and mitigating the impact of human activities on the environment and the climate. Intelligent machines are not limited to performing better calculations than human beings; they can also interact with sentient beings, keep them company and take care of them.

However, the development of artificial intelligence does pose major ethical challenges and social risks. Indeed, intelligent machines can restrict the choices of individuals and groups, lower living standards, disrupt the organization of labor and the job market, influence politics, clash with fundamental rights, exacerbate social and economic inequalities, and affect ecosystems, the climate and the environment. Although scientific progress, and living in a society, always carry a risk, it is up to the citizens to determine the moral and political ends that give meaning to the risks encountered in an uncertain world. The lower the risks of its deployment, the greater the benefits of artificial intelligence will be. The first danger of artificial intelligence development consists in giving the illusion that we can master the future through calculations. Reducing society to a series of numbers and ruling it through algorithmic procedures is an old pipe dream that still drives human ambitions. But when it comes to human affairs, tomorrow rarely resembles today, and numbers cannot determine what has moral value, nor what is socially desirable.

The principles of the current declaration are like points on a moral compass that will help guide the development of artificial intelligence towards morally and socially desirable ends. They also offer an ethical framework that promotes internationally recognized human rights in the fields affected by the rollout of artificial intelligence. Taken as a whole, the principles articulated lay the foundation for cultivating social trust towards artificially intelligent systems.

The principles of the current declaration rest on the common belief that human beings seek to grow as social beings endowed with sensations, thoughts and feelings, and strive to fulfill their potential by freely exercising their emotional, moral and intellectual capacities. It is incumbent on the various public and private stakeholders and policymakers at the local, national and international level to ensure that the development and deployment of artificial intelligence are compatible with the protection of fundamental human capacities and goals, and contribute toward their fuller realization. With this goal in mind, one must interpret the proposed principles in a coherent manner, while taking into account the specific social, cultural, political and legal contexts of their application.

Published by University of Montreal in The Montreal Declaration for a Responsible Development of Artificial Intelligence, Dec 4, 2018

(Preamble)

...To achieve these objectives, we must set out from the very beginning of each algorithm’s development with an “algor-ethical” vision, i.e. an approach of ethics by design. Designing and planning AI systems that we can trust involves seeking a consensus among political decision-makers, UN system agencies and other intergovernmental organizations, researchers, the world of academia and representatives of non-governmental organizations regarding the ethical principles that should be built into these technologies. For this reason, the sponsors of the call express their desire to work together, in this context and at a national and international level, to promote “algor-ethics”, namely the ethical use of AI as defined by the following principles:

Published by The Pontifical Academy for Life, Microsoft, IBM, FAO, and the Italian Government in Rome Call for AI Ethics, Feb 28, 2020

8. Open cooperation

The development of artificial intelligence requires the concerted efforts of all countries and all parties, and we should actively establish norms and standards for the safe development of artificial intelligence at the international level, so as to avoid the security risks caused by incompatibility between technologies and policies.

Published by Shanghai Advisory Committee of Experts on Artificial Intelligence Industry Security in Shanghai Initiative for the Safe Development of Artificial Intelligence, Aug 30, 2019