Understanding the social and ethical implications of a field as complex as AI will only be possible through world-class scientific research and the inclusion of many voices. DeepMind Ethics & Society is governed by five Principles that seek to guarantee the rigour, transparency, and social accountability of its work.
Key decisions on the regulation of AI development and application should be the result of democratic debate and public engagement. A spirit of global cooperation and public dialogue on the issue will ensure that such decisions are taken in an inclusive, informed, and farsighted manner. The right to receive education or to access information on new technologies and their ethical implications will help ensure that everyone understands the risks and opportunities and is empowered to participate in the decision-making processes that crucially shape our future.
The principles of human dignity and autonomy centrally involve the human right to self-determination by means of democracy. Of key importance to our democratic political systems are value pluralism, diversity, and the accommodation of a variety of conceptions of the good life of citizens. These must not be jeopardised, subverted, or equalised by new technologies that inhibit or influence political decision-making and infringe on the freedom of expression and the right to receive and impart information without interference. Digital technologies should rather be used to harness collective intelligence and to support and improve the civic processes on which our democratic societies depend.
1.1 Responsible Design and Deployment
Published by: Information Technology Industry Council (ITI) in AI Policy Principles
We recognize our responsibility to integrate principles into the design of AI technologies, beyond compliance with existing laws. While the potential benefits to people and society are amazing, AI researchers, subject matter experts, and stakeholders should and do spend a great deal of time working to ensure the responsible design and deployment of AI systems. Highly autonomous AI systems must be designed consistent with international conventions that preserve human dignity, rights, and freedoms. As an industry, it is our responsibility to recognize the potential for use and misuse, the implications of such actions, and the responsibility and opportunity to take steps to avoid the reasonably predictable misuse of this technology by committing to ethics by design.
Artificial Intelligence ("AI") research focuses on the realization of AI: enabling computers to possess intelligence and to learn and act autonomously. AI will assume a significant role in the future of humankind across a wide range of areas, such as industry, medicine, education, culture, economics, politics, and government. However, it is undeniable that AI technologies can become detrimental to human society or conflict with public interests through abuse or misuse.
To ensure that AI research and development remains beneficial to human society, AI researchers, as highly specialized professionals, must act ethically and in accordance with their own conscience and acumen. AI researchers must listen attentively to the diverse views of society and learn from them with humility. As technology advances and society develops, AI researchers should consistently strive to develop and deepen their sense of ethics and morality independently.
The Japanese Society for Artificial Intelligence (JSAI) hereby formalizes the Ethical Guidelines to be applied by its members. These Ethical Guidelines shall serve as a moral foundation for JSAI members to become better aware of their social responsibilities and encourage effective communications with society. JSAI members shall undertake and comply with these guidelines.
7. We engage with the wider societal challenges of AI
While we have control, to a large extent, over the preceding areas, there are numerous emerging challenges that require a much broader discourse across industries, disciplines, borders, and cultural, philosophical, and religious traditions. These include, but are not limited to, questions concerning:
Economic impact, such as how industry and society can collaborate to prepare students and workers for an AI economy and how society may need to adapt means of economic redistribution, social safety, and economic development.
Social impact, such as the value and meaning of work for people and the potential role of AI software as social companions and caretakers.
Normative questions around how AI should confront ethical dilemmas and which applications of AI, specifically with regard to security and safety, should be considered permissible.
We look forward to making SAP one of many active voices in these debates by engaging with our AI Ethics Advisory Panel and a wide range of partnerships and initiatives.