1) Research Goal

The goal of AI research should be to create not undirected intelligence, but beneficial intelligence.
Principle: Asilomar AI Principles, Jan 3-8, 2017

Published by Future of Life Institute (FLI), Beneficial AI 2017

Related Principles

Long-term Planning

Continuous research on the potential risks of Augmented Intelligence, Artificial General Intelligence (AGI) and Superintelligence should be encouraged. Strategic designs should be considered to ensure that AI will always be beneficial to society and nature in the future.

Published by Beijing Academy of Artificial Intelligence (BAAI); Peking University; Tsinghua University; Institute of Automation, Chinese Academy of Sciences; Institute of Computing Technology, Chinese Academy of Sciences; Artificial Intelligence Industry Innovation Strategy Alliance (AITISA); etc. in Beijing AI Principles, May 25, 2019

(c) Responsibility

The principle of responsibility must be fundamental to AI research and application. ‘Autonomous’ systems should only be developed and used in ways that serve the global social and environmental good, as determined by outcomes of deliberative democratic processes. This implies that they should be designed so that their effects align with a plurality of fundamental human values and rights. As the potential misuse of ‘autonomous’ technologies poses a major challenge, risk awareness and a precautionary approach are crucial. Applications of AI and robotics should not pose unacceptable risks of harm to human beings, and not compromise human freedom and autonomy by illegitimately and surreptitiously reducing options for and knowledge of citizens. They should be geared instead in their development and use towards augmenting access to knowledge and access to opportunities for individuals. Research, design and development of AI, robotics and ‘autonomous’ systems should be guided by an authentic concern for research ethics, social accountability of developers, and global academic cooperation to protect fundamental rights and values and aim at designing technologies that support these, and not detract from them.

Published by European Group on Ethics in Science and New Technologies, European Commission in Ethical principles and democratic prerequisites, Mar 9, 2018

5. The autonomous power to hurt, destroy or deceive human beings should never be vested in artificial intelligence.

There is a significant risk that well-intended AI research will be misused in ways which harm people. AI researchers and developers must consider the ethical implications of their work. The Cabinet Office's final Cyber Security & Technology Strategy must explicitly consider the risks of AI with respect to cyber security, and the Government should conduct further research into how to protect data sets from any attempts at data sabotage. The Government and Ofcom must commission research into the possible impact of AI on conventional and social media outlets, and investigate measures which might counteract the use of AI to mislead or distort public opinion as a matter of urgency.

Published by House of Lords, Select Committee on Artificial Intelligence in AI Code, Apr 16, 2018

1. The purpose of AI is to augment human intelligence

The purpose of AI and cognitive systems developed and applied by IBM is to augment – not replace – human intelligence. Our technology is and will be designed to enhance and extend human capability and potential. At IBM, we believe AI should make ALL of us better at our jobs, and that the benefits of the AI era should touch the many, not just the elite few. To that end, we are investing in initiatives to help the global workforce gain the skills needed to work in partnership with these technologies.

Published by IBM in Principles for Trust and Transparency, May 30, 2018

5. Knowledge

[QUESTIONS] Does the development of AI put critical thinking at risk? How do we minimize the dissemination of fake news or misleading information? Should research results on AI, whether positive or negative, be made available and accessible? Is it acceptable not to be informed that medical or legal advice has been given by a chatbot? How transparent should the internal decision-making processes of algorithms be? [PRINCIPLES] The development of AI should promote critical thinking and protect us from propaganda and manipulation.

Published by University of Montreal, Forum on the Socially Responsible Development of AI in The Montreal Declaration for a Responsible Development of Artificial Intelligence, Nov 3, 2017