The principle of responsibility must be fundamental to AI research and application. ‘Autonomous’ systems should only be developed and used in ways that serve the global social and environmental good, as determined by outcomes of deliberative democratic processes. This implies that they should be designed so that their effects align with a plurality of fundamental human values and rights. As the potential misuse of ‘autonomous’ technologies poses a major challenge, risk awareness and a precautionary approach are crucial. Applications of AI and robotics should not pose unacceptable risks of harm to human beings, and should not compromise human freedom and autonomy by illegitimately and surreptitiously reducing citizens’ options and knowledge. Their development and use should instead be geared towards augmenting access to knowledge and to opportunities for individuals.
Research, design and development of AI, robotics and ‘autonomous’ systems should be guided by an authentic concern for research ethics, social accountability of developers, and global academic cooperation to protect fundamental rights and values and aim at designing technologies that support these, and not detract from them.
2) Research Funding
Published by: Future of Life Institute (FLI), Asilomar AI Principles, Beneficial AI 2017
Investments in AI should be accompanied by funding for research on ensuring its beneficial use, including thorny questions in computer science, economics, law, ethics, and social studies, such as:
How can we make future AI systems highly robust, so that they do what we want without malfunctioning or getting hacked?
How can we grow our prosperity through automation while maintaining people’s resources and purpose?
How can we update our legal systems to be more fair and efficient, to keep pace with AI, and to manage the risks associated with AI?
What set of values should AI be aligned with, and what legal and ethical status should it have?
1. Principle 1 — Human Rights
Issue: How can we ensure that A/IS do not infringe upon human rights?
To best honor human rights, society must assure the safety and security of A/IS so that they are designed and operated in a way that benefits humans:
1. Governance frameworks, including standards and regulatory bodies, should be established to oversee processes assuring that the use of A/IS does not infringe upon human rights, freedoms, dignity, and privacy, and that traceability contributes to the building of public trust in A/IS.
2. A way to translate existing and forthcoming legal obligations into informed policy and technical considerations is needed. Such a method should allow for differing cultural norms as well as legal and regulatory frameworks.
3. For the foreseeable future, A/IS should not be granted rights and privileges equal to human rights: A/IS should always be subordinate to human judgment and control.
How can AI contribute to greater autonomy for human beings?
Must we fight against the phenomenon of attention-seeking which has accompanied advances in AI?
Should we be worried that humans prefer the company of AI to that of other humans or animals?
Can someone give informed consent when faced with increasingly complex autonomous technologies?
Must we limit the autonomy of intelligent computer systems? Should a human always make the final decision?
The development of AI should promote the autonomy of all human beings and control, in a responsible way, the autonomy of computer systems.
4 SOLIDARITY PRINCIPLE
The development of AIS must be compatible with maintaining the bonds of solidarity among people and generations.
1) AIS must not threaten the preservation of fulfilling moral and emotional human relationships, and should be developed with the goal of fostering these relationships and reducing people’s vulnerability and isolation.
2) AIS must be developed with the goal of collaborating with humans on complex tasks and should foster collaborative work between humans.
3) AIS should not be implemented to replace people in duties that require quality human relationships, but should be developed to facilitate these relationships.
4) Health care systems that use AIS must take into consideration the importance of a patient’s relationships with family and health care staff.
5) AIS development should not encourage cruel behavior toward robots designed to resemble human beings or non-human animals in appearance or behavior.
6) AIS should help improve risk management and foster conditions for a society with a more equitable and mutual distribution of individual and collective risks.