16) Human Control
Publisher: Future of Life Institute (FLI), Beneficial AI 2017
Humans should choose how and whether to delegate decisions to AI systems, to accomplish human-chosen objectives.
The principle of autonomy implies the freedom of the human being. This translates into human responsibility and thus control over, and knowledge about, ‘autonomous’ systems, as they must not impair the freedom of human beings to set their own standards and norms and to live according to them. All ‘autonomous’ technologies must, hence, honour the human ability to choose whether, when and how to delegate decisions and actions to them. This also requires the transparency and predictability of ‘autonomous’ systems, without which users would not be able to intervene in or terminate them should they consider this morally required.
The principle of responsibility must be fundamental to AI research and application. ‘Autonomous’ systems should only be developed and used in ways that serve the global social and environmental good, as determined by outcomes of deliberative democratic processes. This implies that they should be designed so that their effects align with a plurality of fundamental human values and rights. As the potential misuse of ‘autonomous’ technologies poses a major challenge, risk awareness and a precautionary approach are crucial. Applications of AI and robotics should not pose unacceptable risks of harm to human beings, and not compromise human freedom and autonomy by illegitimately and surreptitiously reducing options for and knowledge of citizens. They should be geared instead in their development and use towards augmenting access to knowledge and access to opportunities for individuals.
Research, design and development of AI, robotics and ‘autonomous’ systems should be guided by an authentic concern for research ethics, social accountability of developers, and global academic cooperation to protect fundamental rights and values and aim at designing technologies that support these, and not detract from them.
(9) Responsibility for Humans
AI must keep humans safe, on the basis that such safety measures do not directly or indirectly harm human society. AI must help humans in the transformation toward what humanity is to become.
How can AI contribute to greater autonomy for human beings?
Must we fight against the phenomenon of attention-seeking which has accompanied advances in AI?
Should we be worried that humans prefer the company of AI to that of other humans or animals?
Can someone give informed consent when faced with increasingly complex autonomous technologies?
Must we limit the autonomy of intelligent computer systems? Should a human always make the final decision?
The development of AI should promote the autonomy of all human beings and control, in a responsible way, the autonomy of computer systems.
How do we ensure that the benefits of AI are available to everyone?
Must we fight against the concentration of power and wealth in the hands of a small number of AI companies?
What types of discrimination could AI create or exacerbate?
Should the development of AI be neutral or should it seek to reduce social and economic inequalities?
What types of legal decisions can we delegate to AI?
The development of AI should promote justice and seek to eliminate all types of discrimination, notably those linked to gender, age, mental or physical abilities, sexual orientation, ethnic or social origins, and religious beliefs.