The principle of responsibility must be fundamental to AI research and application. ‘Autonomous’ systems should only be developed and used in ways that serve the global social and environmental good, as determined by the outcomes of deliberative democratic processes. This implies that they should be designed so that their effects align with a plurality of fundamental human values and rights. Because the potential misuse of ‘autonomous’ technologies poses a major challenge, risk awareness and a precautionary approach are crucial. Applications of AI and robotics should not pose unacceptable risks of harm to human beings, and should not compromise human freedom and autonomy by illegitimately and surreptitiously reducing citizens’ options and knowledge. Instead, their development and use should be geared towards augmenting individuals’ access to knowledge and to opportunities.
Research, design and development of AI, robotics and ‘autonomous’ systems should be guided by an authentic concern for research ethics, social accountability of developers, and global academic cooperation to protect fundamental rights and values and aim at designing technologies that support these, and not detract from them.
4. The Principle of Justice: “Be Fair”
Published by: The European Commission’s High-Level Expert Group on Artificial Intelligence in Draft Ethics Guidelines for Trustworthy AI
For the purposes of these Guidelines, the principle of justice holds that the development, use, and regulation of AI systems must be fair. Developers and implementers need to ensure that individuals and minority groups remain free from bias, stigmatisation and discrimination. Additionally, the positives and negatives resulting from AI should be evenly distributed, avoiding placing vulnerable demographics in a position of greater vulnerability and striving for equal opportunity in terms of access to education, goods, services and technology amongst human beings, without discrimination. Justice also means that AI systems must provide users with effective redress if harm occurs, or an effective remedy if data practices are no longer aligned with human beings’ individual or collective preferences. Lastly, the principle of justice requires those developing or implementing AI to be held to high standards of accountability. Humans might benefit from procedures enabling the benchmarking of AI performance against (ethical) expectations.
3. Principle 3 — Accountability
Issue: How can we assure that designers, manufacturers, owners, and operators of A/IS are responsible and accountable?
To best address issues of responsibility and accountability:
1. Legislatures/courts should clarify issues of responsibility, culpability, liability, and accountability for A/IS, where possible, during development and deployment (so that manufacturers and users understand their rights and obligations).
2. Designers and developers of A/IS should remain aware of, and take into account when relevant, the diversity of existing cultural norms among the groups of users of these A/IS.
3. Multi-stakeholder ecosystems (including representatives of civil society, law enforcement, insurers, manufacturers, engineers, lawyers, etc.) should be developed to help create norms (which can mature into best practices and laws) where they do not yet exist because A/IS-oriented technology and its impacts are too new.
4. Systems for registration and record-keeping should be created so that it is always possible to find out who is legally responsible for a particular A/IS. Manufacturers/operators/owners of A/IS should register key, high-level parameters, including:
• Intended use
• Training data/training environment (if applicable)
• Sensors/real-world data sources
• Process graphs
• Model features (at various levels)
• User interfaces
• Actuators/outputs
• Optimization goal/loss function/reward function
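As one purely illustrative sketch (not part of any source document or standard schema), a registry entry covering the parameters listed above might be modelled as a simple record type; every field and class name here is an assumption:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class AISRegistrationRecord:
    """Hypothetical registration record for an A/IS.

    Mirrors the key, high-level parameters a manufacturer/operator/owner
    might register so that legal responsibility can always be traced.
    """
    system_id: str                  # unique registry identifier (assumed)
    responsible_party: str          # who is legally responsible
    intended_use: str
    training_data: Optional[str] = None          # training data/environment, if applicable
    sensors: List[str] = field(default_factory=list)         # sensors / real-world data sources
    process_graphs: List[str] = field(default_factory=list)
    model_features: List[str] = field(default_factory=list)  # at various levels
    user_interfaces: List[str] = field(default_factory=list)
    actuators: List[str] = field(default_factory=list)       # actuators/outputs
    optimization_goal: Optional[str] = None      # loss function / reward function

# Example entry for a fictional system:
record = AISRegistrationRecord(
    system_id="ais-0001",
    responsible_party="Example Manufacturer Ltd.",
    intended_use="Warehouse inventory robot",
    sensors=["lidar", "RGB camera"],
    optimization_goal="Minimize pick-and-place error rate",
)
```

The point of such a structure is only that each bullet above becomes a named, queryable field, so a regulator could look up who is responsible for a given system and on what basis it was designed.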
• Require Accountability for Ethical Design and Implementation
The social implications of computing have grown and will continue to expand as more people have access to implementations of AI. Public policy should work to identify and mitigate discrimination caused by the use of AI and encourage designing in protections against these harms.
• Standing for “Accountable Artificial Intelligence”: Governments, industry and academia should apply the Information Accountability Foundation’s principles to AI. Organizations implementing AI solutions should be able to demonstrate to regulators that they have the right processes, policies and resources in place to meet those principles.
• Transparent decisions: Governments should determine which AI implementations require algorithm explainability to mitigate discrimination and harm to individuals.
How should AI research and its applications, at the institutional level, be controlled?
In what areas would this be most pertinent?
Who should decide, and according to which modalities, the norms and moral values determining this control?
Who should establish ethical guidelines for self-driving cars?
Should ethical labeling that respects certain standards be developed for AI, websites and businesses?
The development of AI should promote informed participation in public life, cooperation and democratic debate.