(h) Responsibility:

Developers and companies should take ethics into consideration when developing autonomous intelligent systems.
Principle: Suggested generic principles for the development, implementation and use of AI, Mar 21, 2019

Published by The Extended Working Group on Ethics of Artificial Intelligence (AI) of the World Commission on the Ethics of Scientific Knowledge and Technology (COMEST), UNESCO

Related Principles

(c) Responsibility

The principle of responsibility must be fundamental to AI research and application. ‘Autonomous’ systems should only be developed and used in ways that serve the global social and environmental good, as determined by outcomes of deliberative democratic processes. This implies that they should be designed so that their effects align with a plurality of fundamental human values and rights. As the potential misuse of ‘autonomous’ technologies poses a major challenge, risk awareness and a precautionary approach are crucial. Applications of AI and robotics should not pose unacceptable risks of harm to human beings, and not compromise human freedom and autonomy by illegitimately and surreptitiously reducing options for and knowledge of citizens. They should be geared instead in their development and use towards augmenting access to knowledge and access to opportunities for individuals. Research, design and development of AI, robotics and ‘autonomous’ systems should be guided by an authentic concern for research ethics, social accountability of developers, and global academic cooperation to protect fundamental rights and values and aim at designing technologies that support these, and not detract from them.

Published by European Group on Ethics in Science and New Technologies, European Commission in Ethical principles and democratic prerequisites, Mar 9, 2018

3. Principle 3 — Accountability

Issue: How can we assure that designers, manufacturers, owners, and operators of A/IS are responsible and accountable?

[Candidate Recommendations] To best address issues of responsibility and accountability:

1. Legislatures/courts should clarify issues of responsibility, culpability, liability, and accountability for A/IS where possible during development and deployment (so that manufacturers and users understand their rights and obligations).
2. Designers and developers of A/IS should remain aware of, and take into account when relevant, the diversity of existing cultural norms among the groups of users of these A/IS.
3. Multi-stakeholder ecosystems should be developed to help create norms (which can mature to best practices and laws) where they do not exist because A/IS-oriented technology and their impacts are too new (including representatives of civil society, law enforcement, insurers, manufacturers, engineers, lawyers, etc.).
4. Systems for registration and record keeping should be created so that it is always possible to find out who is legally responsible for a particular A/IS. Manufacturers/operators/owners of A/IS should register key, high-level parameters, including:
• Intended use
• Training data/training environment (if applicable)
• Sensors/real world data sources
• Algorithms
• Process graphs
• Model features (at various levels)
• User interfaces
• Actuators/outputs
• Optimization goal/loss function/reward function

Published by The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems in Ethically Aligned Design (v2): General Principles, (v1) Dec 13, 2016. (v2) Dec 12, 2017


IEEE endorses the principle that the design, development and implementation of autonomous and intelligent systems (A/IS) should be undertaken with consideration for the societal consequences and safe operation of systems with respect to:

Published by IEEE in Ethical Aspects of Autonomous and Intelligent Systems, Jun 24, 2019

· 1.1 Responsible Design and Deployment

We recognize our responsibility to integrate principles into the design of AI technologies, beyond compliance with existing laws. While the potential benefits to people and society are amazing, AI researchers, subject matter experts, and stakeholders should and do spend a great deal of time working to ensure the responsible design and deployment of AI systems. Highly autonomous AI systems must be designed consistent with international conventions that preserve human dignity, rights, and freedoms. As an industry, it is our responsibility to recognize potentials for use and misuse, the implications of such actions, and the responsibility and opportunity to take steps to avoid the reasonably predictable misuse of this technology by committing to ethics by design.

Published by Information Technology Industry Council (ITI) in AI Policy Principles, Oct 24, 2017

Ethical Considerations in Deployment and Design

Principle: AI system designers and builders need to apply a user-centric approach to the technology. They need to consider their collective responsibility in building AI systems that will not pose security risks to the Internet and Internet users.

Recommendations:
Adopt ethical standards: Adherence to the principles and standards of ethical considerations in the design of artificial intelligence should guide researchers and industry going forward.
Promote ethical considerations in innovation policies: Innovation policies should require adherence to ethical standards as a prerequisite for things like funding.

Published by Internet Society, "Artificial Intelligence and Machine Learning: Policy Paper" in Guiding Principles and Recommendations, Apr 18, 2017