· 3) Science Policy Link

There should be constructive and healthy exchange between AI researchers and policy-makers.
Principle: Asilomar AI Principles, Jan 3-8, 2017

Published by Future of Life Institute (FLI), Beneficial AI 2017

Related Principles

· Policy making

AI development policies and ethical norms to protect children's rights and interests should be studied and formulated. Research on the potential impact of AI on children should be strengthened; forward-looking codes of conduct, laws and regulations, and technical specifications should be formulated; and long-term follow-up studies and periodic assessment mechanisms should be established. The healthy development of AI in the direction of protecting and promoting children's rights and interests should be encouraged and supported.

Published by Beijing Academy of Artificial Intelligence (BAAI), Peking University, Tsinghua University and the Chinese Academy of Sciences, together with enterprises that focus on AI development, in Artificial Intelligence for Children: Beijing Principles, Sep 14, 2020

· 4) Research Culture

A culture of cooperation, trust, and transparency should be fostered among researchers and developers of AI.

Published by Future of Life Institute (FLI), Beneficial AI 2017 in Asilomar AI Principles, Jan 3-8, 2017

3. Principle 3 — Accountability

Issue: How can we assure that designers, manufacturers, owners, and operators of A/IS are responsible and accountable?

[Candidate Recommendations] To best address issues of responsibility and accountability:

1. Legislatures/courts should clarify issues of responsibility, culpability, liability, and accountability for A/IS where possible during development and deployment (so that manufacturers and users understand their rights and obligations).
2. Designers and developers of A/IS should remain aware of, and take into account when relevant, the diversity of existing cultural norms among the groups of users of these A/IS.
3. Multi-stakeholder ecosystems should be developed to help create norms (which can mature into best practices and laws) where they do not exist because A/IS-oriented technology and its impacts are too new (including representatives of civil society, law enforcement, insurers, manufacturers, engineers, lawyers, etc.).
4. Systems for registration and record-keeping should be created so that it is always possible to find out who is legally responsible for a particular A/IS. Manufacturers/operators/owners of A/IS should register key, high-level parameters, including:
• Intended use
• Training data/training environment (if applicable)
• Sensors/real-world data sources
• Algorithms
• Process graphs
• Model features (at various levels)
• User interfaces
• Actuators/outputs
• Optimization goal/loss function/reward function

Published by The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems in Ethically Aligned Design (v2): General Principles, (v1) Dec 13, 2016. (v2) Dec 12, 2017

4 SOLIDARITY PRINCIPLE

The development of AIS must be compatible with maintaining the bonds of solidarity among people and generations.

1) AIS must not threaten the preservation of fulfilling moral and emotional human relationships, and should be developed with the goal of fostering these relationships and reducing people's vulnerability and isolation.
2) AIS must be developed with the goal of collaborating with humans on complex tasks and should foster collaborative work between humans.
3) AIS should not be implemented to replace people in duties that require quality human relationships, but should be developed to facilitate these relationships.
4) Health care systems that use AIS must take into consideration the importance of a patient's relationships with family and health care staff.
5) AIS development should not encourage cruel behavior toward robots designed to resemble human beings or non-human animals in appearance or behavior.
6) AIS should help improve risk management and foster conditions for a society with a more equitable and mutual distribution of individual and collective risks.

Published by University of Montreal in The Montreal Declaration for a Responsible Development of Artificial Intelligence, Dec 4, 2018

· We will give AI systems human values and make them beneficial to society

1. Government will support research into the beneficial use of AI.
2. AI should be developed to align with human values and contribute to human flourishing.
3. Stakeholders throughout society should be involved in the development of AI and its governance.

Published by Smart Dubai in Dubai's AI Principles, Jan 08, 2019