Focus on humans

Human control of AI should be mandatory and testable by regulators. AI should be developed with a focus on the human consequences as well as the economic benefits. A human impact review should be part of the AI development process, and a workplace plan for managing disruption and transitions should be part of the deployment process. Ongoing workplace training should be reinforced to help workers adapt. Governments should plan for transition support as jobs disappear or are significantly changed.
Principle: Toward a G20 Framework for Artificial Intelligence in the Workplace, Jul 19, 2018

Published by Centre for International Governance Innovation (CIGI), Canada

Related Principles

(1) Human centric

Utilization of AI should not infringe upon fundamental human rights guaranteed by the Constitution and international norms. AI should be developed, utilized, and implemented in society to expand people's abilities and to allow diverse people to pursue their own diverse concepts of happiness. In an AI-utilizing society, it is desirable to implement appropriate mechanisms for literacy education and the promotion of proper use, so that people neither become over-dependent on AI nor exploit AI to improperly manipulate human decisions. AI can expand human abilities and creativity not only by replacing part of human tasks but also by assisting humans as an advanced instrument. When using AI, people must judge and decide for themselves how to use it. Appropriate stakeholders involved in the development, provision, and utilization of AI should be responsible for the results of AI utilization, depending on the nature of the issue. To avoid creating a digital divide and to allow all people to reap the benefits of AI regardless of their digital expertise, each stakeholder should take the user-friendliness of the system into consideration in the process of AI deployment.

Published by Cabinet Office, Government of Japan in Social Principles of Human-centric AI (Draft), Dec 27, 2018

(4) Security

Positive utilization of AI means that many social systems will be automated and their safety improved. On the other hand, within the scope of today's technologies, it is impossible for AI to respond appropriately to rare events or deliberate attacks, so the use of AI introduces new security risks. Society should always be aware of the balance of benefits and risks, and should work to improve social safety and sustainability as a whole. Society must promote broad and deep research and development in AI, from immediate measures to deeper understanding, such as the proper evaluation of the risks of AI utilization and research on reducing those risks. Society must also pay attention to risk management, including cybersecurity awareness. Society should always pay attention to sustainability in the use of AI and, in particular, should not be solely dependent on a single AI system or a few specific AI systems.

Published by Cabinet Office, Government of Japan in Social Principles of Human-centric AI (Draft), Dec 27, 2018

VI. Societal and environmental well-being

For AI to be trustworthy, its impact on the environment and other sentient beings should be taken into account. Ideally, all humans, including future generations, should benefit from biodiversity and a habitable environment. Sustainability and ecological responsibility of AI systems should hence be encouraged. The same applies to AI solutions addressing areas of global concern, such as the UN Sustainable Development Goals. Furthermore, the impact of AI systems should be considered not only from an individual perspective, but also from the perspective of society as a whole. The use of AI systems should be given careful consideration particularly in situations relating to the democratic process, including opinion formation, political decision making, and electoral contexts. Moreover, AI's social impact should be considered: while AI systems can be used to enhance social skills, they can equally contribute to their deterioration.

Published by European Commission in Key requirements for trustworthy AI, Apr 8, 2019

1. Artificial intelligence should be developed for the common good and benefit of humanity.

The UK must seek to actively shape AI's development and utilisation, or risk passively acquiescing to its many likely consequences. A shared ethical AI framework is needed to give clarity as to how AI can best be used to benefit individuals and society. By establishing these principles, the UK can lead by example in the international community. We recommend that the Government convene a global summit of governments, academia and industry to establish international norms for the design, development, regulation and deployment of artificial intelligence. The prejudices of the past must not be unwittingly built into automated systems, and such systems must be carefully designed from the beginning, with input from as diverse a group of people as possible.

Published by House of Lords, Select Committee on Artificial Intelligence in AI Code, Apr 16, 2018

3.3 Workforce

There is concern that AI will result in job change, job loss, and/or worker displacement. While these concerns are understandable, it should be noted that most emerging AI technologies are designed to perform a specific task and to assist rather than replace human employees. This type of augmented intelligence means that a portion, but most likely not all, of an employee's job could be replaced or made easier by AI. While the full impact of AI on jobs, in terms of both jobs created and jobs displaced, is not yet fully known, the ability to adapt to rapid technological change is critical. We should leverage traditional human-centered resources, as well as new career education models and newly developed AI technologies, to help both the existing workforce and the future workforce successfully navigate career development and job transitions. Additionally, we must have public private partnerships (PPPs) that significantly improve the delivery and effectiveness of lifelong career education and learning, inclusive of workforce adjustment programs. We must also prioritize the availability of job-driven training to meet the scale of need, targeting resources to programs that produce strong results.

Published by Information Technology Industry Council (ITI) in AI Policy Principles, Oct 24, 2017