· Focus on humans

Human control of AI should be mandatory and testable by regulators. AI should be developed with a focus on the human consequences as well as the economic benefits. A human impact review should be part of the AI development process, and a workplace plan for managing disruption and transitions should be part of the deployment process. Ongoing training in the workplace should be reinforced to help workers adapt. Governments should plan for transition support as jobs disappear or are significantly changed.
Principle: Toward a G20 Framework for Artificial Intelligence in the Workplace, Jul 19, 2018

Published by Centre for International Governance Innovation (CIGI), Canada

Related Principles

4. Human centricity

AI systems should respect human-centred values and pursue benefits for human society, including human beings’ well-being, nutrition, happiness, etc. It is key to ensure that people benefit from AI design, development, and deployment while being protected from potential harms. AI systems should be used to promote human well-being and ensure benefit for all. Especially in instances where AI systems are used to make decisions about humans or aid them, it is imperative that these systems are designed with human benefit in mind and do not take advantage of vulnerable individuals.

Human centricity should be incorporated throughout the AI system lifecycle, from design through development and deployment. Actions must be taken to understand how users interact with the AI system, how it is perceived, and whether any negative outcomes arise from its outputs. One example of how deployers can do this is to test the AI system with a small group of internal users from varied backgrounds and demographics and incorporate their feedback into the AI system.

AI systems should not be used for malicious purposes or to sway or deceive users into making decisions that are not beneficial to them or society. In this regard, developers and deployers (if developing or designing in-house) should also ensure that dark patterns are avoided. Dark patterns refer to the use of certain design techniques to manipulate users and trick them into making decisions that they would otherwise not have made. An example of a dark pattern is the use of default options that do not consider the end user’s interests, such as defaults for data sharing and tracking of the user’s other online activities.

As an extension of human centricity as a principle, it is also important to ensure that the adoption of AI systems and their deployment at scale do not unduly disrupt labour and job prospects without proper assessment. Deployers are encouraged to take up impact assessments to ensure a systematic and stakeholder-based review and to consider how jobs can be redesigned to incorporate the use of AI. The Personal Data Protection Commission of Singapore’s (PDPC) Guide on Job Redesign in the Age of AI provides useful guidance to assist organisations in considering the impact of AI on their employees, and how work tasks can be redesigned to help employees embrace AI and move towards higher-value tasks.

Published by ASEAN in ASEAN Guide on AI Governance and Ethics, 2024

· (4) Security

Positive utilization of AI means that many social systems will be automated, and the safety of those systems will be improved. On the other hand, within the scope of today's technologies, it is impossible for AI to respond appropriately to rare events or deliberate attacks. Therefore, there is a new security risk associated with the use of AI. Society should always be aware of the balance of benefits and risks, and should work to improve social safety and sustainability as a whole. Society must promote broad and deep research and development in AI (from immediate measures to deep understanding), such as the proper evaluation of risks in the utilization of AI and research to reduce risks. Society must also pay attention to risk management, including cybersecurity awareness. Society should always pay attention to sustainability in the use of AI. In particular, society should not be uniquely dependent on a single AI or a few specified AIs.

Published by Cabinet Office, Government of Japan in Social Principles of Human-centric AI, Dec 27, 2018

· 3.3 Workforce

There is concern that AI will result in job change, job loss, and/or worker displacement. While these concerns are understandable, it should be noted that most emerging AI technologies are designed to perform a specific task and to assist rather than replace human employees. This type of augmented intelligence means that a portion, but most likely not all, of an employee’s job could be replaced or made easier by AI. While the full impact of AI on jobs is not yet fully known, in terms of both jobs created and jobs displaced, an ability to adapt to rapid technological change is critical. We should leverage traditional human-centered resources as well as new career education models and newly developed AI technologies to assist both the existing workforce and the future workforce in successfully navigating career development and job transitions. Additionally, we must have public-private partnerships (PPPs) that significantly improve the delivery and effectiveness of lifelong career education and learning, inclusive of workforce adjustment programs. We must also prioritize the availability of job-driven training to meet the scale of need, targeting resources to programs that produce strong results.

Published by Information Technology Industry Council (ITI) in AI Policy Principles, Oct 24, 2017

First principle: Human Centricity

The impact of AI-enabled systems on humans must be assessed and considered, covering the full range of effects, both positive and negative, across the entire system lifecycle. Whether they are MOD personnel, civilians, or targets of military action, humans interacting with or affected by AI-enabled systems for Defence must be treated with respect. This means assessing and carefully considering the effects on humans of AI-enabled systems, taking full account of human diversity, and ensuring those effects are as positive as possible. These effects should prioritise human life and wellbeing, as well as wider concerns for humankind such as environmental impacts, while taking account of military necessity. This applies across all uses of AI-enabled systems, from the back office to the battlefield. The choice to develop and deploy AI systems is an ethical one, which must be taken with human implications in mind. It may be unethical to use certain systems where negative human impacts outweigh the benefits. Conversely, there may be a strong ethical case for the development and use of an AI system where it would be demonstrably beneficial or result in a more ethical outcome.

Published by The Ministry of Defence (MOD), United Kingdom in Ethical Principles for AI in Defence, Jun 15, 2022

4 Foster responsibility and accountability

Humans require clear, transparent specification of the tasks that systems can perform and the conditions under which they can achieve the desired level of performance; this helps to ensure that health care providers can use an AI technology responsibly. Although AI technologies perform specific tasks, it is the responsibility of human stakeholders to ensure that they can perform those tasks and that they are used under appropriate conditions.

Responsibility can be assured by application of “human warranty”, which implies evaluation by patients and clinicians in the development and deployment of AI technologies. In human warranty, regulatory principles are applied upstream and downstream of the algorithm by establishing points of human supervision. The critical points of supervision are identified by discussions among professionals, patients and designers. The goal is to ensure that the algorithm remains on a machine learning development path that is medically effective, can be interrogated and is ethically responsible; it involves active partnership with patients and the public, such as meaningful public consultation and debate (101). Ultimately, such work should be validated by regulatory agencies or other supervisory authorities.

When something does go wrong in the application of an AI technology, there should be accountability. Appropriate mechanisms should be adopted to ensure questioning by, and redress for, individuals and groups adversely affected by algorithmically informed decisions. This should include access to prompt, effective remedies and redress from governments and companies that deploy AI technologies for health care. Redress should include compensation, rehabilitation, restitution, sanctions where necessary and a guarantee of non-repetition.

The use of AI technologies in medicine requires attribution of responsibility within complex systems in which responsibility is distributed among numerous agents. When medical decisions by AI technologies harm individuals, responsibility and accountability processes should clearly identify the relative roles of manufacturers and clinical users in the harm. This is an evolving challenge and remains unsettled in the laws of most countries. Institutions have not only legal liability but also a duty to assume responsibility for decisions made by the algorithms they use, even if it is not feasible to explain in detail how the algorithms produce their results. To avoid diffusion of responsibility, in which “everybody’s problem becomes nobody’s responsibility”, a faultless responsibility model (“collective responsibility”), in which all the agents involved in the development and deployment of an AI technology are held responsible, can encourage all actors to act with integrity and minimize harm. In such a model, the actual intentions of each agent (or actor) or their ability to control an outcome are not considered.

Published by World Health Organization (WHO) in Key ethical principles for use of artificial intelligence for health, Jun 28, 2021