7. Secure a Just Transition and Ensure Support for Fundamental Freedoms and Rights

As AI systems develop and augmented realities are formed, workers and work tasks will be displaced. To ensure a just transition, as well as sustainable future developments, it is vital that corporate policies are put in place that ensure corporate accountability in relation to this displacement, such as retraining programmes and opportunities to change jobs. Governmental measures to help displaced workers retrain and find new employment are additionally required. AI systems, coupled with the wider transition to the digital economy, will require that workers at all levels and in all occupations have access to social security and to continuous lifelong learning in order to remain employable. It is the responsibility of states and companies to find solutions that provide all workers, in all forms of work, with the right to, and access to, both. In addition, in a world where the casualisation or individualisation of work is rising, all workers in all forms of work must have the same strong social and fundamental rights. All AI systems must include a check and balance on whether their deployment and augmentation go hand in hand with workers’ rights as laid out in human rights law, ILO conventions and collective agreements. An algorithm “8798”, reflecting the core ILO Conventions 87 and 98, built into the system could serve that very purpose. Upon failure, the system must be shut down.
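The “8798” check the principle proposes could, in practice, take the form of a compliance gate that the system must pass before operating, shutting down on failure. A minimal sketch of such a gate follows; every class, function, and check name here is a hypothetical illustration of the idea, not an existing standard or API.

```python
# Hypothetical sketch of the "8798" compliance gate described in the
# principle: before the AI system runs, verify that its deployment
# respects the rights in ILO Convention 87 (freedom of association)
# and Convention 98 (collective bargaining); on failure, shut down.
# All names are illustrative assumptions, not a real standard.

from dataclasses import dataclass


@dataclass
class RightsCheck:
    convention: str   # e.g. "ILO C87", "ILO C98"
    description: str
    passed: bool


class ComplianceFailure(Exception):
    """Raised when a workers' rights check fails; the caller must halt the system."""


def run_8798_gate(checks: list[RightsCheck]) -> bool:
    """Return True only if every check passes; otherwise raise, forcing shutdown."""
    failed = [c for c in checks if not c.passed]
    if failed:
        names = ", ".join(c.convention for c in failed)
        raise ComplianceFailure(f"Rights checks failed: {names}; system must shut down")
    return True


# Example: one check fails, so the gate demands shutdown.
checks = [
    RightsCheck("ILO C87", "Freedom of association respected", True),
    RightsCheck("ILO C98", "Collective bargaining not undermined", False),
]
try:
    run_8798_gate(checks)
    system_running = True
except ComplianceFailure:
    system_running = False   # the principle requires shutdown on failure
```

The design choice the sketch illustrates is that compliance is a hard gate rather than a logged warning: a failed check raises, and the only valid response is to stop the system, mirroring the principle's "upon failure, the system must be shut down".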
Principle: Top 10 Principles For Ethical Artificial Intelligence, Dec 11, 2017

Published by UNI Global Union

Related Principles

4. Human centricity

AI systems should respect human-centred values and pursue benefits for human society, including human beings’ well-being, nutrition, happiness, etc. It is key to ensure that people benefit from AI design, development, and deployment while being protected from potential harms. AI systems should be used to promote human well-being and ensure benefit for all. Especially in instances where AI systems are used to make decisions about humans or aid them, it is imperative that these systems are designed with human benefit in mind and do not take advantage of vulnerable individuals. Human centricity should be incorporated throughout the AI system lifecycle, from design through development and deployment.

Actions must be taken to understand the way users interact with the AI system, how it is perceived, and whether any negative outcomes arise from its outputs. One example of how deployers can do this is to test the AI system with a small group of internal users from varied backgrounds and demographics and incorporate their feedback in the AI system. AI systems should not be used for malicious purposes or to sway or deceive users into making decisions that are not beneficial to them or society. In this regard, developers and deployers (if developing or designing in-house) should also ensure that dark patterns are avoided. Dark patterns refer to the use of certain design techniques to manipulate users and trick them into making decisions that they would otherwise not have made. An example of a dark pattern is employing default options that do not consider the end user’s interests, such as for data sharing and tracking of the user’s other online activities.

As an extension of human centricity as a principle, it is also important to ensure that the adoption of AI systems and their deployment at scale do not unduly disrupt labour and job prospects without proper assessment. Deployers are encouraged to take up impact assessments to ensure a systematic, stakeholder-based review and consider how jobs can be redesigned to incorporate use of AI. The Personal Data Protection Commission of Singapore’s (PDPC) Guide on Job Redesign in the Age of AI provides useful guidance to assist organisations in considering the impact of AI on their employees, and how work tasks can be redesigned to help employees embrace AI and move towards higher-value tasks.

Published by ASEAN in ASEAN Guide on AI Governance and Ethics, 2021

· 2.4. Building human capacity and preparing for labor market transformation

a) Governments should work closely with stakeholders to prepare for the transformation of the world of work and of society. They should empower people to effectively use and interact with AI systems across the breadth of applications, including by equipping them with the necessary skills.
b) Governments should take steps, including through social dialogue, to ensure a fair transition for workers as AI is deployed, such as through training programs along the working life, support for those affected by displacement, and access to new opportunities in the labor market.
c) Governments should also work closely with stakeholders to promote the responsible use of AI at work, to enhance the safety of workers and the quality of jobs, to foster entrepreneurship and productivity, and aim to ensure that the benefits from AI are broadly and fairly shared.

Published by G20 Ministerial Meeting on Trade and Digital Economy in G20 AI Principles, Jun 09, 2019

4. All citizens have the right to be educated to enable them to flourish mentally, emotionally and economically alongside artificial intelligence.

We welcome the measures to increase the number of computer science teachers in secondary schools, and we urge the Government to ensure that there is support for teachers of associated skills and subjects, such as mathematics, to retrain. At earlier stages of education, children need to be adequately prepared for working with, and using, AI. For all children, the basic knowledge and understanding necessary to navigate an AI-driven world will be essential. AI will have significant implications for the ways in which society lives and works. AI may accelerate the digital disruption in the jobs market. Many jobs will be enhanced by AI, many will disappear, and many new, as yet unknown, jobs will be created. A significant Government investment in skills and training is needed if this disruption is to be navigated successfully and to the benefit of the working population and national productivity growth.

Published by House of Lords of United Kingdom, Select Committee on Artificial Intelligence in AI Code, Apr 16, 2018

· 2.4. Building human capacity and preparing for labor market transformation

a) Governments should work closely with stakeholders to prepare for the transformation of the world of work and of society. They should empower people to effectively use and interact with AI systems across the breadth of applications, including by equipping them with the necessary skills.
b) Governments should take steps, including through social dialogue, to ensure a fair transition for workers as AI is deployed, such as through training programs along the working life, support for those affected by displacement, and access to new opportunities in the labor market.
c) Governments should also work closely with stakeholders to promote the responsible use of AI at work, to enhance the safety of workers and the quality of jobs, to foster entrepreneurship and productivity, and aim to ensure that the benefits from AI are broadly and fairly shared.

Published by The Organisation for Economic Co-operation and Development (OECD) in OECD Principles on Artificial Intelligence, May 22, 2019

4. Adopt a Human In Command Approach

An absolute precondition is that the development of AI must be responsible, safe and useful, where machines maintain the legal status of tools, and legal persons retain control over, and responsibility for, these machines at all times. This entails that AI systems should be designed and operated to comply with existing law, including privacy law. Workers should have the right to access, manage and control the data AI systems generate, given said systems’ power to analyse and utilise that data (see principle 1 in “Top 10 principles for workers’ data privacy and protection”). Workers must also have the ‘right of explanation’ when AI systems are used in human resource procedures, such as recruitment, promotion or dismissal.

Published by UNI Global Union in Top 10 Principles For Ethical Artificial Intelligence, Dec 11, 2017