5. provide sufficient training so that government employees developing and using AI solutions have the responsible design, function, and implementation skills needed to make AI-based public services better

Principle: Responsible use of artificial intelligence (AI): Our guiding principles, 2019 (unconfirmed)

Published by Government of Canada

Related Principles

· (7) Innovation

To realize Society 5.0 and continuous innovation in which people evolve along with AI, it is necessary to look beyond national, industry–academia, and public–private borders, as well as race, sex, nationality, age, and political and religious beliefs. Beyond these boundaries, and from a global perspective, we must promote diversification and cooperation among industry, academia, and the public and private sectors through the development of human capabilities and technology. We must encourage mutual collaboration and partnership between universities, research institutions and the private sector, as well as the flexible movement of talent. To implement AI efficiently and securely in society, methods for confirming the quality and reliability of AI, and for the efficient collection and maintenance of the data AI utilizes, must be promoted. The establishment of AI engineering, which includes methods for the development, testing and operation of AI, should also be promoted. To ensure the sound development of AI technology, it is necessary to establish an accessible platform in which data from all fields can be mutually utilized across borders, with no monopolies, while ensuring privacy and security. In addition, research and development environments should be created in which computing resources and high-speed networks are shared and utilized, to promote international collaboration and accelerate AI research. To promote the implementation of AI technology, governments must pursue regulatory reform to reduce impeding factors in AI-related fields.

Published by Cabinet Office, Government of Japan in Social Principles of Human-centric AI, Dec 27, 2018

• Require Accountability for Ethical Design and Implementation

The social implications of computing have grown and will continue to expand as more people have access to implementations of AI. Public policy should work to identify and mitigate discrimination caused by the use of AI and encourage designing in protections against these harms.

[Recommendations]

• Standing for “Accountable Artificial Intelligence”: Governments, industry and academia should apply the Information Accountability Foundation’s principles to AI. Organizations implementing AI solutions should be able to demonstrate to regulators that they have the right processes, policies and resources in place to meet those principles.

• Transparent decisions: Governments should determine which AI implementations require algorithm explainability to mitigate discrimination and harm to individuals.

Published by Intel in AI public policy principles, Oct 18, 2017

2. Ensure inclusion of and for children

Strive for diversity amongst those who design, develop, collect and process data, implement, research, regulate and oversee AI systems. Adopt an inclusive design approach when developing AI products that will be used by children or impact them. Support meaningful child participation, both in AI policies and in the design and development processes.

Published by United Nations Children's Fund (UNICEF) and the Ministry of in Requirements for child-centred AI, Sep 16, 2020

6. Promote artificial intelligence that is responsive and sustainable

Responsiveness requires that designers, developers and users continuously, systematically and transparently examine an AI technology to determine whether it is responding adequately, appropriately and according to communicated expectations and requirements in the context in which it is used. Thus, identification of a health need requires that institutions and governments respond to that need and its context with appropriate technologies, with the aim of achieving the public interest in health protection and promotion. When an AI technology is ineffective or engenders dissatisfaction, the duty to be responsive requires an institutional process to resolve the problem, which may include terminating use of the technology. Responsiveness also requires that AI technologies be consistent with wider efforts to promote health systems and environmental and workplace sustainability. AI technologies should be introduced only if they can be fully integrated into and sustained within the health care system. Too often, especially in under-resourced health systems, new technologies are not used, or are not repaired or updated, thereby wasting scarce resources that could have been invested in proven interventions. Furthermore, AI systems should be designed to minimize their ecological footprint and increase energy efficiency, so that use of AI is consistent with society’s efforts to reduce the impact of human beings on the earth’s environment, ecosystems and climate. Sustainability also requires governments and companies to address anticipated disruptions to the workplace, including training of health care workers to adapt to the use of AI and potential job losses due to the use of automated systems for routine health care functions and administrative tasks.

Published by World Health Organization (WHO) in Key ethical principles for use of artificial intelligence for health, Jun 28, 2021