2.3 Shaping an enabling policy environment for AI

a) Governments should promote a policy environment that supports an agile transition from the research and development stage to the deployment and operation stage for trustworthy AI systems. To this effect, they should consider using experimentation to provide a controlled environment in which AI systems can be tested, and scaled up, as appropriate.
b) Governments should review and adapt, as appropriate, their policy and regulatory frameworks and assessment mechanisms as they apply to AI systems to encourage innovation and competition for trustworthy AI.
Principle: OECD Principles on Artificial Intelligence, May 22, 2019

Published by The Organisation for Economic Co-operation and Development (OECD)

Related Principles

• 2.3 Shaping an enabling policy environment for AI

a) Governments should promote a policy environment that supports an agile transition from the research and development stage to the deployment and operation stage for trustworthy AI systems. To this effect, they should consider using experimentation to provide a controlled environment in which AI systems can be tested, and scaled up, as appropriate.
b) Governments should review and adapt, as appropriate, their policy and regulatory frameworks and assessment mechanisms as they apply to AI systems to encourage innovation and competition for trustworthy AI.

Published by G20 Ministerial Meeting on Trade and Digital Economy in G20 AI Principles, Jun 09, 2019

• Foster Innovation and Open Development

To better understand the impact of AI and explore the broad diversity of AI implementations, public policy should encourage investment in AI R&D. Governments should support the controlled testing of AI systems to help industry, academia, and other stakeholders improve the technology.
[Recommendations]
• Fuel AI innovation: Public policy should promote investment, make available funds for R&D, and address barriers to AI development and adoption.
• Address global societal challenges: AI-powered flagship initiatives should be funded to find solutions to the world's greatest challenges, such as curing cancer, ensuring food security, controlling climate change, and achieving inclusive economic growth.
• Allow for experimentation: Governments should create the conditions necessary for the controlled testing and experimentation of AI in the real world, such as designating self-driving test sites in cities.
• Prepare a workforce for AI: Governments should create incentives for students to pursue courses of study that will allow them to create the next generation of AI.
• Lead by example: Governments should lead the way in demonstrating the applications of AI in their interactions with citizens and invest sufficiently in infrastructure to support and deliver AI-based services.
• Partnering for AI: Governments should partner with industry, academia, and other stakeholders to promote AI and debate ways to maximize its benefits for the economy.

Published by Intel in AI public policy principles, Oct 18, 2017

6. Flexibility

When developing regulatory and non-regulatory approaches, agencies should pursue performance-based and flexible approaches that can adapt to rapid changes and updates to AI applications. Rigid, design-based regulations that attempt to prescribe the technical specifications of AI applications will in most cases be impractical and ineffective, given the anticipated pace with which AI will evolve and the resulting need for agencies to react to new information and evidence. Targeted agency conformity assessment schemes, to protect health and safety, privacy, and other values, will be essential to a successful, and flexible, performance-based approach. To advance American innovation, agencies should keep in mind international uses of AI, ensuring that American companies are not disadvantaged by the United States' regulatory regime.

Published by The White House Office of Science and Technology Policy (OSTP), United States in Principles for the Stewardship of AI Applications, Nov 17, 2020

6. Promote artificial intelligence that is responsive and sustainable

Responsiveness requires that designers, developers and users continuously, systematically and transparently examine an AI technology to determine whether it is responding adequately, appropriately and according to communicated expectations and requirements in the context in which it is used. Thus, identification of a health need requires that institutions and governments respond to that need and its context with appropriate technologies with the aim of achieving the public interest in health protection and promotion. When an AI technology is ineffective or engenders dissatisfaction, the duty to be responsive requires an institutional process to resolve the problem, which may include terminating use of the technology. Responsiveness also requires that AI technologies be consistent with wider efforts to promote health systems and environmental and workplace sustainability. AI technologies should be introduced only if they can be fully integrated and sustained in the health care system. Too often, especially in under-resourced health systems, new technologies are not used or are not repaired or updated, thereby wasting scarce resources that could have been invested in proven interventions. Furthermore, AI systems should be designed to minimize their ecological footprints and increase energy efficiency, so that use of AI is consistent with society's efforts to reduce the impact of human beings on the earth's environment, ecosystems and climate. Sustainability also requires governments and companies to address anticipated disruptions to the workplace, including training of health care workers to adapt to the use of AI and potential job losses due to the use of automated systems for routine health care functions and administrative tasks.

Published by World Health Organization (WHO) in Key ethical principles for use of artificial intelligence for health, Jun 28, 2021