· Energy conservation

The development and use of AI should reduce its own energy consumption as much as possible while meeting specific needs. AI applications with high energy consumption should follow the necessary principles, and alternative solutions with relatively low energy consumption should be actively explored: for example, simplifying the AI model within an acceptable range of loss, optimizing the model training method, and co-designing software and hardware to achieve the expected level of intelligence with low energy consumption. Potential approaches include, but are not limited to, those that are smart and low in energy consumption. The type and application characteristics of electric power should be considered comprehensively and its rational use planned; where appropriate, the use of green power is encouraged, and continuous efforts should be made to reduce the cost of power storage. It is encouraged to train AI models and algorithms and deploy AI applications in areas where electric power is relatively cheap, and to adopt solutions that consume as little energy as possible for server cooling.
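To make "simplifying the AI model within an acceptable range of loss" concrete, here is a minimal, purely illustrative Python sketch using PyTorch dynamic quantization; the evaluation callback, the toy model, and the 1% accuracy budget are assumptions for illustration, not part of the principle.

```python
# Illustrative only: compress a model with int8 dynamic quantization and keep
# the compressed version if the accuracy drop stays within an assumed budget.
import torch
import torch.nn as nn

def quantize_if_acceptable(model: nn.Module, evaluate, max_loss: float = 0.01):
    """Return an int8-quantized model if accuracy drops by no more than
    `max_loss` (an assumed, application-specific budget); otherwise keep
    the original model."""
    baseline = evaluate(model)  # caller-supplied accuracy metric in [0, 1]
    quantized = torch.quantization.quantize_dynamic(
        model, {nn.Linear}, dtype=torch.qint8  # quantize only Linear layers
    )
    if baseline - evaluate(quantized) <= max_loss:
        return quantized  # smaller weights: less memory and energy per inference
    return model

# Usage with a toy model and a stand-in evaluation function (both assumptions).
net = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 10))
net = quantize_if_acceptable(net, evaluate=lambda m: 0.90)
```

In practice, the evaluation function would run the model on a held-out validation set, and the loss budget would be set by the application's accuracy requirements.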
Principle: Principles on AI for Climate Action, April 26, 2022

Published by International Research Center for AI Ethics and Governance, Institute of Automation, Chinese Academy of Sciences and 10 other entities

Related Principles

4. Human centricity

AI systems should respect human-centred values and pursue benefits for human society, including human beings' well-being, nutrition, happiness, etc. It is key to ensure that people benefit from AI design, development, and deployment while being protected from potential harms. AI systems should be used to promote human well-being and ensure benefit for all. Especially in instances where AI systems are used to make decisions about humans or aid them, it is imperative that these systems are designed with human benefit in mind and do not take advantage of vulnerable individuals.

Human centricity should be incorporated throughout the AI system lifecycle, from design through development and deployment. Actions must be taken to understand how users interact with the AI system, how it is perceived, and whether any negative outcomes arise from its outputs. One example of how deployers can do this is to test the AI system with a small group of internal users from varied backgrounds and demographics and incorporate their feedback into the AI system.

AI systems should not be used for malicious purposes or to sway or deceive users into making decisions that are not beneficial to them or society. In this regard, developers and deployers (if developing or designing in-house) should also ensure that dark patterns are avoided. Dark patterns refer to the use of certain design techniques to manipulate users and trick them into making decisions that they would otherwise not have made. An example of a dark pattern is the use of default options that do not consider the end user's interests, such as for data sharing and tracking of the user's other online activities.

As an extension of human centricity as a principle, it is also important to ensure that the adoption of AI systems and their deployment at scale do not unduly disrupt labour and job prospects without proper assessment. Deployers are encouraged to take up impact assessments to ensure a systematic and stakeholder-based review and to consider how jobs can be redesigned to incorporate the use of AI. The Personal Data Protection Commission of Singapore's (PDPC) Guide on Job Redesign in the Age of AI provides useful guidance to assist organisations in considering the impact of AI on their employees, and how work tasks can be redesigned to help employees embrace AI and move towards higher-value tasks.

Published by ASEAN in ASEAN Guide on AI Governance and Ethics, 2024

3. Principle of controllability

Developers should pay attention to the controllability of AI systems.

[Comment] In order to assess the risks related to the controllability of AI systems, developers are encouraged to make efforts to conduct verification and validation in advance. One conceivable method of risk assessment is to conduct experiments in a closed space, such as a laboratory or a sandbox in which security is ensured, at a stage before practical application in society. In addition, in order to ensure the controllability of AI systems, developers are encouraged to pay attention to whether supervision (such as monitoring or warnings) and countermeasures (such as system shutdown, cut-off from networks, or repairs) by humans or other trustworthy AI systems are effective, to the extent possible in light of the characteristics of the technologies adopted.

[Note] Verification and validation are methods for evaluating and controlling risks in advance. Generally, the former is used for confirming formal consistency, while the latter is used for confirming substantial validity. (See, e.g., The Future of Life Institute (FLI), Research Priorities for Robust and Beneficial Artificial Intelligence (2015).)

[Note] Examples of risks to consider in the risk assessment are the risk of reward hacking, in which AI systems formally achieve the goals assigned but substantially do not meet the developer's intent, and the risk that AI systems work in ways the developers did not intend because their outputs and programs change as they learn during use. For reward hacking, see, e.g., Dario Amodei, Chris Olah, Jacob Steinhardt, Paul Christiano, John Schulman & Dan Mané, Concrete Problems in AI Safety, arXiv:1606.06565 [cs.AI] (2016).
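As a rough illustration of the supervision and countermeasures the comment above mentions (monitoring, warnings, system shutdown), here is a minimal Python sketch; the class name, anomaly threshold, and anomaly check are hypothetical assumptions, not anything prescribed by the principle.

```python
# Hypothetical sketch: wrap an AI component with monitoring (warnings) and a
# shutdown countermeasure, echoing the supervision described in the [Comment].
class SupervisedAISystem:
    def __init__(self, predict, max_anomalies=3):
        self.predict = predict              # the underlying AI system (assumed callable)
        self.max_anomalies = max_anomalies  # assumed threshold before shutdown
        self.anomalies = 0
        self.enabled = True

    def step(self, x):
        if not self.enabled:
            raise RuntimeError("system was shut down by its supervisor")
        y = self.predict(x)
        if self.is_anomalous(y):            # monitoring: flag unexpected outputs
            self.anomalies += 1
            print(f"warning: anomalous output {self.anomalies}/{self.max_anomalies}")
            if self.anomalies >= self.max_anomalies:
                self.enabled = False        # countermeasure: system shutdown
        return y

    def is_anomalous(self, y):
        # Placeholder check; real validation criteria are domain-specific.
        return not isinstance(y, (int, float)) or abs(y) > 1e6
```

In a real deployment, the anomaly check would be replaced by domain-specific verification and validation, and the countermeasure might instead cut the system off from networks or trigger repairs, as the comment suggests.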

Published by Ministry of Internal Affairs and Communications (MIC), the Government of Japan in AI R&D Principles, Jul 28, 2017

· Build and Validate:

1. The models and algorithms must have, as their ultimate goal, a result linked to a socially recognized end, with the ability to demonstrate how the expected results relate to that social or environmental purpose through transformative and impactful benefits where applicable.
2. It is best practice to measure and maintain acceptable levels of resource usage and energy consumption during this phase, setting the tone that AI systems not only strive to foster AI solutions that address global concerns relating to social and environmental issues but also practice sustainable and ecological responsibility.
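As a concrete (and purely illustrative) way to act on the second point, the following Python sketch logs the wall-clock time and peak memory of a training step so resource use can be checked against a budget; the function names and the stand-in workload are assumptions, and actual energy metering would require platform-specific tooling beyond the standard library.

```python
# Illustrative sketch: record duration and peak Python memory for a build or
# validation step, so resource usage stays measurable during this phase.
import time
import tracemalloc

def measure(step, *args, **kwargs):
    """Run `step` and report its duration and peak traced memory use."""
    tracemalloc.start()
    t0 = time.perf_counter()
    result = step(*args, **kwargs)
    elapsed = time.perf_counter() - t0
    _, peak = tracemalloc.get_traced_memory()  # (current, peak) in bytes
    tracemalloc.stop()
    print(f"duration: {elapsed:.2f}s, peak memory: {peak / 1e6:.1f} MB")
    return result

# Usage with a stand-in training step (an assumption for illustration).
measure(lambda: sum(i * i for i in range(10**6)))
```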

Published by SDAIA in AI Ethics Principles, Sept 14, 2022

5. Benefits and Costs

When developing regulatory and non-regulatory approaches, agencies will often consider the application and deployment of AI into already-regulated industries. Presumably, such significant investments would not occur unless they offered significant economic potential. As in all technological transitions of this nature, the introduction of AI may also create unique challenges. For example, while the broader legal environment already applies to AI applications, the application of existing law to questions of responsibility and liability for decisions made by AI could be unclear in some instances, leading to the need for agencies, consistent with their authorities, to evaluate the benefits, costs, and distributional effects associated with any identified or expected method for accountability. Executive Order 12866 calls on agencies to “select those approaches that maximize net benefits (including potential economic, environmental, public health and safety, and other advantages; distributive impacts; and equity).” Agencies should, when consistent with law, carefully consider the full societal costs, benefits, and distributional effects before considering regulations related to the development and deployment of AI applications. Such consideration will include the potential benefits and costs of employing AI in comparison with the systems AI has been designed to complement or replace, whether implementing AI will change the type of errors created by the system, and a comparison with the degree of risk tolerated in other existing systems. Agencies should also consider critical dependencies when evaluating AI costs and benefits, as technological factors (such as data quality) and changes in human processes associated with AI implementation may alter the nature and magnitude of the risks and benefits. In cases where a comparison to a current system or process is not available, the risks and costs of not implementing the system should be evaluated as well.

Published by The White House Office of Science and Technology Policy (OSTP), United States in Principles for the Stewardship of AI Applications, Nov 17, 2020

6. Promote artificial intelligence that is responsive and sustainable

Responsiveness requires that designers, developers and users continuously, systematically and transparently examine an AI technology to determine whether it is responding adequately, appropriately and according to communicated expectations and requirements in the context in which it is used. Thus, identification of a health need requires that institutions and governments respond to that need and its context with appropriate technologies, with the aim of achieving the public interest in health protection and promotion. When an AI technology is ineffective or engenders dissatisfaction, the duty to be responsive requires an institutional process to resolve the problem, which may include terminating use of the technology.

Responsiveness also requires that AI technologies be consistent with wider efforts to promote health systems and environmental and workplace sustainability. AI technologies should be introduced only if they can be fully integrated and sustained in the health care system. Too often, especially in under-resourced health systems, new technologies are not used or are not repaired or updated, thereby wasting scarce resources that could have been invested in proven interventions. Furthermore, AI systems should be designed to minimize their ecological footprints and increase energy efficiency, so that use of AI is consistent with society’s efforts to reduce the impact of human beings on the earth’s environment, ecosystems and climate. Sustainability also requires governments and companies to address anticipated disruptions to the workplace, including training of health care workers to adapt to the use of AI and potential job losses due to the use of automated systems for routine health care functions and administrative tasks.

Published by World Health Organization (WHO) in Key ethical principles for use of artificial intelligence for health, Jun 28, 2021