· 4. AI TECHNOLOGIES SHOULD BE APPLIED AND IMPLEMENTED WHERE THEY WILL BENEFIT PEOPLE

4.1. Application of AIS in accordance with its intended purpose. AI Actors must use AIS in accordance with the stated purpose, in the prescribed subject area and for solving the prescribed problems.

4.2. Stimulating the development of AI. AI Actors should encourage and incentivize the design, implementation, and development of safe and ethical AI technologies, taking into account national priorities.
Principle: Artificial Intelligence Code of Ethics, Oct 26, 2021

Published by AI Alliance Russia

Related Principles

· (1) Human-centric

Utilization of AI should not infringe upon the fundamental human rights that are guaranteed by the Constitution and international norms. AI should be developed, utilized, and implemented in society to expand the abilities of people and to pursue the diverse concepts of happiness of diverse people. In an AI-utilizing society, it is desirable to implement appropriate mechanisms of literacy education and promotion of proper use, so that people do not become over-dependent on AI and so that AI is not exploited to improperly manipulate human decisions. AI can expand human abilities and creativity not only by replacing part of human tasks but also by assisting humans as an advanced instrument. When using AI, people must judge and decide for themselves how to use it. Appropriate stakeholders involved in the development, provision, and utilization of AI should be responsible for the results of AI utilization, depending on the nature of the issue. In order to avoid creating a digital divide and to allow all people to reap the benefits of AI regardless of their digital expertise, each stakeholder should take into consideration the user-friendliness of the system in the process of AI deployment.

Published by Cabinet Office, Government of Japan in Social Principles of Human-centric AI (Draft), Dec 27, 2018

· (6) Fairness, Accountability, and Transparency

Under the "AI Ready society", when using AI, fair and transparent decision making and accountability for the results should be appropriately ensured, and trust in technology should be secured, in order that people using AI will not be discriminated on the ground of the person's background or treated unjustly in light of human dignity. Under the AI design concept, all people must be treated fairly without unjustified discrimination on the grounds of diverse backgrounds such as race, sex, nationality, age, political beliefs, religion, etc. Appropriate explanations should be provided such as the fact that AI is being used, the method of obtaining and using the data used in AI, and the mechanism to ensure the appropriateness of the operation results of AI according to the situation AI is used. In order for people to understand and judge AI proposals, there should be appropriate opportunities for open dialogue on the use, adoption and operation of AI, as needed. In order to ensure the above viewpoints and to utilize AI safely in society, a mechanism must be established to secure trust in AI and its using data.

Published by Cabinet Office, Government of Japan in Social Principles of Human-centric AI (Draft), Dec 27, 2018

• Foster Innovation and Open Development

To better understand the impact of AI and explore the broad diversity of AI implementations, public policy should encourage investment in AI R&D. Governments should support the controlled testing of AI systems to help industry, academia, and other stakeholders improve the technology.

[Recommendations]
• Fuel AI innovation: Public policy should promote investment, make available funds for R&D, and address barriers to AI development and adoption.
• Address global societal challenges: AI-powered flagship initiatives should be funded to find solutions to the world's greatest challenges, such as curing cancer, ensuring food security, controlling climate change, and achieving inclusive economic growth.
• Allow for experimentation: Governments should create the conditions necessary for the controlled testing and experimentation of AI in the real world, such as designating self-driving test sites in cities.
• Prepare a workforce for AI: Governments should create incentives for students to pursue courses of study that will allow them to create the next generation of AI.
• Lead by example: Governments should lead the way in demonstrating the applications of AI in their interactions with citizens and invest sufficiently in infrastructure to support and deliver AI-based services.
• Partnering for AI: Governments should partner with industry, academia, and other stakeholders for the promotion of AI and debate ways to maximize its benefits for the economy.

Published by Intel in AI public policy principles, Oct 18, 2017

· 8. Agile Governance

The governance of AI should respect the underlying principles of AI development. In promoting the innovative and healthy development of AI, high vigilance should be maintained in order to detect and resolve possible problems in a timely manner. The governance of AI should be adaptive and inclusive, constantly upgrading the intelligence level of the technologies, optimizing management mechanisms, and engaging with multiple stakeholders to improve the governance institutions. The governance principles should be promoted throughout the entire lifecycle of AI products and services. Continuous research and foresight into the potential risks of higher levels of AI in the future are required to ensure that AI will always be beneficial for human society.

Published by National Governance Committee for the New Generation Artificial Intelligence, China in Governance Principles for the New Generation Artificial Intelligence--Developing Responsible Artificial Intelligence, Jun 17, 2019

· 5. INTERESTS OF DEVELOPING AI TECHNOLOGIES ABOVE THE INTERESTS OF COMPETITION

5.1. Correctness of AIS comparisons. To maintain fair competition and effective cooperation among developers, AI Actors should use the most reliable and comparable information about the capabilities of AISs in relation to a task and ensure the uniformity of measurement methodologies.

5.2. Development of competencies. AI Actors are encouraged to follow practices adopted by the professional community, to maintain the level of professional competence necessary for safe and effective work with AIS, and to promote the improvement of the professional competence of workers in the field of AI, including within the framework of programs and educational disciplines on AI ethics.

5.3. Collaboration of developers. AI Actors are encouraged to develop cooperation within the AI Actor community, particularly between developers, including by informing each other of identified critical vulnerabilities in order to prevent their wide distribution. They should also make efforts to improve the quality and availability of resources in the field of AIS development, including by increasing the availability of data (including labeled data), ensuring the compatibility of developed AIS where applicable, and creating conditions for the formation of a national school for the development of AI technologies, including publicly available national repositories of libraries and network models, available national development tools, open national frameworks, etc. They are also encouraged to share information on best practices in the development of AI technologies and to organize and hold conferences, hackathons and public competitions, as well as high school and student Olympiads. They should increase the availability of knowledge, encourage the use of open knowledge databases, and create conditions for attracting investment in the development of AI technologies from Russian private investors, business angels, venture funds and private equity funds, while stimulating scientific and educational activities in the field of AI through participation in the projects and activities of leading Russian research centers and educational organizations.

Published by AI Alliance Russia in Artificial Intelligence Code of Ethics, Oct 26, 2021