We will plan for a future in which AI systems become increasingly intelligent

1. Governance models should be developed for artificial general intelligence (AGI) and superintelligence.
2. AGI and superintelligence, if developed, should serve humanity as a whole.
3. Long-term risks of AI should be identified and planned for.
4. Recursively self-improving AI development should be disclosed and tightly monitored and controlled for risk.
Principle: Dubai's AI Principles, Jan 8, 2019

Author: Smart Dubai

Related Principles

· Long-term Planning

Continuous research on the potential risks of Augmented Intelligence, Artificial General Intelligence (AGI) and Superintelligence should be encouraged. Strategic designs should be considered to ensure that AI will always be beneficial to society and nature in the future.

Published by Beijing Academy of Artificial Intelligence (BAAI); Peking University; Tsinghua University; Institute of Automation, Chinese Academy of Sciences; Institute of Computing Technology, Chinese Academy of Sciences; Artificial Intelligence Industry Innovation Strategy Alliance (AITISA); etc. in the Beijing AI Principles, May 25, 2019

· Focus on humans

Human control of AI should be mandatory and testable by regulators. AI should be developed with a focus on the human consequences as well as the economic benefits. A human impact review should be part of the AI development process, and a workplace plan for managing disruption and transitions should be part of the deployment process. Ongoing training in the workplace should be reinforced to help workers adapt. Governments should plan for transition support as jobs disappear or are significantly changed.

Published by Centre for International Governance Innovation (CIGI), Canada in Toward a G20 Framework for Artificial Intelligence in the Workplace, Jul 19, 2018

· (4) Security

Positive utilization of AI means that many social systems will be automated and their safety improved. On the other hand, within the scope of today's technologies, it is impossible for AI to respond appropriately to rare events or deliberate attacks. The use of AI therefore carries new security risks. Society should always be aware of the balance of benefits and risks, and should work to improve social safety and sustainability as a whole. Society must promote broad and deep research and development in AI (from immediate measures to deep understanding), such as the proper evaluation of risks in the utilization of AI and research to reduce those risks. Society must also pay attention to risk management, including cybersecurity awareness. Society should always pay attention to sustainability in the use of AI and, in particular, should not become uniquely dependent on a single AI system or on a few specified AI systems.

Published by Cabinet Office, Government of Japan in Social Principles of Human-centric AI, Dec 27, 2018

· VI. Societal and environmental well-being

For AI to be trustworthy, its impact on the environment and other sentient beings should be taken into account. Ideally, all humans, including future generations, should benefit from biodiversity and a habitable environment. Sustainability and ecological responsibility of AI systems should hence be encouraged. The same applies to AI solutions addressing areas of global concern, such as, for instance, the UN Sustainable Development Goals. Furthermore, the impact of AI systems should be considered not only from an individual perspective, but also from the perspective of society as a whole. The use of AI systems should be given careful consideration particularly in situations relating to the democratic process, including opinion formation, political decision-making or electoral contexts. Moreover, AI's social impact should be considered. While AI systems can be used to enhance social skills, they can equally contribute to their deterioration.

Published by European Commission in Key requirements for trustworthy AI, Apr 8, 2019

· 1. Artificial intelligence should be developed for the common good and benefit of humanity.

The UK must seek to actively shape AI's development and utilisation, or risk passively acquiescing to its many likely consequences. A shared ethical AI framework is needed to give clarity as to how AI can best be used to benefit individuals and society. By establishing these principles, the UK can lead by example in the international community. We recommend that the Government convene a global summit of governments, academia and industry to establish international norms for the design, development, regulation and deployment of artificial intelligence. The prejudices of the past must not be unwittingly built into automated systems, and such systems must be carefully designed from the beginning, with input from as diverse a group of people as possible.

Published by House of Lords of United Kingdom, Select Committee on Artificial Intelligence in the AI Code, Apr 16, 2018