· Emergency Preparedness Agreements and Institutions

States should agree on technical and institutional measures required to prepare for advanced AI systems, regardless of their development timescale. To facilitate these agreements, we need an international body to bring together AI safety authorities, fostering dialogue and collaboration in the development and auditing of AI safety regulations across different jurisdictions. This body would ensure states adopt and implement a minimal set of effective safety preparedness measures, including model registration, disclosure, and tripwires. Over time, this body could also set standards for and commit to using verification methods to enforce domestic implementations of the Safety Assurance Framework. These methods can be mutually enforced through incentives and penalty mechanisms, such as conditioning access to markets on compliance with global standards. Experts and safety authorities should establish incident reporting and contingency plans, and regularly update the list of verified practices to reflect current scientific understanding. This body will be a critical initial coordination mechanism. In the long run, however, states will need to go further to ensure truly global governance of risks from advanced AI.
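
As an illustrative aside (not part of the IDAIS statement), the sketch below shows one way a model registration record and a tripwire check might be represented in code. The tripwire names, threshold values, and record fields are hypothetical placeholders, not figures drawn from the principle.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical tripwire capabilities and thresholds; real values would be agreed
# by the international body and domestic safety authorities.
TRIPWIRES = {"autonomous_replication": 0.2, "cyber_offense_uplift": 0.5}

@dataclass
class ModelRegistration:
    developer: str
    model_id: str
    training_compute_flop: float
    eval_scores: dict = field(default_factory=dict)  # capability name -> score in [0, 1]

def tripwire_incidents(reg: ModelRegistration) -> list[dict]:
    """Return an incident report for every tripwire the registered model crosses,
    suitable for disclosure through a shared incident-reporting channel."""
    return [
        {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "developer": reg.developer,
            "model_id": reg.model_id,
            "tripwire": name,
            "score": reg.eval_scores[name],
        }
        for name, threshold in TRIPWIRES.items()
        if reg.eval_scores.get(name, 0.0) >= threshold
    ]
```
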
Published by IDAIS (International Dialogues on AI Safety) in IDAIS-Venice, Sept 5, 2024

Related Principles

· Governance

Comprehensive governance regimes are needed to ensure red lines are not breached by any developed or deployed system. We should immediately implement domestic registration for AI models and training runs above certain compute or capability thresholds. Registration should ensure governments have visibility into the most advanced AI within their borders and levers to stem the distribution and operation of dangerous models. Domestic regulators ought to adopt globally aligned requirements to prevent crossing these red lines. Access to global markets should be conditioned on domestic regulations meeting these global standards as determined by an international audit, effectively preventing the development and deployment of systems that breach red lines. We should take measures to prevent the proliferation of the most dangerous technologies while ensuring broad access to the benefits of AI technologies. To achieve this, we should establish multilateral institutions and agreements to govern AGI development safely and inclusively, with enforcement mechanisms to ensure red lines are not crossed and benefits are shared broadly.
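
Purely as an illustration of the registration mechanism described above (and not part of the published principle), here is a minimal sketch of a compute- and capability-based registration gate. The threshold value and names are hypothetical.

```python
# Illustrative only: a hypothetical registration gate keyed on training compute
# and dangerous-capability evaluations. The 1e26 FLOP figure is a placeholder,
# not a threshold endorsed by the statement.
REGISTRATION_COMPUTE_THRESHOLD_FLOP = 1e26

def requires_registration(training_compute_flop: float,
                          flagged_capabilities: list[str]) -> bool:
    """A model or training run must be registered with the domestic regulator if it
    exceeds the compute threshold or triggers any capability-based criterion."""
    return (training_compute_flop >= REGISTRATION_COMPUTE_THRESHOLD_FLOP
            or len(flagged_capabilities) > 0)

# Example: a 3e26-FLOP run is registrable on compute alone; a smaller run is
# registrable if a dangerous capability was flagged during evaluation.
assert requires_registration(3e26, [])
assert requires_registration(5e24, ["cyber_offense_uplift"])
assert not requires_registration(5e24, [])
```
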

Published by IDAIS (International Dialogues on AI Safety) in IDAIS-Beijing, May 10, 2024

· Consensus Statement on AI Safety as a Global Public Good

Rapid advances in artificial intelligence (AI) systems’ capabilities are pushing humanity closer to a world where AI meets and surpasses human intelligence. Experts agree these AI systems are likely to be developed in the coming decades, and many believe they will arrive imminently. Loss of human control or malicious use of these AI systems could lead to catastrophic outcomes for all of humanity. Unfortunately, we have not yet developed the necessary science to control and safeguard the use of such advanced intelligence. The global nature of these risks makes it necessary to recognize AI safety as a global public good and to work towards global governance of these risks. Collectively, we must prepare to avert the attendant catastrophic risks that could arrive at any time. Promising initial steps by the international community show that cooperation on AI safety and governance is achievable despite geopolitical tensions. At two intergovernmental summits, states and AI developers around the world committed to foundational principles to foster responsible development of AI and minimize risks. Thanks to these summits, states established AI Safety Institutes or similar institutions to advance testing, research and standards-setting. These efforts are laudable and must continue. States must sufficiently resource AI Safety Institutes, continue to convene summits and support other global governance efforts. However, states must go further than they do today. As an initial step, states should develop authorities to detect and respond to AI incidents and catastrophic risks within their jurisdictions. These domestic authorities should coordinate to develop a global contingency plan to respond to severe AI incidents and catastrophic risks. In the longer term, states should develop an international governance regime to prevent the development of models that could pose global catastrophic risks. Deep and foundational research is needed to guarantee the safety of advanced AI systems. This work must begin swiftly to ensure such safeguards are developed and validated before advanced AI arrives. To enable this, we call on states to carve out AI safety as a cooperative area of academic and technical activity, distinct from broader geostrategic competition on the development of AI capabilities. The international community should consider setting up three clear processes to prepare for a world where advanced AI systems pose catastrophic risks: emergency preparedness agreements and institutions, a safety assurance framework, and independent global AI safety and verification research.

Published by IDAIS (International Dialogues on AI Safety) in IDAIS-Venice, Sept 5, 2024

· Safety Assurance Framework

Frontier AI developers must demonstrate to domestic authorities that the systems they develop or deploy will not cross red lines such as those defined in the IDAIS-Beijing consensus statement. To implement this, we need to build further scientific consensus on risks and red lines. Additionally, we should set early-warning thresholds: levels of model capability indicating that a model may cross or come close to crossing a red line. This approach builds on and harmonizes the existing patchwork of voluntary commitments such as responsible scaling policies. Models whose capabilities fall below early-warning thresholds require only limited testing and evaluation, while more rigorous assurance mechanisms are needed for advanced AI systems exceeding these thresholds. Although testing can alert us to risks, it only gives us a coarse-grained understanding of a model; this is insufficient to provide safety guarantees for advanced AI systems. Developers should submit a high-confidence safety case, i.e., a quantitative analysis that would convince the scientific community that their system design is safe, as is common practice in other safety-critical engineering disciplines. Additionally, safety cases for sufficiently advanced systems should discuss organizational processes, including incentives and accountability structures, that favor safety. Pre-deployment testing, evaluation and assurance are not sufficient. Advanced AI systems may increasingly engage in complex multi-agent interactions with other AI systems and users, and this interaction may lead to emergent risks that are difficult to predict. Post-deployment monitoring is a critical part of an overall assurance framework, and could include continuous automated assessment of model behavior, centralized AI incident tracking databases, and reporting of the integration of AI into critical systems. Further assurance should be provided by automated run-time checks, such as verifying that the assumptions of a safety case continue to hold and safely shutting down a model if it is operated in an out-of-scope environment. States have a key role to play in ensuring safety assurance happens. States should mandate that developers conduct regular testing for concerning capabilities, with transparency provided through independent pre-deployment audits by third parties granted sufficient access to developers’ staff, systems and records necessary to verify the developer’s claims. Additionally, for models exceeding early-warning thresholds, states could require that independent experts approve a developer’s safety case prior to further training or deployment. Moreover, states can help institute ethical norms for AI engineering, for example by stipulating that engineers have an individual duty to protect the public interest similar to that held by medical or legal professionals. Finally, states will also need to build governance processes to ensure adequate post-deployment monitoring. While there may be variations in the Safety Assurance Frameworks required nationally, states should collaborate to achieve mutual recognition and commensurability of frameworks.
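
The following sketch illustrates two mechanisms named in this principle: early-warning thresholds that determine the required level of assurance, and run-time checks on safety-case assumptions. It is an editorial example with hypothetical capability names and threshold values, not part of the IDAIS text.

```python
from enum import Enum

class AssuranceTier(Enum):
    LIMITED_TESTING = "limited testing and evaluation"
    FULL_SAFETY_CASE = "safety case plus independent pre-deployment audit"

# Hypothetical early-warning thresholds on capability evaluation scores in [0, 1].
EARLY_WARNING_THRESHOLDS = {"bio_uplift": 0.3, "autonomy": 0.4}

def required_assurance(eval_scores: dict[str, float]) -> AssuranceTier:
    """Models below every early-warning threshold require only limited testing;
    models exceeding any threshold require a full safety case."""
    exceeds = any(eval_scores.get(name, 0.0) >= limit
                  for name, limit in EARLY_WARNING_THRESHOLDS.items())
    return AssuranceTier.FULL_SAFETY_CASE if exceeds else AssuranceTier.LIMITED_TESTING

def runtime_guard(safety_case_assumptions: dict[str, bool]) -> str:
    """Run-time check: if any assumption recorded in the safety case no longer
    holds (e.g. the deployment environment is out of scope), shut the model down."""
    return "continue" if all(safety_case_assumptions.values()) else "shut down"

# Example usage with hypothetical scores and assumptions.
print(required_assurance({"bio_uplift": 0.35, "autonomy": 0.1}).value)  # full safety case
print(runtime_guard({"in_scope_environment": False}))                   # shut down
```
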

Published by IDAIS (International Dialogues on AI Safety) in IDAIS-Venice, Sept 5, 2024

· Independent Global AI Safety and Verification Research

Independent research into AI safety and verification is critical to developing techniques that ensure the safety of advanced AI systems. States, philanthropists, corporations and experts should enable global independent AI safety and verification research through a series of Global AI Safety and Verification Funds. These funds should scale to a significant fraction of global AI research and development expenditure to adequately support and grow independent research capacity. In addition to foundational AI safety research, these funds would focus on developing privacy-preserving and secure verification methods, which act as enablers for domestic governance and international cooperation. These methods would allow states to credibly check an AI developer’s evaluation results and whether the mitigations specified in its safety case are in place. In the future, these methods may also allow states to verify safety-related claims made by other states, including compliance with Safety Assurance Frameworks and declarations of significant training runs. Eventually, comprehensive verification could take place through several methods, including third-party governance (e.g., independent audits), software (e.g., audit trails) and hardware (e.g., hardware-enabled mechanisms on AI chips). To ensure global trust, it will be important to have international collaborations developing and stress-testing verification methods. Critically, despite broader geopolitical tensions, globally trusted verification methods have allowed states to commit to specific international agreements in the past, and could do so again.
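
To make the software route to verification concrete, here is a minimal sketch of a hash-chained audit trail of evaluation results that an independent auditor could later re-verify. It is an illustrative example only, and the record fields are hypothetical; it is not a method prescribed by the principle.

```python
import hashlib
import json

def append_entry(log: list[dict], record: dict) -> None:
    """Append a record to a hash-chained audit trail: each entry commits to the
    previous entry's hash, so later tampering is detectable by a verifier."""
    prev_hash = log[-1]["entry_hash"] if log else "0" * 64
    body = {"prev_hash": prev_hash, "record": record}
    entry_hash = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    log.append({**body, "entry_hash": entry_hash})

def verify_chain(log: list[dict]) -> bool:
    """Recompute every hash to confirm the evaluation log has not been altered."""
    prev_hash = "0" * 64
    for entry in log:
        body = {"prev_hash": entry["prev_hash"], "record": entry["record"]}
        expected = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if entry["prev_hash"] != prev_hash or entry["entry_hash"] != expected:
            return False
        prev_hash = entry["entry_hash"]
    return True

# Example: a developer logs evaluation results as they are produced; an
# independent auditor later re-runs verify_chain() to confirm the trail is intact.
log: list[dict] = []
append_entry(log, {"eval": "cyber_offense_uplift", "score": 0.12})
append_entry(log, {"eval": "bio_uplift", "score": 0.05})
assert verify_chain(log)
```
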

Published by IDAIS (International Dialogues on AI Safety) in IDAIS-Venice, Sept 5, 2024

6. Flexibility

When developing regulatory and non-regulatory approaches, agencies should pursue performance-based and flexible approaches that can adapt to rapid changes and updates to AI applications. Rigid, design-based regulations that attempt to prescribe the technical specifications of AI applications will in most cases be impractical and ineffective, given the anticipated pace with which AI will evolve and the resulting need for agencies to react to new information and evidence. Targeted agency conformity assessment schemes to protect health and safety, privacy, and other values will be essential to a successful and flexible performance-based approach. To advance American innovation, agencies should keep in mind international uses of AI, ensuring that American companies are not disadvantaged by the United States’ regulatory regime.

Published by The White House Office of Science and Technology Policy (OSTP), United States in Principles for the Stewardship of AI Applications, Nov 17, 2020