The statement "IDAIS-Beijing" mentions the topic "safety" in the following places:

    Roadmap to Red Line Enforcement

    Ensuring these red lines are not crossed is possible, but will require a concerted effort to develop both improved governance regimes and technical safety methods.

    · Governance

    To achieve this, we should establish multilateral institutions and agreements to govern AGI development safely and inclusively, with enforcement mechanisms to ensure red lines are not crossed and benefits are shared broadly.

    · Measurement and Evaluation

    To ensure red line testing regimes keep pace with rapid AI development, we should invest in red teaming and automating model evaluation with appropriate human oversight.

    · Technical Collaboration

    We encourage building a stronger global technical network to accelerate AI safety R&D and collaboration through visiting-researcher programs and by organizing in-depth AI safety conferences and workshops.

    Additional funding will be required to support the growth of this field: we call on AI developers and government funders to invest at least one third of their AI R&D budgets in safety.

    Conclusion

    International scientific and government collaboration on safety must continue and grow.