The "OECD Principles on Artificial Intelligence" mention the topic of "safety" in the following places:

    1.4. Robustness, security and safety

    a) AI systems should be robust, secure and safe throughout their entire lifecycle so that, in conditions of normal use, foreseeable use or misuse, or other adverse conditions, they function appropriately and do not pose unreasonable safety risk.

    1.4. Robustness, security and safety

    c) AI actors should, based on their roles, the context, and their ability to act, apply a systematic risk management approach to each phase of the AI system lifecycle on a continuous basis to address risks related to AI systems, including privacy, digital security, safety and bias.

    2.2. Fostering a digital ecosystem for AI

    In this regard, governments should consider promoting mechanisms, such as data trusts, to support the safe, fair, legal and ethical sharing of data.

    2.3. Shaping an enabling policy environment for AI

    To this effect, they should consider using experimentation to provide a controlled environment in which AI systems can be tested, and scaled up, as appropriate.

    2.4. Building human capacity and preparing for labor market transformation

    c) Governments should also work closely with stakeholders to promote the responsible use of AI at work, to enhance the safety of workers and the quality of jobs, to foster entrepreneurship and productivity, and aim to ensure that the benefits from AI are broadly and fairly shared.