The document "ETHICAL GUIDELINES FOR THE DEVELOPMENT, APPLICATION AND USE OF RELIABLE AND RESPONSIBLE ARTIFICIAL INTELLIGENCE" mentions the topic of "safety" in the following places:

    3.1 Explainability and verifiability

    Clarity in the context of these Guidelines means that all processes, namely development, testing, commissioning, system monitoring and shutdown, must be transparent.

    3.1 Explainability and verifiability

    Verifiability is a complementary element of this principle, which ensures that the System can be checked in all processes, i.e.

    3.1 Explainability and verifiability

    Verifiability includes the actions and procedures for checking artificial intelligence systems during testing and implementation, as well as checking the short-term and long-term impact that such a system has on humans.

    3.3 Prohibition of damages

    The artificial intelligence system must comply with safety standards, that is, it must contain appropriate mechanisms that will prevent damage to persons and their property.

    3.3 Prohibition of damages

    Artificial intelligence systems must be used in a safe and secure manner, i.e.