Proportionality and harmlessness.
It should be recognised that AI technologies do not, in and of themselves, guarantee the prosperity of humans, the environment, or ecosystems. Wherever harm to humans may occur, risk assessment procedures should be applied and measures taken to prevent that harm.

For a human person to be legally responsible for the decisions he or she makes to carry out one or more actions, there must be discernment (full human mental faculties), intention (human drive or desire), and freedom (to act in a calculated and premeditated manner). Therefore, to avoid anthropomorphisms that could hinder eventual regulation or lead to wrongful attributions of responsibility, it is important to conceive of artificial intelligences as artifices: as technology, a thing, an artificial means of achieving human objectives, which must not be confused with a human person. In other words, the algorithm can execute, but the decision must necessarily fall on the person, and so, therefore, must the responsibility. It follows that an algorithm possesses neither self-determination nor the agency to make decisions freely (although colloquial language often uses the word "decision" to describe a classification produced by a trained algorithm), and it therefore cannot be held responsible for the actions executed through it.
Published by the OFFICE OF THE CHIEF OF MINISTERS, UNDERSECRETARY OF INFORMATION TECHNOLOGIES, in Recommendations for reliable artificial intelligence, June 2, 2023