3.3 Prohibition of damages

The artificial intelligence system must comply with safety standards, that is, it must contain appropriate mechanisms that will prevent damage to persons and their property. In the event that damage does occur, it must be repaired in the shortest possible time, and the injured person compensated in the manner established by law. The Law on Obligations defines damage as "decreasing one's property (ordinary damage) and preventing its increase (lost benefit), as well as causing physical or psychological pain or fear to another (non-material damage)", and establishes that every person is obliged to refrain from actions that can cause damage to others. In addition to civil liability, the law also recognizes the criminal and misdemeanor liability of both natural and legal persons for the damage they cause to another person. The Criminal Code provides for a large number of criminal acts, of which it is important to mention criminal acts against life and body, against property, and against the freedoms and rights of people and citizens. A special law also provides for the liability of persons for the damage they cause by committing an act of lesser social danger, a misdemeanor. Special attention should be paid to the protection of sensitive categories such as the elderly, persons with disabilities, children, pregnant women, etc., as well as categories that are in a less favorable position (for example: worker vis-à-vis employer, consumer vis-à-vis economic entity, etc.). Artificial intelligence systems must be used in a safe and secure manner, i.e. they must be reliable and secure, and their use for malicious purposes should be prevented.
Published by Republic of Serbia in ETHICAL GUIDELINES FOR THE DEVELOPMENT, APPLICATION AND USE OF RELIABLE AND RESPONSIBLE ARTIFICIAL INTELLIGENCE, February 2023

Related Principles

· 2. The Principle of Non-maleficence: “Do no Harm”

AI systems should not harm human beings. By design, AI systems should protect the dignity, integrity, liberty, privacy, safety, and security of human beings in society and at work. AI systems should not threaten the democratic process, freedom of expression, freedom of identity, or the possibility to refuse AI services. At the very least, AI systems should not be designed in a way that enhances existing harms or creates new harms for individuals. Harms can be physical, psychological, financial or social. AI-specific harms may stem from the treatment of data on individuals (i.e. how it is collected, stored, used, etc.). To avoid harm, the collection and use of data for training AI algorithms must be done in a way that avoids discrimination, manipulation, or negative profiling. Of equal importance, AI systems should be developed and implemented in a way that protects societies from ideological polarization and algorithmic determinism. Vulnerable demographics (e.g. children, minorities, disabled persons, elderly persons, or immigrants) should receive greater attention to the prevention of harm, given their unique status in society. Inclusion and diversity are key ingredients for the prevention of harm, ensuring the suitability of these systems across cultures, genders, ages, life choices, etc. Therefore, not only should AI be designed with the impact on various vulnerable demographics in mind, but the above-mentioned demographics should have a place in the design process (whether through testing, validating, or other means). Avoiding harm may also be viewed in terms of harm to the environment and animals; thus the development of environmentally friendly AI may be considered part of the principle of avoiding harm. The Earth’s resources can be valued in and of themselves or as a resource for humans to consume. In either case it is necessary to ensure that the research, development, and use of AI are done with an eye towards environmental awareness.

Published by The European Commission’s High-Level Expert Group on Artificial Intelligence in Draft Ethics Guidelines for Trustworthy AI, Dec 18, 2018

· 2. NEED FOR CONSCIOUS RESPONSIBILITY WHEN CREATING AND USING AI

2.1. Risk-based approach. The level of attention to ethical issues in AI and the nature of the relevant actions of AI Actors should be proportional to the assessment of the level of risk posed by specific technologies and AISs and the interests of individuals and society. Risk-level assessment must take into account both known and possible risks; in this case, the level of probability of threats should be taken into account, as well as their possible scale in the short and long term. In the field of AI development, making decisions that are significant to society and the state should be accompanied by scientifically verified and interdisciplinary forecasting of socio-economic consequences and risks, as well as by the examination of possible changes in the value and cultural paradigm of the development of society, while taking into account national priorities. In pursuance of this Code, the development and use of an AIS risk assessment methodology is recommended.

2.2. Responsible attitude. AI Actors should have a responsible approach to the aspects of AIS that influence society and citizens at every stage of the AIS life cycle. These include privacy; the ethical, safe and responsible use of personal data; the nature, degree and amount of damage that may follow as a result of the use of the technology and AIS; and the selection and use of companion hardware and software. In this case, the responsibility of the AI Actors must correspond to the nature, degree and amount of damage that may occur as a result of the use of technologies and AIS, while taking into account the role of the AI Actor in the life cycle of the AIS, the degree of possible and real impact of a particular AI Actor on causing damage, and its size.

2.3. Precautions. When the activities of AI Actors can lead to morally unacceptable consequences for individuals and society, the occurrence of which the corresponding AI Actor can reasonably assume, measures should be taken to prevent or limit the occurrence of such consequences. To assess the moral acceptability of consequences and the possible measures to prevent them, Actors can use the provisions of this Code, including the mechanisms specified in Section 2.

2.4. No harm. AI Actors should not allow the use of AI technologies for the purpose of causing harm to human life, the environment, and/or the health or property of citizens and legal entities. Any application of an AIS capable of purposefully causing harm to the environment, human life or health, or the property of citizens and legal entities during any stage, including design, development, testing, implementation or operation, is unacceptable.

2.5. Identification of AI in communication with a human. AI Actors are encouraged to ensure that users are informed of their interactions with the AIS when it affects their rights and critical areas of their lives, and to ensure that such interactions can be terminated at the request of the user.

2.6. Data security. AI Actors must comply with the legislation of the Russian Federation in the field of personal data and secrets protected by law when using an AIS. Furthermore, they must ensure the protection of personal data processed by an AIS or by AI Actors in order to develop and improve the AIS, by developing and implementing innovative methods of controlling unauthorized access by third parties to personal data and by using high-quality and representative datasets from reliable sources, obtained without breaking the law.

2.7. Information security. AI Actors should provide the maximum possible protection against unauthorized interference in the work of the AIS by third parties by introducing adequate information security technologies, including the use of internal mechanisms for protecting the AIS from unauthorized interventions and informing users and developers about such interventions. They must also inform users about the rules regarding information security when using the AIS.

2.8. Voluntary certification and Code compliance. AI Actors can implement voluntary certification for the compliance of the developed AI technologies with the standards established by the legislation of the Russian Federation and this Code. AI Actors can create voluntary certification and AIS labeling systems that indicate that these systems have passed voluntary certification procedures and confirm quality standards.

2.9. Control of the recursive self-improvement of AISs. AI Actors are encouraged to collaborate in the identification and verification of methods and forms of creating universal ("strong") AIS and in the prevention of the possible threats that such AIS carry. The use of "strong" AI technologies should be under the control of the state.
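The proportionality logic of 2.1 — attention scaled to the probability of a threat and its possible magnitude — is often operationalized as a simple risk matrix. The following sketch is purely illustrative: the probability and severity levels, the multiplicative score, and the thresholds are assumptions for the example, not values prescribed by the Code.

```python
# Illustrative risk matrix for an AIS assessment under a risk-based
# approach. All levels and thresholds below are assumptions for this
# sketch, not requirements of the Code of Ethics.

PROBABILITY = {"rare": 1, "possible": 2, "likely": 3}  # threat probability
SEVERITY = {"minor": 1, "moderate": 2, "severe": 3}    # possible scale of harm

def risk_score(probability: str, severity: str) -> int:
    """Combine the probability of a threat and its scale into one score."""
    return PROBABILITY[probability] * SEVERITY[severity]

def required_attention(score: int) -> str:
    """Map a score to a proportional level of ethical review (cf. 2.1)."""
    if score >= 6:
        return "full interdisciplinary assessment and forecasting"
    if score >= 3:
        return "documented review by AI Actors"
    return "routine monitoring"

# A likely, severe threat demands the highest level of attention.
print(required_attention(risk_score("likely", "severe")))
```

A real AIS risk assessment methodology, as the Code recommends developing, would of course use domain-specific criteria rather than a 3×3 grid; the sketch only shows how proportionality can be made explicit.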

Published by AI Alliance Russia in Artificial Intelligence Code of Ethics, Oct 26, 2021

3.2 Dignity

It is the duty of all members of society to mutually respect and protect this right as one of the basic and inviolable rights of every human being. Every individual has the right to protect their own dignity; violation or non-respect of this right is sanctioned by law. Human dignity (hereinafter: dignity) should be understood as a starting principle that focuses on the preservation of human integrity. Based on that premise, the persons to whom these Guidelines refer should at all times, regardless of the stage of the concrete artificial intelligence solution (development, application or use), keep in mind the person and his integrity as a central concept. In this regard, it is necessary to develop systems that, at every stage, make it imperative to respect the person's personality, his freedom and autonomy. Respecting human personality means creating a system that will respect the cognitive, social and cultural characteristics of each individual. The artificial intelligence systems that are being developed must be in accordance with the above; therefore, it is necessary to take care that they cannot in any way lead to the subordination of man to the functions of the system, or endanger his dignity and integrity. In order to ensure respect for the principle of dignity, artificial intelligence systems must not be such that in the processes of work and application they grossly ignore the autonomy of human choice. The Constitution of the Republic of Serbia emphasizes that dignity "is inviolable and everyone is obliged to respect and protect it." Everyone has the right to free personal development, if it does not violate the rights of others guaranteed by the Constitution. The Convention on Human Rights states the following: "Human dignity (dignity) is not only a basic human right but also the foundation of human rights." Human dignity is inherent in every human being.
In the Republic of Serbia, this term is regulated in the following ways: "The dignity of the person (honor, reputation, or piety) of the person to whom the information refers is legally protected." "Whoever abuses another or treats him in a way that offends human dignity shall be punished by imprisonment for up to one year." "Work in the public interest is any socially useful work that does not offend human dignity and is not done for the purpose of making a profit." This principle emphasizes that the integrity and dignity of all who may be affected by the artificial intelligence system must be taken care of at all times. As it is a general concept, to which life, in addition to the law, gives different sides although the essence is the same, it is appropriate to attach to the concept itself: honor, reputation, that is, piety.

Published by Republic of Serbia in ETHICAL GUIDELINES FOR THE DEVELOPMENT, APPLICATION AND USE OF RELIABLE AND RESPONSIBLE ARTIFICIAL INTELLIGENCE, February 2023

3.4 Fairness

The principle of fairness refers to the protection of rights and integrity against discrimination, especially discrimination against particularly sensitive categories (for example, persons with disabilities). Due to its versatility, the term itself has different interpretations in numerous spheres of social life. For example, in health care, the principle of fairness implies the prohibition of discrimination based on race, sex, gender, sexual orientation and gender identity, age, nationality, social origin, religion, political or other conviction, financial status, culture, language, health status, type of illness, mental or physical disability, as well as other personal characteristics that may be the cause of discrimination. Likewise, artificial intelligence systems must prevent discrimination when in use. The principle of fairness has both a substantive and a procedural dimension. The substantive dimension includes protection against unjustified bias, discrimination and stigmatization. Artificial intelligence systems should provide equal opportunities to all persons, both in terms of access to education, goods, services and technologies, and by preventing the deception of persons using artificial intelligence systems when decisions are made. The procedural dimension of fairness includes the ability to challenge, and to obtain effective legal protection against, decisions resulting from the operation of the artificial intelligence system, as well as against the persons responsible for the operation of the system. In order to fulfill this condition, it is necessary that clearly defined responsibilities exist, and that the decision-making process be explained, clear and transparent. This reduces the possibility of misunderstanding or incomplete understanding of the purpose and goals of using these systems, that is, the potential denial of freedom of choice when choosing the system to use.
The fair use of artificial intelligence systems can lead to an increase in fairness in society as a whole, as well as to a reduction of the differences that exist between individuals in terms of social, economic and educational status.
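The substantive dimension described above — protection against unjustified bias in outcomes — is commonly checked in practice with group-parity metrics. The following minimal sketch compares favourable-decision rates between two groups; the metric choice and the 0.8 threshold (the conventional "four-fifths rule") are illustrative assumptions, not requirements of these Guidelines.

```python
# Minimal demographic-parity check: compare rates of favourable
# outcomes across two groups. The 0.8 threshold is a common
# convention, not a value prescribed by the Guidelines.

def positive_rate(outcomes: list[int]) -> float:
    """Fraction of favourable (1) decisions in a group."""
    return sum(outcomes) / len(outcomes)

def parity_ratio(group_a: list[int], group_b: list[int]) -> float:
    """Ratio of the lower positive rate to the higher one (1.0 = parity)."""
    ra, rb = positive_rate(group_a), positive_rate(group_b)
    return min(ra, rb) / max(ra, rb)

# Hypothetical decision logs for two groups (1 = favourable decision).
group_a = [1, 1, 0, 1, 0, 1, 1, 1]   # rate 6/8 = 0.75
group_b = [1, 0, 0, 1, 0, 0, 1, 0]   # rate 3/8 = 0.375

ratio = parity_ratio(group_a, group_b)
print(f"parity ratio: {ratio:.2f}")
if ratio < 0.8:
    print("potential unjustified disparity; review the system")
```

A single parity ratio is of course no substitute for the legal analysis the Guidelines call for, but it illustrates how the substantive dimension can be monitored while a system is in use.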

Published by Republic of Serbia in ETHICAL GUIDELINES FOR THE DEVELOPMENT, APPLICATION AND USE OF RELIABLE AND RESPONSIBLE ARTIFICIAL INTELLIGENCE, February 2023

4 Foster responsibility and accountability

Humans require clear, transparent specification of the tasks that systems can perform and the conditions under which they can achieve the desired level of performance; this helps to ensure that health care providers can use an AI technology responsibly. Although AI technologies perform specific tasks, it is the responsibility of human stakeholders to ensure that they can perform those tasks and that they are used under appropriate conditions. Responsibility can be assured by application of “human warranty”, which implies evaluation by patients and clinicians in the development and deployment of AI technologies. In human warranty, regulatory principles are applied upstream and downstream of the algorithm by establishing points of human supervision. The critical points of supervision are identified by discussions among professionals, patients and designers. The goal is to ensure that the algorithm remains on a machine learning development path that is medically effective, can be interrogated and is ethically responsible; it involves active partnership with patients and the public, such as meaningful public consultation and debate (101). Ultimately, such work should be validated by regulatory agencies or other supervisory authorities. When something does go wrong in application of an AI technology, there should be accountability. Appropriate mechanisms should be adopted to ensure questioning by, and redress for, individuals and groups adversely affected by algorithmically informed decisions. This should include access to prompt, effective remedies and redress from governments and companies that deploy AI technologies for health care. Redress should include compensation, rehabilitation, restitution, sanctions where necessary and a guarantee of non-repetition. The use of AI technologies in medicine requires attribution of responsibility within complex systems in which responsibility is distributed among numerous agents.
When medical decisions by AI technologies harm individuals, responsibility and accountability processes should clearly identify the relative roles of manufacturers and clinical users in the harm. This is an evolving challenge and remains unsettled in the laws of most countries. Institutions have not only legal liability but also a duty to assume responsibility for decisions made by the algorithms they use, even if it is not feasible to explain in detail how the algorithms produce their results. To avoid diffusion of responsibility, in which “everybody’s problem becomes nobody’s responsibility”, a faultless responsibility model (“collective responsibility”), in which all the agents involved in the development and deployment of an AI technology are held responsible, can encourage all actors to act with integrity and minimize harm. In such a model, the actual intentions of each agent (or actor) or their ability to control an outcome are not considered.

Published by World Health Organization (WHO) in Key ethical principles for use of artificial intelligence for health, Jun 28, 2021