· 1. Accountability

Good AI governance should include accountability mechanisms, which can be very diverse depending on the goals. Mechanisms can range from monetary compensation (no-fault insurance) to fault finding, to reconciliation without monetary compensation. The choice of accountability mechanism may also depend on the nature and weight of the activity, as well as the level of autonomy at play. An instance in which a system misreads a medicine claim and wrongly decides not to reimburse may be compensated for with money. In a case of discrimination, however, an explanation and apology might be at least as important.
Principle: Draft Ethics Guidelines for Trustworthy AI, Dec 18, 2018

Published by The European Commission’s High-Level Expert Group on Artificial Intelligence

Related Principles

A Accountability

Accountability is central to the definition of good practice in corporate governance. It implies that there should always be a line of responsibility for business actions to establish who has to answer for the consequences. AI systems introduce an additional strand of complexity: who is responsible for the outcome of the decision-making process of an artificial agent? It is difficult to provide a univocal answer, and a rich debate has flourished on this topic. Although the question of responsibility remains largely unanswered, a valuable approach would be for each of the parties involved to act as if they were ultimately responsible.

Published by Institute of Business Ethics (IBE) in IBE interactive framework of fundamental values and principles for the use of Artificial Intelligence (AI) in business, Jan 11, 2018

3.3 Prohibition of causing damage

The artificial intelligence system must comply with safety standards, that is, it must contain appropriate mechanisms that will prevent damage to persons and their property. In the event that damage does occur, it must be repaired in the shortest possible time, and the injured person compensated in the manner established by law. The Law on Obligations defines damage as "decreasing one's property (ordinary damage) and preventing its increase (lost benefit), as well as causing physical or psychological pain or fear to another (non-material damage)", and establishes that every person is obliged to refrain from actions that can cause damage to others. In addition to civil liability, the law also recognizes the criminal and misdemeanor liability of both natural and legal persons for the damage they cause to another person. The Criminal Code provides for a large number of criminal acts, of which it is important to mention criminal acts against life and body, against people's property, and against the freedoms and rights of people and citizens. Special legislation also provides for the liability of persons for the damage they cause by committing an act of lesser social danger, a misdemeanor. Special attention should be paid to the protection of sensitive categories such as the elderly, persons with disabilities, children, pregnant women, etc., as well as categories that are in a less favorable position (for example: worker versus employer, consumer versus economic entity, etc.). Artificial intelligence systems must be used in a safe and secure manner, i.e. they must be reliable and secure, and their use for malicious purposes should be prevented.

Published by Republic of Serbia in ETHICAL GUIDELINES FOR THE DEVELOPMENT, APPLICATION AND USE OF RELIABLE AND RESPONSIBLE ARTIFICIAL INTELLIGENCE, February 2023

3.4 Fairness

The principle of fairness refers to the protection of rights and integrity against discrimination, especially discrimination against particularly sensitive categories (for example, persons with disabilities). The term itself, due to its versatility, has different interpretations in numerous spheres of social life. For example, in health care, the principle of fairness implies the prohibition of discrimination based on race, sex, gender, sexual orientation and gender identity, age, nationality, social origin, religion, political or other conviction, financial status, culture, language, health status, type of illness, mental or physical disability, as well as other personal characteristics that may be the cause of discrimination. Likewise, artificial intelligence systems must prevent discrimination when in use. The principle of fairness has a substantive and a procedural dimension. The substantive dimension includes protection against unjustified bias, discrimination and stigmatization. Artificial intelligence systems should provide equal opportunities to all persons, both in terms of access to education, goods, services and technologies, and by preventing the deception of persons who use artificial intelligence systems when making decisions. The procedural dimension of fairness includes the ability to challenge, and to obtain effective legal protection against, decisions resulting from the operation of an artificial intelligence system, as well as against the persons responsible for the operation of the system. In order to fulfill this condition, it is necessary that clearly defined responsibilities exist, and that the decision-making process be explained, clear and transparent. This reduces the possibility of misunderstanding or incomplete understanding of the purpose and goals of using these systems, that is, the potential denial of freedom of choice when choosing the system to use. The fair use of artificial intelligence systems can lead to an increase in fairness in society as a whole, as well as to a reduction of the differences that exist between individuals in terms of social, economic and educational status.

Published by Republic of Serbia in ETHICAL GUIDELINES FOR THE DEVELOPMENT, APPLICATION AND USE OF RELIABLE AND RESPONSIBLE ARTIFICIAL INTELLIGENCE, February 2023

· Transparency and explainability

37. The transparency and explainability of AI systems are often essential preconditions to ensure the respect, protection and promotion of human rights, fundamental freedoms and ethical principles. Transparency is necessary for relevant national and international liability regimes to work effectively. A lack of transparency could also undermine the possibility of effectively challenging decisions based on outcomes produced by AI systems, and may thereby infringe the right to a fair trial and effective remedy, and limit the areas in which these systems can be legally used.

38. While efforts need to be made to increase transparency and explainability of AI systems, including those with extraterritorial impact, throughout their life cycle to support democratic governance, the level of transparency and explainability should always be appropriate to the context and impact, as there may be a need to balance between transparency and explainability and other principles such as privacy, safety and security. People should be fully informed when a decision is informed by or is made on the basis of AI algorithms, including when it affects their safety or human rights, and in those circumstances should have the opportunity to request explanatory information from the relevant AI actor or public sector institutions. In addition, individuals should be able to access the reasons for a decision affecting their rights and freedoms, and have the option of making submissions to a designated staff member of the private sector company or public sector institution able to review and correct the decision. AI actors should inform users when a product or service is provided directly or with the assistance of AI systems in a proper and timely manner.

39. From a socio-technical lens, greater transparency contributes to more peaceful, just, democratic and inclusive societies. It allows for public scrutiny that can decrease corruption and discrimination, and can also help detect and prevent negative impacts on human rights. Transparency aims at providing appropriate information to the respective addressees to enable their understanding and foster trust. Specific to the AI system, transparency can enable people to understand how each stage of an AI system is put in place, appropriate to the context and sensitivity of the AI system. It may also include insight into factors that affect a specific prediction or decision, and whether or not appropriate assurances (such as safety or fairness measures) are in place. In cases of serious threats of adverse human rights impacts, transparency may also require the sharing of code or datasets.

40. Explainability refers to making intelligible and providing insight into the outcome of AI systems. The explainability of AI systems also refers to the understandability of the input, output and the functioning of each algorithmic building block and how it contributes to the outcome of the systems. Thus, explainability is closely related to transparency, as outcomes and sub-processes leading to outcomes should aim to be understandable and traceable, appropriate to the context. AI actors should commit to ensuring that the algorithms developed are explainable. In the case of AI applications that impact the end user in a way that is not temporary, easily reversible or otherwise low risk, it should be ensured that a meaningful explanation is provided with any decision that resulted in the action taken, in order for the outcome to be considered transparent.

41. Transparency and explainability relate closely to adequate responsibility and accountability measures, as well as to the trustworthiness of AI systems.

Published by The United Nations Educational, Scientific and Cultural Organization (UNESCO) in The Recommendation on the Ethics of Artificial Intelligence, Nov 24, 2021

4 Foster responsibility and accountability

Humans require clear, transparent specification of the tasks that systems can perform and the conditions under which they can achieve the desired level of performance; this helps to ensure that health care providers can use an AI technology responsibly. Although AI technologies perform specific tasks, it is the responsibility of human stakeholders to ensure that they can perform those tasks and that they are used under appropriate conditions. Responsibility can be assured by application of "human warranty", which implies evaluation by patients and clinicians in the development and deployment of AI technologies. In human warranty, regulatory principles are applied upstream and downstream of the algorithm by establishing points of human supervision. The critical points of supervision are identified by discussions among professionals, patients and designers. The goal is to ensure that the algorithm remains on a machine learning development path that is medically effective, can be interrogated and is ethically responsible; it involves active partnership with patients and the public, such as meaningful public consultation and debate (101). Ultimately, such work should be validated by regulatory agencies or other supervisory authorities.

When something does go wrong in application of an AI technology, there should be accountability. Appropriate mechanisms should be adopted to ensure questioning by and redress for individuals and groups adversely affected by algorithmically informed decisions. This should include access to prompt, effective remedies and redress from governments and companies that deploy AI technologies for health care. Redress should include compensation, rehabilitation, restitution, sanctions where necessary and a guarantee of non-repetition.

The use of AI technologies in medicine requires attribution of responsibility within complex systems in which responsibility is distributed among numerous agents. When medical decisions by AI technologies harm individuals, responsibility and accountability processes should clearly identify the relative roles of manufacturers and clinical users in the harm. This is an evolving challenge and remains unsettled in the laws of most countries. Institutions have not only legal liability but also a duty to assume responsibility for decisions made by the algorithms they use, even if it is not feasible to explain in detail how the algorithms produce their results. To avoid diffusion of responsibility, in which "everybody's problem becomes nobody's responsibility", a faultless responsibility model ("collective responsibility"), in which all the agents involved in the development and deployment of an AI technology are held responsible, can encourage all actors to act with integrity and minimize harm. In such a model, the actual intentions of each agent (or actor) or their ability to control an outcome are not considered.

Published by World Health Organization (WHO) in Key ethical principles for use of artificial intelligence for health, Jun 28, 2021