Accountability:

We take ownership of the outcomes of our AI-assisted tools. We will have processes and resources dedicated to receiving and responding to concerns about our AI and taking corrective action as appropriate. Accountability also entails testing for and anticipating potential harms, taking preemptive steps to mitigate such harms, and maintaining systems to respond to unanticipated harmful outcomes.
Published by Adobe in AI Ethics Principles, Feb 17, 2021

Related Principles

Responsibility:

We will approach designing and maintaining our AI technology with thoughtful evaluation and careful consideration of the impact and consequences of its deployment. We will ensure that we design for inclusiveness and assess the impact of potentially unfair, discriminatory, or inaccurate results, which might perpetuate harmful biases and stereotypes. We understand that special care must be taken to address bias if a product or service will have a significant impact on an individual's life, such as with employment, housing, credit, and health.

Published by Adobe in AI Ethics Principles, Feb 17, 2021

9. Safety

Safety is about ensuring that the system will indeed do what it is supposed to do, without harming users (human physical integrity), resources or the environment. It includes minimizing unintended consequences and errors in the operation of the system. Processes to clarify and assess potential risks associated with the use of AI products and services should be put in place. Moreover, formal mechanisms are needed to measure and guide the adaptability of AI systems.

Published by The European Commission’s High-Level Expert Group on Artificial Intelligence in Draft Ethics Guidelines for Trustworthy AI, Dec 18, 2018

1.1 Responsible Design and Deployment

We recognize our responsibility to integrate principles into the design of AI technologies, beyond compliance with existing laws. While the potential benefits to people and society are amazing, AI researchers, subject matter experts, and stakeholders should and do spend a great deal of time working to ensure the responsible design and deployment of AI systems. Highly autonomous AI systems must be designed consistent with international conventions that preserve human dignity, rights, and freedoms. As an industry, it is our responsibility to recognize potentials for use and misuse, the implications of such actions, and the responsibility and opportunity to take steps to avoid the reasonably predictable misuse of this technology by committing to ethics by design.

Published by Information Technology Industry Council (ITI) in AI Policy Principles, Oct 24, 2017

Fourth principle: Bias and Harm Mitigation

Those responsible for AI-enabled systems must proactively mitigate the risk of unexpected or unintended biases or harms resulting from these systems, whether through their original rollout, or as they learn, change or are redeployed.

AI-enabled systems offer significant benefits for Defence. However, the use of AI-enabled systems may also cause harms (beyond those already accepted under existing ethical and legal frameworks) to those using them or affected by their deployment. These may range from harms caused by a lack of suitable privacy for personal data, to unintended military harms due to system unpredictability. Such harms may change over time as systems learn and evolve, or as they are deployed beyond their original setting. Of particular concern is the risk of discriminatory outcomes resulting from algorithmic bias or skewed data sets. Defence must ensure that its AI-enabled systems do not result in unfair bias or discrimination, in line with the MOD’s ongoing strategies for diversity and inclusion.

A principle of bias and harm mitigation requires the assessment and, wherever possible, the mitigation of these biases or harms. This includes addressing bias in algorithmic decision making, carefully curating and managing datasets, setting safeguards and performance thresholds throughout the system lifecycle, managing environmental effects, and applying strict development criteria for new systems, or for existing systems being applied to a new context.

Published by The Ministry of Defence (MOD), United Kingdom in Ethical Principles for AI in Defence, Jun 15, 2022

6 Promote artificial intelligence that is responsive and sustainable

Responsiveness requires that designers, developers and users continuously, systematically and transparently examine an AI technology to determine whether it is responding adequately, appropriately and according to communicated expectations and requirements in the context in which it is used. Thus, identification of a health need requires that institutions and governments respond to that need and its context with appropriate technologies, with the aim of achieving the public interest in health protection and promotion. When an AI technology is ineffective or engenders dissatisfaction, the duty to be responsive requires an institutional process to resolve the problem, which may include terminating use of the technology.

Responsiveness also requires that AI technologies be consistent with wider efforts to promote health systems and environmental and workplace sustainability. AI technologies should be introduced only if they can be fully integrated and sustained in the health care system. Too often, especially in under-resourced health systems, new technologies are not used, or are not repaired or updated, thereby wasting scarce resources that could have been invested in proven interventions. Furthermore, AI systems should be designed to minimize their ecological footprints and increase energy efficiency, so that use of AI is consistent with society’s efforts to reduce the impact of human beings on the earth’s environment, ecosystems and climate.

Sustainability also requires governments and companies to address anticipated disruptions to the workplace, including training of health care workers to adapt to the use of AI, and potential job losses due to the use of automated systems for routine health care functions and administrative tasks.

Published by World Health Organization (WHO) in Key ethical principles for use of artificial intelligence for health, Jun 28, 2021