Be Accountable.

Consider the potential negative consequences of the AI tools we build. Anticipate what might cause direct or indirect harm, and engineer to avoid or minimize these problems.
Principle: Unity’s Guiding Principles for Ethical AI, Nov 28, 2018

Published by Unity Technologies

Related Principles

Responsibility:

We will approach designing and maintaining our AI technology with thoughtful evaluation and careful consideration of the impact and consequences of its deployment. We will ensure that we design for inclusiveness and assess the impact of potentially unfair, discriminatory, or inaccurate results, which might perpetuate harmful biases and stereotypes. We understand that special care must be taken to address bias if a product or service will have a significant impact on an individual's life, such as with employment, housing, credit, and health.

Published by Adobe in AI Ethics Principles, Feb 17, 2021

· ④ Prevention of Harm

AI should not be used for the purpose of inflicting direct or indirect harm on humans. Efforts should be made to develop measures to handle risks and negative consequences associated with AI.

Published by The Ministry of Science and ICT (MSIT) and the Korea Information Society Development Institute (KISDI) in National AI Ethical Guidelines, Dec 23, 2020

4. Risk Assessment and Management

Regulatory and non-regulatory approaches to AI should be based on a consistent application of risk assessment and risk management across various agencies and various technologies. It is not necessary to mitigate every foreseeable risk; in fact, a foundational principle of regulatory policy is that all activities involve tradeoffs. Instead, a risk-based approach should be used to determine which risks are acceptable and which risks present the possibility of unacceptable harm, or harm that has expected costs greater than expected benefits. Agencies should be transparent about their evaluations of risk and re-evaluate their assumptions and conclusions at appropriate intervals so as to foster accountability. Correspondingly, the magnitude and nature of the consequences should an AI tool fail, or for that matter succeed, can help inform the level and type of regulatory effort that is appropriate to identify and mitigate risks. Specifically, agencies should follow the direction in Executive Order 12866, “Regulatory Planning and Review,” to consider the degree and nature of the risks posed by various activities within their jurisdiction. Such an approach will, where appropriate, avoid hazard-based and unnecessarily precautionary approaches to regulation that could unjustifiably inhibit innovation.

Published by The White House Office of Science and Technology Policy (OSTP), United States in Principles for the Stewardship of AI Applications, Nov 17, 2020
