(Preamble)

You can use our 8 principles when designing, developing, integrating or using artificial intelligence (AI) systems to:

- achieve better outcomes
- reduce the risk of negative impact
- practice the highest standards of ethical business and good governance

The principles are voluntary. They are aspirational and intended to complement, not substitute for, existing AI-related regulations. Read how and when you can apply them.
Principle: AI Ethics Principles, Nov 7, 2019

Published by Department of Industry, Innovation and Science, Australian Government

Related Principles

4. Human centricity

AI systems should respect human-centred values and pursue benefits for human society, including human beings' well-being, nutrition, happiness, etc. It is key to ensure that people benefit from AI design, development, and deployment while being protected from potential harms. AI systems should be used to promote human well-being and ensure benefit for all. Especially in instances where AI systems are used to make decisions about humans or aid them, it is imperative that these systems are designed with human benefit in mind and do not take advantage of vulnerable individuals.

Human centricity should be incorporated throughout the AI system lifecycle, from design through development and deployment. Actions must be taken to understand the way users interact with the AI system, how it is perceived, and whether any negative outcomes arise from its outputs. One example of how deployers can do this is to test the AI system with a small group of internal users from varied backgrounds and demographics and incorporate their feedback into the AI system.

AI systems should not be used for malicious purposes or to sway or deceive users into making decisions that are not beneficial to them or society. In this regard, developers and deployers (if developing or designing in-house) should also ensure that dark patterns are avoided. Dark patterns refer to the use of certain design techniques to manipulate users and trick them into making decisions that they would otherwise not have made. An example of a dark pattern is the use of default options that do not consider the end user's interests, such as defaults for data sharing and tracking of the user's other online activities.

As an extension of human centricity as a principle, it is also important to ensure that the adoption of AI systems and their deployment at scale do not unduly disrupt labour and job prospects without proper assessment. Deployers are encouraged to take up impact assessments to ensure a systematic, stakeholder-based review and to consider how jobs can be redesigned to incorporate the use of AI. The Personal Data Protection Commission of Singapore's (PDPC) Guide on Job Redesign in the Age of AI provides useful guidance to assist organisations in considering the impact of AI on their employees, and how work tasks can be redesigned to help employees embrace AI and move towards higher-value tasks.

Published by ASEAN in ASEAN Guide on AI Governance and Ethics, 2024

· 2) Research Funding

Investments in AI should be accompanied by funding for research on ensuring its beneficial use, including thorny questions in computer science, economics, law, ethics, and social studies, such as:

- How can we make future AI systems highly robust, so that they do what we want without malfunctioning or getting hacked?
- How can we grow our prosperity through automation while maintaining people's resources and purpose?
- How can we update our legal systems to be more fair and efficient, to keep pace with AI, and to manage the risks associated with AI?
- What set of values should AI be aligned with, and what legal and ethical status should it have?

Published by Future of Life Institute (FLI), Beneficial AI 2017 in Asilomar AI Principles, Jan 3-8, 2017

· 2.2 Flexible Regulatory Approach

We encourage governments to evaluate existing policy tools and use caution before adopting new laws, regulations, or taxes that may inadvertently or unnecessarily impede the responsible development and use of AI. As applications of AI technologies vary widely, overregulating can inadvertently reduce the number of technologies created and offered in the marketplace, particularly by startups and smaller businesses. We encourage policymakers to recognize the importance of sector-specific approaches as needed; one regulatory approach will not fit all AI applications. We stand ready to work with policymakers and regulators to address legitimate concerns where they occur.

Published by Information Technology Industry Council (ITI) in AI Policy Principles, Oct 24, 2017

1. We are driven by our values

We recognize that, as with any technology, there is scope for AI to be used in ways that are not aligned with these guiding principles and the operational guidelines we are developing. In developing AI software we will remain true to our Human Rights Commitment Statement, the UN Guiding Principles on Business and Human Rights, applicable laws, and widely accepted international norms. Wherever necessary, our AI Ethics Steering Committee will advise our teams on how specific use cases are affected by these guiding principles. Where there is a conflict with our principles, we will endeavor to prevent the inappropriate use of our technology.

Published by SAP in SAP's Guiding Principles for Artificial Intelligence, Sep 18, 2018

(Preamble)

New developments in Artificial Intelligence are transforming the world, from science and industry to government administration and finance. The rise of AI decision-making also implicates fundamental rights of fairness, accountability, and transparency. Modern data analysis produces significant outcomes that have real-life consequences for people in employment, housing, credit, commerce, and criminal sentencing. Many of these techniques are entirely opaque, leaving individuals unaware whether the decisions were accurate, fair, or even about them. We propose these Universal Guidelines to inform and improve the design and use of AI. The Guidelines are intended to maximize the benefits of AI, to minimize the risk, and to ensure the protection of human rights. These Guidelines should be incorporated into ethical standards, adopted in national law and international agreements, and built into the design of systems. We state clearly that the primary responsibility for AI systems must reside with those institutions that fund, develop, and deploy these systems.

Published by The Public Voice coalition, established by Electronic Privacy Information Center (EPIC) in Universal Guidelines for Artificial Intelligence, Oct 23, 2018