Social and Economic Impacts

Principle: Stakeholders should shape an environment where AI provides socioeconomic opportunities for all.

Recommendations: All stakeholders should engage in an ongoing dialogue to determine the strategies needed to seize artificial intelligence's vast socioeconomic opportunities for all, while mitigating its potential negative impacts. Such a dialogue could address related issues such as educational reform, universal income, and a review of social services.
Published by Internet Society in Guiding Principles and Recommendations, Apr 18, 2017

Related Principles

Human, social and environmental wellbeing

Throughout their lifecycle, AI systems should benefit individuals, society and the environment. This principle aims to indicate clearly from the outset that AI systems should be used for beneficial outcomes for individuals, society and the environment. AI system objectives should be clearly identified and justified. AI systems that help address areas of global concern, such as the United Nations' Sustainable Development Goals, should be encouraged. Ideally, AI systems should be used to benefit all human beings, including future generations. AI systems designed for legitimate internal business purposes, like increasing efficiency, can have broader impacts on individual, social and environmental wellbeing. Those impacts, both positive and negative, should be accounted for throughout the AI system's lifecycle, including impacts outside the organisation.

Published by Department of Industry, Innovation and Science, Australian Government in AI Ethics Principles, Nov 7, 2019

(d) Justice, equity, and solidarity

AI should contribute to global justice and equal access to the benefits and advantages that AI, robotics and ‘autonomous’ systems can bring. Discriminatory biases in data sets used to train and run AI systems should be prevented or detected, reported and neutralised at the earliest stage possible. We need a concerted global effort towards equal access to ‘autonomous’ technologies and fair distribution of benefits and equal opportunities across and within societies. This includes formulating new models of fair distribution and benefit sharing apt to respond to the economic transformations caused by automation, digitalisation and AI, ensuring accessibility to core AI technologies, and facilitating training in STEM and digital disciplines, particularly with respect to disadvantaged regions and societal groups. Vigilance is also required with respect to the downside of the detailed, massive data on individuals that accumulates and puts pressure on the idea of solidarity, e.g. systems of mutual assistance such as social insurance and healthcare. These processes may undermine social cohesion and give rise to radical individualism.

Published by European Group on Ethics in Science and New Technologies, European Commission in Ethical principles and democratic prerequisites, Mar 9, 2018

1. The Principle of Beneficence: “Do Good”

AI systems should be designed and developed to improve individual and collective wellbeing. AI systems can do so by generating prosperity, creating value, and maximizing wealth and sustainability. At the same time, beneficent AI systems can contribute to wellbeing by seeking the achievement of a fair, inclusive and peaceful society, by helping to increase citizens' mental autonomy, and by supporting the equal distribution of economic, social and political opportunity. AI systems can be a force for collective good when deployed towards objectives like: the protection of democratic process and rule of law; the provision of common goods and services at low cost and high quality; data literacy and representativeness; damage mitigation and trust optimization towards users; and achievement of the UN Sustainable Development Goals or sustainability understood more broadly, according to the pillars of economic development, social equity, and environmental protection. In other words, AI can be a tool to bring more good into the world and/or to help with the world's greatest challenges.

Published by The European Commission’s High-Level Expert Group on Artificial Intelligence in Draft Ethics Guidelines for Trustworthy AI, Dec 18, 2018

2. Continued attention and vigilance, as well as accountability, for the potential effects and consequences of artificial intelligence systems should be ensured, in particular by:

a. promoting accountability of all relevant stakeholders to individuals, supervisory authorities and other third parties as appropriate, including through the realization of audits, continuous monitoring and impact assessment of artificial intelligence systems, and periodic review of oversight mechanisms;

b. fostering collective and joint responsibility, involving the whole chain of actors and stakeholders, for example through the development of collaborative standards and the sharing of best practices;

c. investing in awareness raising, education, research and training in order to ensure a good level of information on and understanding of artificial intelligence and its potential effects in society; and

d. establishing demonstrable governance processes for all relevant actors, such as relying on trusted third parties or setting up independent ethics committees.

Published by 40th International Conference of Data Protection and Privacy Commissioners (ICDPPC) in Declaration on Ethics and Data Protection in Artificial Intelligence, Oct 23, 2018

7. We engage with the wider societal challenges of AI

While we have control, to a large extent, over the preceding areas, there are numerous emerging challenges that require a much broader discourse across industries, disciplines, borders, and cultural, philosophical, and religious traditions. These include, but are not limited to, questions concerning:

Economic impact, such as how industry and society can collaborate to prepare students and workers for an AI economy, and how society may need to adapt means of economic redistribution, social safety, and economic development.

Social impact, such as the value and meaning of work for people and the potential role of AI software as social companions and caretakers.

Normative questions around how AI should confront ethical dilemmas and which applications of AI, specifically with regard to security and safety, should be considered permissible.

We look forward to making SAP one of many active voices in these debates by engaging with our AI Ethics Advisory Panel and a wide range of partnerships and initiatives.

Published by SAP in SAP's Guiding Principles for Artificial Intelligence, Sep 18, 2018