Fairness and inclusion
AI systems should make the same recommendations for everyone with similar characteristics or qualifications. Employers should be required to test workplace AI on a regular basis to ensure that the system is fit for purpose and is not harmfully influenced by bias of any kind — gender, race, sexual orientation, age, religion, income, family status and so on. AI developers should adopt inclusive design practices to anticipate any potential deployment issues that could unintentionally exclude people. Workplace AI should be tested to ensure that it does not discriminate against vulnerable individuals or communities. Governments should review the impact of workplace, governmental and social AI on the opportunities and rights of poor people, Indigenous peoples and vulnerable members of society. In particular, the combined impact of overlapping AI systems on profiling and marginalization should be identified and countered.
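The regular bias testing described above can be sketched as a simple selection-rate audit. The sketch below is a minimal illustration, not a prescribed method; the group labels, audit data, and the four-fifths threshold are hypothetical assumptions chosen for the example:

```python
from collections import defaultdict

def selection_rates(decisions):
    """Compute the rate of positive recommendations per group.

    `decisions` is a list of (group_label, was_recommended) pairs.
    """
    totals = defaultdict(int)
    positives = defaultdict(int)
    for group, recommended in decisions:
        totals[group] += 1
        if recommended:
            positives[group] += 1
    return {g: positives[g] / totals[g] for g in totals}

def disparate_impact_flags(rates, threshold=0.8):
    """Flag groups whose selection rate falls below `threshold` times
    the highest group's rate (the common 'four-fifths' heuristic)."""
    best = max(rates.values())
    return {g: r < threshold * best for g, r in rates.items()}

# Hypothetical audit data: (group, recommended)
audit = [("A", True), ("A", True), ("A", False),
         ("B", True), ("B", False), ("B", False)]
rates = selection_rates(audit)
flags = disparate_impact_flags(rates)
```

A flagged group would then trigger a closer review of the system rather than an automatic conclusion of bias, since unequal rates can have legitimate explanations.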
(1) Human-centric
Utilization of AI should not infringe upon fundamental human rights that are guaranteed by the Constitution and international norms.
AI should be developed, utilized, and implemented in society to expand people's abilities and to support the diverse pursuits of happiness of diverse people. In an AI-utilizing society, it is desirable to implement appropriate mechanisms for literacy education and the promotion of proper use, so that people neither over-depend on AI nor have their decisions improperly manipulated through the exploitation of AI.
AI can expand human abilities and creativity not only by replacing part of human tasks but also by assisting humans as an advanced instrument.
When using AI, people must judge and decide for themselves how to use AI. Appropriate stakeholders involved in the development, provision, and utilization of AI should be responsible for the result of AI utilization, depending on the nature of the issue.
In order to avoid creating a digital divide and to allow all people to reap the benefits of AI regardless of their digital expertise, each stakeholder should take the user-friendliness of the system into consideration in the process of AI deployment.
2. The Principle of Non-maleficence: “Do No Harm”
Published by: The European Commission’s High-Level Expert Group on Artificial Intelligence in Draft Ethics Guidelines for Trustworthy AI
AI systems should not harm human beings. By design, AI systems should protect the dignity, integrity, liberty, privacy, safety, and security of human beings in society and at work. AI systems should not threaten the democratic process, freedom of expression, freedom of identity, or the possibility to refuse AI services. At the very least, AI systems should not be designed in a way that enhances existing harms or creates new harms for individuals. Harms can be physical, psychological, financial or social. AI-specific harms may stem from the treatment of data on individuals (i.e. how it is collected, stored, used, etc.). To avoid harm, data collected and used for training AI algorithms must be handled in a way that avoids discrimination, manipulation, or negative profiling. Of equal importance, AI systems should be developed and implemented in a way that protects societies from ideological polarization and algorithmic determinism.
Vulnerable demographics (e.g. children, minorities, disabled persons, elderly persons, or immigrants) should receive greater attention in the prevention of harm, given their unique status in society. Inclusion and diversity are key ingredients for the prevention of harm, helping to ensure the suitability of these systems across cultures, genders, ages, life choices, etc. Therefore, not only should AI be designed with the impact on various vulnerable demographics in mind, but the above-mentioned demographics should also have a place in the design process (whether through testing, validation, or other means).
Avoiding harm may also be viewed in terms of harm to the environment and animals; thus the development of environmentally friendly AI may be considered part of the principle of avoiding harm. The Earth’s resources can be valued in and of themselves or as resources for humans to consume. In either case, it is necessary to ensure that the research, development, and use of AI are carried out with an eye towards environmental awareness.
7. We engage with the wider societal challenges of AI
While we have control, to a large extent, over the preceding areas, there are numerous emerging challenges that require a much broader discourse across industries, disciplines, borders, and cultural, philosophical, and religious traditions. These include, but are not limited to, questions concerning:
Economic impact, such as how industry and society can collaborate to prepare students and workers for an AI economy and how society may need to adapt means of economic redistribution, social safety, and economic development.
Social impact, such as the value and meaning of work for people and the potential role of AI software as social companions and caretakers.
Normative questions around how AI should confront ethical dilemmas and what applications of AI, specifically with regards to security and safety, should be considered permissible.
We look forward to making SAP one of many active voices in these debates by engaging with our AI Ethics Advisory Panel and a wide range of partnerships and initiatives.
1. Fair AI
We seek to ensure that the applications of AI technology lead to fair results. This means that they should not lead to discriminatory impacts on people in relation to race, ethnic origin, religion, gender, sexual orientation, disability or any other personal condition. We will apply technology to minimize the likelihood that the training data sets we use create or reinforce unfair bias or discrimination.
When optimizing a machine learning algorithm for accuracy in terms of false positives and negatives, we will consider the impact of the algorithm in the specific domain.
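One way to make the trade-off above concrete is to compare false-positive and false-negative rates across groups, since an overall accuracy figure can hide errors concentrated in one group. This is a minimal sketch with hypothetical group labels and evaluation data, not the method the text mandates:

```python
def error_rates(records):
    """Compute per-group false-positive and false-negative rates.

    `records` is a list of (group, y_true, y_pred) with binary labels.
    """
    stats = {}
    for group, y_true, y_pred in records:
        s = stats.setdefault(group, {"fp": 0, "fn": 0, "neg": 0, "pos": 0})
        if y_true:
            s["pos"] += 1
            if not y_pred:
                s["fn"] += 1  # missed a true positive
        else:
            s["neg"] += 1
            if y_pred:
                s["fp"] += 1  # wrongly flagged a true negative
    return {
        g: {"fpr": s["fp"] / s["neg"] if s["neg"] else 0.0,
            "fnr": s["fn"] / s["pos"] if s["pos"] else 0.0}
        for g, s in stats.items()
    }

# Hypothetical evaluation data: (group, true_label, predicted_label)
records = [
    ("A", 1, 1), ("A", 0, 0), ("A", 0, 1), ("A", 1, 1),
    ("B", 1, 0), ("B", 0, 0), ("B", 1, 1), ("B", 0, 0),
]
rates = error_rates(records)
```

Which gap matters more depends on the domain: in hiring, false negatives deny opportunities; in fraud detection, false positives burden innocent users.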