Linking Artificial Intelligence Principles
The development of artificial intelligence should ensure fairness and justice, avoid bias or discrimination against specific groups or individuals, and avoid placing disadvantaged people in an even more unfavorable position.
Continually test and validate algorithms, so that they do not discriminate against users based on race, gender, nationality, age, religious beliefs, etc.
This may include, but is not limited to: making the system as fair as possible, reducing possible discrimination and biases, improving its transparency, explainability and predictability, and making the system more traceable, auditable and accountable.
Workplace AI should be tested to ensure that it does not discriminate against vulnerable individuals or communities.
Under the AI design concept, all people must be treated fairly without unjustified discrimination on the grounds of diverse backgrounds such as race, sex, nationality, age, political beliefs, religion, etc.
To allow individuals to trust the data processing, it must be ensured that they have full control over their own data, and that data concerning them will not be used to harm or discriminate against them.
V. Diversity, non-discrimination and fairness
The continuation of such biases could lead to (in)direct discrimination.
Discriminatory biases in data sets used to train and run AI systems should be prevented or detected, reported and neutralised at the earliest stage possible.
These include freedom, dignity and autonomy, privacy and data protection, non-discrimination and equality, diversity, fairness, social justice, and internationally recognized labor rights.
To avoid harm, data must be collected and used for the training of AI algorithms in a way that avoids discrimination, manipulation, or negative profiling.
Developers and implementers need to ensure that individuals and minority groups maintain freedom from bias, stigmatisation and discrimination.
Additionally, the positives and negatives resulting from AI should be evenly distributed, avoiding placing vulnerable demographics in a position of greater vulnerability and striving for equal opportunity in access to education, goods, services and technology among human beings, without discrimination.
In a case of discrimination, however, an explanation and apology might be at least as important.
Discrimination concerns the variability of AI results between individuals or groups of people, based on the exploitation, whether intentional or unintentional, of differences in their characteristics (such as ethnicity, gender, sexual orientation or age), which may negatively impact such individuals or groups.
Direct or indirect discrimination through the use of AI can serve to exploit prejudice and marginalise certain groups.
Those in control of algorithms may intentionally try to achieve unfair, discriminatory, or biased outcomes in order to exclude certain groups of persons.
Discrimination in an AI context can occur unintentionally due to, for example, problems with data such as bias, incompleteness and bad governance models.
The lack of reproducibility can lead to unintended discrimination in AI decisions.
Poor governance, by which it becomes possible to intentionally or unintentionally tamper with the data, or grant access to the algorithms to unauthorised entities, can also result in discrimination, erroneous decisions, or even physical harm.
Public policy should work to identify and mitigate discrimination caused by the use of AI and encourage designing in protections against these harms.
• Transparent decisions: Governments should determine which AI implementations require algorithm explainability to mitigate discrimination and harm to individuals.
Members of the JSAI will acknowledge that the use of AI may bring about additional inequality and discrimination in society which did not exist before, and will not be biased when developing AI.
It is also encouraged that, to the extent possible given the characteristics of the technologies adopted, developers take the necessary measures to avoid unfair discrimination resulting from prejudice included in the learning data of AI systems.
B) Attention to unfair discrimination by algorithms
What types of discrimination could AI create or exacerbate?
The development of AI should promote justice and seek to eliminate all types of discrimination, notably those linked to gender, age, mental or physical abilities, sexual orientation, ethnic or social origins and religious beliefs.
1) AIS must be designed and trained so as not to create, reinforce, or reproduce discrimination based on — among other things — social, sexual, ethnic, cultural, or religious differences.
AI must guard against bias, ensuring proper and representative research so that the wrong heuristics cannot be used to discriminate.
These include freedom, dignity and autonomy, privacy and data protection, non-discrimination and equality, diversity, fairness, social justice, and internationally recognised labour rights.
This is particularly the case when there is a risk of causing discrimination or of unjustly impacting underrepresented groups.
In its utilization of AI, Sony will respect diversity in areas such as ethnicity, culture, region, religion and beliefs, as well as the human rights of its customers and other stakeholders, without any discrimination, while striving to contribute to the resolution of social problems through its activities in its own and related industries.
This means that they should not lead to discriminatory impacts on people in relation to race, ethnic origin, religion, gender, sexual orientation, disability or any other personal condition.
We will apply technology to minimize the likelihood that the training data sets we use create or reinforce unfair bias or discrimination.
Formulate guidelines and principles for addressing bias and discrimination; potential mechanisms include algorithmic transparency, quality review, impact assessment, algorithmic audit, supervision and review, ethics boards, etc.
Respect individuals' rights, such as data privacy, freedom of expression and information, non-discrimination, etc.
Institutions must ensure that AI systems do not reflect unfair bias or make impermissible discriminatory decisions.
The Fairness Obligation recognizes that all automated systems make decisions that reflect bias and discrimination, but such decisions should not be normatively unfair.
AI algorithms must be traceable and transparent, and there should be no algorithmic discrimination;
In particular, they should routinely perform tests to assess and determine whether the model generates discriminatory harm.
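One way such routine testing is often operationalised is a group selection-rate comparison. The sketch below is purely illustrative, not drawn from any of the principle documents above: the data, group labels, and the 0.8 review threshold (the informal "four-fifths rule") are hypothetical assumptions.

```python
# Illustrative sketch of a demographic-parity check on model outputs.
# All data and thresholds here are hypothetical examples.

def selection_rates(predictions, groups):
    """Positive-prediction rate per group (prediction 1 = positive outcome)."""
    counts = {}
    for pred, group in zip(predictions, groups):
        total, positives = counts.get(group, (0, 0))
        counts[group] = (total + 1, positives + (1 if pred == 1 else 0))
    return {g: positives / total for g, (total, positives) in counts.items()}

def disparate_impact_ratio(rates):
    """Ratio of the lowest to the highest group selection rate (1.0 = parity)."""
    return min(rates.values()) / max(rates.values())

# Hypothetical model decisions for two groups of five people each.
predictions = [1, 1, 1, 1, 0, 1, 0, 0, 1, 0]
groups = ["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"]

rates = selection_rates(predictions, groups)   # {"a": 0.8, "b": 0.4}
ratio = disparate_impact_ratio(rates)          # 0.5
print(rates, ratio)
```

A low ratio (e.g. below 0.8) does not by itself prove discrimination, but it is a common trigger for the kind of review, audit, and explanation obligations described in the statements above.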