Linking Artificial Intelligence Principles
The development of artificial intelligence should ensure fairness and justice, avoid bias or discrimination against specific groups or individuals, and avoid placing disadvantaged people in an even more unfavorable position.
Personal data varies in sensitivity: the unjust use of some types would be likely to greatly affect the rights and benefits of individuals (typically data on thought and creed, medical history, criminal record, etc.).
In an "AI-Ready society", when AI is used, fair and transparent decision-making and accountability for the results should be appropriately ensured, and trust in the technology should be secured, so that people using AI are not discriminated against on the grounds of their background or treated unjustly in a manner incompatible with human dignity.
Finally, when unjust adverse impact occurs, accessible mechanisms should be foreseen that ensure adequate redress.
(d) justice, equity, and solidarity
AI should contribute to global justice and equal access to the benefits and advantages that AI, robotics and ‘autonomous’ systems can bring.
Rule of law, access to justice and the right to redress and a fair trial provide the necessary framework for ensuring the observance of human rights standards and potential AI specific regulations.
Ensure that algorithmic decisions do not create discriminatory or unjust impacts when comparing across different demographics.
These include freedom, dignity and autonomy, privacy and data protection, non discrimination and equality, diversity, fairness, social justice, and internationally recognized labor rights.
We will seek to avoid unjust impacts on people, particularly those related to sensitive characteristics such as race, ethnicity, gender, nationality, income, sexual orientation, ability, and political or religious belief.
The Principle of Justice: "Be Fair"
For the purposes of these Guidelines, the principle of justice requires that the development, use, and regulation of AI systems be fair.
Justice also means that AI systems must provide users with effective redress if harm occurs, or effective remedy if data practices are no longer aligned with human beings' individual or collective preferences.
Lastly, the principle of justice also requires that those developing or implementing AI be held to high standards of accountability.
Fairness and justice, core issues in stakeholder theory, remain paramount for ethical businesses when dealing with AI.
The development of AI should promote justice and seek to eliminate all types of discrimination, notably those linked to gender, age, mental and physical abilities, sexual orientation, ethnic or social origins, and religious beliefs.
Artificial intelligence constitutes a major form of scientific and technological progress, which can generate considerable social benefits by improving living conditions and health, facilitating justice, creating wealth, bolstering public safety, and mitigating the impact of human activities on the environment and the climate.
4) AIS must avoid using acquired data to lock individuals into a user profile, fix their personal identity, or confine them to a filtering bubble, which would restrict and confine their possibilities for personal development — especially in fields such as education, justice, or business.
Fairness and justice
The development of AI should promote fairness and justice, protect the rights and interests of all stakeholders, and promote equal opportunities.
This is particularly the case when there is a risk of causing discrimination or of unjustly impacting underrepresented groups.
Algorithmic justice: the development of artificial intelligence should avoid harm to the public caused by algorithm design, and must make the motives behind algorithms clear and interpretable, in order to overcome unfair influences arising from algorithm design and data collection.