1. Fairness

The company will strive to apply the values of equality and diversity to AI systems throughout their entire lifecycle. The company will strive not to reinforce or propagate negative or unfair bias. The company will strive to provide easy access for all users.
Principle: Principles for AI Ethics, Apr 24, 2019 (unconfirmed)

Published by Samsung

Related Principles

Fairness

Throughout their lifecycle, AI systems should be inclusive and accessible, and should not involve or result in unfair discrimination against individuals, communities or groups. This principle aims to ensure that AI systems are fair and that they enable inclusion throughout their entire lifecycle. AI systems should be user-centric and designed in a way that allows all people interacting with them to access the related products or services. This includes both appropriate consultation with stakeholders who may be affected by the AI system throughout its lifecycle, and ensuring people receive equitable access and treatment. This is particularly important given concerns about the potential for AI to perpetuate societal injustices and have a disparate impact on vulnerable and underrepresented groups including, but not limited to, groups relating to age, disability, race, sex, intersex status, gender identity and sexual orientation. Measures should be taken to ensure that AI-produced decisions comply with anti-discrimination laws.

Published by Department of Industry, Innovation and Science, Australian Government in AI Ethics Principles, Nov 7, 2019

7. DIVERSITY INCLUSION PRINCIPLE

The development and use of AIS must be compatible with maintaining social and cultural diversity and must not restrict the scope of lifestyle choices or personal experiences.
1) AIS development and use must not lead to the homogenization of society through the standardization of behaviours and opinions.
2) From the moment algorithms are conceived, AIS development and deployment must take into consideration the multitude of expressions of social and cultural diversity present in society.
3) AI development environments, whether in research or industry, must be inclusive and reflect the diversity of the individuals and groups of society.
4) AIS must avoid using acquired data to lock individuals into a user profile, fix their personal identity, or confine them to a filtering bubble, which would restrict and confine their possibilities for personal development, especially in fields such as education, justice, or business.
5) AIS must not be developed or used with the aim of limiting the free expression of ideas or the opportunity to hear diverse opinions, both of which are essential conditions of a democratic society.
6) For each service category, the AIS offering must be diversified to prevent de facto monopolies from forming and undermining individual freedoms.

Published by University of Montreal in The Montreal Declaration for a Responsible Development of Artificial Intelligence, Dec 4, 2018

3. A.I. must maximize efficiencies without destroying the dignity of people

It should preserve cultural commitments, empowering diversity. We need broader, deeper, and more diverse engagement of populations in the design of these systems. The tech industry should not dictate the values and virtues of this future.

Published by Satya Nadella, CEO of Microsoft in 10 AI rules, Jun 28, 2016

2. Good and fair

Data-enhanced technologies should be designed and operated, throughout their life cycle, in a way that respects the rule of law, human rights, civil liberties, and democratic values. These include dignity, autonomy, privacy, data protection, non-discrimination, equality, and fairness. Why it matters: Algorithmic and machine learning systems evolve through their life cycle, so it is important for the systems and technologies in place to be good and fair at the outset, in their data inputs, and throughout their life cycle of use. The definitions of good and fair are intentionally broad to allow designers and developers to consider all of the users both directly and indirectly impacted by the deployment of an automated decision-making system.

Published by Government of Ontario, Canada in Principles for Ethical Use of AI [Beta], Sept 14, 2023

3. Accountability

The company will strive to apply the principles of social and ethical responsibility to AI systems. AI systems will be adequately protected and have security measures in place to prevent data breaches and cyber attacks. The company will strive to benefit society and promote corporate citizenship through AI systems.

Published by Samsung in Principles for AI Ethics, Apr 24, 2019 (unconfirmed)