1. Legitimacy

Technology should be a positive force. We are committed to being a responsible corporate citizen, respecting applicable laws and generally accepted ethical principles for the well-being of society.
Principle: Artificial Intelligence Application Criteria, Jul 8, 2019

Published by Megvii

Related Principles

1. Principle 1 — Human Rights

Issue: How can we ensure that A/IS do not infringe upon human rights? [Candidate Recommendations] To best honor human rights, society must assure the safety and security of A/IS so that they are designed and operated in a way that benefits humans:
1. Governance frameworks, including standards and regulatory bodies, should be established to oversee processes assuring that the use of A/IS does not infringe upon human rights, freedoms, dignity, and privacy, and of traceability to contribute to the building of public trust in A/IS.
2. A way to translate existing and forthcoming legal obligations into informed policy and technical considerations is needed. Such a method should allow for differing cultural norms as well as legal and regulatory frameworks.
3. For the foreseeable future, A/IS should not be granted rights and privileges equal to human rights: A/IS should always be subordinate to human judgment and control.

Published by The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems in Ethically Aligned Design (v2): General Principles, (v1) Dec 13, 2016. (v2) Dec 12, 2017

6. Democracy

[QUESTIONS] How should AI research and its applications, at the institutional level, be controlled? In what areas would this be most pertinent? Who should decide, and according to which modalities, the norms and moral values determining this control? Who should establish ethical guidelines for self-driving cars? Should ethical labeling that respects certain standards be developed for AI, websites and businesses? [PRINCIPLES] The development of AI should promote informed participation in public life, cooperation and democratic debate.

Published by University of Montreal, Forum on the Socially Responsible Development of AI in The Montreal Declaration for a Responsible Development of Artificial Intelligence, Nov 3, 2017

1. We are driven by our values

We recognize that, as with any technology, there is scope for AI to be used in ways that are not aligned with these guiding principles and the operational guidelines we are developing. In developing AI software, we will remain true to our Human Rights Commitment Statement, the UN Guiding Principles on Business and Human Rights, laws, and widely accepted international norms. Wherever necessary, our AI Ethics Steering Committee will advise our teams on how specific use cases are affected by these guiding principles. Where there is a conflict with our principles, we will endeavor to prevent the inappropriate use of our technology.

Published by SAP in SAP's Guiding Principles for Artificial Intelligence, Sep 18, 2018

9. Ban the Attribution of Responsibility to Robots

Robots should be designed and operated, as far as is practicable, to comply with existing laws and fundamental rights and freedoms, including privacy. This is linked to the question of legal responsibility. In line with Bryson et al. (2011), UNI Global Union asserts that legal responsibility for a robot should be attributed to a person. Robots are not responsible parties under the law.

Published by UNI Global Union in Top 10 Principles For Ethical Artificial Intelligence, Dec 11, 2017