Pursue deterrence, not escalation

We believe that thoughtfully designed technology can de-escalate conflict instead of escalating it. Our products seek to give our users insight into how to prevent or defuse conflict. We consume, analyze, and summarize data beyond human capacity so that humans have more time and signal to deliberate before making critical decisions.
Published by Rebellion Defense in AI Ethical Principles, January 2023

Related Principles

4. Human-centricity

AI systems should respect human-centred values and pursue benefits for human society, including human beings’ well-being, nutrition, happiness, etc. It is key to ensure that people benefit from AI design, development, and deployment while being protected from potential harms. AI systems should be used to promote human well-being and ensure benefit for all. Especially in instances where AI systems are used to make decisions about humans or to aid them, it is imperative that these systems are designed with human benefit in mind and do not take advantage of vulnerable individuals.

Human-centricity should be incorporated throughout the AI system lifecycle, from design to development and deployment. Actions must be taken to understand how users interact with the AI system, how it is perceived, and whether any negative outcomes arise from its outputs. One example of how deployers can do this is to test the AI system with a small group of internal users from varied backgrounds and demographics and incorporate their feedback into the AI system.

AI systems should not be used for malicious purposes or to sway or deceive users into making decisions that are not beneficial to them or society. In this regard, developers and deployers (if developing or designing in-house) should also ensure that dark patterns are avoided. Dark patterns refer to the use of certain design techniques to manipulate users and trick them into making decisions that they would otherwise not have made. An example of a dark pattern is the use of default options that do not consider the end user’s interests, such as defaults for data sharing and for tracking of the user’s other online activities.

As an extension of human-centricity as a principle, it is also important to ensure that the adoption of AI systems and their deployment at scale do not unduly disrupt labour and job prospects without proper assessment.
Deployers are encouraged to take up impact assessments to ensure a systematic, stakeholder-based review, and to consider how jobs can be redesigned to incorporate the use of AI. The Personal Data Protection Commission of Singapore’s (PDPC) Guide on Job Redesign in the Age of AI provides useful guidance to assist organisations in considering the impact of AI on their employees, and how work tasks can be redesigned to help employees embrace AI and move towards higher-value tasks.

Published by ASEAN in ASEAN Guide on AI Governance and Ethics, 2024

8. We foster the cooperative model.

We believe that human and machine intelligence are complementary, with each bringing its own strengths to the table. While we believe in a people-first approach to human-machine collaboration, we recognize that humans can benefit from the strengths of AI to unlock a potential that neither human nor machine can realize on its own. We recognize the widespread fear that AI-enabled machines will outsmart human intelligence. We at Deutsche Telekom think differently. We know and believe in human strengths like inspiration, intuition, sense-making, and empathy. But we also recognize the strengths of AI, like data recall, processing speed, and analysis. By combining both, AI systems will help humans make better decisions and accomplish objectives more effectively and efficiently.

Published by Deutsche Telekom in Deutsche Telekom’s guidelines for artificial intelligence, May 11, 2018

(Preamble)

Google aspires to create technologies that solve important problems and help people in their daily lives. We are optimistic about the incredible potential for AI and other advanced technologies to empower people, widely benefit current and future generations, and work for the common good. We believe that these technologies will promote innovation and further our mission to organize the world’s information and make it universally accessible and useful. We recognize that these same technologies also raise important challenges that we need to address clearly, thoughtfully, and affirmatively. These principles set out our commitment to develop technology responsibly and establish specific application areas we will not pursue.

Published by Google in Artificial Intelligence at Google: Our Principles, Jun 7, 2018

1. Broadly Distributed Benefits

We commit to use any influence we obtain over AGI’s deployment to ensure it is used for the benefit of all, and to avoid enabling uses of AI or AGI that harm humanity or unduly concentrate power. Our primary fiduciary duty is to humanity. We anticipate needing to marshal substantial resources to fulfill our mission, but will always diligently act to minimize conflicts of interest among our employees and stakeholders that could compromise broad benefit.

Published by OpenAI in OpenAI Charter, Apr 9, 2018

Practice holism and do not reduce our ethical focus to components

We provide integrated technologies to defend and support democracy. We do not fixate on algorithms and data in a silo; rather, we take a holistic view of the potential impact of AI on outcomes to avoid unintended consequences in the real world. We aim to ensure that the systems we develop, as a whole, have the capability to manage data quality while upholding governance around software and models. We routinely employ statistical analyses to search for unwarranted bias in data, models, and outcomes.

Published by Rebellion Defense in AI Ethical Principles, January 2023