Transparent and open

We will always be open about who we work with and what projects we fund. All of our research grants will be unrestricted, and we will never attempt to influence or predetermine the outcome of studies we commission. When we collaborate or co-publish with external researchers, we will disclose whether they have received funding from us. Any published academic papers produced by the Ethics & Society team will be made available through open-access schemes.
Principle: DeepMind Ethics & Society Principles, Oct 3, 2017 (unconfirmed)

Published by DeepMind

Related Principles

9. We share and enlighten.

We acknowledge the transformative power of AI for our society. We will support people and society in preparing for this future world. We live our digital responsibility by sharing our knowledge, pointing out the opportunities of the new technology without neglecting its risks. We will engage with our customers, other companies, policy makers, educational institutions and all other stakeholders to ensure we understand their concerns and needs and can set up the right safeguards. We will engage in AI and ethics education, thereby preparing ourselves, our colleagues and our fellow human beings for the new tasks ahead. Many tasks that are currently executed by humans will be automated in the future. This leads to a shift in the demand for skills. Jobs will be reshaped, rather than replaced, by AI. While this seems certain, only a minority knows what exactly AI technology is capable of achieving. Prejudice and superficial knowledge lead either to demonization of progress or to blind acceptance, both calling for educational work. We as Deutsche Telekom feel responsible to enlighten people and help society deal with the digital shift, so that new, appropriate skills can be developed and new jobs can be taken up. And we start from within, by enabling our colleagues and employees. But we are aware that this task cannot be solved by one company alone. Therefore we will engage in partnerships with other companies and offer our know-how to policy makers and education providers to jointly tackle the challenges ahead.

Published by Deutsche Telekom in Deutsche Telekom’s guidelines for artificial intelligence, May 11, 2018

· 2) Research Funding

Investments in AI should be accompanied by funding for research on ensuring its beneficial use, including thorny questions in computer science, economics, law, ethics, and social studies, such as: How can we make future AI systems highly robust, so that they do what we want without malfunctioning or getting hacked? How can we grow our prosperity through automation while maintaining people’s resources and purpose? How can we update our legal systems to be more fair and efficient, to keep pace with AI, and to manage the risks associated with AI? What set of values should AI be aligned with, and what legal and ethical status should it have?

Published by Future of Life Institute (FLI), Beneficial AI 2017 in Asilomar AI Principles, Jan 3-8, 2017

5 DEMOCRATIC PARTICIPATION PRINCIPLE

AIS must meet intelligibility, justifiability, and accessibility criteria, and must be subjected to democratic scrutiny, debate, and control.
1) AIS processes that make decisions affecting a person's life, quality of life, or reputation must be intelligible to their creators.
2) The decisions made by AIS affecting a person's life, quality of life, or reputation should always be justifiable in a language that is understood by the people who use them or who are subjected to the consequences of their use. Justification consists in making transparent the most important factors and parameters shaping the decision, and should take the same form as the justification we would demand of a human making the same kind of decision.
3) The code for algorithms, whether public or private, must always be accessible to the relevant public authorities and stakeholders for verification and control purposes.
4) The discovery of AIS operating errors, unexpected or undesirable effects, security breaches, and data leaks must imperatively be reported to the relevant public authorities, stakeholders, and those affected by the situation.
5) In accordance with the transparency requirement for public decisions, the code for decision-making algorithms used by public authorities must be accessible to all, with the exception of algorithms that present a high risk of serious danger if misused.
6) For public AIS that have a significant impact on the life of citizens, citizens should have the opportunity and skills to deliberate on the social parameters of these AIS, their objectives, and the limits of their use.
7) We must at all times be able to verify that AIS are doing what they were programmed for and what they are used for.
8) Any person using a service should know if a decision concerning them or affecting them was made by an AIS.
9) Any user of a service employing chatbots should be able to easily identify whether they are interacting with an AIS or a real person.
10) Artificial intelligence research should remain open and accessible to all.

Published by University of Montreal in The Montreal Declaration for a Responsible Development of Artificial Intelligence, Dec 4, 2018

Transparency

Review mechanisms will ensure citizens can question and challenge AI-based outcomes. Not only must the people of NSW have high levels of assurance that data is being used safely and in accordance with relevant legislation, they must also have access to an efficient and transparent review mechanism if there are questions about the use of data or AI-informed outcomes. The development of AI solutions must be robust technically, legally and ethically. The community should be engaged on the objectives of AI projects, and insights into data use and methodology should be made publicly available unless there is an overriding public interest in not doing so. Projects should clearly demonstrate:
a publicly available project objective and planned outcomes
how the public can question and seek reviews of AI-based decisions
how the community can get insights into data use and methodology
how the community will be informed of changes to an AI solution, including where existing technology is adapted for another purpose.

Published by Government of New South Wales, Australia in Mandatory Ethical Principles for the use of AI, 2024

2. Transparent and explainable AI

We will be explicit about the kind of personal and/or non-personal data the AI system uses, as well as about the purpose for which the data is used. When people directly interact with an AI system, we will be transparent with users that this is the case. When AI systems take, or support, decisions, we will take the technical and organizational measures required to guarantee a level of understanding adequate to the application area. In any case, if the decisions significantly affect people's lives, we will ensure we understand the logic behind the conclusions. This will also apply when we use third-party technology.

Published by Telefónica in AI Principles of Telefónica, Oct 30, 2018