5. We will engage with and have representation from stakeholders in the business community to help ensure that domain-specific concerns and opportunities are understood and addressed.

Principle: Partnership on AI: Tenets, Sep 28, 2016 (unconfirmed)

Published by Partnership on AI

Related Principles

9. We share and enlighten.

We acknowledge the transformative power of AI for our society. We will support people and society in preparing for this future world. We live our digital responsibility by sharing our knowledge and pointing out the opportunities of the new technology without neglecting its risks. We will engage with our customers, other companies, policy makers, education institutions and all other stakeholders to ensure we understand their concerns and needs and can set up the right safeguards. We will engage in AI and ethics education, thereby preparing ourselves, our colleagues and our fellow human beings for the new tasks ahead. Many tasks that are now executed by humans will be automated in the future. This leads to a shift in the demand for skills: jobs will be reshaped rather than replaced by AI. While this seems certain, only a minority knows exactly what AI technology is capable of achieving. Prejudice and superficial knowledge lead either to demonization of progress or to blind acceptance, both of which call for educational work. We as Deutsche Telekom feel responsible for enlightening people and helping society deal with the digital shift, so that new, appropriate skills can be developed and new jobs can be taken up. And we start from within – by enabling our colleagues and employees. But we are aware that this task cannot be solved by one company alone. Therefore we will engage in partnerships with other companies and offer our know-how to policy makers and education providers to jointly tackle the challenges ahead.

Published by Deutsche Telekom in Deutsche Telekom’s guidelines for artificial intelligence, May 11, 2018

4. All citizens have the right to be educated to enable them to flourish mentally, emotionally and economically alongside artificial intelligence.

We welcome the measures to increase the number of computer science teachers in secondary schools, and we urge the Government to ensure that there is support for teachers with associated skills and subjects, such as mathematics, to retrain. At earlier stages of education, children need to be adequately prepared for working with, and using, AI. For all children, the basic knowledge and understanding necessary to navigate an AI-driven world will be essential. AI will have significant implications for the ways in which society lives and works. AI may accelerate the digital disruption in the jobs market. Many jobs will be enhanced by AI, many will disappear, and many new, as yet unknown, jobs will be created. A significant Government investment in skills and training is needed if this disruption is to be navigated successfully and to the benefit of the working population and national productivity growth.

Published by House of Lords of United Kingdom, Select Committee on Artificial Intelligence in AI Code, Apr 16, 2018

4. Accountable and responsible

Organizations and individuals developing, deploying or operating AI systems should be held accountable for their ongoing proper functioning, in line with the other principles. Human accountability and decision making over AI systems within an organization need to be clearly identified, appropriately distributed and actively maintained throughout the system's life cycle. An organizational culture around shared ethical responsibilities for the system must also be promoted. Where AI is used to make or assist with decisions, a public and accessible process for redress should be designed, developed, and implemented with input from a multidisciplinary team and affected stakeholders. Algorithmic systems should also be regularly peer reviewed or audited to ensure that unwanted biases have not inadvertently crept in over time.

Why it matters: Identifying and appropriately distributing accountability within an organization helps ensure that continuous human oversight over the system is properly maintained. In addition to clear roles related to accountability, it is also important to promote an organizational culture around shared ethical responsibilities. This helps prevent gaps and avoids the situation where ethical considerations are always viewed as someone else's responsibility. While our existing legal framework includes numerous traditional processes of redress related to governmental decision making, AI systems, with their complexity, can present unique challenges to those traditional processes. Input from a multidisciplinary team and affected stakeholders will help identify those issues in advance and design appropriate mechanisms to mitigate them. Regular peer review of AI systems is also important: issues around bias may not be evident when AI systems are initially designed or developed, so it is important to consider this requirement throughout the life cycle of the system.

Published by Government of Ontario, Canada in Principles for Ethical Use of AI [Beta], Sept 14, 2023

5. Human centric

AI systems should be designed with a clearly articulated public benefit that considers those who interact with the system and those who are affected by it. These groups should be meaningfully engaged throughout the system's life cycle to inform development and enhance operations. An approach to problem solving that embraces human-centred design is strongly encouraged.

Why it matters: Clearly articulating a public benefit is an important step that enables meaningful dialogue with affected groups early on and allows for measurement of success later. Placing the focus on those who interact with the system and those who are affected by it ensures that the outcomes do not cause adverse effects in the process of creating additional efficiencies. Developing algorithmic systems that incorporate human-centred design will ensure better societal and economic outcomes from data-enhanced technologies.

Published by Government of Ontario, Canada in Principles for Ethical Use of AI [Beta], Sept 14, 2023

2. Stakeholder Engagement

In order to solve the challenges arising from the use of AI while striving for better AI utilization, Sony will seriously consider the interests and concerns of various stakeholders, including its customers and creators, and will proactively advance a dialogue with related industries, organizations, academic communities and more. For this purpose, Sony will construct appropriate channels to ensure that the content and results of these discussions are provided to the officers and employees involved in the corresponding businesses, including researchers and developers, as well as to ensure further engagement with its various stakeholders.

Published by Sony Group in Sony Group AI Ethics Guidelines, Sep 25, 2018