Maximising the Benefits of AI While Managing the Disruption of its Implementation

Vodafone is a responsible employer and is determined to become a leading, human-centric, digital business.
Principle: Vodafone's AI Framework, Jun 11, 2019

Published by Vodafone Group

Related Principles

(Preamble)

The Internet Society has developed the following principles and recommendations in reference to what we believe are the core “abilities” that underpin the value the Internet provides. While the deployment of AI in Internet-based services is not new, the current trend points to AI as an increasingly important factor in the Internet’s future development and use. As such, these guiding principles and recommendations are a first attempt to guide the debate going forward. Furthermore, while this paper is focused on the specific challenges surrounding AI, the strong interdependence between its development and the expansion of the Internet of Things (IoT) demands a closer look at the interoperability and security of IoT devices.

Published by Internet Society, "Artificial Intelligence and Machine Learning: Policy Paper" in Guiding Principles and Recommendations, Apr 18, 2017

Accountability

Decision making remains the responsibility of organisations and individuals. AI is a powerful tool for analysing and looking for patterns in large quantities of data, undertaking high-volume routine process work, or making recommendations based on complex information. However, AI-based functions and decisions must always be subject to human review and intervention. Projects should clearly demonstrate:
- that the agency remains responsible for all AI-informed decisions and will monitor them accordingly
- that human intervention in decision making and accountability in service delivery are key factors
- that AI projects are overseen by individuals with the relevant expertise in the technology and its benefits and risks
- that a review and assurance process has been put in place for both the development of the AI solution and its outcomes.

Published by Government of New South Wales, Australia in Mandatory Ethical Principles for the use of AI, 2024

Plan and Design:

1. This step is crucial to design or procure an AI System in an accountable and responsible manner. The ethical responsibility and liability for the outcomes of the AI system should be attributable to stakeholders who are responsible for certain actions in the AI System Lifecycle. To achieve this principle, it is essential to set a robust governance structure that defines the authorization and responsibility areas of the internal and external stakeholders without leaving any areas of uncertainty. The design approach of the AI system should respect human rights and fundamental freedoms, as well as the national laws and cultural values of the Kingdom.
2. Organizations can put in place additional instruments such as impact assessments, risk mitigation frameworks, audit and due diligence mechanisms, redress, and disaster recovery plans.
3. It is essential to build and design a human-controlled AI system where decisions on the processes and functionality of the technology are monitored and executed, and are susceptible to intervention from authorized users. Human governance and oversight establish the necessary control and levels of autonomy through set mechanisms.

Published by SDAIA in AI Ethics Principles, Sep 14, 2022

3. Human-centric AI

AI should be at the service of society and generate tangible benefits for people. AI systems should always stay under human control and be driven by value-based considerations. Telefónica is conscious of the fact that the implementation of AI in our products and services should in no way lead to a negative impact on human rights or the achievement of the UN’s Sustainable Development Goals. We are concerned about the potential use of AI for the creation or spreading of fake news, technology addiction, and the potential reinforcement of societal bias in algorithms in general. We commit to working towards avoiding these tendencies to the extent it is within our realm of control.

Published by Telefónica in AI Principles of Telefónica, Oct 30, 2018

6. Promote artificial intelligence that is responsive and sustainable

Responsiveness requires that designers, developers and users continuously, systematically and transparently examine an AI technology to determine whether it is responding adequately, appropriately and according to communicated expectations and requirements in the context in which it is used. Thus, identification of a health need requires that institutions and governments respond to that need and its context with appropriate technologies, with the aim of achieving the public interest in health protection and promotion. When an AI technology is ineffective or engenders dissatisfaction, the duty to be responsive requires an institutional process to resolve the problem, which may include terminating use of the technology. Responsiveness also requires that AI technologies be consistent with wider efforts to promote health systems and environmental and workplace sustainability. AI technologies should be introduced only if they can be fully integrated and sustained in the health care system. Too often, especially in under-resourced health systems, new technologies are not used or are not repaired or updated, thereby wasting scarce resources that could have been invested in proven interventions. Furthermore, AI systems should be designed to minimize their ecological footprints and increase energy efficiency, so that use of AI is consistent with society’s efforts to reduce the impact of human beings on the earth’s environment, ecosystems and climate. Sustainability also requires governments and companies to address anticipated disruptions to the workplace, including training of health care workers to adapt to the use of AI and potential job losses due to the use of automated systems for routine health care functions and administrative tasks.

Published by World Health Organization (WHO) in Key ethical principles for use of artificial intelligence for health, Jun 28, 2021