Right to privacy and data protection.

It is important that data for AI systems is collected, used, shared, archived and deleted in a manner consistent with international law and in accordance with the values and principles stated herein, while respecting relevant national, regional and international legal frameworks.
Principle: Recommendations for reliable artificial intelligence, June 2, 2023

Published by the Office of the Chief of Ministers, Undersecretary of Information Technologies

Related Principles

(f) Rule of law and accountability

Rule of law, access to justice and the right to redress and a fair trial provide the necessary framework for ensuring the observance of human rights standards and potential AI specific regulations. This includes protections against risks stemming from ‘autonomous’ systems that could infringe human rights, such as safety and privacy. The whole range of legal challenges arising in the field should be addressed with timely investment in the development of robust solutions that provide a fair and clear allocation of responsibilities and efficient mechanisms of binding law. In this regard, governments and international organisations ought to increase their efforts in clarifying with whom liabilities lie for damages caused by undesired behaviour of ‘autonomous’ systems. Moreover, effective harm mitigation systems should be in place.

Published by European Group on Ethics in Science and New Technologies, European Commission in Ethical principles and democratic prerequisites, Mar 9, 2018

4. Privacy and security by design

AI systems are fuelled by data, and Telefónica is committed to respecting people’s right to privacy and their personal data. The data used in AI systems can be personal, or anonymized and aggregated. When processing personal data, in accordance with Telefónica’s privacy policy, we will at all times comply with the principles of lawfulness, fairness and transparency, data minimisation, accuracy, storage limitation, integrity and confidentiality. When using anonymized and/or aggregated data, we will apply the principles set out in this document. To ensure compliance with our Privacy Policy, we use a Privacy by Design methodology. When building AI systems, as with other systems, we follow Telefónica’s Security by Design approach. In all phases of the processing cycle, and in accordance with Telefónica’s privacy policy, we apply the technical and organizational measures required to guarantee a level of security adequate to the risk to which the personal information may be exposed and, in any case, in accordance with the security measures established in the law in force in each of the countries and/or regions in which we operate.

Published by Telefónica in AI Principles of Telefónica, Oct 30, 2018

Preamble: Our intent for the ethical use of AI in Defence

The MOD is committed to developing and deploying AI-enabled systems responsibly, in ways that build trust and consensus, setting international standards for the ethical use of AI for Defence. The MOD will develop and deploy AI-enabled systems for purposes that are demonstrably beneficial: driving operational improvements, supporting the Defence Purpose, and upholding human rights and democratic values. The MOD’s existing obligations under UK law and international law, including as applicable international humanitarian law (IHL) and international human rights law, act as a foundation for Defence’s development, deployment and operation of AI-enabled systems. These ethical principles do not affect or supersede existing legal obligations. Instead, they set out an ethical framework which will guide Defence’s approach to adopting AI, in line with rigorous existing codes of conduct and regulations. These principles are applicable across the full spectrum of use cases for AI in Defence, from battlespace to back office, and across the entire lifecycle of these systems.

Published by The Ministry of Defence (MOD), United Kingdom in Ethical Principles for AI in Defence, Jun 15, 2022

· Right to Privacy and Data Protection

32. Privacy, a right essential to the protection of human dignity, human autonomy and human agency, must be respected, protected and promoted throughout the life cycle of AI systems. It is important that data for AI systems be collected, used, shared, archived and deleted in ways that are consistent with international law and in line with the values and principles set forth in this Recommendation, while respecting relevant national, regional and international legal frameworks.

33. Adequate data protection frameworks and governance mechanisms should be established in a multi-stakeholder approach at the national or international level, protected by judicial systems, and ensured throughout the life cycle of AI systems. Data protection frameworks and any related mechanisms should take reference from international data protection principles and standards concerning the collection, use and disclosure of personal data and exercise of their rights by data subjects while ensuring a legitimate aim and a valid legal basis for the processing of personal data, including informed consent.

34. Algorithmic systems require adequate privacy impact assessments, which also include societal and ethical considerations of their use and an innovative use of the privacy by design approach. AI actors need to ensure that they are accountable for the design and implementation of AI systems in such a way as to ensure that personal information is protected throughout the life cycle of the AI system.

Published by The United Nations Educational, Scientific and Cultural Organization (UNESCO) in The Recommendation on the Ethics of Artificial Intelligence, Nov 24, 2021

· Multi-stakeholder and adaptive governance and collaboration

46. International law and national sovereignty must be respected in the use of data. That means that States, complying with international law, can regulate the data generated within or passing through their territories, and take measures towards effective regulation of data, including data protection, based on respect for the right to privacy in accordance with international law and other human rights norms and standards.

47. Participation of different stakeholders throughout the AI system life cycle is necessary for inclusive approaches to AI governance, enabling the benefits to be shared by all, and to contribute to sustainable development. Stakeholders include but are not limited to governments, intergovernmental organizations, the technical community, civil society, researchers and academia, media, education, policy makers, private sector companies, human rights institutions and equality bodies, anti-discrimination monitoring bodies, and groups for youth and children. The adoption of open standards and interoperability to facilitate collaboration should be in place. Measures should be adopted to take into account shifts in technologies, the emergence of new groups of stakeholders, and to allow for meaningful participation by marginalized groups, communities and individuals and, where relevant, in the case of Indigenous Peoples, respect for the self-governance of their data.

Published by The United Nations Educational, Scientific and Cultural Organization (UNESCO) in The Recommendation on the Ethics of Artificial Intelligence, Nov 24, 2021