2. Privacy Principles: Privacy by Design

We have implemented an enterprise-wide Privacy by Design approach that incorporates privacy and data security into our ML and associated data processing systems. Our ML models seek to minimize access to identifiable information to ensure we are using only the personal data we need to generate insights. ADP is committed to providing individuals with a reasonable opportunity to examine their own personal data and to update it if it is incorrect.
Principle: ADP: Ethics in Artificial Intelligence, 2018 (unconfirmed)

Published by ADP

Related Principles

5. Privacy and Data Governance

AI systems should have proper mechanisms in place to ensure data privacy and protection and maintain and protect the quality and integrity of data throughout their entire lifecycle. Data protocols need to be set up to govern who can access data and when data can be accessed. Data privacy and protection should be respected and upheld during the design, development, and deployment of AI systems. The way data is collected, stored, generated, and deleted throughout the AI system lifecycle must comply with applicable data protection laws, data governance legislation, and ethical principles. Some data protection and privacy laws in ASEAN include Malaysia’s Personal Data Protection Act 2010, the Philippines’ Data Privacy Act of 2012, Singapore’s Personal Data Protection Act 2012, Thailand’s Personal Data Protection Act 2019, Indonesia’s Personal Data Protection Law 2022, and Vietnam’s Personal Data Protection Decree 2023.

Organisations should be transparent about their data collection practices, including the types of data collected, how it is used, and who has access to it. Organisations should ensure that necessary consent is obtained from individuals before collecting, using, or disclosing personal data for AI development and deployment, or otherwise have an appropriate legal basis to collect, use, or disclose personal data without consent. Unnecessary or irrelevant data should not be gathered, to prevent potential misuse.

Data protection and governance frameworks should be set up and adhered to by developers and deployers of AI systems. These frameworks should also be periodically reviewed and updated in accordance with applicable privacy and data protection laws. For example, data protection impact assessments (DPIA) help organisations determine how data processing systems, procedures, or technologies affect individuals’ privacy and eliminate risks that might violate compliance.
However, it is important to note that DPIAs are much narrower in scope than an overall impact assessment for use of AI systems and are not sufficient as an AI risk assessment. Other components will need to be considered for a full assessment of risks associated with AI systems. Developers and deployers of AI systems should also incorporate a privacy-by-design principle when developing and deploying AI systems. Privacy by design is an approach that embeds privacy in every stage of the system development lifecycle. Data privacy is essential in gaining the public’s trust in technological advances. Another consideration is investing in privacy-enhancing technologies to preserve privacy while allowing personal data to be used for innovation. Privacy-enhancing technologies include, but are not limited to, differential privacy, where small changes are made to raw data to securely de-identify inputs without having a significant impact on the results of the AI system, and zero-knowledge proofs (ZKPs), which hide the underlying data and answer simple questions about whether something is true or false without revealing additional information.
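The differential-privacy technique mentioned above, in which small calibrated changes are made to query results so that individuals cannot be re-identified, can be illustrated with the classic Laplace mechanism for counting queries. This is a minimal sketch for illustration only, not anything prescribed by the ASEAN guide; the function names `laplace_noise` and `private_count` and the choice of a counting query are assumptions of this example.

```python
import math
import random

def laplace_noise(scale: float) -> float:
    """Sample from a Laplace(0, scale) distribution via inverse-CDF sampling."""
    u = random.random() - 0.5  # uniform on [-0.5, 0.5)
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def private_count(records, predicate, epsilon: float = 1.0) -> float:
    """Answer "how many records satisfy predicate?" with epsilon-differential privacy.

    A counting query has sensitivity 1 (adding or removing one person's
    record changes the count by at most 1), so Laplace noise with scale
    1/epsilon is sufficient for the epsilon-DP guarantee.
    """
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon)

# Example: a noisy count of people over 40 in a small dataset.
ages = [23, 35, 41, 52, 29, 47, 61, 38]
noisy = private_count(ages, lambda a: a > 40, epsilon=0.5)
```

Smaller values of epsilon give stronger privacy guarantees at the cost of noisier answers; real deployments also track a cumulative privacy budget across repeated queries.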

Published by ASEAN in ASEAN Guide on AI Governance and Ethics, 2024

5. We are secure.

Data security is a prime quality of Deutsche Telekom. In order to maintain this asset, we ensure that our security measures are up to date while having a full overview of how customer-related data is used and who has access to which kind of data. We never process privacy-relevant data without legal permission. This policy applies to our AI systems just as much as it does to all of our activities. Additionally, we limit the usage to appropriate use cases and thoroughly secure our systems to obstruct external access and ensure data privacy.

Published by Deutsche Telekom in Deutsche Telekom’s guidelines for artificial intelligence, May 11, 2018

2.4 Cybersecurity and Privacy

Just like technologies that have come before it, AI depends on strong cybersecurity and privacy provisions. We encourage governments to use strong, globally accepted and deployed cryptography and other security standards that enable trust and interoperability. We also promote voluntary information sharing on cyberattacks or hacks to better enable consumer protection. The tech sector incorporates strong security features into our products and services to advance trust, including using published algorithms as our default cryptography approach as they have the greatest trust among global stakeholders, and limiting access to encryption keys. Data and cybersecurity are integral to the success of AI. We believe for AI to flourish, users must trust that their personal and sensitive data is protected and handled appropriately. AI systems should use tools, including anonymized data, de-identification, or aggregation to protect personally identifiable information whenever possible.

Published by Information Technology Industry Council (ITI) in AI Policy Principles, Oct 24, 2017

Design for human control, accountability, and intended use

Humans should have ultimate control of our technology, and we strive to prevent unintended use of our products. Our user experience enforces accountability, responsible use, and transparency of consequences. We build protections into our products to detect and avoid unintended system behaviors. We achieve this through modern software engineering and rigorous testing on our entire systems including their constituent data and AI products, in isolation and in concert. Additionally, we rely on ongoing user research to help ensure that our products function as expected and can be appropriately disabled when necessary. Accountability is enforced by providing customers with insight into the provenance of data sources, methodologies, and design processes in easily understood and transparent language. Effective governance — of data, models, and software — is foundational to the ethical and accountable deployment of AI.

Published by Rebellion Defense in AI Ethical Principles, January 2023

Encode privacy into technology

We encode privacy protections and adhere to the principle of least privilege in our products, so that users only have access to data that they absolutely need to complete their specific task. We treat misuse and violations as product failure. Compliance with the applicable legal frameworks governing privacy is a basic tenet that guides our product development.
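The principle of least privilege described above can be sketched as default-deny, field-level filtering: each role sees only the record fields it has been explicitly granted, and everything else is withheld. This is a generic illustration, not Rebellion Defense's implementation; the role names, field names, and the `fetch_for_role` helper are hypothetical.

```python
# Hypothetical mapping from roles to the record fields each role is granted.
# Any field not explicitly listed is denied by default.
ROLE_FIELDS = {
    "payroll_clerk": {"employee_id", "salary"},
    "support_agent": {"employee_id", "email"},
}

def fetch_for_role(record: dict, role: str) -> dict:
    """Return only the fields this role needs to complete its task."""
    allowed = ROLE_FIELDS.get(role, set())  # unknown roles get nothing
    return {k: v for k, v in record.items() if k in allowed}

record = {"employee_id": 7, "salary": 90000, "email": "x@example.com"}
# A support agent never sees salary data; an unrecognized role sees nothing.
visible = fetch_for_role(record, "support_agent")
```

The design choice worth noting is the default: access is denied unless a field is affirmatively granted, so adding a new field to a record does not silently expose it to every role.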

Published by Rebellion Defense in AI Ethical Principles, January 2023