· 1) Privacy:

Stipulate relevant laws and regulations, verify user ownership of data, strengthen awareness of privacy protection, develop privacy-protection algorithms and technologies, and collect and use data in an open, transparent, and lawful manner.
Principle: Chinese Young Scientists’ Declaration on the Governance and Innovation of Artificial Intelligence, Aug 29, 2019

Published by Youth Work Committee of Shanghai Computer Society

Related Principles

5. Privacy and Data Governance

AI systems should have proper mechanisms in place to ensure data privacy and protection and maintain and protect the quality and integrity of data throughout their entire lifecycle. Data protocols need to be set up to govern who can access data and when data can be accessed. Data privacy and protection should be respected and upheld during the design, development, and deployment of AI systems. The way data is collected, stored, generated, and deleted throughout the AI system lifecycle must comply with applicable data protection laws, data governance legislation, and ethical principles. Some data protection and privacy laws in ASEAN include Malaysia’s Personal Data Protection Act 2010, the Philippines’ Data Privacy Act of 2012, Singapore’s Personal Data Protection Act 2012, Thailand’s Personal Data Protection Act 2019, Indonesia’s Personal Data Protection Law 2022, and Vietnam’s Personal Data Protection Decree 2023.

Organisations should be transparent about their data collection practices, including the types of data collected, how it is used, and who has access to it. Organisations should ensure that necessary consent is obtained from individuals before collecting, using, or disclosing personal data for AI development and deployment, or otherwise have an appropriate legal basis to collect, use, or disclose personal data without consent. Unnecessary or irrelevant data should not be gathered, to prevent potential misuse.

Data protection and governance frameworks should be set up and adhered to by developers and deployers of AI systems. These frameworks should also be periodically reviewed and updated in accordance with applicable privacy and data protection laws. For example, data protection impact assessments (DPIA) help organisations determine how data processing systems, procedures, or technologies affect individuals’ privacy and eliminate risks that might violate compliance⁷.
However, it is important to note that DPIAs are much narrower in scope than an overall impact assessment for use of AI systems and are not sufficient as an AI risk assessment. Other components will need to be considered for a full assessment of risks associated with AI systems. Developers and deployers of AI systems should also incorporate a privacy-by-design principle when developing and deploying AI systems. Privacy by design is an approach that embeds privacy in every stage of the system development lifecycle. Data privacy is essential in gaining the public’s trust in technological advances. Another consideration is investing in privacy-enhancing technologies to preserve privacy while allowing personal data to be used for innovation. Privacy-enhancing technologies include, but are not limited to, differential privacy, where small changes are made to raw data to securely de-identify inputs without having a significant impact on the results of the AI system, and zero-knowledge proofs (ZKP), which hide the underlying data and answer simple questions about whether something is true or false without revealing additional information.
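As an illustration of the differential-privacy idea described above, the sketch below implements the classic Laplace mechanism in Python: noise calibrated to a query's sensitivity and a privacy budget ε is added to an aggregate statistic before release. This is a minimal illustration, not part of the ASEAN guide; the function names and parameters are hypothetical.

```python
import math
import random

def laplace_noise(scale: float) -> float:
    """Draw one sample from the Laplace(0, scale) distribution
    via inverse transform sampling."""
    u = random.random() - 0.5  # uniform in [-0.5, 0.5)
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def dp_count(values, threshold: float, epsilon: float = 1.0,
             sensitivity: float = 1.0) -> float:
    """Differentially private count of values above a threshold.

    Adding or removing one individual changes the count by at most
    `sensitivity` (= 1 here), so Laplace noise with scale
    sensitivity/epsilon gives epsilon-differential privacy:
    the released figure reveals little about any single person.
    """
    true_count = sum(1 for v in values if v > threshold)
    return true_count + laplace_noise(sensitivity / epsilon)

# Example: release a noisy count of salaries above 50,000.
salaries = [42_000, 55_000, 61_000, 48_000, 73_000]
noisy = dp_count(salaries, threshold=50_000, epsilon=1.0)
```

Smaller ε means stronger privacy but noisier results, which is the trade-off an organisation tunes when balancing data utility against individual protection.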

Published by ASEAN in ASEAN Guide on AI Governance and Ethics, 2024

· 5. Incorporate privacy design principles.

We will incorporate our privacy principles in the development and use of our AI technologies. We will give opportunity for notice and consent, encourage architectures with privacy safeguards, and provide appropriate transparency and control over the use of data.

Published by Google in Artificial Intelligence at Google: Our Principles, Jun 7, 2018

7. Privacy

Organisations that develop, deploy or use AI systems and any national laws that regulate such use shall endeavour to ensure that AI systems are compliant with privacy norms and regulations, taking into account the unique characteristics of AI systems, and the evolution of standards on privacy.

Published by International Technology Law Association (ITechLaw) in The Eight Principles of Responsible AI, May 23, 2019

• Rethink Privacy

Privacy approaches like the Fair Information Practice Principles and Privacy by Design have withstood the test of time and the evolution of new technology. But with innovation, we have had to “rethink” how we apply these models to new technology.

[Recommendations]
• Adopt robust privacy laws: based on the OECD Fair Information Practice Principles.
• Implement Privacy by Design: follow Intel’s Rethinking Privacy approach to implement Privacy by Design into AI product and project development.
• Keep data secure: policies should help enable cutting-edge AI technology with robust cyber and physical security to mitigate risks of attacks and promote trust from society.
• It takes data for AI to protect data: governments should adopt policies to reduce barriers to the sharing of data for cybersecurity purposes.

Published by Intel in AI public policy principles, Oct 18, 2017