Article 7: Protect privacy.

Adhere to the principles of legality, legitimacy, and necessity when collecting and using personal information. Strengthen privacy protection for special data subjects such as minors. Strengthen technical methods, ensure data security, and be on guard against risks such as data leaks.
Principle: Joint Pledge on Artificial Intelligence Industry Self-Discipline (Draft for Comment), May 31, 2019

Published by Artificial Intelligence Industry Alliance (AIIA), China

Related Principles

Privacy protection and security

Throughout their lifecycle, AI systems should respect and uphold privacy rights and data protection, and ensure the security of data. This principle aims to ensure respect for privacy and data protection when using AI systems. This includes ensuring proper data governance and management for all data used and generated by the AI system throughout its lifecycle, for example by maintaining privacy through appropriate anonymisation of the data used by AI systems. Further, the connection between data and the inferences drawn from that data by AI systems should be sound and assessed on an ongoing basis. This principle also aims to ensure appropriate data and AI system security measures are in place, including the identification of potential security vulnerabilities and assurance of resilience to adversarial attacks. Security measures should account for unintended applications of AI systems and potential abuse risks, with appropriate mitigation measures.

Published by Department of Industry, Innovation and Science, Australian Government in AI Ethics Principles, Nov 7, 2019

Privacy and security

Big data collection and AI must comply with laws that regulate privacy and data collection, use, and storage. AI data and algorithms must be protected against theft, and employers or AI providers need to inform employees, customers, and partners of any breach of information, in particular personally identifiable information (PII), as soon as possible.

Published by Centre for International Governance Innovation (CIGI), Canada in Toward a G20 Framework for Artificial Intelligence in the Workplace, Jul 19, 2018

2.4 Cybersecurity and Privacy

Just like technologies that have come before it, AI depends on strong cybersecurity and privacy provisions. We encourage governments to use strong, globally accepted and deployed cryptography and other security standards that enable trust and interoperability. We also promote voluntary information sharing on cyberattacks or hacks to better enable consumer protection. The tech sector incorporates strong security features into our products and services to advance trust, including using published algorithms as our default cryptography approach, as they have the greatest trust among global stakeholders, and limiting access to encryption keys. Data security and cybersecurity are integral to the success of AI. We believe that for AI to flourish, users must trust that their personal and sensitive data are protected and handled appropriately. AI systems should use tools, including anonymized data, de-identification, or aggregation, to protect personally identifiable information whenever possible.

Published by Information Technology Industry Council (ITI) in AI Policy Principles, Oct 24, 2017

• Rethink Privacy

Privacy approaches like the Fair Information Practice Principles and Privacy by Design have withstood the test of time and the evolution of new technology. But with innovation, we have had to “rethink” how we apply these models to new technology. [Recommendations]
• Adopt Robust Privacy Laws: Based on the OECD Fair Information Practice Principles.
• Implement Privacy by Design: Follow Intel’s Rethinking Privacy approach to implement Privacy by Design into AI product and project development.
• Keep data secure: Policies should help enable cutting-edge AI technology with robust cyber and physical security to mitigate risks of attacks and promote trust from society.
• It takes data for AI to protect data: Governments should adopt policies to reduce barriers to the sharing of data for cybersecurity purposes.

Published by Intel in AI public policy principles, Oct 18, 2017

6. Principle of privacy

Developers should take into consideration that AI systems must not infringe the privacy of users or third parties. [Comment] The privacy referred to in this principle includes spatial privacy (peace of personal life), information privacy (personal data), and secrecy of communications. Developers should consider international guidelines on privacy, such as the “OECD Guidelines on the Protection of Privacy and Transborder Flows of Personal Data,” as well as the following, taking into account the possibility that AI systems might change their outputs or programs as a result of learning and other methods:
● To make efforts to evaluate the risks of privacy infringement and conduct privacy impact assessments in advance.
● To make efforts to take necessary measures, to the extent possible in light of the characteristics of the technologies adopted, throughout the process of development of the AI systems (“privacy by design”), to avoid infringement of privacy at the time of utilization.

Published by Ministry of Internal Affairs and Communications (MIC), the Government of Japan in AI R&D Principles, Jul 28, 2017