Privacy and security
Big data collection and AI must comply with laws that regulate privacy and the collection, use, and storage of data. AI data and algorithms must be protected against theft, and employers or AI providers need to inform employees, customers, and partners of any breach of information, in particular PII, as soon as possible.
2.4 Cybersecurity and Privacy
Published by: Information Technology Industry Council (ITI) in AI Policy Principles
Just like technologies that have come before it, AI depends on strong cybersecurity and privacy provisions. We encourage governments to use strong, globally accepted and deployed cryptography and other security standards that enable trust and interoperability. We also promote voluntary information sharing on cyberattacks or hacks to better enable consumer protection. The tech sector incorporates strong security features into our products and services to advance trust, including using published algorithms as our default cryptography approach, as they have the greatest trust among global stakeholders, and limiting access to encryption keys. Data and cybersecurity are integral to the success of AI. We believe that for AI to flourish, users must trust that their personal and sensitive data is protected and handled appropriately. AI systems should use tools, including anonymized data, de-identification, or aggregation, to protect personally identifiable information whenever possible.
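As a minimal illustration of the de-identification and aggregation techniques the principle names, the sketch below pseudonymizes a direct identifier with a salted hash and generalizes an exact age into a coarse band. It is not part of the ITI principles; the record fields, salt, and function names are assumptions chosen for the example.

```python
import hashlib

# Illustrative sketch only: two de-identification techniques applied to a toy record.
# The salt, field names, and 10-year age bands are assumptions for this example.

SALT = b"example-secret-salt"  # in practice, a securely stored secret value

def pseudonymize(value: str) -> str:
    """Replace a direct identifier with a truncated salted hash (pseudonymization)."""
    return hashlib.sha256(SALT + value.encode("utf-8")).hexdigest()[:16]

def generalize_age(age: int, band: int = 10) -> str:
    """Aggregate an exact age into a coarse band to reduce re-identification risk."""
    lo = (age // band) * band
    return f"{lo}-{lo + band - 1}"

record = {"email": "alice@example.com", "age": 34, "diagnosis": "flu"}
deidentified = {
    "user_id": pseudonymize(record["email"]),   # raw email is not retained
    "age_band": generalize_age(record["age"]),  # e.g. "30-39" instead of 34
    "diagnosis": record["diagnosis"],
}
```

Pseudonymization lets records be linked across datasets without exposing the identifier itself, while banding trades precision for a smaller re-identification surface; real deployments would combine such steps with access controls and formal privacy review.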
• Rethink Privacy
Privacy approaches like the Fair Information Practice Principles and Privacy by Design have withstood the test of time and the evolution of new technology. But with innovation, we have had to "rethink" how we apply these models to new technology.
• Adopt Robust Privacy Laws: Based on the OECD Fair Information Practice Principles.
• Implement Privacy by Design: Follow Intel’s Rethinking Privacy approach to implement Privacy by Design into AI product and project development.
• Keep data secure: Policies should help enable cutting-edge AI technology with robust cyber and physical security to mitigate risks of attacks and promote trust from society.
• It takes data for AI to protect data: Governments should adopt policies to reduce barriers to the sharing of data for cybersecurity purposes.
6. Principle of privacy
Published by: Ministry of Internal Affairs and Communications (MIC), the Government of Japan in AI R&D Principles
Developers should take precautions to ensure that AI systems do not infringe the privacy of users or third parties.
The privacy referred to in this principle includes spatial privacy (peace of personal life), information privacy (personal data), and secrecy of communications. Developers should consider international guidelines on privacy, such as the "OECD Guidelines on the Protection of Privacy and Transborder Flows of Personal Data," as well as the following, with consideration of the possibility that AI systems might change their outputs or programs as a result of learning and other methods:
● To make efforts to evaluate the risks of privacy infringement and to conduct a privacy impact assessment in advance.
● To make efforts to take necessary measures throughout the process of development of the AI systems ("privacy by design"), to the extent possible in light of the characteristics of the technologies to be adopted, so as to avoid infringement of privacy at the time of utilization.
4. Respect for Privacy
AI development should respect and protect the privacy of individuals and fully protect an individual's rights to know and to choose. Boundaries and rules should be established for the collection, storage, processing, and use of personal information. Mechanisms for authorizing and revoking the use of personal information should be established and kept up to date. Stealing, tampering with, leaking, and other forms of illegal collection and use of personal information should be strictly prohibited.