• Liberate Data Responsibly

AI is powered by access to data. Machine learning algorithms improve by analyzing more data over time, so data access is imperative for more capable AI model development and training. Removing barriers to the access of data will help machine learning and deep learning reach their full potential.

[Recommendations]
• Keep data moving: Governments should eliminate unwarranted data localization mandates and enable secure international data transfers through international agreements and legal tools.
• Open public data: While protecting privacy, governments should make useful datasets publicly available when appropriate and provide guidance to startups and small and medium businesses for their reuse.
• Support the creation of reliable datasets to test algorithms: Governments should explore non-regulatory methods to encourage the development of testing datasets.
• Federate access to data: Governments should partner with industry to promote AI tools that can analyze encrypted data without requiring transfer of the data. (Note: Instead of centralizing data from several institutions, federated access to data allows each institution to keep control of its data while enabling joint data analytics across all institutions.)
Published by Intel in AI public policy principles, Oct 18, 2017
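The federated-access note above describes an architecture rather than a specific product: each institution computes over its own data locally and shares only aggregate results. A minimal sketch of that idea, with illustrative names not drawn from any particular framework:

```python
# Minimal sketch of federated analytics: each institution computes a local
# summary over its own records; only the summaries (never the raw data)
# leave the institution and are combined into a joint statistic.

def local_summary(records):
    """Runs inside the institution's boundary; raw records never leave."""
    return {"count": len(records), "total": sum(records)}

def federated_mean(summaries):
    """Combines only the shared summaries into a joint mean."""
    count = sum(s["count"] for s in summaries)
    total = sum(s["total"] for s in summaries)
    return total / count

# Three institutions hold disjoint datasets locally.
hospital_a = [4.0, 5.0, 6.0]
hospital_b = [10.0]
hospital_c = [2.0, 3.0]

summaries = [local_summary(d) for d in (hospital_a, hospital_b, hospital_c)]
print(federated_mean(summaries))  # joint mean computed without pooling data
```

Real deployments add secure aggregation or encryption so that even the shared summaries reveal as little as possible, but the control-of-data property is the same: analysis crosses institutions, raw records do not.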

Related Principles

Privacy protection and security

Throughout their lifecycle, AI systems should respect and uphold privacy rights and data protection, and ensure the security of data. This principle aims to ensure respect for privacy and data protection when using AI systems. This includes ensuring proper data governance and management for all data used and generated by the AI system throughout its lifecycle, for example by maintaining privacy through appropriate anonymisation of data used by AI systems. Further, the connection between data and the inferences drawn from that data by AI systems should be sound and assessed in an ongoing manner. This principle also aims to ensure that appropriate data and AI system security measures are in place. This includes the identification of potential security vulnerabilities and assurance of resilience to adversarial attacks. Security measures should account for unintended applications of AI systems and potential abuse risks, with appropriate mitigation measures.

Published by Department of Industry, Innovation and Science, Australian Government in AI Ethics Principles, Nov 7, 2019

1.3 Robust and Representative Data

AI systems need to leverage large datasets, and the availability of robust and representative data for building and improving AI and machine learning systems is of the utmost importance. To promote the responsible use of data and ensure its integrity at every stage, industry has a responsibility to understand the parameters and characteristics of the data, to recognize potentially harmful bias, and to test for bias before and throughout the deployment of AI systems.

Published by Information Technology Industry Council (ITI) in AI Policy Principles, Oct 24, 2017
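The principle above calls for testing for bias before and throughout deployment but does not prescribe a method. One common family of checks compares a model's positive-outcome rate across groups; the sketch below is an illustrative example of that technique, with hypothetical data and no claim to be a standard test:

```python
# Minimal sketch of a pre-deployment bias check: compare the model's
# positive-prediction (selection) rate across groups. Group labels and
# predictions are illustrative.

def selection_rates(predictions, groups):
    """Positive-outcome rate per group."""
    totals, positives = {}, {}
    for pred, g in zip(predictions, groups):
        totals[g] = totals.get(g, 0) + 1
        positives[g] = positives.get(g, 0) + pred
    return {g: positives[g] / totals[g] for g in totals}

def disparate_impact(rates):
    """Ratio of lowest to highest selection rate (1.0 means parity)."""
    return min(rates.values()) / max(rates.values())

preds  = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]

rates = selection_rates(preds, groups)  # group a: 0.75, group b: 0.25
print(disparate_impact(rates))         # low ratio flags a large gap
```

A single metric like this is a screening signal, not proof of fairness; running such checks both before release and on live traffic is what "before and throughout the deployment" implies in practice.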

2.3 Promoting Innovation and the Security of the Internet

We strongly support the protection of the foundation of AI, including source code, proprietary algorithms, and other intellectual property. To this end, we believe governments should avoid requiring companies to transfer or provide access to technology, source code, algorithms, or encryption keys as conditions for doing business. We support the use of all available tools, including trade agreements, to achieve these ends.

Published by Information Technology Industry Council (ITI) in AI Policy Principles, Oct 24, 2017

2.4 Cybersecurity and Privacy

Just like the technologies that have come before it, AI depends on strong cybersecurity and privacy provisions. We encourage governments to use strong, globally accepted and deployed cryptography and other security standards that enable trust and interoperability. We also promote voluntary information sharing on cyberattacks or hacks to better enable consumer protection. The tech sector incorporates strong security features into our products and services to advance trust, including using published algorithms as our default cryptography approach, as they have the greatest trust among global stakeholders, and limiting access to encryption keys. Data security and cybersecurity are integral to the success of AI. We believe that for AI to flourish, users must trust that their personal and sensitive data is protected and handled appropriately. AI systems should use tools such as anonymized data, de-identification, or aggregation to protect personally identifiable information whenever possible.

Published by Information Technology Industry Council (ITI) in AI Policy Principles, Oct 24, 2017
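The principle above names three PII-protection techniques without defining them. A toy sketch of two of them, with hypothetical field names; real de-identification requires far more care (quasi-identifiers, re-identification risk assessment, and so on):

```python
# Illustrative sketch of de-identification and aggregation. The records
# and field names are invented; this is not a complete privacy treatment.

from collections import defaultdict

records = [
    {"name": "Ana",  "age": 34, "postcode": "2000", "diagnosis": "flu"},
    {"name": "Ben",  "age": 37, "postcode": "2001", "diagnosis": "flu"},
    {"name": "Caro", "age": 52, "postcode": "2000", "diagnosis": "cold"},
]

def de_identify(record):
    """Drop direct identifiers and generalise age into a coarse band."""
    return {
        "age_band": f"{(record['age'] // 10) * 10}s",
        "diagnosis": record["diagnosis"],
    }

def aggregate(records):
    """Release only counts per diagnosis, never individual rows."""
    counts = defaultdict(int)
    for r in records:
        counts[r["diagnosis"]] += 1
    return dict(counts)

print([de_identify(r) for r in records])
print(aggregate(records))  # {'flu': 2, 'cold': 1}
```

The design choice the principle points at is graduated disclosure: prefer aggregates over de-identified rows, and de-identified rows over raw records, releasing only as much individual-level detail as the use case requires.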

• Rethink Privacy

Privacy approaches like the Fair Information Practice Principles and Privacy by Design have withstood the test of time and the evolution of new technology. But with innovation, we have had to "rethink" how we apply these models to new technology.

[Recommendations]
• Adopt Robust Privacy Laws: Base them on the OECD Fair Information Practice Principles.
• Implement Privacy by Design: Follow Intel's Rethinking Privacy approach to build Privacy by Design into AI product and project development.
• Keep data secure: Policies should help enable cutting-edge AI technology with robust cyber and physical security to mitigate the risk of attacks and promote trust from society.
• It takes data for AI to protect data: Governments should adopt policies to reduce barriers to the sharing of data for cybersecurity purposes.

Published by Intel in AI public policy principles, Oct 18, 2017