2. Data and insights belong to their creator

IBM clients’ data is their data, and their insights are their insights. Client data and the insights produced on IBM’s cloud or from IBM’s AI are owned by IBM’s clients. We believe that government data policies should be fair and equitable and prioritize openness.
Principle: Principles for Trust and Transparency, May 30, 2018

Published by IBM

Related Principles

2. Privacy Principles: Privacy by Design

We have implemented an enterprise-wide Privacy by Design approach that incorporates privacy and data security into our ML and associated data processing systems. Our ML models seek to minimize access to identifiable information to ensure we are using only the personal data we need to generate insights. ADP is committed to providing individuals with a reasonable opportunity to examine their own personal data and to update it if it is incorrect.

Published by ADP in ADP: Ethics in Artificial Intelligence, 2018 (unconfirmed)

5. Privacy and Data Governance

AI systems should have proper mechanisms in place to ensure data privacy and protection and maintain and protect the quality and integrity of data throughout their entire lifecycle. Data protocols need to be set up to govern who can access data and when data can be accessed. Data privacy and protection should be respected and upheld during the design, development, and deployment of AI systems. The way data is collected, stored, generated, and deleted throughout the AI system lifecycle must comply with applicable data protection laws, data governance legislation, and ethical principles. Some data protection and privacy laws in ASEAN include Malaysia’s Personal Data Protection Act 2010, the Philippines’ Data Privacy Act of 2012, Singapore’s Personal Data Protection Act 2012, Thailand’s Personal Data Protection Act 2019, Indonesia’s Personal Data Protection Law 2022, and Vietnam’s Personal Data Protection Decree 2023.

Organisations should be transparent about their data collection practices, including the types of data collected, how they are used, and who has access to them. Organisations should ensure that the necessary consent is obtained from individuals before collecting, using, or disclosing personal data for AI development and deployment, or otherwise have an appropriate legal basis to collect, use, or disclose personal data without consent. Unnecessary or irrelevant data should not be gathered, to prevent potential misuse.

Data protection and governance frameworks should be set up and adhered to by developers and deployers of AI systems. These frameworks should also be periodically reviewed and updated in accordance with applicable privacy and data protection laws. For example, data protection impact assessments (DPIAs) help organisations determine how data processing systems, procedures, or technologies affect individuals’ privacy and eliminate risks that might violate compliance. However, it is important to note that DPIAs are much narrower in scope than an overall impact assessment for the use of AI systems and are not sufficient as an AI risk assessment; other components will need to be considered for a full assessment of the risks associated with AI systems. Developers and deployers of AI systems should also incorporate a privacy-by-design principle when developing and deploying AI systems. Privacy by design is an approach that embeds privacy in every stage of the system development lifecycle. Data privacy is essential in gaining the public’s trust in technological advances.

Another consideration is investing in privacy-enhancing technologies to preserve privacy while allowing personal data to be used for innovation. Privacy-enhancing technologies include, but are not limited to, differential privacy, where small changes are made to raw data to securely de-identify inputs without having a significant impact on the results of the AI system, and zero-knowledge proofs (ZKPs), which hide the underlying data and answer simple questions about whether something is true or false without revealing additional information.

Published by ASEAN in ASEAN Guide on AI Governance and Ethics, 2024
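As a rough illustration of the differential privacy technique mentioned in the ASEAN guide above, the following minimal sketch (not drawn from the guide itself) adds calibrated Laplace noise to a simple count query. The dataset, the query, and the epsilon value are hypothetical and chosen only for demonstration.

```python
import numpy as np

def dp_count(values, predicate, epsilon=1.0):
    """Return a differentially private count of records matching `predicate`.

    Adding or removing one record changes the true count by at most 1
    (sensitivity = 1), so Laplace noise with scale 1/epsilon gives
    epsilon-differential privacy for this query.
    """
    true_count = sum(1 for v in values if predicate(v))
    noise = np.random.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

# Hypothetical example: noisy count of users over 40 in a small dataset.
ages = [23, 45, 31, 52, 38, 61, 29, 44]
print(dp_count(ages, lambda age: age > 40, epsilon=0.5))
```

A smaller epsilon means more noise and stronger privacy at the cost of accuracy; in practice the released result, not the raw records, is what leaves the organisation.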

(3) Privacy

In a society premised on AI, it is possible to estimate each person’s political position, economic situation, hobbies, preferences, etc. with high accuracy from data on the data subject’s personal behavior. This means that, when utilizing AI, personal data must be handled more carefully than when simply utilizing personal information. To ensure that people do not suffer disadvantages from the unexpected sharing or utilization of personal data, through the internet for instance, each stakeholder must handle personal data based on the following principles. Companies or the government should not infringe on individuals’ freedom, dignity, and equality in the utilization of personal data with AI technologies. AI that uses personal data should have a mechanism that ensures accuracy and legitimacy and enables the person himself or herself to be substantially involved in the management of his or her privacy data. As a result, when using AI, people can provide personal data without concern and effectively benefit from the data they provide. Personal data must be properly protected according to its importance and sensitivity. Personal data ranges from data whose unjust use would be likely to greatly affect the rights and benefits of individuals (typically thought and creed, medical history, criminal records, etc.) to data that is semi-public in social life. Taking this into consideration, we have to pay sufficient attention to the balance between the use and protection of personal data, based on the common understanding of society and the cultural background.

Published by Cabinet Office, Government of Japan in Social Principles of Human-centric AI, Dec 27, 2018

2. Transparency

For cognitive systems to fulfill their world-changing potential, it is vital that people have confidence in their recommendations, judgments and uses. Therefore, IBM will make clear: when and for what purposes AI is being applied in the cognitive solutions we develop and deploy; the major sources of data and expertise that inform the insights of cognitive solutions, as well as the methods used to train those systems and solutions; and the principle that clients own their own business models and intellectual property, and that they can use AI and cognitive systems to enhance the advantages they have built, often through years of experience. We will work with our clients to protect their data and insights, and will encourage our clients, partners and industry colleagues to adopt similar practices.

Published by IBM in Principles for the Cognitive Era, Jan 17, 2017

8. Principle of fairness

AI service providers, business users, and data providers should take into consideration that individuals will not be unfairly discriminated against by the judgments of AI systems or AI services.

[Main points to discuss]

A) Attention to the representativeness of data used for learning or other methods of AI
AI service providers, business users, and data providers may be expected to pay attention to the representativeness of the data used for learning or other methods of AI, and to the social bias inherent in the data, so that individuals are not unfairly discriminated against due to their race, religion, gender, etc. as a result of the judgment of AI. In light of the characteristics of the technologies to be used and their usage, in what cases and to what extent is attention expected to be paid to the representativeness of the data used for learning or other methods and the social bias inherent in the data? Note: The representativeness of data refers to the property that the data sampled and used do not distort the tendencies of the population of data.

B) Attention to unfair discrimination by algorithms
AI service providers and business users may be expected to pay attention to the possibility that individuals may be unfairly discriminated against due to their race, religion, gender, etc. by the algorithms of AI.

C) Human intervention
Regarding the judgments made by AI, AI service providers and business users may be expected to decide whether to use the judgments of AI, how to use them, and other matters, with consideration of social contexts and the reasonable expectations of people in the utilization of AI, so that individuals are not unfairly discriminated against due to their race, religion, gender, etc. In light of the characteristics of the technologies to be used and their usage, in what cases and to what extent is human intervention expected?

Published by Ministry of Internal Affairs and Communications (MIC), the Government of Japan in Draft AI Utilization Principles, Jul 17, 2018