3. Accountability

The company will strive to apply the principles of social and ethical responsibility to AI systems. AI systems will be adequately protected and will have security measures in place to prevent data breaches and cyber attacks. The company will strive to benefit society and promote corporate citizenship through AI systems.
Principle: Principles for AI Ethics, Apr 24, 2019 (unconfirmed)

Published by Samsung

Related Principles

Chapter 1. General Principles

  1. This set of norms aims to integrate ethics into the entire life cycle of AI, to promote fairness, justice, harmony, safety and security, and to avoid issues such as prejudice, discrimination, privacy violations and information leakage.

  2. This set of norms applies to natural persons, legal persons, and other related organizations engaged in activities such as the management, research and development, supply, and use of AI.
  (1) Management activities mainly refer to strategic planning; the formulation and implementation of policies, laws, regulations, and technical standards; resource allocation; and supervision and inspection.
  (2) Research and development activities mainly refer to scientific research, technology development, and product development related to AI.
  (3) Supply activities mainly refer to the production, operation, and sales of AI products and services.
  (4) Use activities mainly refer to the procurement, consumption, and operation of AI products and services.

  3. All AI activities shall abide by the following fundamental ethical norms.
  (1) Enhancing the well-being of humankind. Adhere to a people-oriented vision, abide by the common values of humankind, respect human rights and the fundamental interests of humankind, and abide by national and regional ethical norms. Give priority to the public interest, promote human-machine harmony, improve people's livelihood, enhance the sense of happiness, promote the sustainable development of the economy, society and ecology, and jointly build a human community with a shared future.
  (2) Promoting fairness and justice. Adhere to shared benefits and inclusivity, effectively protect the legitimate rights and interests of all relevant stakeholders, promote the fair sharing of the benefits of AI across the whole of society, and promote social fairness, justice, and equal opportunity. When providing AI products and services, fully respect and assist vulnerable and underrepresented groups, and provide corresponding alternatives as needed.
  (3) Protecting privacy and security. Fully respect individuals' rights to know about and to consent to the handling of their personal information; handle personal information and protect personal privacy and data security in accordance with the principles of lawfulness, justifiability, necessity, and integrity; do no harm to individuals' legitimate data rights; do not illegally collect or use personal information by stealing, tampering, or leaking; and do not infringe on individuals' privacy rights.
  (4) Ensuring controllability and trustworthiness. Ensure that humans retain full decision-making power, the right to choose whether to accept services provided by AI, the right to withdraw from interaction with AI at any time, and the right to suspend the operation of AI systems at any time, and ensure that AI remains under meaningful human control.
  (5) Strengthening accountability. Uphold humans as the ultimate responsible parties; clarify the responsibilities of all relevant stakeholders; comprehensively enhance the awareness of responsibility; and practice introspection and self-discipline throughout the entire life cycle of AI. Establish accountability mechanisms for AI-related activities, and do not evade liability reviews or escape responsibility.
  (6) Improving ethical literacy. Actively learn and popularize knowledge of AI ethics, understand ethical issues objectively, and neither underestimate nor exaggerate ethical risks. Actively carry out or participate in discussions of the ethical issues of AI, deepen the practice of AI ethics and governance, and improve the ability to respond to related issues.

  4. The ethical norms to be followed in specific AI-related activities include the norms of management, the norms of research and development, the norms of supply, and the norms of use.

Published by National Governance Committee for the New Generation Artificial Intelligence, China in Ethical Norms for the New Generation Artificial Intelligence, Sep 25, 2021

· 2. RESPONSIBILITY MUST BE FULLY ACKNOWLEDGED WHEN CREATING AND USING AI

2.1. Risk-based approach. The degree of attention paid to ethical AI issues and the nature of the relevant actions of AI Actors should be proportional to the assessed level of risk posed by specific AI technologies and systems to the interests of individuals and society. Risk level assessment shall take into account both known and possible risks, considering the probability of threats as well as their possible scale in the short and long term. Decisions on uses of AI that significantly affect society and the state should be accompanied by a scientifically verified, interdisciplinary forecast of socio-economic consequences and risks, and by an examination of possible changes in the paradigm of value and cultural development of society. The development and use of an AI system risk assessment methodology are encouraged in pursuance of this Code.

2.2. Responsible attitude. AI Actors should treat responsibly:
• issues related to the influence of AI systems on society and citizens at every stage of the AI systems' life cycle, inter alia on privacy and on the ethical, safe and responsible use of personal data;
• the nature, degree and extent of damage that may result from the use of AI technologies and systems;
• the selection and use of hardware and software across the life cycles of AI systems.
At the same time, the responsibility of AI Actors should correspond to the nature, degree and extent of damage that may occur as a result of the use of AI technologies and systems. The role of a particular AI Actor in the life cycle of the AI system, as well as the degree of its possible and actual influence on causing damage and on its extent, should also be taken into account.

2.3. Precautions. When the activities of AI Actors can lead to morally unacceptable consequences for individuals and society that can reasonably be predicted by the relevant AI Actor, that Actor should take measures to prohibit or limit the occurrence of such consequences. AI Actors shall use the provisions of this Code, including the mechanisms specified in Section 2, to assess the moral unacceptability of such consequences and to discuss possible preventive measures.

2.4. No harm. AI Actors should not allow the use of AI technologies for the purpose of causing harm to human life and/or health, to the property of citizens and legal entities, or to the environment. Any use, including the design, development, testing, integration or operation, of an AI system capable of purposefully causing harm to the environment, to human life and/or health, or to the property of citizens and legal entities is prohibited.

2.5. Identification of AI in communication with a human. AI Actors are encouraged to ensure that users are duly informed of their interactions with AI systems when this affects human rights and critical areas of people's lives, and to ensure that such interaction can be terminated at the request of the user.

2.6. Data security. AI Actors must comply with national legislation on personal data and legally protected secrets when using AI systems; ensure the security and protection of personal data processed by AI systems, or by AI Actors in order to develop and improve AI systems; develop and integrate innovative methods to counter unauthorized third-party access to personal data; and use high-quality, representative datasets obtained lawfully from reliable sources.

2.7. Information security. AI Actors should ensure the maximum possible protection of AI systems from unauthorized third-party interference; integrate adequate information security technologies, inter alia internal mechanisms designed to protect the AI system from unauthorized interventions and to inform users and developers about such interventions; and promote informing users about the rules of information security during the use of AI systems.

2.8. Voluntary certification and Code compliance. AI Actors may implement voluntary certification systems to assess the compliance of developed AI technologies with the standards established by national legislation and this Code. AI Actors may create voluntary certification and labeling systems for AI systems to indicate that these systems have passed voluntary certification procedures and conform to quality standards.

2.9. Control of the recursive self-improvement of AI systems. AI Actors are encouraged to cooperate in identifying and verifying information about ways and forms of designing so-called universal ("general") AI systems, and in preventing the possible threats they carry. Questions concerning the use of "general" AI technologies should be under the control of the state.

Published by AI Alliance Russia in AI Ethics Code (revised version), Oct 21, 2022 (unconfirmed)

· Build and Validate:

1 Privacy and security by design should be implemented while building the AI system. The security mechanisms should include the protection of the various architectural dimensions of an AI model from malicious attacks. The structure and modules of the AI system should be protected from unauthorized modification of, or damage to, any of its components.
2 The AI system should be secure so as to ensure and maintain the integrity of the information it processes. This ensures that the system remains continuously functional and accessible to authorized users. It is crucial that the system safeguards confidential and private information, even under hostile or adversarial conditions. Furthermore, appropriate measures should be in place to ensure that AI systems with automated decision-making capabilities uphold the necessary data privacy and security standards.
3 The AI system should be tested to ensure that the combination of available data does not reveal sensitive data or break the anonymity of observations.

Deploy and Monitor:

1 After the deployment of the AI system, when its outcomes are realized, there must be continuous monitoring to ensure that the AI system is privacy-preserving, safe and secure. The privacy impact assessment and risk management assessment should be continuously revisited to ensure that societal and ethical considerations are regularly evaluated.
2 AI System Owners should be accountable for designing and implementing AI systems in such a way that personal information is protected throughout the life cycle of the AI system. The components of the AI system should be updated based on continuous monitoring and privacy impact assessments.
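As an illustrative aside (not part of the SDAIA text): one common way to test whether a combination of available attributes breaks anonymity, as point 3 of "Build and Validate" requires, is a k-anonymity check over the dataset's quasi-identifiers. The sketch below uses hypothetical records and column names; the choice of quasi-identifiers and of an acceptable k is an assumption that each organization would set itself.

```python
from collections import Counter

def k_anonymity(rows, quasi_identifiers):
    """Return the k-anonymity of a dataset: the size of the smallest
    group of rows that share the same quasi-identifier combination.
    k = 1 means at least one record is uniquely re-identifiable."""
    groups = Counter(
        tuple(row[q] for q in quasi_identifiers) for row in rows
    )
    return min(groups.values())

# Hypothetical records: "zip" and "age" act as quasi-identifiers,
# "diagnosis" is the sensitive attribute being protected.
records = [
    {"zip": "12345", "age": 34, "diagnosis": "A"},
    {"zip": "12345", "age": 34, "diagnosis": "B"},
    {"zip": "12345", "age": 41, "diagnosis": "A"},
]

# The combination ("12345", 41) matches a single record, so k = 1
# and the anonymity of that observation is broken.
print(k_anonymity(records, ["zip", "age"]))  # 1
```

A test like this can run as part of the validation stage: if k falls below an agreed threshold, the data should be generalized or suppressed before release.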

Published by SDAIA in AI Ethics Principles, Sept 14, 2022

Principle 7 – Accountability & Responsibility

The accountability and responsibility principle holds designers, vendors, procurers, developers, owners and assessors of AI systems, and the technology itself, ethically responsible and liable for decisions and actions that may result in risk or negative effects for individuals and communities. Human oversight, governance, and proper management should be demonstrated across the entire AI System Lifecycle to ensure that proper mechanisms are in place to avoid harm and misuse of this technology. AI systems should never lead to people being deceived or unjustifiably impaired in their freedom of choice. The designers, developers, and people who implement the AI system should be identifiable and should assume responsibility and accountability for any damage the technology causes to individuals or communities, even if the adverse impact is unintended. The liable parties should take the necessary preventive actions and set a risk assessment and mitigation strategy to minimize the harm caused by the AI system. The accountability and responsibility principle is closely related to the fairness principle: the parties responsible for the AI system should ensure that the fairness of the system is maintained and sustained through control mechanisms. All parties involved in the AI System Lifecycle should consider and act on these values in their decisions and execution.

Published by SDAIA in AI Ethics Principles, Sept 14, 2022

Plan and Design:

1 This step is crucial for designing or procuring an AI system in an accountable and responsible manner. The ethical responsibility and liability for the outcomes of the AI system should be attributable to the stakeholders responsible for specific actions in the AI System Lifecycle. To achieve this principle, it is essential to set up a robust governance structure that defines the authorization and responsibility areas of internal and external stakeholders without leaving any areas of uncertainty. The design approach of the AI system should respect human rights and fundamental freedoms, as well as the national laws and cultural values of the Kingdom.
2 Organizations can put in place additional instruments such as impact assessments, risk mitigation frameworks, audit and due diligence mechanisms, redress mechanisms, and disaster recovery plans.
3 It is essential to build and design a human-controlled AI system in which decisions on the processes and functionality of the technology are monitored and executed, and are open to intervention by authorized users. Human governance and oversight establish the necessary control and levels of autonomy through set mechanisms.

Published by SDAIA in AI Ethics Principles, Sept 14, 2022