· Deploy and Monitor:

1. Periodic assessments of the deployed AI system should be conducted to verify that its results remain aligned with human rights and cultural values, that it meets its accuracy key performance indicators (KPIs), and that its impact on individuals and communities is measured, so as to ensure the continuous improvement of the technology (a monitoring sketch follows this entry).
2. Designers of AI models should establish mechanisms for assessing AI systems against fundamental human rights and cultural values to mitigate any negative or harmful outcomes resulting from the use of the AI system. If negative or harmful outcomes are found, the owner of the AI system should identify the areas that need to be addressed and apply corrective measures to recursively improve the functioning and outcomes of the AI system.
Principle: AI Ethics Principles, Sept 14, 2022

Published by SDAIA
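
As a concrete illustration of the periodic assessment described in point 1, the sketch below compares a deployed model's realized accuracy against an agreed KPI threshold and flags the system for corrective review when it falls short. The helper names, the 0.90 threshold, and the data shapes are illustrative assumptions, not part of SDAIA's principles.

```python
# A minimal sketch of a periodic post-deployment KPI assessment.
# All names and thresholds here are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class AssessmentResult:
    kpi: str
    value: float
    threshold: float

    @property
    def passed(self) -> bool:
        return self.value >= self.threshold

def assess_accuracy(predictions: list, labels: list,
                    threshold: float = 0.90) -> AssessmentResult:
    """Compare realized accuracy against the agreed KPI threshold."""
    correct = sum(p == y for p, y in zip(predictions, labels))
    accuracy = correct / len(labels) if labels else 0.0
    return AssessmentResult("accuracy", accuracy, threshold)

# Example periodic run over outcomes collected since the last assessment.
result = assess_accuracy(predictions=[1, 0, 1, 1], labels=[1, 0, 0, 1])
if not result.passed:
    # Per the principle: identify the affected areas and apply corrective measures.
    print(f"KPI '{result.kpi}' below threshold: "
          f"{result.value:.2f} < {result.threshold:.2f}")
```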

Related Principles

2. Fairness and Equity

Deployers should have safeguards in place to ensure that algorithmic decisions do not further exacerbate or amplify existing discriminatory or unjust impacts across different demographics, and the design, development, and deployment of AI systems should not result in unfair bias or discrimination. An example of such safeguards would include human interventions and checks on the algorithms and their outputs. Deployers of AI systems should conduct regular testing of such systems to check for bias and, where bias is confirmed, make the necessary adjustments to rectify imbalances and ensure equity (a sketch of one such test follows this entry).

With the rapid developments in the AI space, AI systems are increasingly used to aid decision making. For example, AI systems are currently used to screen resumes in job application processes, predict the creditworthiness of consumers, and provide agronomic advice to farmers. If not properly managed, an AI system's outputs used to make decisions with significant impact on individuals could perpetuate existing discriminatory or unjust impacts on specific demographics.

To mitigate discrimination, it is important that the design, development, and deployment of AI systems align with fairness and equity principles. In addition, the datasets used to train AI systems should be diverse and representative. Appropriate measures should be taken to mitigate potential biases during data collection and pre-processing, training, and inference. For example, the training and test datasets for an AI system used in the education sector should be adequately representative of the student population by including students of different genders and ethnicities.

Published by ASEAN in ASEAN Guide on AI Governance and Ethics, 2024
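
To make the kind of regular bias testing described above concrete, here is a minimal sketch using the common "four-fifths" disparate-impact heuristic. The group labels, the 0.8 cutoff, and the data layout are illustrative assumptions, not prescriptions from the ASEAN guide.

```python
# A minimal sketch of a regular bias test over algorithmic decisions,
# using the four-fifths disparate-impact rule of thumb.
from collections import defaultdict

def selection_rates(decisions: list, groups: list) -> dict:
    """Positive-outcome rate per demographic group (1 = favorable decision)."""
    totals, positives = defaultdict(int), defaultdict(int)
    for d, g in zip(decisions, groups):
        totals[g] += 1
        positives[g] += d
    return {g: positives[g] / totals[g] for g in totals}

def disparate_impact_check(decisions, groups, cutoff: float = 0.8) -> bool:
    """Pass only if every group's selection rate is at least `cutoff`
    times the most favored group's rate."""
    rates = selection_rates(decisions, groups)
    best = max(rates.values())
    return all(rate / best >= cutoff for rate in rates.values())

# Example: resume-screening decisions for two applicant groups.
decisions = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
groups    = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]
if not disparate_impact_check(decisions, groups):
    print("Bias detected: rectify imbalances before further deployment.")
```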

· 2. RESPONSIBILITY MUST BE FULLY ACKNOWLEDGED WHEN CREATING AND USING AI

2.1. Risk-based approach. The degree of attention paid to ethical AI issues and the nature of the relevant actions of AI Actors should be proportional to the assessment of the level of risk posed by specific AI technologies and systems for the interests of individuals and society. Risk level assessment shall take into account both known and possible risks, whereby the probability level of threats, as well as their possible scale in the short and long term, shall be considered (a scoring sketch follows this entry). Making decisions in the field of AI use that significantly affect society and the state should be accompanied by a scientifically verified, interdisciplinary forecast of socio-economic consequences and risks and an examination of possible changes in the paradigm of value and cultural development of the society. Development and use of an AI systems risk assessment methodology are encouraged in pursuance of this Code.

2.2. Responsible attitude. AI Actors should responsibly treat:
• issues related to the influence of AI systems on society and citizens at every stage of the AI systems' life cycle, including privacy and the ethical, safe, and responsible use of personal data;
• the nature, degree, and extent of damage that may result from the use of AI technologies and systems;
• the selection and use of hardware and software utilized in different life cycles of AI systems.
At the same time, the responsibility of AI Actors should correspond with the nature, degree, and extent of damage that may occur as a result of the use of AI technologies and systems. The role in the life cycle of the AI system, as well as the degree of possible and real influence of a particular AI Actor on causing damage and its extent, should also be taken into account.

2.3. Precautions. When the activities of AI Actors can lead to morally unacceptable consequences for individuals and society, which can be reasonably predicted by the relevant AI Actor, the latter should take measures to prohibit or limit the occurrence of such consequences. AI Actors shall use the provisions of this Code, including the mechanisms specified in Section 2, to assess the moral unacceptability of such consequences and discuss possible preventive measures.

2.4. No harm. AI Actors should not allow the use of AI technologies for the purpose of causing harm to human life and/or health, the property of citizens and legal entities, or the environment. Any use, including the design, development, testing, integration, or operation of an AI system capable of purposefully causing harm to the environment, human life and/or health, or the property of citizens and legal entities, is prohibited.

2.5. Identification of AI in communication with a human. AI Actors are encouraged to ensure that users are duly informed of their interactions with AI systems when this affects human rights and critical areas of people's lives, and to ensure that such interaction can be terminated at the request of the user.

2.6. Data security. AI Actors must comply with the national legislation in the field of personal data and secrets protected by law when using AI systems; ensure the security and protection of personal data processed by AI systems or by AI Actors in order to develop and improve the AI systems; develop and integrate innovative methods to counter unauthorized access to personal data by third parties; and use high-quality and representative datasets obtained without breaking the law from reliable sources.

2.7. Information security. AI Actors should ensure the maximum possible protection from unauthorized interference of third parties in the operation of AI systems; integrate adequate information security technologies, including internal mechanisms designed to protect the AI system from unauthorized interventions, and inform users and developers about such interventions; and promote the informing of users about the rules of information security during the use of AI systems.

2.8. Voluntary certification and Code compliance. AI Actors may implement voluntary certification systems to assess the compliance of developed AI technologies with the standards established by the national legislation and this Code. AI Actors may create voluntary certification and labeling systems for AI systems to indicate that these systems have passed voluntary certification procedures and confirm quality standards.

2.9. Control of the recursive self-improvement of AI systems. AI Actors are encouraged to cooperate in identifying and verifying information about ways and forms of design of so-called universal ("general") AI systems and in the prevention of possible threats they carry. The issues concerning the use of "general" AI technologies should be under the control of the state.

Published by AI Alliance Russia in AI Ethics Code (revised version), Oct 21, 2022 (unconfirmed)
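
The risk-based approach in 2.1 weighs the probability of a threat against its possible short- and long-term scale. The sketch below shows one way such a risk assessment methodology might be structured; the 1-5 scales, the multiplicative score, and the tier boundaries are illustrative assumptions, not a methodology defined by the Code.

```python
# A minimal sketch of a probability-times-scale risk scoring, per 2.1.
# Scales, weights, and tiers are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Threat:
    name: str
    probability: int       # 1 (rare) .. 5 (almost certain)
    short_term_scale: int  # 1 (negligible) .. 5 (severe)
    long_term_scale: int   # 1 .. 5

    def risk_score(self) -> int:
        # Weight the larger of the short- and long-term impact estimates.
        return self.probability * max(self.short_term_scale,
                                      self.long_term_scale)

def risk_tier(threat: Threat) -> str:
    score = threat.risk_score()
    if score >= 15:
        return "high: interdisciplinary forecast and review required"
    if score >= 8:
        return "medium: documented mitigation plan required"
    return "low: routine monitoring"

# Example: a known privacy threat vs. a possible long-term societal risk.
for t in [Threat("re-identification of users", 3, 4, 4),
          Threat("value-paradigm shift", 1, 1, 5)]:
    print(t.name, "->", risk_tier(t))
```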

· Build and Validate:

1. Privacy and security by design should be implemented while building the AI system. The security mechanisms should include the protection of the various architectural dimensions of an AI model from malicious attacks. The structure and modules of the AI system should be protected from unauthorized modification or damage to any of its components.
2. The AI system should be secure so as to ensure and maintain the integrity of the information it processes. This ensures that the system remains continuously functional and accessible to authorized users. It is crucial that the system safeguards confidential and private information, even under hostile or adversarial conditions. Furthermore, appropriate measures should be in place to ensure that AI systems with automated decision-making capabilities uphold the necessary data privacy and security standards.
3. The AI system should be tested to ensure that the combination of available data does not reveal sensitive data or break the anonymity of the observations (a sketch of such a test follows this entry).

Deploy and Monitor:

1. After the deployment of the AI system, when its outcomes are realized, there must be continuous monitoring to ensure that the AI system is privacy-preserving, safe, and secure. The privacy impact assessment and risk management assessment should be continuously revisited to ensure that societal and ethical considerations are regularly evaluated.
2. AI system owners should be accountable for the design and implementation of AI systems in such a way as to ensure that personal information is protected throughout the life cycle of the AI system. The components of the AI system should be updated based on continuous monitoring and the privacy impact assessment.

Published by SDAIA in AI Ethics Principles, Sept 14, 2022
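
Point 3 of Build and Validate calls for testing that combinations of available data do not break anonymity. A standard way to operationalize this is a k-anonymity check over quasi-identifiers, sketched below; the column names and the choice of k are illustrative assumptions, not requirements from the principles.

```python
# A minimal sketch of a k-anonymity test: flag any combination of
# quasi-identifiers shared by fewer than k records.
from collections import Counter

def violates_k_anonymity(records: list, quasi_identifiers: list,
                         k: int = 5) -> list:
    """Return quasi-identifier combinations that isolate fewer than k records."""
    combos = Counter(tuple(r[q] for q in quasi_identifiers) for r in records)
    return [combo for combo, count in combos.items() if count < k]

# Example: "zip" and "age_band" together could re-identify an observation.
records = [
    {"zip": "12345", "age_band": "30-39", "diagnosis": "X"},
    {"zip": "12345", "age_band": "30-39", "diagnosis": "Y"},
    {"zip": "67890", "age_band": "40-49", "diagnosis": "Z"},  # unique combo
]
risky = violates_k_anonymity(records, ["zip", "age_band"], k=2)
if risky:
    print("Anonymity at risk for combinations:", risky)
```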

· Deploy and Monitor:

1. Monitoring of the robustness of the AI system should be adopted and undertaken in a periodic and continuous manner, to measure and assess any risks related to the technicalities of the AI system (an inward perspective) as well as the magnitude of the risk posed by the system and its capabilities (an outward perspective); a drift-monitoring sketch follows this entry.
2. The model must also be monitored in a periodic and continuous manner to verify that its operations and functions remain compatible with the designed structure and frameworks. The AI system must also be kept safe to prevent destructive use that exploits its data and results to harm entities, individuals, or groups. Implementation and development work must continue on an ongoing basis to ensure system reliability.

Published by SDAIA in AI Ethics Principles, Sept 14, 2022
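
One common way to implement the "inward" periodic monitoring described in point 1 is to track drift between the live input distribution and the training baseline, for example with the Population Stability Index (PSI). The sketch below is a minimal version; the binning scheme and the conventional 0.2 alert threshold are illustrative assumptions.

```python
# A minimal sketch of input-drift monitoring with the Population
# Stability Index: PSI = sum((q - p) * ln(q / p)) over histogram bins.
import math

def psi(baseline: list, live: list, bins: int = 10) -> float:
    lo = min(min(baseline), min(live))
    hi = max(max(baseline), max(live))
    width = (hi - lo) / bins or 1.0

    def proportions(values):
        counts = [0] * bins
        for v in values:
            counts[min(int((v - lo) / width), bins - 1)] += 1
        # A small floor keeps empty bins from producing log(0).
        return [max(c / len(values), 1e-6) for c in counts]

    p, q = proportions(baseline), proportions(live)
    return sum((qi - pi) * math.log(qi / pi) for pi, qi in zip(p, q))

# Example: alert when drift exceeds the conventional 0.2 threshold.
baseline = [0.1 * i for i in range(100)]
live = [0.1 * i + 3.0 for i in range(100)]  # shifted distribution
if psi(baseline, live) > 0.2:
    print("Input drift detected: reassess robustness and risk.")
```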

· Deploy and Monitor:

1. Upon deployment of the AI system, performance metrics relating to the AI system's output, accuracy, and alignment with priorities and objectives, as well as its measured impact on individuals and communities, should be documented and made available and accessible to stakeholders of the AI technology.
2. Information on any system failures, data breaches, system breakdowns, etc. should be logged, and stakeholders should be informed about these instances, keeping the performance and execution of the AI system transparent (a logging sketch follows this entry). Periodic UI and UX testing should be conducted to avoid the risk of confusion, confirmation of biases, or cognitive fatigue among users of the AI system.

Published by SDAIA in AI Ethics Principles, Sept 14, 2022
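
As an illustration of the logging described in point 2, the sketch below records failures, breaches, and breakdowns as structured, timestamped entries that can later be surfaced to stakeholders. The event taxonomy and field names are illustrative assumptions, not part of SDAIA's principles.

```python
# A minimal sketch of structured incident logging for transparency.
import json
import logging
from datetime import datetime, timezone

logger = logging.getLogger("ai_system.incidents")
logging.basicConfig(level=logging.INFO)

def log_incident(event_type: str, description: str, affected: str) -> dict:
    """event_type: e.g. 'system_failure', 'data_breach', 'breakdown'."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "event_type": event_type,
        "description": description,
        "affected_stakeholders": affected,
    }
    logger.info(json.dumps(entry))  # a durable sink (storage/SIEM) would go here
    return entry

log_incident("system_failure",
             "Scoring service returned stale predictions for 20 minutes",
             "loan applicants processed during the outage")
```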