F. Bias Mitigation:

Proactive steps will be taken to minimise any unintended bias in the development and use of AI applications and in data sets.
Principle: NATO Principles of Responsible Use of Artificial Intelligence in Defence, Oct 22, 2021

Published by The North Atlantic Treaty Organization (NATO)

Related Principles

2. Fairness and Equity

Deployers should have safeguards in place to ensure that algorithmic decisions do not exacerbate or amplify existing discriminatory or unjust impacts across different demographics, and that the design, development, and deployment of AI systems do not result in unfair bias or discrimination. Examples of such safeguards include human intervention and checks on the algorithms and their outputs. Deployers of AI systems should test these systems regularly to confirm whether bias is present and, where it is confirmed, make the necessary adjustments to rectify imbalances and ensure equity.

With the rapid developments in the AI space, AI systems are increasingly used to aid decision making. For example, AI systems are currently used to screen resumes in job application processes, predict the creditworthiness of consumers, and provide agronomic advice to farmers. If not properly managed, an AI system's outputs used to make decisions with significant impact on individuals could perpetuate existing discriminatory or unjust impacts on specific demographics.

To mitigate discrimination, it is important that the design, development, and deployment of AI systems align with fairness and equity principles. In addition, the datasets used to train AI systems should be diverse and representative, and appropriate measures should be taken to mitigate potential biases during data collection and pre-processing, training, and inference. For example, the training and test datasets for an AI system used in the education sector should adequately represent the student population by including students of different genders and ethnicities.

Published by ASEAN in ASEAN Guide on AI Governance and Ethics, 2024
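The regular bias testing that the ASEAN principle calls for is often operationalized as a disparity check on outcome rates across demographic groups. A minimal sketch of one such check (demographic parity) follows; the field names, toy data, and 0.2 alert threshold are illustrative assumptions, not part of the guide, and the threshold in particular is a policy choice rather than a universal standard.

```python
from collections import defaultdict

def approval_rates_by_group(records, group_key, outcome_key):
    """Favourable-outcome rate for each demographic group."""
    totals = defaultdict(int)
    favourable = defaultdict(int)
    for r in records:
        totals[r[group_key]] += 1
        if r[outcome_key]:
            favourable[r[group_key]] += 1
    return {g: favourable[g] / totals[g] for g in totals}

def demographic_parity_gap(records, group_key, outcome_key):
    """Largest difference in favourable-outcome rates across groups."""
    rates = approval_rates_by_group(records, group_key, outcome_key)
    return max(rates.values()) - min(rates.values())

# Hypothetical resume-screening decisions, mirroring the guide's example.
decisions = [
    {"gender": "F", "shortlisted": True},
    {"gender": "F", "shortlisted": False},
    {"gender": "M", "shortlisted": True},
    {"gender": "M", "shortlisted": True},
]

gap = demographic_parity_gap(decisions, "gender", "shortlisted")
if gap > 0.2:  # assumed alert threshold, a deployer policy choice
    print(f"Possible disparity detected: gap = {gap:.2f}")
```

A check like this would run as part of the "regular testing" the principle describes; a confirmed gap would then trigger the human review and rebalancing steps the safeguard requires.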

· 2. RESPONSIBILITY MUST BE FULLY ACKNOWLEDGED WHEN CREATING AND USING AI

2.1. Risk-based approach. The degree of attention paid to ethical AI issues and the nature of the relevant actions of AI Actors should be proportional to the assessed level of risk posed by specific AI technologies and systems to the interests of individuals and society. Risk-level assessment shall take into account both known and possible risks, considering the probability of threats as well as their possible scale in the short and long term. Decisions in the field of AI use that significantly affect society and the state should be accompanied by a scientifically verified, interdisciplinary forecast of socio-economic consequences and risks, and by an examination of possible changes in the paradigm of value and cultural development of society. The development and use of an AI-system risk-assessment methodology are encouraged in pursuance of this Code.

2.2. Responsible attitude. AI Actors should treat responsibly: • issues related to the influence of AI systems on society and citizens at every stage of the AI systems' life cycle, inter alia on privacy and on the ethical, safe and responsible use of personal data; • the nature, degree and extent of damage that may result from the use of AI technologies and systems; • the selection and use of hardware and software utilized in different life cycles of AI systems. At the same time, the responsibility of AI Actors should correspond to the nature, degree and extent of damage that may occur as a result of the use of AI technologies and systems. The role in the life cycle of the AI system, as well as the degree of possible and real influence of a particular AI Actor on causing damage and its extent, should also be taken into account.

2.3. Precautions. When the activities of AI Actors can lead to morally unacceptable consequences for individuals and society that can be reasonably predicted by the relevant AI Actor, that Actor should take measures to prohibit or limit the occurrence of such consequences. AI Actors shall use the provisions of this Code, including the mechanisms specified in Section 2, to assess the moral unacceptability of such consequences and to discuss possible preventive measures.

2.4. No harm. AI Actors should not allow the use of AI technologies for the purpose of causing harm to human life and/or health, the property of citizens and legal entities, or the environment. Any use, including the design, development, testing, integration or operation, of an AI system capable of purposefully causing harm to the environment, human life and/or health, or the property of citizens and legal entities is prohibited.

2.5. Identification of AI in communication with a human. AI Actors are encouraged to ensure that users are duly informed of their interactions with AI systems when this affects human rights and critical areas of people's lives, and to ensure that such interaction can be terminated at the request of the user.

2.6. Data security. When using AI systems, AI Actors must comply with national legislation on personal data and legally protected secrets; ensure the security and protection of personal data processed by AI systems or by AI Actors in order to develop and improve those systems; develop and integrate innovative methods to counter unauthorized access to personal data by third parties; and use high-quality, representative datasets obtained lawfully from reliable sources.

2.7. Information security. AI Actors should ensure the maximum possible protection against unauthorized interference by third parties in the operation of AI systems; integrate adequate information-security technologies, inter alia internal mechanisms designed to protect the AI system from unauthorized interventions and to inform users and developers about such interventions; and promote informing users about the rules of information security during the use of AI systems.

2.8. Voluntary certification and Code compliance. AI Actors may implement voluntary certification systems to assess the compliance of developed AI technologies with the standards established by national legislation and this Code. AI Actors may create voluntary certification and labeling systems for AI systems to indicate that these systems have passed voluntary certification procedures and conform to quality standards.

2.9. Control of the recursive self-improvement of AI systems. AI Actors are encouraged to cooperate in identifying and verifying information about the ways and forms of design of so-called universal ("general") AI systems and in preventing the possible threats they carry. The issues concerning the use of "general" AI technologies should be under the control of the state.

Published by AI Alliance Russia in AI Ethics Code (revised version), Oct 21, 2022 (unconfirmed)

· Plan and Design:

The fairness principle requires taking necessary actions to eliminate bias, discrimination or stigmatization of individuals, communities, or groups in the design, data, development, deployment and use of AI systems. Bias may occur due to data, representation or algorithms and could lead to discrimination against historically disadvantaged groups. When designing, selecting, and developing AI systems, it is essential to ensure just, fair, non-biased, non-discriminatory and objective standards that are inclusive, diverse, and representative of all or targeted segments of society. The functionality of an AI system should not be limited to a specific group based on gender, race, religion, disability, age, or sexual orientation. In addition, the potential risks, overall benefits, and purpose of utilizing sensitive personal data should be well motivated and defined or articulated by the AI System Owner. To ensure consistent AI systems that are based on fairness and inclusiveness, AI systems should be trained on data that are cleansed of bias and representative of affected minority groups. AI algorithms should be built and developed in a manner that makes their composition free from bias and correlation fallacy.

Published by SDAIA in AI Ethics Principles, Sept 14, 2022

· Prepare Input Data:

1. Following best practice for responsible data acquisition, handling, classification, and management must be a priority to ensure that results and outcomes align with the AI system's set goals and objectives. Effective data-quality assurance and procurement begin with ensuring the integrity of the data source and the accuracy of the data in representing all observations, so as to avoid systematically disadvantaging under-represented groups or advantaging over-represented ones. The quantity and quality of the datasets should be sufficient and accurate to serve the purpose of the system: the sample size of the data collected or procured has a significant impact on the accuracy and fairness of the outputs of a trained model.

2. Sensitive personal data attributes, as defined in the plan-and-design phase, should not be included in the model data, so as not to reinforce existing biases against them. Proxies of these sensitive features should likewise be analyzed and excluded from the input data. In some cases this may not be possible because of the accuracy requirements or objective of the AI system; in such cases, a justification for the use of the sensitive personal data attributes or their proxies should be provided.

3. Causality-based feature selection should be ensured. Selected features should be verified with business owners and non-technical teams.

4. Automated decision-support technologies present major risks of bias and unwanted application at the deployment phase, so it is critical to set out mechanisms at this phase to prevent harmful and discriminatory results.

Published by SDAIA in AI Ethics Principles, Sept 14, 2022
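The sensitive-attribute and proxy screening described in the SDAIA input-data guidance can be sketched as a simple correlation filter: drop the sensitive columns, then flag remaining features that correlate strongly with them. The column names, toy data, and 0.8 threshold below are illustrative assumptions, and correlation is only one of several ways to detect proxies.

```python
import math

def pearson_corr(xs, ys):
    """Pearson correlation of two equal-length numeric sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy) if sx and sy else 0.0

def screen_features(table, sensitive, threshold=0.8):
    """Exclude sensitive columns and flag likely proxies.

    A candidate feature is flagged when its absolute correlation with
    any sensitive attribute reaches the (assumed) threshold.
    """
    candidates = [c for c in table if c not in sensitive]
    flagged = [(c, s) for c in candidates for s in sensitive
               if abs(pearson_corr(table[c], table[s])) >= threshold]
    proxies = {c for c, _ in flagged}
    kept = [c for c in candidates if c not in proxies]
    return kept, flagged

# Hypothetical columns: in this toy data, postcode closely encodes ethnicity.
table = {
    "ethnicity":  [0, 0, 1, 1, 0, 1],
    "postcode":   [10, 11, 90, 91, 12, 89],
    "experience": [3, 5, 4, 2, 6, 1],
}
kept, flagged = screen_features(table, sensitive=["ethnicity"])
# kept keeps "experience"; "postcode" is flagged as a proxy for "ethnicity"
```

Where a flagged feature must nevertheless be retained for accuracy reasons, the guidance asks for a documented justification rather than silent inclusion.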

· Deploy and Monitor:

1. Periodic assessments of the deployed AI system should be conducted to ensure that its results remain aligned with human rights and cultural values, that it meets its accuracy key performance indicators (KPIs), and that its impact on individuals or communities is understood, so as to ensure the continuous improvement of the technology.

2. Designers of AI models should establish mechanisms for assessing AI systems against fundamental human rights and cultural values to mitigate any negative or harmful outcomes resulting from the use of the AI system. If any such outcomes are found, the owner of the AI system should identify the areas that need to be addressed and apply corrective measures to iteratively improve the functioning and outcomes of the AI system.

Published by SDAIA in AI Ethics Principles, Sept 14, 2022
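The periodic KPI assessment described in the deploy-and-monitor guidance can be sketched as a recurring comparison of live accuracy against a pre-deployment baseline. The baseline, the weekly figures, and the 0.05 tolerance below are illustrative assumptions; in practice the same check would typically be run per demographic group as well, to tie monitoring back to the fairness principles above.

```python
def monitor_deployment(baseline_accuracy, window_accuracies, tolerance=0.05):
    """Flag evaluation windows where live accuracy falls below the
    baseline by more than `tolerance` (an assumed policy parameter)."""
    alerts = []
    for i, acc in enumerate(window_accuracies):
        if baseline_accuracy - acc > tolerance:
            alerts.append((i, acc))
    return alerts

# Hypothetical weekly accuracy measurements for a deployed model.
alerts = monitor_deployment(0.91, [0.90, 0.89, 0.84, 0.91])
```

Any alert would then feed the corrective-measure loop the principle describes: identify the affected area, adjust, and re-assess in the next cycle.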