· Deploy and Monitor:

1. Responsibility and the associated liability in the Deploy and Monitor step should be set out clearly. The outcomes and decisions established in the Build and Validate step should be monitored continuously and should result in periodic performance reports.
2. Predefined trigger alerts should be defined for this step on the data and performance metrics. Setting these triggers is a rigorous process, and each trigger should be assigned to the appropriate stakeholder. These trigger alerts can be defined as part of the risk mitigation or disaster recovery procedure and may require human oversight.
Published by SDAIA in AI Ethics Principles, Sept 14, 2022
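
As an illustration of the trigger-based monitoring described above, the sketch below checks observed data and performance metrics against predefined thresholds and names the stakeholder who should be notified. The metric names, thresholds, and stakeholder roles are illustrative assumptions, not values taken from the principles.

```python
"""Minimal sketch of predefined trigger alerts on monitored metrics.

Metric names, thresholds, and stakeholder contacts are hypothetical;
none of them come from the SDAIA text itself.
"""
from dataclasses import dataclass
from typing import Callable

@dataclass
class TriggerAlert:
    metric: str                                  # monitored data/performance metric
    threshold: float                             # limit agreed during Build and Validate
    comparison: Callable[[float, float], bool]   # how to compare observed value vs threshold
    stakeholder: str                             # owner accountable for responding to the alert

def evaluate_triggers(observed: dict[str, float], triggers: list[TriggerAlert]) -> list[str]:
    """Return human-readable alerts for every breached trigger."""
    alerts = []
    for t in triggers:
        value = observed.get(t.metric)
        if value is not None and t.comparison(value, t.threshold):
            alerts.append(
                f"ALERT: {t.metric}={value:.3f} breached threshold {t.threshold} "
                f"-> notify {t.stakeholder} for review"
            )
    return alerts

# Example periodic performance-report check (illustrative values only).
triggers = [
    TriggerAlert("accuracy", 0.90, lambda v, th: v < th, "model owner"),
    TriggerAlert("missing_input_rate", 0.05, lambda v, th: v > th, "data steward"),
]
print(evaluate_triggers({"accuracy": 0.87, "missing_input_rate": 0.02}, triggers))
```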

Related Principles

3. Safe

Data-enhanced technologies like AI and ML systems must function in a safe and secure way throughout their life cycles, and potential risks should be continually assessed and managed. Designers, policy makers and developers should embed appropriate safeguards throughout the life cycle of the system to ensure it is working as intended. This would include mechanisms related to system testing, piloting, scaling and human intervention, as well as alternative processes in case a complete halt of system operations is required. The mechanisms must be appropriate to the context and determined before deployment, but should be iterated upon throughout the system's life cycle.

Why it matters: Automated algorithmic decisions can reflect and amplify undesirable patterns in the data they are trained on. As well, issues with the system can arise that only become apparent after the system is deployed. Therefore, despite our best efforts, unexpected outcomes and impacts need to be considered. Accordingly, systems will require ongoing monitoring and mitigation planning to ensure that if the algorithmic system is making decisions that are not intended, a human can adapt, correct or improve the system.

Published by Government of Ontario, Canada in Principles for Ethical Use of AI [Beta], Sept 14, 2023
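
The safeguards this principle calls for, human intervention points and an alternative process when a complete halt is required, can be sketched roughly as below. The `predict` stand-in, the confidence floor, and the halt flag are hypothetical; a real deployment would wire these to its own model and operational controls.

```python
"""Minimal sketch of a halt-and-escalate safeguard, assuming a hypothetical
`predict` function and an operator-controlled halt flag (not part of the
Ontario principle text)."""
from dataclasses import dataclass

HALT_SYSTEM = False          # flipped by operators if a complete halt is required
CONFIDENCE_FLOOR = 0.8       # illustrative threshold below which a human decides

@dataclass
class Decision:
    outcome: str
    source: str  # "model" or "human_review"

def predict(features: dict) -> tuple[str, float]:
    """Stand-in for the deployed model; returns (label, confidence)."""
    return ("approve", 0.65)

def decide(features: dict) -> Decision:
    if HALT_SYSTEM:
        # Alternative process: route everything to the manual workflow.
        return Decision("pending", "human_review")
    label, confidence = predict(features)
    if confidence < CONFIDENCE_FLOOR:
        # Human intervention point for low-confidence, potentially unintended decisions.
        return Decision("pending", "human_review")
    return Decision(label, "model")

print(decide({"income": 42000}))
```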

· Plan and Design:

1. At the initial stages of setting out the purpose of the AI system, the design team shall collaborate to pinpoint the objectives and how to reach them in an efficient and optimized manner. Planning the design of the AI system is an essential stage for translating the system's intended goals and outcomes. During this phase, it is important to implement a fairness-aware design that takes appropriate precautions across the AI system's algorithms, processes, and mechanisms to prevent biases from having a discriminatory effect or leading to skewed and unwanted results or outcomes.
2. Fairness-aware design should start at the beginning of the AI System Lifecycle with a collaborative effort from technical and non-technical members to identify potential harms and benefits, affected individuals and vulnerable groups, and to evaluate how they are impacted by the results and whether the impact is justifiable given the general purpose of the AI system.
3. A fairness assessment of the AI system is crucial, and the metrics should be selected at this stage of the AI System Lifecycle. The metrics should be chosen based on the algorithm type (rule-based, classification, regression, etc.), the effect of the decision (punitive, selective, etc.), and the harm and benefit on correctly and incorrectly predicted samples.
4. Sensitive personal data attributes relating to persons or groups which are systematically or historically disadvantaged should be identified and defined at this stage. The allowed threshold that makes the assessment fair or unfair should be defined. The fairness assessment metrics to be applied to sensitive features should be measured during future steps.

Published by SDAIA in AI Ethics Principles, Sept 14, 2022
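
A minimal sketch of how the design-time decisions above (sensitive attributes, metric choice by algorithm type and decision effect, and an allowed threshold) might be recorded follows. The attribute names, metric names, and the 0.05 threshold are assumptions for illustration only.

```python
"""Minimal sketch of recording fairness-assessment choices at design time.
Metric names, attributes, and thresholds are illustrative assumptions, not
values prescribed by the SDAIA principles."""
from dataclasses import dataclass

@dataclass
class FairnessPlan:
    algorithm_type: str                  # e.g. "classification", "regression", "rule-based"
    decision_effect: str                 # e.g. "punitive", "selective"
    sensitive_attributes: list[str]      # attributes of systematically or historically disadvantaged groups
    metrics: list[str]                   # fairness metrics chosen for this combination
    allowed_threshold: float             # deviation considered fair vs. unfair

def select_metrics(algorithm_type: str, decision_effect: str) -> list[str]:
    """Very rough illustration of metric selection by algorithm type and decision effect."""
    if algorithm_type == "classification" and decision_effect == "punitive":
        return ["false_positive_rate_difference", "equalized_odds_difference"]
    if algorithm_type == "classification" and decision_effect == "selective":
        return ["demographic_parity_difference", "true_positive_rate_difference"]
    return ["mean_prediction_difference"]   # fallback for regression / rule-based systems

plan = FairnessPlan(
    algorithm_type="classification",
    decision_effect="selective",
    sensitive_attributes=["gender", "nationality"],
    metrics=select_metrics("classification", "selective"),
    allowed_threshold=0.05,
)
print(plan)
```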

· Prepare Input Data:

1. Following best practice for responsible data acquisition, handling, classification, and management must be a priority to ensure that results and outcomes align with the AI system's set goals and objectives. Effective data quality soundness and procurement begin by ensuring the integrity of the data source and data accuracy in representing all observations, to avoid systematically disadvantaging under-represented or advantaging over-represented groups. The quantity and quality of the data sets should be sufficient and accurate to serve the purpose of the system. The sample size of the data collected or procured has a significant impact on the accuracy and fairness of the outputs of a trained model.
2. Sensitive personal data attributes which are defined in the Plan and Design phase should not be included in the model data, so as not to reinforce existing biases associated with them. Proxies of the sensitive features should also be analyzed and excluded from the input data. In some cases this may not be possible due to the accuracy or objective of the AI system; in such cases, a justification for the use of the sensitive personal data attributes or their proxies should be provided.
3. Causality-based feature selection should be ensured. Selected features should be verified with business owners and non-technical teams.
4. Automated decision support technologies present major risks of bias and unwanted application at the deployment phase, so it is critical to set out mechanisms to prevent harmful and discriminatory results at this phase.

Published by SDAIA in AI Ethics Principles, Sept 14, 2022
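
To illustrate the exclusion of sensitive attributes and their proxies described above, the sketch below drops the design-time sensitive columns and flags candidate proxies with a simple correlation screen, using only the standard library. The column names and the correlation cut-off are assumptions; real proxy analysis would typically be more involved.

```python
"""Minimal sketch of dropping sensitive attributes defined at design time and
flagging candidate proxy features by correlation. Column names and the
cut-off are illustrative assumptions."""
from statistics import correlation   # Pearson's r; requires Python 3.10+

SENSITIVE = ["gender_code"]
PROXY_CORRELATION_CUTOFF = 0.8        # illustrative cut-off, to be justified per system

records = [
    {"gender_code": 0, "income": 40000, "neighbourhood_score": 0.2},
    {"gender_code": 1, "income": 41000, "neighbourhood_score": 0.9},
    {"gender_code": 0, "income": 52000, "neighbourhood_score": 0.3},
    {"gender_code": 1, "income": 39000, "neighbourhood_score": 0.8},
]

def flag_proxies(rows, sensitive, cutoff):
    """Return features whose correlation with a sensitive attribute exceeds the cut-off."""
    flagged = set()
    features = [k for k in rows[0] if k not in sensitive]
    for s in sensitive:
        s_values = [float(r[s]) for r in rows]
        for f in features:
            f_values = [float(r[f]) for r in rows]
            if abs(correlation(s_values, f_values)) >= cutoff:
                flagged.add(f)
    return flagged

proxies = flag_proxies(records, SENSITIVE, PROXY_CORRELATION_CUTOFF)
cleaned = [{k: v for k, v in r.items() if k not in SENSITIVE and k not in proxies}
           for r in records]
print("flagged proxies:", proxies, "remaining features:", list(cleaned[0]))
```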

· Build and Validate:

1. At the Build and Validate stage of the AI System Lifecycle, it is essential to take implementation fairness into consideration as a common theme when building, testing, and implementing the AI system. Model building and feature selection will require engineers and designers to be aware that the choices made about grouping or separating and including or excluding features, as well as more general judgments about the reliability and security of the total set of features, may have significant consequences for vulnerable or protected groups.
2. During the selection of the champion model, the fairness metric assessment should be considered. The champion model's fairness metrics should be within the defined threshold for the sensitive features. The optimization approach for the fairness and performance metrics should be clearly set throughout this phase. If the champion model does not pass the assessment, the fairness assessment should be justified.

· Deploy and Monitor:

1. Well-defined mechanisms and protocols should be set in place when deploying the AI system to measure the fairness and performance of the outcomes and how they impact individuals and communities. When analyzing the outcomes of the predictive model, it should be assessed whether represented groups in the data sample receive benefits in equal or similar portions and whether the AI system disproportionately harms specific members based on demographic differences, to ensure outcome fairness.
2. The predefined fairness metrics should be monitored in production. If there is any deviation from the allowed threshold, it should be investigated whether the model needs to be renewed.
3. The overall harm and benefit of the system should be quantified and made explicit for the sensitive groups.

Published by SDAIA in AI Ethics Principles, Sept 14, 2022
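
A rough sketch of the champion-model fairness gate and the corresponding production check might look like the following, using demographic parity difference as one possible metric. The groups, predictions, and the 0.05 threshold are illustrative assumptions rather than requirements from the text.

```python
"""Minimal sketch of a fairness gate for the champion model and for production
monitoring, using demographic parity difference as an illustrative metric."""

ALLOWED_THRESHOLD = 0.05   # deviation defined during Plan and Design (assumed value)

def positive_rate(predictions: list[int]) -> float:
    return sum(predictions) / len(predictions) if predictions else 0.0

def demographic_parity_difference(predictions: list[int], groups: list[str]) -> float:
    """Largest gap in positive-outcome rates between any two groups."""
    rates = {}
    for g in set(groups):
        rates[g] = positive_rate([p for p, gg in zip(predictions, groups) if gg == g])
    return max(rates.values()) - min(rates.values())

def fairness_gate(predictions, groups, stage: str) -> bool:
    gap = demographic_parity_difference(predictions, groups)
    passed = gap <= ALLOWED_THRESHOLD
    verdict = "pass" if passed else "investigate / consider renewing the model"
    print(f"[{stage}] demographic parity difference = {gap:.3f} -> {verdict}")
    return passed

# Champion-model validation on a held-out sample (illustrative data).
fairness_gate([1, 0, 1, 1, 0, 1], ["A", "A", "A", "B", "B", "B"], stage="validation")
# The same check re-run periodically on production outcomes.
fairness_gate([1, 1, 1, 0, 0, 0], ["A", "A", "A", "B", "B", "B"], stage="production")
```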

· Prepare Input Data:

1. An important aspect of the Accountability and Responsibility principle during the Prepare Input Data step of the AI System Lifecycle is data quality, as it affects the outcome of the AI model and the resulting decisions. It is therefore important to perform the necessary data quality checks, clean the data, and ensure the integrity of the data in order to get accurate results and capture the intended behavior in supervised and unsupervised models.
2. Data sets should be approved and signed off before commencing with developing the AI model. Furthermore, the data should be cleansed of societal biases. Consistent with the fairness principle, sensitive features should not be included in the model data. In the event that sensitive features need to be included, the rationale or trade-off behind the decision for such inclusion should be clearly explained. The data preparation process and data quality checks should be documented and validated by the responsible parties.
3. Documentation of the process is necessary for auditing and risk mitigation. Data must be properly acquired, classified, processed, and accessible to ease human intervention and control at later stages when needed.

Published by SDAIA in AI Ethics Principles, Sept 14, 2022
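
As one possible reading of the data-quality and documentation requirements above, the sketch below runs basic integrity checks and emits an auditable report that could feed the sign-off step. The field names, the completeness limit, and the approval rule are assumptions.

```python
"""Minimal sketch of documented data-quality checks before model development.
Field names, completeness limits, and the approval rule are illustrative
assumptions, not requirements taken from the SDAIA text."""
import json
from datetime import date

EXPECTED_FIELDS = {"customer_id", "income", "outcome"}
MAX_MISSING_RATE = 0.01   # illustrative completeness requirement

def quality_report(rows: list[dict]) -> dict:
    """Run basic integrity checks and return an auditable report."""
    missing = sum(1 for r in rows for f in EXPECTED_FIELDS if r.get(f) is None)
    total_cells = len(rows) * len(EXPECTED_FIELDS)
    schema_ok = all(EXPECTED_FIELDS <= r.keys() for r in rows)
    duplicate_ids = len(rows) - len({r.get("customer_id") for r in rows})
    report = {
        "date": date.today().isoformat(),
        "rows": len(rows),
        "schema_ok": schema_ok,
        "missing_rate": missing / total_cells if total_cells else 0.0,
        "duplicate_ids": duplicate_ids,
    }
    report["approved"] = (
        schema_ok and report["missing_rate"] <= MAX_MISSING_RATE and duplicate_ids == 0
    )
    return report

rows = [
    {"customer_id": 1, "income": 40000, "outcome": 1},
    {"customer_id": 2, "income": None, "outcome": 0},
]
# The report itself becomes part of the documentation signed off by responsible parties.
print(json.dumps(quality_report(rows), indent=2))
```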