1. Transparency and Explainability
Practices that demonstrate repeatability include conducting repeatability assessments to verify that deployments in live environments produce consistent results, and performing counterfactual fairness testing to confirm that the AI system’s decisions are the same in the real world and in a counterfactual world.
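Counterfactual fairness testing can be sketched as follows: for each input, swap only the sensitive attribute and check whether the decision changes. This is a minimal illustration, not a prescribed method; the model, the `gender` field, and the record format are all hypothetical assumptions.

```python
# Sketch of counterfactual fairness testing: flip only the sensitive
# attribute of each record and flag any record whose decision changes.
# The model and field names below are illustrative assumptions.

def counterfactual_fairness_test(model, records, sensitive_key, alternatives):
    """Return (record, alternative) pairs whose decision changes when
    only the sensitive attribute is swapped for an alternative value."""
    failures = []
    for record in records:
        original_decision = model(record)
        for alt in alternatives:
            if alt == record[sensitive_key]:
                continue
            counterfactual = {**record, sensitive_key: alt}
            if model(counterfactual) != original_decision:
                failures.append((record, alt))
    return failures

# Toy model that (unfairly) applies a stricter cutoff to one group,
# so the test should flag it.
def biased_model(applicant):
    return applicant["score"] >= (60 if applicant["gender"] == "F" else 50)

records = [{"gender": "F", "score": 55}, {"gender": "M", "score": 70}]
failures = counterfactual_fairness_test(biased_model, records, "gender", ["F", "M"])
print(len(failures))  # 1: the applicant with score 55 is treated differently
```

A model that ignores the sensitive attribute entirely would return an empty failure list under this check.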
2. Fairness and Equity
Deployers of AI systems should conduct regular testing of such systems to confirm whether bias is present and, where bias is confirmed, make the necessary adjustments to rectify imbalances and ensure equity.
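One common form such regular testing can take is a demographic-parity style check: compare selection rates across groups and flag large gaps. This is a minimal sketch under assumed inputs; the 0.2 threshold and the decision format are illustrative, not part of any mandated standard.

```python
# Sketch of a periodic bias check comparing selection rates per group.
# The threshold and data format are illustrative assumptions.
from collections import defaultdict

def selection_rates(decisions):
    """decisions: list of (group, approved) pairs -> approval rate per group."""
    counts = defaultdict(lambda: [0, 0])  # group -> [approved, total]
    for group, approved in decisions:
        counts[group][0] += int(approved)
        counts[group][1] += 1
    return {g: a / t for g, (a, t) in counts.items()}

def parity_gap(decisions):
    """Difference between the highest and lowest group selection rates."""
    rates = selection_rates(decisions)
    return max(rates.values()) - min(rates.values())

decisions = [("A", True), ("A", True), ("A", False),
             ("B", True), ("B", False), ("B", False)]
gap = parity_gap(decisions)
print(round(gap, 2))  # 0.33: rate is 2/3 for group A vs 1/3 for group B
if gap > 0.2:  # illustrative threshold
    print("bias detected: rectify before continuing deployment")
```

Run periodically on live decisions, a check like this gives deployers a concrete trigger for the "necessary adjustments" described above.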
For example, the training and test datasets for an AI system used in the education sector should be adequately representative of the student population by including students of different genders and ethnicities.
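A representativeness check of this kind can be sketched by comparing each group's share of the dataset with its share of the underlying population. The group names, shares, and 10% tolerance below are illustrative assumptions.

```python
# Sketch of a dataset representativeness check: compare each group's
# share of the training data with its share of the population.
# Groups, shares, and the tolerance are illustrative assumptions.

def representativeness_gaps(dataset_counts, population_shares):
    """Per-group gap between dataset share and population share."""
    total = sum(dataset_counts.values())
    return {g: dataset_counts.get(g, 0) / total - share
            for g, share in population_shares.items()}

population_shares = {"female": 0.5, "male": 0.5}  # assumed student population
dataset_counts = {"female": 300, "male": 700}     # assumed training data

gaps = representativeness_gaps(dataset_counts, population_shares)
underrepresented = [g for g, gap in gaps.items() if gap < -0.1]  # 10% tolerance
print(underrepresented)  # ['female']: only 30% of records vs 50% of students
```

The same comparison extends to ethnicity or any other attribute for which reliable population shares are available.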
3. Security and Safety
AI systems should be safe and sufficiently secure against malicious attacks.
Safety refers to ensuring the safety of developers, deployers, and users of AI systems by conducting impact or risk assessments and ensuring that known risks have been identified and mitigated.
A risk-prevention approach should be adopted, and precautions should be put in place so that humans can intervene to prevent harm, or so that the system can safely disengage itself in the event that an AI system makes unsafe decisions; autonomous vehicles that cause injury to pedestrians are an illustration of this.
Ensuring that AI systems are safe is essential to fostering public trust in AI.
The safety of the public and of the users of AI systems should be of utmost priority in the decision-making processes of AI systems, and risks should be assessed and mitigated to the greatest extent possible.
Before deploying AI systems, deployers should conduct risk assessments and relevant testing or certification, and implement the appropriate level of human intervention to prevent harm when unsafe decisions occur.
It is also important for deployers to define a minimum list of security tests (e.g. vulnerability assessment and penetration testing) and other applicable security testing tools.
4. Human Centricity
One example of how deployers can do this is to test the AI system with a small group of internal users from varied backgrounds and demographics and incorporate their feedback into the AI system.
7. Robustness and Reliability
Deployers should conduct rigorous testing before deployment to ensure robustness and consistent results across a range of situations and environments.
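One simple way to exercise a system "across a range of situations" is a perturbation-based robustness check: apply small random perturbations to each input and measure how often the decision stays stable. The model, noise magnitude, and scoring scheme below are illustrative assumptions, not a prescribed test suite.

```python
# Sketch of a robustness check: perturb each input slightly and
# measure the fraction of decisions that remain stable.
# Model, noise size, and trial count are illustrative assumptions.
import random

def robustness_score(model, inputs, noise=0.01, trials=50, seed=0):
    """Fraction of (input, perturbation) pairs whose decision is stable."""
    rng = random.Random(seed)
    stable = total = 0
    for x in inputs:
        baseline = model(x)
        for _ in range(trials):
            perturbed = x + rng.uniform(-noise, noise)
            stable += int(model(perturbed) == baseline)
            total += 1
    return stable / total

def threshold_model(x):
    return x >= 0.5  # toy decision rule

inputs = [0.1, 0.3, 0.7, 0.9]  # all far from the 0.5 decision boundary
score = robustness_score(threshold_model, inputs)
print(score)  # 1.0 here; inputs near 0.5 would score lower
```

A deployer would set a minimum acceptable score per environment and repeat the check whenever the system or its operating conditions change.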