1. Transparency and Explainability
In the past, AI algorithms have been found to discriminate against female job applicants and have failed to accurately recognise the faces of dark-skinned women.
Some practices to demonstrate repeatability include conducting repeatability assessments to ensure that deployments in live environments are repeatable, and performing counterfactual fairness testing to ensure that the AI system’s decisions are the same in both the real world and in the counterfactual world.
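As a minimal sketch of what counterfactual fairness testing might look like in practice, the Python snippet below flips a sensitive attribute in each test record and flags any record whose decision changes. The model object, the `predict()` interface, the column name "gender" and the attribute values are illustrative assumptions, not part of the guidance above.

```python
# Illustrative sketch only: assumes a scikit-learn-style model exposing
# predict(), a pandas DataFrame of test records, and a binary sensitive
# attribute. Column and value names are hypothetical placeholders.
import pandas as pd

def counterfactual_fairness_check(model, records: pd.DataFrame,
                                  sensitive_col: str = "gender",
                                  swap: dict | None = None) -> pd.DataFrame:
    """Flip the sensitive attribute and return records whose decision changes."""
    if swap is None:
        swap = {"male": "female", "female": "male"}  # assumed binary attribute

    original_pred = model.predict(records)

    # Build the counterfactual world: identical records, flipped attribute.
    counterfactual = records.copy()
    counterfactual[sensitive_col] = counterfactual[sensitive_col].map(swap)
    counterfactual_pred = model.predict(counterfactual)

    changed = original_pred != counterfactual_pred
    print(f"{changed.sum()} of {len(records)} decisions changed "
          f"when '{sensitive_col}' was flipped.")
    return records[changed]  # records that fail the counterfactual test
```

In a live deployment, any records returned by such a check would typically be escalated for human review and used as evidence in the repeatability assessment.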
2. Fairness and Equity
Deployers should have safeguards in place to ensure that algorithmic decisions do not further exacerbate or amplify existing discriminatory or unjust impacts across different demographics, and that the design, development, and deployment of AI systems do not result in unfair bias or discrimination.
Deployers of AI systems should conduct regular testing of such systems to confirm whether there is bias and, where bias is confirmed, make the necessary adjustments to rectify imbalances and ensure equity.
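One way such regular testing is often operationalised is by computing a group fairness metric over a recent batch of decisions. The sketch below compares favourable-decision rates across demographic groups; the column names, the decision log, and the tolerance threshold are assumptions made purely for illustration.

```python
# Minimal sketch of a periodic bias check, assuming a log of recent
# decisions with hypothetical columns "group" (demographic attribute)
# and "approved" (1 = favourable decision, 0 = unfavourable decision).
import pandas as pd

def demographic_parity_gap(decisions: pd.DataFrame,
                           group_col: str = "group",
                           outcome_col: str = "approved") -> float:
    """Difference between the highest and lowest favourable-decision rate across groups."""
    rates = decisions.groupby(group_col)[outcome_col].mean()
    print(rates)  # per-group rates, useful for the audit trail
    return rates.max() - rates.min()

# Example usage (file name and the 0.1 tolerance are illustrative):
# gap = demographic_parity_gap(pd.read_csv("recent_decisions.csv"))
# if gap > 0.1:
#     ...  # trigger review and rectify imbalances before further deployment
```

Demographic parity is only one of several group fairness metrics; the appropriate metric and tolerance depend on the context of the decision being made.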
If not properly managed, an AI system’s outputs used to make decisions with significant impact on individuals could perpetuate existing discriminatory or unjust impacts on specific demographics.
To mitigate discrimination, it is important that the design, development, and deployment of AI systems align with fairness and equity principles.
Appropriate measures should be taken to mitigate potential biases during data collection and pre-processing, training, and inference.
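As one hedged illustration of a pre-processing measure, the sketch below applies reweighing: training samples in under-represented combinations of demographic group and label receive larger weights so that no group-label cell dominates training. The column names "group" and "label" and the data layout are assumptions for the example only.

```python
# Sketch of reweighing as a pre-processing mitigation step: each row is
# weighted by expected / observed frequency of its (group, label) cell.
# Column names are illustrative assumptions.
import pandas as pd

def reweighing_weights(df: pd.DataFrame,
                       group_col: str = "group",
                       label_col: str = "label") -> pd.Series:
    """Per-row sample weights that balance group and label combinations."""
    n = len(df)
    p_group = df[group_col].value_counts(normalize=True)
    p_label = df[label_col].value_counts(normalize=True)
    p_joint = df.groupby([group_col, label_col]).size() / n

    expected = df.apply(lambda r: p_group[r[group_col]] * p_label[r[label_col]], axis=1)
    observed = df.apply(lambda r: p_joint[(r[group_col], r[label_col])], axis=1)
    return expected / observed  # pass as sample_weight when fitting the model
```

Many common training libraries accept such weights through a `sample_weight` argument at fit time; comparable mitigations exist at the training stage (e.g. fairness constraints) and at inference (e.g. calibrated, group-aware thresholds).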