The principle "ASEAN Guide on AI Governance and Ethics" has mentioned the topic "test" in the following places:

    1. Transparency and Explainability

    Some practices to demonstrate repeatability include conducting repeatability assessments to ensure deployments in live environments are repeatable and performing counterfactual fairness testing to ensure that the AI system’s decisions are the same in both the real world and in the counterfactual world.
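
    As an illustration of the counterfactual fairness testing idea above, the sketch below flips a single sensitive attribute and checks whether the model's decision changes. The column name, attribute values, and the scikit-learn-style predict interface are assumptions for illustration, not taken from the Guide.

        import pandas as pd

        def counterfactual_fairness_check(model, records: pd.DataFrame,
                                          sensitive_col: str = "gender",
                                          swap: dict = None) -> float:
            # Decisions in the "real world".
            original = model.predict(records)
            # Build the "counterfactual world" by changing only the sensitive attribute.
            swap = swap or {"male": "female", "female": "male"}
            counterfactual = records.copy()
            counterfactual[sensitive_col] = counterfactual[sensitive_col].map(swap)
            flipped = model.predict(counterfactual)
            # Share of records whose decision changed; 0.0 means the decisions
            # are the same in both worlds.
            return float((original != flipped).mean())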

    2. Fairness and Equity

    Deployers of AI systems should conduct regular testing of such systems to confirm whether bias is present and, where bias is confirmed, make the necessary adjustments to rectify imbalances and ensure equity.
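
    One way such regular testing is often operationalised is a simple group-rate comparison, as in the sketch below; the metric choice (demographic parity difference) and the column names are illustrative assumptions rather than requirements from the Guide.

        import pandas as pd

        def demographic_parity_gap(decisions: pd.DataFrame, group_col: str,
                                   outcome_col: str) -> float:
            # Positive-outcome rate per group; a large max-min gap signals an
            # imbalance that may need rectifying.
            rates = decisions.groupby(group_col)[outcome_col].mean()
            return float(rates.max() - rates.min())

        # Example: rerun regularly on fresh decision logs and investigate when
        # the gap exceeds an agreed tolerance (the tolerance itself is a policy choice).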

    2. Fairness and Equity

    For example, the training and test dataset for an AI system used in the education sector should be adequately representative of the student population by including students of different genders and ethnicities.
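
    A minimal sketch of how a deployer might check representativeness is to compare group shares in the training or test data against target population shares; the column name and the population figures in the usage comment are made-up illustrative values.

        import pandas as pd

        def representation_gaps(dataset: pd.DataFrame, col: str,
                                population_shares: dict) -> dict:
            # Share of each group in the dataset minus its share in the population;
            # values far from zero indicate under- or over-representation.
            dataset_shares = dataset[col].value_counts(normalize=True).to_dict()
            return {group: dataset_shares.get(group, 0.0) - share
                    for group, share in population_shares.items()}

        # e.g. representation_gaps(train_df, "ethnicity",
        #                          {"group_a": 0.6, "group_b": 0.3, "group_c": 0.1})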

    3. Security and Safety

    Before deploying AI systems, deployers should conduct risk assessments and relevant testing or certification and implement the appropriate level of human intervention to prevent harm when unsafe decisions take place.
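
    One common pattern for implementing an "appropriate level of human intervention" is to route low-confidence or high-risk decisions to a human reviewer, as in the minimal sketch below; the confidence threshold and risk flag are assumptions for illustration, not prescribed by the Guide.

        def decide_with_human_oversight(prediction, confidence: float, high_risk: bool,
                                        confidence_threshold: float = 0.9) -> dict:
            # Low-confidence or high-risk cases are escalated instead of auto-applied,
            # so a human can prevent harm from an unsafe decision.
            if high_risk or confidence < confidence_threshold:
                return {"action": "escalate_to_human_review", "prediction": prediction}
            return {"action": "auto_apply", "prediction": prediction}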

    3. Security and Safety

    It is also important for deployers to make a minimum list of security testing (e.g. vulnerability assessment and penetration testing) and other applicable security testing tools.

    4. Human-centricity

    One example of how deployers can do this is to test the AI system with a small group of internal users from varied backgrounds and demographics and incorporate their feedback into the AI system.

    7. Robustness and Reliability

    Deployers should conduct rigorous testing before deployment to ensure robustness and consistent results across a range of situations and environments.
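
    As a minimal sketch of such pre-deployment robustness testing, the check below compares accuracy on clean and mildly perturbed inputs; the perturbation, tolerance, and scikit-learn-style predict interface are illustrative assumptions.

        import numpy as np

        def robustness_check(model, X: np.ndarray, y: np.ndarray,
                             noise_scale: float = 0.01, max_drop: float = 0.02) -> bool:
            # Accuracy on clean inputs versus mildly perturbed inputs; the check
            # passes if accuracy does not drop by more than the agreed margin.
            baseline = (model.predict(X) == y).mean()
            noisy = X + np.random.normal(0.0, noise_scale, size=X.shape)
            perturbed = (model.predict(noisy) == y).mean()
            return (baseline - perturbed) <= max_drop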