4. Fairness Obligation.

Publisher: The Public Voice coalition, established by the Electronic Privacy Information Center (EPIC)

Institutions must ensure that AI systems do not reflect unfair bias or make impermissible discriminatory decisions. [Explanatory Memorandum] The Fairness Obligation recognizes that all automated systems make decisions that reflect bias and discrimination, but that such decisions should not be normatively unfair. There is no simple answer to the question of what is unfair or impermissible; the evaluation often depends on context. But the Fairness Obligation makes clear that an assessment of objective outcomes alone is not sufficient to evaluate an AI system. Normative consequences must also be assessed, including those that preexist the system or may be amplified by it.