3. New technology, including AI systems, must be transparent and explainable
For the public to trust AI, it must be transparent. Technology companies must be clear about who trains their AI systems, what data was used in that training, and, most importantly, what went into their algorithms' recommendations. If we are to use AI to help make important decisions, it must be explainable.
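To make the idea concrete, here is a minimal sketch, in Python, of one way "what went into a recommendation" can be disclosed: a linear scorer whose per-feature contributions are reported alongside its output. The feature names, weights, and applicant values are illustrative assumptions, not any particular company's system.

    # Minimal sketch of an explainable recommendation (illustrative assumptions only).
    # With a linear scorer, the contribution of feature i is simply weight_i * value_i,
    # so the system can state exactly what went into each decision.

    FEATURES = ["income", "tenure_years", "late_payments"]                   # assumed features
    WEIGHTS = {"income": 0.4, "tenure_years": 0.3, "late_payments": -0.8}    # assumed weights
    BIAS = 0.1

    def explain_recommendation(applicant):
        """Return the score and a per-feature breakdown of how it was reached."""
        contributions = {f: WEIGHTS[f] * applicant[f] for f in FEATURES}
        score = BIAS + sum(contributions.values())
        return score, contributions

    score, why = explain_recommendation(
        {"income": 1.2, "tenure_years": 0.5, "late_payments": 1.0})          # assumed applicant
    print(f"score = {score:.2f}")
    for feature, contribution in sorted(why.items(), key=lambda kv: -abs(kv[1])):
        print(f"  {feature}: {contribution:+.2f}")

A linear scorer is the simplest case; for more complex models the same disclosure would come from an attribution method, but the principle is unchanged: the system can say what went into a recommendation.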
AI systems should be understandable.
Does the development of AI put critical thinking at risk?
How do we minimize the dissemination of fake news or misleading information?
Should research results on AI, whether positive or negative, be made available and accessible?
Is it acceptable not to be informed that medical or legal advice has been given by a chatbot?
How transparent should the internal decision-making processes of algorithms be?
The development of AI should promote critical thinking and protect us from propaganda and manipulation.
We will make AI systems fair
1. Data ingested should, where possible, be representative of the affected population
2. Algorithms should avoid non-operational bias
3. Steps should be taken to mitigate and disclose the biases inherent in datasets
4. Significant decisions should be provably fair (one minimal check is sketched after this list)
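As a concrete illustration of points 3 and 4, here is a minimal sketch, in Python, of a disparate-impact check: it compares positive-decision rates across groups and flags a decision log whose ratio falls below the commonly used four-fifths screening threshold. The group labels, toy decision log, and the 0.8 threshold are assumptions for illustration.

    # Minimal disparate-impact check (illustrative assumptions only).
    # decisions: list of (group, approved) pairs from a decision log.
    from collections import defaultdict

    def approval_rates(decisions):
        """Positive-decision rate per group."""
        totals, approved = defaultdict(int), defaultdict(int)
        for group, ok in decisions:
            totals[group] += 1
            approved[group] += int(ok)
        return {g: approved[g] / totals[g] for g in totals}

    def disparate_impact_ratio(decisions):
        """Ratio of the lowest to the highest group approval rate."""
        rates = approval_rates(decisions)
        return min(rates.values()) / max(rates.values()), rates

    log = [("A", True), ("A", True), ("A", False),
           ("B", True), ("B", False), ("B", False)]    # assumed toy decision log
    ratio, rates = disparate_impact_ratio(log)
    print(rates, f"ratio = {ratio:.2f}")
    if ratio < 0.8:  # four-fifths rule, an assumed screening threshold
        print("Potential disparate impact: disclose and investigate the bias.")

The point is not the particular threshold but that the fairness claim becomes something that can be checked and disclosed rather than merely asserted.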
4. Fairness Obligation.
Published by: The Public Voice coalition, established by the Electronic Privacy Information Center (EPIC), in the Universal Guidelines for Artificial Intelligence
Institutions must ensure that AI systems do not reflect unfair bias or make impermissible discriminatory decisions.
The Fairness Obligation recognizes that all automated systems make decisions that reflect bias and discrimination, but such decisions should not be normatively unfair. There is no simple answer to the question of what is unfair or impermissible; the evaluation often depends on context. But the Fairness Obligation makes clear that assessing objective outcomes alone is not sufficient to evaluate an AI system. Normative consequences must also be assessed, including those that preexist or may be amplified by an AI system.