1 Protect human autonomy
Human oversight may depend on the risks associated with an AI system but should always be meaningful and should thus include effective, transparent monitoring of human values and moral considerations.
3 Ensure transparency, explainability and intelligibility
AI should be intelligible or understandable to developers, users and regulators.
Two broad approaches to ensuring intelligibility are improving the transparency and explainability of AI technology.
Transparency requires that sufficient information (described below) be published or documented before the design and deployment of an AI technology.
Such transparency will improve system quality and protect patient safety and public health.
For instance, system evaluators require transparency in order to identify errors, and government regulators rely on transparency to conduct proper, effective oversight.
It must be possible to audit an AI technology, including if something goes wrong.
Transparency should include accurate information about the assumptions and limitations of the technology, its operating protocols, the properties of the data (including methods of data collection, processing and labelling) and the development of the algorithmic model.
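The disclosure items listed above can be treated as a checklist that must be complete before deployment. The following is a minimal sketch of that idea; the field names and the `missing_disclosures` helper are illustrative assumptions, not part of any mandated standard.

```python
# Hypothetical pre-deployment transparency checklist.
# The required items mirror the disclosures named in the text:
# assumptions, limitations, operating protocols, data properties
# (collection, processing, labelling) and model development.
REQUIRED_DISCLOSURES = (
    "assumptions",
    "limitations",
    "operating_protocols",
    "data_collection",
    "data_processing",
    "data_labelling",
    "model_development",
)


def missing_disclosures(record: dict) -> list:
    """Return the disclosure items that are absent or left empty."""
    return [item for item in REQUIRED_DISCLOSURES if not record.get(item)]
```

A regulator or evaluator could then refuse to sign off while `missing_disclosures` returns a non-empty list.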
AI technologies should be explainable to the extent possible and according to the capacity of those to whom the explanation is directed.
Data protection laws already create specific obligations of explainability for automated decision making.
There is a possible trade-off between full explainability of an algorithm (at the cost of accuracy) and improved accuracy (at the cost of explainability).
Tests and evaluations should be regular, transparent and of sufficient breadth to cover differences in the performance of the algorithm according to race, ethnicity, gender, age and other relevant human characteristics.
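One concrete form such an evaluation can take is disaggregating accuracy by demographic subgroup and reporting the largest gap. This is a minimal sketch under assumed inputs (labels, predictions and a group attribute per patient); the function names are illustrative.

```python
from collections import defaultdict


def accuracy_by_group(y_true, y_pred, groups):
    """Accuracy of predictions within each subgroup
    (e.g. by sex, age band, race or ethnicity)."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for truth, pred, group in zip(y_true, y_pred, groups):
        total[group] += 1
        correct[group] += int(truth == pred)
    return {g: correct[g] / total[g] for g in total}


def max_accuracy_gap(y_true, y_pred, groups):
    """Largest difference in accuracy between any two subgroups:
    a simple summary of uneven algorithm performance."""
    accuracies = accuracy_by_group(y_true, y_pred, groups).values()
    return max(accuracies) - min(accuracies)
```

Publishing such disaggregated results at each evaluation cycle is one way to make differences in performance visible to external auditors.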
Health care institutions, health systems and public health agencies should regularly publish information about how decisions were made to adopt an AI technology, how the technology will be evaluated periodically, its uses, its known limitations and the role of AI in decision-making. Such disclosure can facilitate external auditing and oversight.
4 Foster responsibility and accountability
Humans require clear, transparent specification of the tasks that systems can perform and the conditions under which they can achieve the desired level of performance; this helps to ensure that health care providers can use an AI technology responsibly.
Institutions have not only legal liability but also a duty to assume responsibility for decisions made by the algorithms they use, even if it is not feasible to explain in detail how the algorithms produce their results.
6 Promote artificial intelligence that is responsive and sustainable
Responsiveness requires that designers, developers and users continuously, systematically and transparently examine an AI technology to determine whether it is responding adequately, appropriately and according to communicated expectations and requirements in the context in which it is used.
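Continuous examination of this kind can be made systematic by monitoring live performance against the communicated expectation. The following is a minimal sketch of one such mechanism, a sliding-window accuracy check; the class name, threshold and window size are illustrative assumptions, not prescribed values.

```python
from collections import deque


class PerformanceMonitor:
    """Sliding-window check that an AI system's observed accuracy
    in its context of use stays above the communicated threshold."""

    def __init__(self, threshold: float, window: int = 100):
        self.threshold = threshold
        # Only the most recent `window` outcomes are retained.
        self.outcomes = deque(maxlen=window)

    def record(self, correct: bool) -> None:
        """Log whether the latest prediction was correct."""
        self.outcomes.append(bool(correct))

    def meets_expectation(self) -> bool:
        """True while recent accuracy is at or above the threshold."""
        if not self.outcomes:
            return True  # no evidence collected yet
        return sum(self.outcomes) / len(self.outcomes) >= self.threshold
```

When `meets_expectation` turns false, designers, developers and users have an objective trigger to re-examine whether the technology is still responding adequately in its deployment context.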