Published by: Department of Industry, Innovation and Science, Australian Government in AI Ethics Principles
When an AI system significantly impacts a person, community, group or environment, there should be a timely process to allow people to challenge the use or output of the AI system.
This principle aims to ensure the provision of efficient, accessible mechanisms that allow people to challenge the use or output of an AI system when that system significantly impacts a person, community, group or environment. The threshold for ‘significant impact’ will depend on the context, impact and application of the AI system in question.
Knowing that redress for harm is possible when things go wrong is key to ensuring public trust in AI. Particular attention should be paid to vulnerable persons or groups.
There should be sufficient access to the information available to the algorithm, and to the inferences drawn, for contestability to be effective. In the case of decisions that significantly affect rights, there should be an effective system of oversight that makes appropriate use of human judgment.
Fairness and inclusion
AI systems should make the same recommendations for everyone with similar characteristics or qualifications. Employers should be required to test AI in the workplace on a regular basis to ensure that the system is fit for purpose and is not harmfully influenced by bias of any kind: gender, race, sexual orientation, age, religion, income, family status and so on. AI development should adopt inclusive design practices to anticipate any potential deployment issues that could unintentionally exclude people. Workplace AI should be tested to ensure that it does not discriminate against vulnerable individuals or communities. Governments should review the impact of workplace, governmental and social AI on the opportunities and rights of poor people, Indigenous peoples and vulnerable members of society. In particular, the combined effect of overlapping AI systems on profiling and marginalization should be identified and countered.
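As one illustration of what such regular workplace testing could look like in practice, the short Python sketch below computes per-group selection rates and flags a disparate-impact ratio below 0.8 (the US "four-fifths" rule of thumb). The group labels, sample data and threshold are illustrative assumptions, not requirements drawn from the principle itself.

```python
# A minimal sketch of a periodic bias audit for a workplace AI system.
# Group labels, sample records and the 0.8 threshold (the US EEOC
# "four-fifths" rule of thumb) are illustrative assumptions.
from collections import defaultdict

def selection_rates(decisions):
    """decisions: iterable of (group, selected) pairs -> rate per group."""
    totals, selected = defaultdict(int), defaultdict(int)
    for group, was_selected in decisions:
        totals[group] += 1
        selected[group] += int(was_selected)
    return {g: selected[g] / totals[g] for g in totals}

def disparate_impact_ratio(decisions):
    """Ratio of the lowest group selection rate to the highest."""
    rates = selection_rates(decisions)
    return min(rates.values()) / max(rates.values())

# Example: recommendations produced by a hypothetical hiring system.
decisions = [("group_a", True), ("group_a", True), ("group_a", False),
             ("group_b", True), ("group_b", False), ("group_b", False)]

ratio = disparate_impact_ratio(decisions)
if ratio < 0.8:  # flag for human review when below the threshold
    print(f"Possible adverse impact: selection-rate ratio = {ratio:.2f}")
```

A real audit would of course repeat such checks across every protected characteristic listed above, and on fresh production data at each review cycle.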
As AI increasingly changes the nature of work, workers, customers and vendors need information about how AI systems operate so that they can understand how decisions are made. Their involvement will help to identify potential bias, errors and unintended outcomes. Transparency is not necessarily, nor only, a question of open source code. While in some circumstances open source code will be helpful, what matters more are clear, complete and testable explanations of what the system is doing and why. Protecting intellectual property, and sometimes even cyber security, can create incentives against transparency. Innovation generally, including in algorithms, is a value that should be encouraged. How, then, are these competing values to be balanced? One possibility is to require algorithmic verifiability rather than full algorithmic disclosure: companies would disclose not the actual code driving the algorithm, but information allowing the effect of their algorithms to be independently assessed. In the absence of transparency regarding an algorithm's purpose and actual effect, it is impossible to ensure that competition, labour, workplace safety, privacy and liability laws are being upheld. When accidents occur, the AI and related data will need to be transparent and accountable to an accident investigator, so that the process that led to the accident can be understood.
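To make "algorithmic verifiability" concrete: one way an independent assessor can evaluate an algorithm's effect without seeing its code is paired black-box testing, querying the system with inputs that differ only in a protected attribute and checking whether the decisions diverge. The sketch below assumes a hypothetical `predict` interface standing in for whatever query API a vendor might expose; it illustrates the idea rather than prescribing an audit protocol.

```python
# Sketch of black-box verifiability: an auditor probes a deployed model
# through its public interface, with no access to code or weights.
def paired_flip_test(predict, applicants, attribute, values):
    """For each applicant, vary only the protected attribute across the
    given values and record cases where the decision changes."""
    flips = []
    for person in applicants:
        outcomes = []
        for value in values:
            probe = dict(person)       # copy; mutate only the attribute
            probe[attribute] = value
            outcomes.append(predict(probe))
        if len(set(outcomes)) > 1:     # decision depended on the attribute
            flips.append(person)
    return flips

# Illustrative use against a hypothetical scoring function; a fair
# model's output should not change when only `gender` changes.
def predict(applicant):                # stand-in for the vendor's API
    return applicant["years_experience"] >= 3

applicants = [{"gender": "f", "years_experience": 5},
              {"gender": "m", "years_experience": 2}]
flips = paired_flip_test(predict, applicants, "gender", ["f", "m"])
print(f"{len(flips)} decision(s) depended on the protected attribute")
```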
2. Right to Human Determination.
Published by: The Public Voice coalition, established by Electronic Privacy Information Center (EPIC) in Universal Guidelines for Artificial Intelligence
All individuals have the right to a final determination made by a person.
The Right to a Human Determination reaffirms that individuals and not machines are responsible for automated decision making. In many instances, such as the operation of an autonomous vehicle, it would not be possible or practical to insert a human decision prior to an automated decision. But the aim remains to ensure accountability. Thus where an automated system fails, this principle should be understood as a requirement that a human assessment of the outcome be made.
1. Demand That AI Systems Are Transparent
A transparent artificial intelligence system is one in which it is possible to discover how, and why, the system made a decision or, in the case of a robot, acted the way it did.
A. We stress that open source code is neither necessary nor sufficient for transparency – clarity cannot be obfuscated by complexity.
B. For users, transparency is important because it builds trust in, and understanding of, the system, by providing a simple way for the user to understand what the system is doing and why.
C. For validation and certification of an AI system, transparency is important because it exposes the system’s processes for scrutiny.
D. If accidents occur, the AI will need to be transparent and accountable to an accident investigator, so the internal process that led to the accident can be understood (a minimal decision-logging sketch follows at the end of this principle).
E. Workers must have the right to demand transparency in the decisions and outcomes of AI systems, as well as in the underlying algorithms (see principle 4 below). This includes the right to appeal decisions made by AI algorithms and to have them reviewed by a human being.
F. Workers must be consulted on AI systems’ implementation, development and deployment.
G. Following an accident, judges, juries, lawyers, and expert witnesses involved in the trial process require transparency and accountability to inform evidence and decision making.
The principle of transparency is a prerequisite for ascertaining that the remaining principles are observed.
See Principle 2 below for an operational solution.
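To make items D and G above more tangible, the sketch below shows one possible shape for a per-decision audit record that an investigator could later replay. The field names, the JSON-lines format and the tamper-evident hash are assumptions chosen for illustration; actual investigative requirements would be set by regulators and courts.

```python
# Sketch of a per-decision audit log supporting items D and G above:
# each automated decision is recorded with enough context for a human
# investigator to reconstruct what the system did and why.
# Field names and the JSON-lines format are illustrative assumptions.
import datetime
import hashlib
import json

def log_decision(log_file, model_version, inputs, output, explanation):
    record = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "model_version": model_version,   # which system made the call
        "inputs": inputs,                 # what the system saw
        "output": output,                 # what it decided
        "explanation": explanation,       # why, in testable terms
    }
    # Hash the record so later tampering is detectable by an investigator.
    digest = hashlib.sha256(json.dumps(record, sort_keys=True).encode())
    record["digest"] = digest.hexdigest()
    log_file.write(json.dumps(record) + "\n")

# Example: a hypothetical autonomous-vehicle braking decision.
with open("decisions.jsonl", "a") as f:
    log_decision(f, "braking-model-1.4",
                 {"speed_kmh": 62, "obstacle_distance_m": 11},
                 "emergency_brake",
                 "obstacle within stopping distance at current speed")
```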