Second principle: Responsibility
Human responsibility for AI-enabled systems must be clearly established, ensuring accountability for their outcomes, with clearly defined means by which human control is exercised throughout their lifecycles.
The increased speed, complexity and automation of AI-enabled systems may complicate our understanding of pre-existing concepts of human control, responsibility and accountability.
Collectively, these articulations of human control, responsibility and risk ownership must enable clear accountability for the outcomes of any AI-enabled system in Defence.
There must be no deployment or use without clear lines of responsibility and accountability; the designated duty holder should not accept these responsibilities unless satisfied that they can exercise control commensurate with the risks involved.
Third principle: Understanding
While the ‘black box’ nature of some machine learning systems makes them difficult to explain fully, we must be able to audit either the systems or their outputs to a level that satisfies those who are duly and formally responsible and accountable.