Second principle: Responsibility
Human responsibility for AI-enabled systems must be clearly established, ensuring accountability for their outcomes, with clearly defined means by which human control is exercised throughout their lifecycles.
The increased speed, complexity and automation of AI-enabled systems may complicate our understanding of pre-existing concepts of human control, responsibility and accountability.
Human responsibility for the use of AI-enabled systems in Defence must be underpinned by a clear and consistent articulation of the means by which human control is exercised, and the nature and limitations of that control.
While the level of human control will vary according to the context and capabilities of each AI-enabled system, the ability to exercise human judgement over their outcomes is essential.
Collectively, these articulations of human control, responsibility and risk ownership must enable clear accountability for the outcomes of any AI-enabled system in Defence.
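The principle prescribes no particular mechanism, but a minimal sketch of what an auditable human control point might look like in practice is given below. The names request_engagement, approve_fn and decision_audit.log are hypothetical, invented for illustration only; they do not describe any Defence system.

    import datetime
    import json

    def request_engagement(recommendation, operator_id, approve_fn):
        # Gate an AI recommendation behind an explicit human decision.
        # approve_fn stands in for the human operator's judgement; the
        # system acts only on that answer (hypothetical interface).
        decision = approve_fn(recommendation)  # human judgement, not the model's
        audit_record = {
            "timestamp": datetime.datetime.utcnow().isoformat(),
            "operator": operator_id,
            "recommendation": recommendation,
            "approved": decision,
        }
        # Append-only audit trail: who exercised control, over what, and
        # when, so that accountability for the outcome can be traced.
        with open("decision_audit.log", "a") as log:
            log.write(json.dumps(audit_record) + "\n")
        return decision

The point of the sketch is the structure, not the code: the system's output is a recommendation until a named human accepts it, and that acceptance is recorded.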
Third principle: Understanding
To enable this understanding, we must be able to verify that our AI-enabled systems work as intended.
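One simple form such verification can take is an acceptance test against the system's specified behaviour. The following is a sketch under assumptions: verify_system, test_cases and the 95% threshold are illustrative, not figures drawn from the principle.

    def verify_system(model, test_cases, min_accuracy=0.95):
        # test_cases: (input, expected_output) pairs derived from the
        # system's requirements; min_accuracy is an illustrative
        # acceptance threshold, not a mandated one.
        passed = sum(1 for x, expected in test_cases if model(x) == expected)
        accuracy = passed / len(test_cases)
        assert accuracy >= min_accuracy, (
            f"Verification failed: {accuracy:.2%} < {min_accuracy:.2%}"
        )
        return accuracy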
Fifth principle: Reliability
Those parameters must be regularly reviewed and tested so that reliability can be assured on an ongoing basis, particularly as AI-enabled systems learn and evolve over time or are deployed in new contexts.
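A common technical expression of this ongoing review is monitoring for distribution shift: checking whether the data a system now sees still resembles the data against which its reliability was assured. The sketch below assumes a single numeric feature and uses a two-sample Kolmogorov-Smirnov test from SciPy; check_for_drift and the significance level alpha are illustrative choices, and a real system would monitor many features and outcomes.

    from scipy.stats import ks_2samp

    def check_for_drift(reference_inputs, live_inputs, alpha=0.01):
        # Compare live inputs against the reference data used when the
        # system's reliability parameters were last assured.
        statistic, p_value = ks_2samp(reference_inputs, live_inputs)
        if p_value < alpha:
            # Distribution shift detected: trigger re-review and
            # retesting before continuing to rely on prior assurance.
            return True
        return False

A drift flag does not itself mean the system is unreliable; it means the earlier assurance may no longer apply and the review the principle calls for is due.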