3. Traceable.

DoD’s AI engineering discipline should be sufficiently advanced such that technical experts possess an appropriate understanding of the technology, development processes, and operational methods of its AI systems, including transparent and auditable methodologies, data sources, and design procedure and documentation.
Principle: AI Ethics Principles for DoD, Oct 31, 2019

Published by Defense Innovation Board (DIB), Department of Defense (DoD), United States

Related Principles

· Independent Global AI Safety and Verification Research

Independent research into AI safety and verification is critical to develop techniques to ensure the safety of advanced AI systems. States, philanthropists, corporations and experts should enable global independent AI safety and verification research through a series of Global AI Safety and Verification Funds. These funds should scale to a significant fraction of global AI research and development expenditures to adequately support and grow independent research capacity. In addition to foundational AI safety research, these funds would focus on developing privacy-preserving and secure verification methods, which act as enablers for domestic governance and international cooperation. These methods would allow states to credibly check an AI developer's evaluation results, and whether mitigations specified in their safety case are in place. In the future, these methods may also allow states to verify safety-related claims made by other states, including compliance with the Safety Assurance Frameworks and declarations of significant training runs. Eventually, comprehensive verification could take place through several methods, including third-party governance (e.g., independent audits), software (e.g., audit trails) and hardware (e.g., hardware-enabled mechanisms on AI chips). To ensure global trust, it will be important to have international collaborations developing and stress-testing verification methods. Critically, despite broader geopolitical tensions, globally trusted verification methods have allowed, and could allow again, states to commit to specific international agreements.

Published by IDAIS (International Dialogues on AI Safety) in IDAIS-Venice, Sept 5, 2024

C. Explainability and Traceability:

AI applications will be appropriately understandable and transparent, including through the use of review methodologies, sources, and procedures. This includes verification, assessment and validation mechanisms at either a NATO and/or national level.

Published by The North Atlantic Treaty Organization (NATO) in NATO Principles of Responsible Use of Artificial Intelligence in Defence, Oct 22, 2021

Chapter 3. The Norms of Research and Development

10. Strengthen the awareness of self-discipline. Strengthen self-discipline in activities related to AI research and development, actively integrate AI ethics into every phase of technology research and development, consciously carry out self-censorship, strengthen self-management, and do not engage in AI research and development that violates ethics and morality.

11. Improve data quality. In the phases of data collection, storage, use, processing, transmission, provision, disclosure, etc., strictly abide by data-related laws, standards and norms. Improve the completeness, timeliness, consistency, normativeness and accuracy of data.

12. Enhance safety, security and transparency. In the phases of algorithm design, implementation, and application, improve transparency, interpretability, understandability, reliability, and controllability; enhance the resilience, adaptability, and anti-interference ability of AI systems; and gradually realize verifiable, auditable, supervisable, traceable, predictable and trustworthy AI.

13. Avoid bias and discrimination. During the process of data collection and algorithm development, strengthen ethics review, fully consider the diversity of demands, avoid potential data and algorithmic bias, and strive to achieve inclusivity, fairness and non-discrimination of AI systems.

Published by National Governance Committee for the New Generation Artificial Intelligence, China in Ethical Norms for the New Generation Artificial Intelligence, Sep 25, 2021

· Build and Validate:

1. To develop a sound and functional AI system that is both reliable and safe, the AI system's technical construct should be accompanied by a comprehensive methodology to test the quality of the predictive, data-based systems and models according to standard policies and protocols.

2. To ensure the technical robustness of an AI system, rigorous testing, validation, and re-assessment are required, as well as the integration of adequate mechanisms of oversight and control into its development. System integration test sign-off should be done with relevant stakeholders to minimize risks and liability.

3. Automated AI systems involving scenarios where decisions are understood to have an impact that is irreversible or difficult to reverse, or that may involve life-and-death decisions, should trigger human oversight and final determination. Furthermore, AI systems should not be used for social scoring or mass surveillance purposes.

Published by SDAIA in AI Ethics Principles, Sept 14, 2022

3. Traceable

The Department's AI capabilities will be developed and deployed such that relevant personnel possess an appropriate understanding of the technology, development processes, and operational methods applicable to AI capabilities, including with transparent and auditable methodologies, data sources, and design procedures and documentation.

Published by Department of Defense (DoD), United States in DoD's AI ethical principles, Feb 24, 2020