Ensure fairness

We are fully determined to combat all types of reducible bias in data collection, derivation, and analysis. Our teams are trained to identify and challenge biases in our own decision-making and in the data we use to train and test our models. All data sets are evaluated for fairness, for possible inclusion of sensitive data, and for implicitly discriminatory collection methods. We run statistical tests to detect imbalanced and skewed datasets, and we apply augmentation methods to counter these statistical biases. We pressure-test our decisions through peer review of model design, execution, and outcomes, including peer review of model training and performance metrics. Before a model graduates from one development stage to the next, a review is conducted against required acceptance criteria. This review includes in-sample and out-of-sample testing to mitigate the risk of the model overfitting to its training data and producing biased outcomes in production. We subscribe to the principles laid out in the Department of Defense's AI ethical principles: that AI technologies should be responsible, equitable, traceable, reliable, and governable.
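The imbalance checks and dataset augmentation described above can be sketched in a few lines. This is a minimal illustration, not Rebellion Defense's actual tooling: the function names are invented, and naive random oversampling stands in for stronger augmentation methods.

```python
import random
from collections import Counter

def imbalance_ratio(labels):
    """Ratio of largest to smallest class count; 1.0 means perfectly balanced."""
    counts = Counter(labels)
    return max(counts.values()) / min(counts.values())

def oversample_minority(records, labels, seed=0):
    """Naive random oversampling: duplicate minority-class rows until every
    class matches the majority count. A placeholder for stronger augmentation
    techniques; assumes records and labels are aligned, same-length sequences."""
    rng = random.Random(seed)
    counts = Counter(labels)
    target = max(counts.values())
    by_class = {c: [r for r, l in zip(records, labels) if l == c] for c in counts}
    out_records, out_labels = [], []
    for c, rows in by_class.items():
        extra = [rng.choice(rows) for _ in range(target - len(rows))]
        for r in rows + extra:
            out_records.append(r)
            out_labels.append(c)
    return out_records, out_labels
```

A dataset with a 3:1 class ratio would be flagged by the first check and balanced to 1:1 by the second; in practice a team would pair such checks with formal statistical tests before promoting a dataset.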
Principle: AI Ethical Principles, January 2023

Published by Rebellion Defense

Related Principles

1. Accountability and Transparency

· ADP believes that human oversight is core to providing reliable ML results. We have implemented audit and risk assessments to test our models as the baseline of our oversight methodologies. We continue to actively monitor and improve our models and systems to ensure that changes in the underlying data or model conditions do not inappropriately affect the desired results.

· ADP provides information on how we handle personal data in the relevant privacy statement that is made available to our clients' employees, consumers, or job applicants.

Published by ADP in ADP: Ethics in Artificial Intelligence, 2018 (unconfirmed)

Uphold high standards of scientific and technological excellence

Rebellion is committed to scientific excellence as we advance the development, testing, and deployment of artificial intelligence and other technologies. We believe AI should be interpretable and explainable to its human users. Because AI models evolve so quickly, we continually seek out emerging best practices in artificial intelligence and throughout the software and defense industries. We strive for scientific rigor such that all scientific investigations, research, and practices are conducted with the highest level of precision and accuracy. This includes strict adherence to protocols, accurate data collection and analysis, and careful interpretation of results. Our team communicates research findings and methodologies clearly and openly, in a manner that allows independent researchers to replicate our results. Black-box systems are antithetical to these standards. We build our technology to be intuitive and explainable in simple terms. In addition, we ensure the safety and security of the research, development, and production environments.

Published by Rebellion Defense in AI Ethical Principles, January 2023

· Prepare Input Data:

1. Following best practices of responsible data acquisition, handling, classification, and management must be a priority to ensure that results and outcomes align with the AI system's stated goals and objectives. Sound data quality and procurement begin with ensuring the integrity of the data source and the accuracy of the data in representing all observations, so as to avoid systematically disadvantaging under-represented groups or advantaging over-represented ones. The quantity and quality of the data sets should be sufficient and accurate to serve the purpose of the system. The sample size of the data collected or procured has a significant impact on the accuracy and fairness of the outputs of a trained model.

2. Sensitive personal data attributes, as defined in the plan and design phase, should not be included in the model data, so as not to reinforce existing biases against them. Proxies of the sensitive features should likewise be analyzed and excluded from the input data. In some cases this may not be possible given the accuracy requirements or objective of the AI system; in that case, a justification for using the sensitive personal data attributes or their proxies should be provided.

3. Causality-based feature selection should be ensured. Selected features should be verified with business owners and non-technical teams.

4. Automated decision-support technologies present major risks of bias and unwanted application at the deployment phase, so it is critical to set out mechanisms to prevent harmful and discriminatory results at this phase.
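The proxy analysis in step 2 can be illustrated with a simple correlation screen over candidate features. The `flag_proxy_features` helper, the feature names, and the 0.8 threshold are hypothetical choices for this sketch, not part of the SDAIA guidance; real proxy detection would also consider non-linear and combined-feature relationships.

```python
def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length numeric sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def flag_proxy_features(features, sensitive, threshold=0.8):
    """Flag features whose absolute correlation with the sensitive attribute
    exceeds the threshold. Flagged features are candidates for exclusion, or
    for a documented justification if they must be kept."""
    return [name for name, col in features.items()
            if abs(pearson(col, sensitive)) > threshold]
```

For example, a coarse location bucket that tracks a protected attribute one-for-one would be flagged, while a weakly correlated feature such as tenure would pass the screen.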

Published by SDAIA in AI Ethics Principles, Sept 14, 2022

· Prepare Input Data:

1. An important aspect of the Accountability and Responsibility principle during the Prepare Input Data step of the AI system lifecycle is data quality, as it directly affects the outcome of the AI model and its decisions. It is therefore important to perform the necessary data quality checks, clean the data, and ensure its integrity in order to obtain accurate results and capture intended behavior in supervised and unsupervised models.

2. Data sets should be approved and signed off before development of the AI model begins. Furthermore, the data should be cleansed of societal biases. In line with the fairness principle, sensitive features should not be included in the model data. In the event that sensitive features need to be included, the rationale or trade-off behind the decision should be clearly explained. The data preparation process and data quality checks should be documented and validated by the responsible parties.

3. Documentation of the process is necessary for auditing and risk mitigation. Data must be properly acquired, classified, processed, and kept accessible to ease human intervention and control at later stages when needed.
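A minimal sketch of the data quality checks in step 1, assuming rows arrive as Python dictionaries. The report fields, the function name, and the example field names are illustrative, not prescribed by the principles; a production pipeline would add type, range, and referential-integrity checks.

```python
def data_quality_report(rows, required_fields):
    """Basic pre-modeling checks: missing values per required field,
    exact-duplicate rows, and a simple schema-completeness flag."""
    report = {"n_rows": len(rows), "missing": {}, "duplicates": 0, "schema_ok": True}
    seen = set()
    for row in rows:
        key = tuple(sorted(row.items()))  # hashable fingerprint for duplicate check
        if key in seen:
            report["duplicates"] += 1
        seen.add(key)
        for field in required_fields:
            if field not in row or row[field] is None:
                report["missing"][field] = report["missing"].get(field, 0) + 1
                report["schema_ok"] = False
    return report
```

A report like this is easy to attach to the documented sign-off that step 2 calls for, since it records exactly what was checked and what failed.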

Published by SDAIA in AI Ethics Principles, Sept 14, 2022

· Build and Validate:

1. Model development for the AI system and its algorithm should cover feature selection, hyperparameter tuning, and performance-metric selection. The technical stakeholders who build and validate models should be responsible for these decisions.

2. Assigning appropriate ownership and communicating responsibilities sets the tone for accountability, helps steer the development of the AI system on sound reasoning and solid inference, and allows for the intervention of critical human judgement and expertise.

3. Decisions should be supported with quantitative indicators (performance measures on train and test datasets, consistency of performance across different sensitive groups, performance comparison for each set of hyperparameters, etc.) and qualitative indicators (decisions to mitigate and correct unintended risks from inaccurate predictions).

4. The appropriate stakeholders and owners of the AI technology should review and sign off on the model after user acceptance testing rounds have been successfully conducted and completed, before the AI models are productionized.
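The "consistency of performance across different sensitive groups" indicator in step 3 can be computed as a per-group accuracy gap. Accuracy is an assumed metric here for illustration; any suitable metric could be substituted, and the acceptable gap is a policy decision for the sign-off in step 4.

```python
def group_accuracies(y_true, y_pred, groups):
    """Accuracy computed separately for each sensitive group."""
    acc = {}
    for g in set(groups):
        idx = [i for i, gg in enumerate(groups) if gg == g]
        correct = sum(1 for i in idx if y_true[i] == y_pred[i])
        acc[g] = correct / len(idx)
    return acc

def max_accuracy_gap(y_true, y_pred, groups):
    """Largest accuracy difference between any two groups; a candidate
    quantitative acceptance criterion before model sign-off."""
    acc = group_accuracies(y_true, y_pred, groups)
    return max(acc.values()) - min(acc.values())
```

Reviewers can then require, for example, that the gap stay below an agreed threshold on the held-out test set before the model is promoted to production.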

Published by SDAIA in AI Ethics Principles, Sept 14, 2022