2. Accuracy

Identify, log, and articulate sources of error and uncertainty throughout the algorithm and its data sources so that expected and worst-case implications can be understood and can inform mitigation procedures.
Principle: A compilation of existing AI ethical principles (Annex A), Jan 21, 2020

Published by Personal Data Protection Commission (PDPC), Singapore
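To make the principle concrete, here is a minimal, illustrative Python sketch (all names are hypothetical) of one way a team might log identified sources of error and uncertainty together with their expected or worst-case implications and a planned mitigation; the principle itself does not prescribe any particular mechanism.

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import List

class Severity(Enum):
    EXPECTED = "expected"       # typical operating conditions
    WORST_CASE = "worst_case"   # plausible worst-case implication

@dataclass
class UncertaintySource:
    """One identified source of error or uncertainty."""
    component: str        # e.g. "training data", "feature pipeline", "model"
    description: str
    severity: Severity
    mitigation: str       # planned mitigation procedure

@dataclass
class UncertaintyLog:
    sources: List[UncertaintySource] = field(default_factory=list)

    def register(self, source: UncertaintySource) -> None:
        self.sources.append(source)

    def worst_case(self) -> List[UncertaintySource]:
        return [s for s in self.sources if s.severity is Severity.WORST_CASE]

# Example: articulate a data-source uncertainty and its mitigation.
log = UncertaintyLog()
log.register(UncertaintySource(
    component="training data",
    description="Sensor readings older than two years may drift from current conditions",
    severity=Severity.WORST_CASE,
    mitigation="Re-weight or exclude stale records; monitor input distribution",
))
for s in log.worst_case():
    print(f"[{s.severity.value}] {s.component}: {s.description} -> {s.mitigation}")
```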

Related Principles

· Transparency

As AI increasingly changes the nature of work, workers, customers and vendors need information about how AI systems operate so that they can understand how decisions are made. Their involvement will help to identify potential bias, errors and unintended outcomes. Transparency is not necessarily, nor only, a question of open-source code. While in some circumstances open-source code will be helpful, what matters more are clear, complete and testable explanations of what the system is doing and why.

Intellectual property protection, and sometimes even cybersecurity, can be served by a lack of transparency. Innovation generally, including in algorithms, is a value that should be encouraged. How, then, are these competing values to be balanced? One possibility is to require algorithmic verifiability rather than full algorithmic disclosure. Algorithmic verifiability would require companies to disclose not the actual code driving the algorithm but information allowing the effect of their algorithms to be independently assessed. In the absence of transparency regarding their algorithms' purpose and actual effect, it is impossible to ensure that competition, labour, workplace safety, privacy and liability laws are being upheld.

When accidents occur, the AI and related data will need to be transparent and accountable to an accident investigator, so that the process that led to the accident can be understood.

Published by Centre for International Governance Innovation (CIGI), Canada in Toward a G20 Framework for Artificial Intelligence in the Workplace, Jul 19, 2018
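A hedged sketch of what "algorithmic verifiability" could look like in practice: the algorithm stays a black box, and only the effect of its decisions, here the positive-decision rate per group over a probe dataset, is reported for independent assessment. The model, probe data and function names below are hypothetical, not anything the CIGI text prescribes.

```python
from typing import Callable, Dict, List

# A deployed model is treated as a black box: record in, decision out.
Decision = Callable[[dict], bool]

def verifiability_report(model: Decision, probes: List[dict],
                         group_key: str) -> Dict[str, float]:
    """Summarize the *effect* of an algorithm (positive-decision rate per
    group) without disclosing its code, so outcomes can be assessed
    independently."""
    counts: Dict[str, List[int]] = {}
    for record in probes:
        group = record[group_key]
        counts.setdefault(group, []).append(1 if model(record) else 0)
    return {g: sum(v) / len(v) for g, v in counts.items()}

# Hypothetical opaque model and probe set, for illustration only.
def opaque_model(record: dict) -> bool:
    return record["score"] >= 0.5

probes = [
    {"group": "A", "score": 0.7}, {"group": "A", "score": 0.4},
    {"group": "B", "score": 0.6}, {"group": "B", "score": 0.3},
]
print(verifiability_report(opaque_model, probes, group_key="group"))
# {'A': 0.5, 'B': 0.5}
```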

Accuracy

Identify, log, and articulate sources of error and uncertainty throughout the algorithm and its data sources so that expected and worst case implications can be understood and inform mitigation procedures.

Published by Fairness, Accountability, and Transparency in Machine Learning (FAT/ML) in Principles for Accountable Algorithms, Jul 22, 2016 (unconfirmed)

3. Safe

Data-enhanced technologies like AI and ML systems must function in a safe and secure way throughout their life cycles, and potential risks should be continually assessed and managed. Designers, policy makers and developers should embed appropriate safeguards throughout the life cycle of the system to ensure it is working as intended. This would include mechanisms related to system testing, piloting, scaling and human intervention, as well as alternative processes in case a complete halt of system operations is required. The mechanisms must be appropriate to the context and determined before deployment, but should be iterated upon throughout the system's life cycle.

Why it matters: Automated algorithmic decisions can reflect and amplify undesirable patterns in the data they are trained on. As well, issues with the system can arise that only become apparent after the system is deployed. Therefore, despite our best efforts, unexpected outcomes and impacts need to be considered. Accordingly, systems will require ongoing monitoring and mitigation planning to ensure that if the algorithmic system is making decisions that are not intended, a human can adapt, correct or improve the system.

Published by Government of Ontario, Canada in Principles for Ethical Use of AI [Beta], Sept 14, 2023
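As one illustration of the safeguards this principle describes (ongoing monitoring, escalation to a human, and a complete halt with an alternative process), here is a minimal Python sketch. The thresholds, class name and random error signal are invented for the example; in practice the monitoring signal would come from audits, user feedback or later-arriving ground truth.

```python
import random

HALT_THRESHOLD = 0.25    # illustrative error-rate ceiling for a full stop
REVIEW_THRESHOLD = 0.10  # above this, decisions are routed to a human

class GuardedSystem:
    """Wraps an automated decision step with safeguards: ongoing monitoring,
    escalation to human review, and a complete halt if limits are exceeded."""

    def __init__(self) -> None:
        self.halted = False
        self.errors = 0
        self.total = 0

    @property
    def error_rate(self) -> float:
        return self.errors / self.total if self.total else 0.0

    def decide(self, score: float) -> str:
        if self.halted:
            return "fallback process"          # alternative process after halt
        decision = "approve" if score > 0.5 else "deny"
        # Monitoring signal: a random stand-in here; real systems would use
        # audit results or feedback to count erroneous decisions.
        self.total += 1
        if random.random() < 0.2:
            self.errors += 1
        if self.total >= 20 and self.error_rate > HALT_THRESHOLD:
            self.halted = True                 # complete halt of operations
        if self.error_rate > REVIEW_THRESHOLD:
            return f"human review requested (proposed: {decision})"
        return decision

system = GuardedSystem()
for _ in range(25):
    system.decide(random.random())
print("halted:", system.halted, "error rate:", round(system.error_rate, 2))
```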

· Prepare Input Data:

1. The exercise of data procurement, management, and organization should uphold the legal frameworks and standards of data privacy. Data privacy and security protect information from a wide range of threats.
2. The confidentiality of data ensures that information is accessible only to those who are authorized to access it, and that there are specific controls that manage the delegation of authority.
3. Designers and engineers of the AI system must exhibit the appropriate levels of integrity to safeguard the accuracy and completeness of information and processing methods, to ensure that the privacy and security legal framework and standards are followed. They should also ensure that the availability and storage of data are protected through suitably secure database systems.
4. All processed data should be classified to ensure that it receives the appropriate level of protection in accordance with its sensitivity or security classification, and that AI system developers and owners are aware of the classification or sensitivity of the information they are handling and the associated requirements to keep it secure. All data shall be classified in terms of business requirements, criticality, and sensitivity in order to prevent unauthorized disclosure or modification. Data classification should be conducted in a contextual manner that does not result in the inference of personal information. Furthermore, de-identification mechanisms should be employed based on data classification as well as requirements relating to data protection laws.
5. Data backups and archiving actions should be taken at this stage to align with business continuity, disaster recovery and risk mitigation policies.

Published by SDAIA in AI Ethics Principles, Sept 14, 2022
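One possible, simplified realization of the classification and de-identification steps above (items 4 and 5 of the list): each field carries a sensitivity classification, and confidential identifiers are replaced by salted one-way hashes before further processing. The field names and rules are hypothetical, and note that salted hashing yields pseudonymization rather than full anonymization.

```python
import hashlib
from enum import Enum

class Classification(Enum):
    PUBLIC = 1
    INTERNAL = 2
    CONFIDENTIAL = 3

# Illustrative classification rules: which fields are sensitive, and how
# identifiers are de-identified before the data reaches the AI pipeline.
FIELD_CLASSIFICATION = {
    "national_id": Classification.CONFIDENTIAL,
    "email": Classification.CONFIDENTIAL,
    "purchase_total": Classification.INTERNAL,
    "product_category": Classification.PUBLIC,
}

def deidentify(record: dict, salt: str) -> dict:
    """Apply protection according to each field's classification:
    confidential identifiers become salted one-way hashes; fields without
    a known classification default to CONFIDENTIAL as a safe fallback."""
    out = {}
    for name, value in record.items():
        level = FIELD_CLASSIFICATION.get(name, Classification.CONFIDENTIAL)
        if level is Classification.CONFIDENTIAL:
            digest = hashlib.sha256((salt + str(value)).encode()).hexdigest()
            out[name] = digest[:12]  # pseudonymous token, not reversible here
        else:
            out[name] = value
    return out

record = {"national_id": "S1234567A", "email": "a@b.example",
          "purchase_total": 42.0, "product_category": "books"}
print(deidentify(record, salt="rotate-me-per-release"))
```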

3 Ensure transparency, explainability and intelligibility

AI should be intelligible or understandable to developers, users and regulators. Two broad approaches to ensuring intelligibility are improving the transparency and the explainability of AI technology. Transparency requires that sufficient information (described below) be published or documented before the design and deployment of an AI technology. Such information should facilitate meaningful public consultation and debate on how the AI technology is designed and how it should be used. Such information should continue to be published and documented regularly and in a timely manner after an AI technology is approved for use. Transparency will improve system quality and protect patient and public health safety. For instance, system evaluators require transparency in order to identify errors, and government regulators rely on transparency to conduct proper, effective oversight. It must be possible to audit an AI technology, including if something goes wrong. Transparency should include accurate information about the assumptions and limitations of the technology, operating protocols, the properties of the data (including methods of data collection, processing and labelling) and development of the algorithmic model.

AI technologies should be explainable to the extent possible and according to the capacity of those to whom the explanation is directed. Data protection laws already create specific obligations of explainability for automated decision-making. Those who might request or require an explanation should be well informed, and the educational information must be tailored to each population, including, for example, marginalized populations. Many AI technologies are complex, and the complexity might frustrate both the explainer and the person receiving the explanation. There is a possible trade-off between full explainability of an algorithm (at the cost of accuracy) and improved accuracy (at the cost of explainability).

All algorithms should be tested rigorously in the settings in which the technology will be used in order to ensure that they meet standards of safety and efficacy. The examination and validation should include the assumptions, operational protocols, data properties and output decisions of the AI technology. Tests and evaluations should be regular, transparent and of sufficient breadth to cover differences in the performance of the algorithm according to race, ethnicity, gender, age and other relevant human characteristics. There should be robust, independent oversight of such tests and evaluations to ensure that they are conducted safely and effectively. Health care institutions, health systems and public health agencies should regularly publish information about how decisions have been made to adopt an AI technology and how the technology will be evaluated periodically, its uses, its known limitations and the role of decision-making, which can facilitate external auditing and oversight.

Published by World Health Organization (WHO) in Key ethical principles for use of artificial intelligence for health, Jun 28, 2021
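The breadth of testing the WHO text calls for, covering performance differences by race, ethnicity, gender, age and other characteristics, can be illustrated with a small sketch that breaks accuracy out by subgroup. The groups, labels and helper name below are hypothetical; a real evaluation would also report per-group sample sizes and confidence intervals.

```python
from typing import Dict, List, Tuple

def subgroup_accuracy(results: List[Tuple[str, int, int]]) -> Dict[str, float]:
    """Break test performance out by a protected attribute so that
    differences across groups become visible.
    Each result is (group, true_label, predicted_label)."""
    totals: Dict[str, int] = {}
    correct: Dict[str, int] = {}
    for group, y_true, y_pred in results:
        totals[group] = totals.get(group, 0) + 1
        correct[group] = correct.get(group, 0) + (y_true == y_pred)
    return {g: correct[g] / totals[g] for g in totals}

# Hypothetical evaluation output for a diagnostic model.
results = [
    ("age<40", 1, 1), ("age<40", 0, 0), ("age<40", 1, 0),
    ("age>=40", 1, 1), ("age>=40", 0, 1), ("age>=40", 1, 1),
]
print(subgroup_accuracy(results))  # e.g. {'age<40': 0.67, 'age>=40': 0.67}
```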