4. Control

We monitor AI solutions so that we are continuously ready to intervene in the AI, datasets and algorithms, to identify needs for improvement, and to prevent or reduce damage.
Principle: Telia Company Guiding Principles on trusted AI ethics, Jan 22, 2019

Published by Telia Company AB

Related Principles

7. Robustness and Reliability

AI systems should be sufficiently robust to cope with errors during execution, unexpected or erroneous input, and stressful environmental conditions, and they should perform consistently. AI systems should, where possible, work reliably and produce consistent results across a range of inputs and situations. AI systems may have to operate in real-world, dynamic conditions where input signals and conditions change quickly. To prevent harm, AI systems need to be resilient to unexpected data inputs, must not exhibit dangerous behaviour, and should continue to perform according to their intended purpose. Notably, AI systems are not infallible; deployers should ensure proper access control and protection of critical or sensitive systems, and take actions to prevent or mitigate negative outcomes caused by unreliable performance. Deployers should conduct rigorous testing before deployment to ensure robustness and consistent results across a range of situations and environments. Measures such as proper documentation of data sources, tracking of data processing steps, and data lineage can help with troubleshooting AI systems.

Published by ASEAN in ASEAN Guide on AI Governance and Ethics, 2024
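The documentation and data-lineage measures named in the ASEAN principle can be pictured as a simple record kept alongside each processing step. The following is a minimal, hypothetical sketch, not part of the ASEAN guide; the class, field and function names are assumptions made for illustration.

```python
import hashlib
import json
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone


@dataclass
class LineageRecord:
    """Hypothetical record of one data-processing step, kept for troubleshooting."""
    source: str         # where the input data came from
    step: str           # name of the processing step applied
    input_digest: str   # fingerprint of the data before the step
    output_digest: str  # fingerprint of the data after the step
    timestamp: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())


def digest(rows: list[dict]) -> str:
    """Stable fingerprint of a dataset, so a later audit can confirm what was used."""
    payload = json.dumps(rows, sort_keys=True).encode("utf-8")
    return hashlib.sha256(payload).hexdigest()


def track(source: str, step: str, rows: list[dict], transform) -> tuple[list[dict], LineageRecord]:
    """Apply one transform and record its lineage alongside the result."""
    before = digest(rows)
    result = transform(rows)
    return result, LineageRecord(source, step, before, digest(result))


# Example: drop rows with missing labels and keep the lineage entry for audits.
raw = [{"text": "ok", "label": 1}, {"text": "??", "label": None}]
clean, entry = track("crm_export_2024", "drop_missing_labels",
                     raw, lambda rs: [r for r in rs if r["label"] is not None])
print(asdict(entry))
```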

6. We set the framework.

Our AI solutions are developed and enhanced on the grounds of deep analysis and evaluation. They are transparent, auditable, fair, and fully documented. We consciously initiate the AI’s development for the best possible outcome. The essential paradigm for our AI systems’ impact analysis is “privacy and security by design”. This is accompanied, for example, by risk-and-opportunity scenarios or reliable disaster scenarios. We take great care with the initial algorithms of our own AI solutions to prevent so-called “black boxes” and to make sure that our systems do not unintentionally harm users.

Published by Deutsche Telekom in Deutsche Telekom’s guidelines for artificial intelligence, May 11, 2018

7. We maintain control.

We are able to deactivate and stop AI systems at any time (kill switch). Additionally, we remove inappropriate data to avoid bias. We keep an eye on the decisions made and the information fed to the system in order to enhance decision quality. We take responsibility for a diverse and appropriate data input. In case of inconsistencies, we would rather stop the AI system than proceed with potentially manipulated data. We are also able to “reset” our AI systems in order to remove false or biased data. In this way, we install a lever to reduce (unintended) unsuitable decisions or actions to a minimum.

Published by Deutsche Telekom in Deutsche Telekom’s guidelines for artificial intelligence, May 11, 2018
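A kill switch of the kind described in this principle can be as simple as a guard checked before every model call. The sketch below is a hypothetical Python illustration, not Deutsche Telekom’s implementation; the flag-file path and function names are assumptions.

```python
import os

# Hypothetical kill-switch flag: operators create this file to halt the AI system.
KILL_SWITCH_FILE = "/etc/ai/disable"


class SystemHalted(RuntimeError):
    """Raised when the kill switch is engaged and no decision may be served."""


def kill_switch_engaged() -> bool:
    """The system is considered halted whenever the flag file exists."""
    return os.path.exists(KILL_SWITCH_FILE)


def decide(model, features: dict):
    """Serve a model decision only while the kill switch is not engaged."""
    if kill_switch_engaged():
        # Fail closed: no decision is better than a potentially manipulated one.
        raise SystemHalted("AI system deactivated by operator kill switch")
    return model.predict(features)
```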

II. Technical robustness and safety

Trustworthy AI requires algorithms to be secure, reliable and robust enough to deal with errors or inconsistencies during all life-cycle phases of the AI system, and to adequately cope with erroneous outcomes. AI systems need to be reliable and secure enough to be resilient against both overt attacks and more subtle attempts to manipulate data or the algorithms themselves, and they must ensure a fallback plan in case of problems. Their decisions must be accurate, or at least correctly reflect their level of accuracy, and their outcomes should be reproducible. In addition, AI systems should integrate safety- and security-by-design mechanisms to ensure that they are verifiably safe at every step, taking to heart the physical and mental safety of all concerned. This includes the minimisation and, where possible, the reversibility of unintended consequences or errors in the system’s operation. Processes to clarify and assess potential risks associated with the use of AI systems, across various application areas, should be put in place.

Published by European Commission in Key requirements for trustworthy AI, Apr 8, 2019

Practice holism and do not reduce our ethical focus to components

We provide integrated technologies to defend and support democracy. We do not fixate on algorithms and data in a silo, but rather take a holistic view of the potential impact of AI on outcomes to avoid unintended consequences in the real world. We aim to ensure that the systems we develop, as a whole, have the capability to manage data quality while upholding governance around software and models. We routinely employ statistical analyses to search for unwarranted data, model, and outcome bias.

Published by Rebellion Defense in AI Ethical Principles, January 2023
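The routine statistical analyses mentioned in this principle might, in their simplest form, compare outcome rates across groups. The sketch below is a hypothetical illustration, not Rebellion Defense’s method; the group labels and the disparate-impact threshold of 0.8 (the common 80% rule of thumb) are assumptions.

```python
from collections import defaultdict


def positive_rates(records: list[dict], group_key: str, outcome_key: str) -> dict[str, float]:
    """Share of positive outcomes per group, e.g. approval rate by region."""
    totals, positives = defaultdict(int), defaultdict(int)
    for r in records:
        totals[r[group_key]] += 1
        positives[r[group_key]] += int(bool(r[outcome_key]))
    return {g: positives[g] / totals[g] for g in totals}


def flag_outcome_bias(records: list[dict], group_key: str, outcome_key: str,
                      threshold: float = 0.8) -> bool:
    """Flag possible bias when the lowest group rate falls below
    `threshold` times the highest group rate."""
    rates = positive_rates(records, group_key, outcome_key)
    return min(rates.values()) < threshold * max(rates.values())


# Example: a large gap in approval rates between groups triggers a review.
data = [{"group": "A", "approved": 1}, {"group": "A", "approved": 1},
        {"group": "B", "approved": 1}, {"group": "B", "approved": 0}]
print(flag_outcome_bias(data, "group", "approved"))  # True -> investigate
```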