As AI increasingly changes the nature of work, workers, customers and vendors need information about how AI systems operate so that they can understand how decisions are made. Their involvement will help to identify potential bias, errors and unintended outcomes. Transparency is not necessarily, or only, a question of open source code. While in some circumstances open source code will be helpful, clear, complete and testable explanations of what the system is doing and why are more important. Intellectual property, and sometimes even cyber security, can favour a lack of transparency. Innovation generally, including in algorithms, is a value that should be encouraged. How, then, are these competing values to be balanced? One possibility is to require algorithmic verifiability rather than full algorithmic disclosure. Algorithmic verifiability would require companies to disclose not the actual code driving the algorithm, but information allowing the effect of their algorithms to be independently assessed. In the absence of transparency regarding their algorithms’ purpose and actual effect, it is impossible to ensure that competition, labour, workplace safety, privacy and liability laws are being upheld. When accidents occur, the AI and related data will need to be transparent and accountable to an accident investigator, so that the process that led to the accident can be understood.
3 PROTECTION OF PRIVACY AND INTIMACY PRINCIPLE
Privacy and intimacy must be protected from the intrusion of AIS and data acquisition and archiving systems (DAAS).
1) Personal spaces in which people are not subjected to surveillance or digital evaluation must be protected from the intrusion of AIS and DAAS.
2) The intimacy of thoughts and emotions must be strictly protected from AIS and DAAS uses capable of causing harm, especially uses that impose moral judgments on people or their lifestyle choices.
3) People must always have the right to digital disconnection in their private lives, and AIS should explicitly offer the option to disconnect at regular intervals, without encouraging people to stay connected.
4) People must have extensive control over information regarding their preferences. AIS must not create individual preference profiles to influence the behavior of individuals without their free and informed consent.
5) DAAS must guarantee data confidentiality and personal profile anonymity.
6) Every person must be able to exercise extensive control over their personal data, especially when it comes to its collection, use, and dissemination. Access to AIS and digital services by individuals must not be made conditional on their abandoning control or ownership of their personal data.
7) Individuals should be free to donate their personal data to research organizations in order to contribute to the advancement of knowledge.
8) The integrity of one’s personal identity must be guaranteed. AIS must not be used to imitate or alter a person’s appearance, voice, or other individual characteristics in order to damage their reputation or manipulate other people.
8 PRUDENCE PRINCIPLE
Every person involved in AI development must exercise caution by anticipating, as far as possible, the adverse consequences of AIS use and by taking the appropriate measures to avoid them.
1) It is necessary to develop mechanisms that consider the potential for dual use, both beneficial and harmful, of AI research and AIS development (whether public or private) in order to limit harmful uses.
2) When the misuse of an AIS endangers public health or safety and has a high probability of occurrence, it is prudent to restrict open access to and public dissemination of its algorithm.
3) Before being placed on the market, whether offered for a charge or for free, AIS must meet strict reliability, security, and integrity requirements and be subjected to tests that do not put people’s lives in danger, harm their quality of life, or negatively affect their reputation or psychological integrity. These tests must be open to the relevant public authorities and stakeholders.
4) The development of AIS must preempt the risks of user data misuse and protect the integrity and confidentiality of personal data.
5) The errors and flaws discovered in AIS and DAAS should be publicly shared, on a global scale, by public institutions and businesses in sectors that pose a significant danger to personal integrity and social organization.
9 RESPONSIBILITY PRINCIPLE
The development and use of AIS must not contribute to lessening the responsibility of human beings when decisions must be made.
1) Only human beings can be held responsible for decisions stemming from recommendations made by AIS, and for the actions that proceed therefrom.
2) In all areas where a decision that affects a person’s life, quality of life, or reputation must be made, and where time and circumstance permit, the final decision must be made by a human being, and that decision should be free and informed.
3) The decision to kill must always be made by human beings, and responsibility for this decision must not be transferred to an AIS.
4) People who authorize an AIS to commit a crime or an offence, or who through negligence allow an AIS to commit one, are responsible for that crime or offence.
5) When damage or harm has been inflicted by an AIS, and the AIS is proven to be reliable and to have been used as intended, it is not reasonable to place blame on the people involved in its development or use.
2. Transparent and explainable AI
We will be explicit about the kinds of personal and/or non-personal data our AI systems use, as well as about the purposes for which the data is used. When people interact directly with an AI system, we will make it clear to users that this is the case.
When AI systems make or support decisions, we take the technical and organizational measures required to guarantee a level of understanding adequate to the application area. In any case, if the decisions significantly affect people's lives, we will ensure we understand the logic behind the conclusions. This will also apply when we use third-party technology.