In a society premised on AI, each person's political position, economic situation, hobbies, preferences, and so on can be estimated with high accuracy from data on that person's behavior. This means that utilizing AI requires more careful treatment of personal data than utilizing personal information alone. To ensure that people do not suffer disadvantages from unexpected sharing or utilization of personal data, for instance over the internet, each stakeholder must handle personal data based on the following principles.
Companies and governments should not infringe on individuals' freedom, dignity, or equality when utilizing personal data with AI technologies.
AI that uses personal data should have a mechanism that ensures accuracy and legitimacy and enables the individuals concerned to be substantially involved in the management of their own privacy data. As a result, when AI is used, people can provide personal data without concern and benefit effectively from the data they provide.
Personal data must be properly protected according to its importance and sensitivity. Personal data ranges from data whose unjust use would be likely to greatly affect an individual's rights and interests (typically, thought and creed, medical history, criminal record, etc.) to data that is semi-public in social life. With this in mind, sufficient attention must be paid to the balance between the use and protection of personal data, based on society's common understanding and its cultural background.
2 RESPECT FOR AUTONOMY PRINCIPLE
AIS must be developed and used while respecting people’s autonomy, and with the goal of increasing people’s control over their lives and their surroundings.
1) AIS must allow individuals to fulﬁll their own moral objectives and their conception of a life worth living.
2) AIS must not be developed or used to impose a particular lifestyle on individuals, whether directly or indirectly, by implementing oppressive surveillance and evaluation or incentive mechanisms.
3) Public institutions must not use AIS to promote or discredit a particular conception of the good life.
4) It is crucial to empower citizens regarding digital technologies by ensuring access to the relevant forms of knowledge, promoting the learning of fundamental skills (digital and media literacy), and fostering the development of critical thinking.
5) AIS must not be developed to spread untrustworthy information, lies, or propaganda, and should be designed with a view to containing their dissemination.
6) The development of AIS must avoid creating dependencies through attention-capturing techniques or the imitation of human characteristics (appearance, voice, etc.) in ways that could cause confusion between AIS and humans.
5 DEMOCRATIC PARTICIPATION PRINCIPLE
AIS must meet intelligibility, justiﬁability, and accessibility criteria, and must be subjected to democratic scrutiny, debate, and control.
1) AIS processes that make decisions affecting a person’s life, quality of life, or reputation must be intelligible to their creators.
2) The decisions made by AIS affecting a person’s life, quality of life, or reputation should always be justiﬁable in a language that is understood by the people who use them or who are subjected to the consequences of their use. Justiﬁcation consists in making transparent the most important factors and parameters shaping the decision, and should take the same form as the justiﬁcation we would demand of a human making the same kind of decision.
3) The code for algorithms, whether public or private, must always be accessible to the relevant public authorities and stakeholders for veriﬁcation and control purposes.
4) The discovery of AIS operating errors, unexpected or undesirable effects, security breaches, and data leaks must imperatively be reported to the relevant public authorities, stakeholders, and those affected by the situation.
5) In accordance with the transparency requirement for public decisions, the code for decision making algorithms used by public authorities must be accessible to all, with the exception of algorithms that present a high risk of serious danger if misused.
6) For public AIS that have a signiﬁcant impact on the life of citizens, citizens should have the opportunity and skills to deliberate on the social parameters of these AIS, their objectives, and the limits of their use.
7) We must at all times be able to verify that AIS are doing what they were programmed for and what they are used for.
8) Any person using a service should know if a decision concerning them or affecting them was made by an AIS.
9) Any user of a service employing chatbots should be able to easily identify whether they are interacting with an AIS or a real person.
10) Artiﬁcial intelligence research should remain open and accessible to all.
7 DIVERSITY INCLUSION PRINCIPLE
The development and use of AIS must be compatible with maintaining social and cultural diversity and must not restrict the scope of lifestyle choices or personal experiences.
1) AIS development and use must not lead to the homogenization of society through the standardization of behaviours and opinions.
2) From the moment algorithms are conceived, AIS development and deployment must take into consideration the multitude of expressions of social and cultural diversity present in the society.
3) AI development environments, whether in research or industry, must be inclusive and reﬂect the diversity of the individuals and groups of the society.
4) AIS must avoid using acquired data to lock individuals into a user proﬁle, ﬁx their personal identity, or conﬁne them to a ﬁltering bubble, which would restrict and conﬁne their possibilities for personal development — especially in ﬁelds such as education, justice, or business.
5) AIS must not be developed or used with the aim of limiting the free expression of ideas or the opportunity to hear diverse opinions, both of which are essential conditions of a democratic society.
6) For each service category, the AIS offering must be diversiﬁed to prevent de facto monopolies from forming and undermining individual freedoms.
8 PRUDENCE PRINCIPLE
Every person involved in AI development must exercise caution by anticipating, as far as possible, the adverse consequences of AIS use and by taking the appropriate measures to avoid them.
1) It is necessary to develop mechanisms that consider the potential for dual use, both beneficial and harmful, of AI research and AIS development (whether public or private) in order to limit harmful uses.
2) When the misuse of an AIS endangers public health or safety and has a high probability of occurrence, it is prudent to restrict open access to and public dissemination of its algorithm.
3) Before being placed on the market, and whether they are offered for a charge or for free, AIS must meet strict reliability, security, and integrity requirements and be subjected to tests that do not put people's lives in danger, harm their quality of life, or negatively impact their reputation or psychological integrity. These tests must be open to the relevant public authorities and stakeholders.
4) The development of AIS must preempt the risks of user data misuse and protect the integrity and conﬁdentiality of personal data.
5) The errors and ﬂaws discovered in AIS and SAAD should be publicly shared, on a global scale, by public institutions and businesses in sectors that pose a signiﬁcant danger to personal integrity and social organization.