1 WELL-BEING PRINCIPLE

The development and use of artificial intelligence systems (AIS) must permit the growth of the well-being of all sentient beings.
1) AIS must help individuals improve their living conditions, their health, and their working conditions.
2) AIS must allow individuals to pursue their preferences, so long as they do not cause harm to other sentient beings.
3) AIS must allow people to exercise their mental and physical capacities.
4) AIS must not become a source of ill-being, unless it allows us to achieve a well-being superior to what we could otherwise attain.
5) AIS use should not contribute to increasing stress, anxiety, or a sense of being harassed by one’s digital environment.
Published by University of Montreal in The Montreal Declaration for a Responsible Development of Artificial Intelligence, Dec 4, 2018

Related Principles

4. Human-centricity

AI systems should respect human-centred values and pursue benefits for human society, including human beings’ well-being, nutrition, happiness, etc. It is key to ensure that people benefit from AI design, development, and deployment while being protected from potential harms. AI systems should be used to promote human well-being and ensure benefit for all. Especially in instances where AI systems are used to make decisions about humans or aid them, it is imperative that these systems are designed with human benefit in mind and do not take advantage of vulnerable individuals.

Human-centricity should be incorporated throughout the AI system lifecycle, from design through development and deployment. Actions must be taken to understand how users interact with the AI system, how it is perceived, and whether any negative outcomes arise from its outputs. One example of how deployers can do this is to test the AI system with a small group of internal users from varied backgrounds and demographics and incorporate their feedback into the AI system.

AI systems should not be used for malicious purposes or to sway or deceive users into making decisions that are not beneficial to them or society. In this regard, developers and deployers (if developing or designing in-house) should also ensure that dark patterns are avoided. Dark patterns refer to the use of certain design techniques to manipulate users and trick them into making decisions that they would otherwise not have made. An example of a dark pattern is the use of default options that do not consider the end user’s interests, such as for data sharing and tracking of the user’s other online activities.

As an extension of human-centricity as a principle, it is also important to ensure that the adoption of AI systems and their deployment at scale do not unduly disrupt labour and job prospects without proper assessment. Deployers are encouraged to take up impact assessments to ensure a systematic and stakeholder-based review, and to consider how jobs can be redesigned to incorporate the use of AI. The Personal Data Protection Commission of Singapore’s (PDPC) Guide on Job Redesign in the Age of AI provides useful guidance to assist organisations in considering the impact of AI on their employees, and how work tasks can be redesigned to help employees embrace AI and move towards higher-value tasks.

Published by ASEAN in ASEAN Guide on AI Governance and Ethics, 2024

2 RESPECT FOR AUTONOMY PRINCIPLE

AIS must be developed and used while respecting people’s autonomy, and with the goal of increasing people’s control over their lives and their surroundings.
1) AIS must allow individuals to fulfill their own moral objectives and their conception of a life worth living.
2) AIS must not be developed or used to impose a particular lifestyle on individuals, whether directly or indirectly, by implementing oppressive surveillance and evaluation or incentive mechanisms.
3) Public institutions must not use AIS to promote or discredit a particular conception of the good life.
4) It is crucial to empower citizens regarding digital technologies by ensuring access to the relevant forms of knowledge, promoting the learning of fundamental skills (digital and media literacy), and fostering the development of critical thinking.
5) AIS must not be developed to spread untrustworthy information, lies, or propaganda, and should be designed with a view to containing their dissemination.
6) The development of AIS must avoid creating dependencies through attention-capturing techniques or the imitation of human characteristics (appearance, voice, etc.) in ways that could cause confusion between AIS and humans.

Published by University of Montreal in The Montreal Declaration for a Responsible Development of Artificial Intelligence, Dec 4, 2018

3 PROTECTION OF PRIVACY AND INTIMACY PRINCIPLE

Privacy and intimacy must be protected from intrusion by AIS and by data acquisition and archiving systems (DAAS).
1) Personal spaces in which people are not subjected to surveillance or digital evaluation must be protected from the intrusion of AIS and DAAS.
2) The intimacy of thoughts and emotions must be strictly protected from AIS and DAAS uses capable of causing harm, especially uses that impose moral judgments on people or their lifestyle choices.
3) People must always have the right to digital disconnection in their private lives, and AIS should explicitly offer the option to disconnect at regular intervals, without encouraging people to stay connected.
4) People must have extensive control over information regarding their preferences. AIS must not create individual preference profiles to influence the behavior of individuals without their free and informed consent.
5) DAAS must guarantee data confidentiality and personal profile anonymity.
6) Every person must be able to exercise extensive control over their personal data, especially when it comes to its collection, use, and dissemination. Access to AIS and digital services by individuals must not be made conditional on their abandoning control or ownership of their personal data.
7) Individuals should be free to donate their personal data to research organizations in order to contribute to the advancement of knowledge.
8) The integrity of one’s personal identity must be guaranteed. AIS must not be used to imitate or alter a person’s appearance, voice, or other individual characteristics in order to damage that person’s reputation or manipulate other people.

Published by University of Montreal in The Montreal Declaration for a Responsible Development of Artificial Intelligence, Dec 4, 2018

4 SOLIDARITY PRINCIPLE

The development of AIS must be compatible with maintaining the bonds of solidarity among people and generations.
1) AIS must not threaten the preservation of fulfilling moral and emotional human relationships, and should be developed with the goal of fostering these relationships and reducing people’s vulnerability and isolation.
2) AIS must be developed with the goal of collaborating with humans on complex tasks and should foster collaborative work between humans.
3) AIS should not be implemented to replace people in duties that require quality human relationships, but should be developed to facilitate these relationships.
4) Health care systems that use AIS must take into consideration the importance of a patient’s relationships with family and health care staff.
5) AIS development should not encourage cruel behavior toward robots designed to resemble human beings or non-human animals in appearance or behavior.
6) AIS should help improve risk management and foster conditions for a society with a more equitable and mutual distribution of individual and collective risks.

Published by University of Montreal in The Montreal Declaration for a Responsible Development of Artificial Intelligence, Dec 4, 2018

5 DEMOCRATIC PARTICIPATION PRINCIPLE

AIS must meet intelligibility, justifiability, and accessibility criteria, and must be subjected to democratic scrutiny, debate, and control.
1) AIS processes that make decisions affecting a person’s life, quality of life, or reputation must be intelligible to their creators.
2) The decisions made by AIS affecting a person’s life, quality of life, or reputation should always be justifiable in a language that is understood by the people who use them or who are subjected to the consequences of their use. Justification consists in making transparent the most important factors and parameters shaping the decision, and should take the same form as the justification we would demand of a human making the same kind of decision.
3) The code for algorithms, whether public or private, must always be accessible to the relevant public authorities and stakeholders for verification and control purposes.
4) The discovery of AIS operating errors, unexpected or undesirable effects, security breaches, and data leaks must imperatively be reported to the relevant public authorities, stakeholders, and those affected by the situation.
5) In accordance with the transparency requirement for public decisions, the code for decision-making algorithms used by public authorities must be accessible to all, with the exception of algorithms that present a high risk of serious danger if misused.
6) For public AIS that have a significant impact on the life of citizens, citizens should have the opportunity and skills to deliberate on the social parameters of these AIS, their objectives, and the limits of their use.
7) We must at all times be able to verify that AIS are doing what they were programmed for and what they are used for.
8) Any person using a service should know if a decision concerning them or affecting them was made by an AIS.
9) Any user of a service employing chatbots should be able to easily identify whether they are interacting with an AIS or a real person.
10) Artificial intelligence research should remain open and accessible to all.

Published by University of Montreal in The Montreal Declaration for a Responsible Development of Artificial Intelligence, Dec 4, 2018