Linking Artificial Intelligence Principles
Prejudice and superficial knowledge lead either to the demonization of progress or to its blind acceptance; both call for educational work.
Direct or indirect discrimination through the use of AI can serve to exploit prejudice and marginalise certain groups.
The prejudices of the past must not be unwittingly built into automated systems, and such systems must be carefully designed from the beginning, with input from as diverse a group of people as possible.
To ensure that our use of AI does not inadvertently prejudice the treatment of particular groups in society, we call for the Government to incentivise the development of new approaches to the auditing of datasets used in AI, and to encourage greater diversity in the training and recruitment of AI specialists.
Developers are also encouraged, to the extent possible given the characteristics of the technologies being adopted, to take the measures necessary to avoid unfair discrimination resulting from prejudice contained in the learning data of AI systems.
Through technological advancement and improved management, prejudice and discrimination should be eliminated as far as possible throughout data acquisition, algorithm design, technology development, and product development and application.