6. Provide transparency, explainability and accountability for children

Strive to explicitly address children when promoting the explainability and transparency of AI systems. Use age-appropriate language to describe AI. Make AI systems transparent enough that children and their caregivers can understand the interaction. Develop AI systems so that they protect and empower child users in accordance with legal and policy frameworks, regardless of children's understanding of the system. Review, update and develop AI-related regulatory frameworks to integrate child rights. Establish AI oversight bodies that comply with these principles and regulations, and set up mechanisms for redress.
Principle: Requirements for child-centred AI, Sep 16, 2020

Published by United Nations Children's Fund (UNICEF) and the Ministry of

Related Principles

6. Accountability and Integrity

There needs to be human accountability and control in the design, development, and deployment of AI systems. Deployers should be accountable for decisions made by AI systems, for compliance with applicable laws, and for respect for AI ethics and principles. AI actors should act with integrity throughout the AI system lifecycle when designing, developing, and deploying AI systems. Deployers of AI systems should ensure the proper functioning of AI systems and their compliance with applicable laws, internal AI governance policies and ethical principles. In the event of a malfunction or misuse of an AI system that results in negative outcomes, responsible individuals should act with integrity and implement mitigating actions to prevent similar incidents from happening in the future. To facilitate the allocation of responsibilities, organisations should adopt clear reporting structures for internal governance, setting out the different roles and responsibilities of those involved in the AI system lifecycle. AI systems should also be designed, developed, and deployed with integrity – any errors or unethical outcomes should at minimum be documented and corrected to prevent harm to users upon deployment.

Published by ASEAN in ASEAN Guide on AI Governance and Ethics, 2024

· Legal system improvement

Stakeholders of AI should consciously and strictly abide by the codes of conduct, laws and regulations, and technical specifications related to children. AI legislation should pay attention to the impact of AI on children's rights and interests, and should reflect this clearly and effectively in the legal system. Governance institutions and strict review and accountability mechanisms should be established to severely punish individuals and groups that abuse AI to infringe upon children's rights and interests.

Published by the Beijing Academy of Artificial Intelligence (BAAI), Peking University, Tsinghua University and the Chinese Academy of Sciences, together with enterprises that focus on AI development, in Artificial Intelligence for Children: Beijing Principles, Sep 14, 2020

• Require Accountability for Ethical Design and Implementation

The social implications of computing have grown and will continue to expand as more people gain access to implementations of AI. Public policy should work to identify and mitigate discrimination caused by the use of AI and encourage designing in protections against these harms.

Recommendations:

• Standing for "Accountable Artificial Intelligence": Governments, industry and academia should apply the Information Accountability Foundation's principles to AI. Organizations implementing AI solutions should be able to demonstrate to regulators that they have the right processes, policies and resources in place to meet those principles.

• Transparent decisions: Governments should determine which AI implementations require algorithm explainability to mitigate discrimination and harm to individuals.

Published by Intel in AI public policy principles, Oct 18, 2017

Chapter 2. The Norms of Management

  5. Promotion of agile governance. Respect the laws governing the development of AI, fully understand the potential and limitations of AI, and continuously optimize the governance mechanisms and methods of AI. Do not divorce strategic decision making, institution building, and resource allocation from reality, and do not rush for quick success and instant benefits. Promote the healthy and sustainable development of AI in an orderly manner.

  6. Active practice. Comply with AI-related laws, regulations, policies and standards; actively integrate AI ethics into the entire management process; take the lead in becoming practitioners and promoters of AI ethics and governance; summarize and promote AI governance experience in a timely manner; and actively respond to society's concerns about the ethics of AI.

  7. Exercise and use power correctly. Clarify the responsibilities and boundaries of power in AI-related management activities, and standardize the conditions and procedures under which power is exercised. Fully respect and protect the privacy, freedom, dignity, safety and other rights and legal interests of relevant stakeholders, and prohibit the improper use of power to infringe the legal rights of natural persons, legal persons and other organizations.

  8. Strengthen risk prevention. Enhance bottom-line thinking and risk awareness, strengthen research on and assessment of the potential risks arising in the development of AI, carry out systematic risk monitoring and evaluation in a timely manner, establish an effective early-warning mechanism for risks, and enhance the ability to manage, control, and dispose of the ethical risks of AI.

  9. Promote inclusivity and openness. Pay full attention to the rights and demands of all stakeholders related to AI, encourage the application of diverse AI technologies to solve practical problems in economic and social development, encourage cross-disciplinary, cross-domain, cross-regional, and cross-border exchanges and cooperation, and promote the formation of AI governance frameworks, standards and norms with broad consensus.

Published by National Governance Committee for the New Generation Artificial Intelligence, China in Ethical Norms for the New Generation Artificial Intelligence, Sep 25, 2021

2. Ensure inclusion of and for children

Strive for diversity among those who design, develop, collect and process data, implement, research, regulate and oversee AI systems. Adopt an inclusive design approach when developing AI products that will be used by children or that impact them. Support meaningful child participation, both in AI policies and in design and development processes.

Published by United Nations Children's Fund (UNICEF) and the Ministry of in Requirements for child-centred AI, Sep 16, 2020