Scope of the Guidelines

The “Sony Group AI Ethics Guidelines” (the “Guidelines”) set forth the guidelines that must be followed by all officers and employees of Sony when utilizing AI and/or conducting AI-related R&D. "Utilization of AI" within Sony means the following:
1. The provision of products and services by Sony, including entertainment content and financial services, which utilize AI; and
2. The usage of AI for various purposes by Sony in its business activities, such as R&D, product manufacturing, service provision, and other operational activities.
Principle: Sony Group AI Ethics Guidelines, Sep 25, 2018

Published by Sony Group

Related Principles

3. Principle of collaboration

AI service providers, business users, and data providers should pay attention to the collaboration of AI systems or AI services. Users should take into consideration that risks might occur and even be amplified when AI systems are networked.

[Main points to discuss]

A) Attention to the interconnectivity and interoperability of AI systems
AI network service providers may be expected to pay attention to the interconnectivity and interoperability of AI, with consideration of the characteristics of the AI to be used and its usage, in order to promote the benefits of AI through the sound progress of AI networking.

B) Addressing the standardization of data formats, protocols, etc.
AI service providers and business users may be expected to address the standardization of data formats, protocols, etc. in order to promote cooperation among AI systems and between AI systems and other systems. Data providers may likewise be expected to address the standardization of data formats.

C) Attention to problems caused and amplified by AI networking
Although collaboration among AI systems is expected to promote benefits, users may be expected to pay attention to the possibility that risks (e.g., the risk of loss of control when their AI systems are interconnected or collaborate with other AI systems through the Internet or other networks) might be caused or amplified by AI networking.

[Problems (examples) concerning risks that might be realized and amplified by AI networking]
• Risks that a failure in one AI system spreads to the entire system.
• Risks of failures in cooperation and adjustment between AI systems.
• Risks of failures in verifying the judgments and decision making of AI (risks of failure to analyze the interactions between AI systems because those interactions become complicated).
• Risks that the influence of a small number of AI systems becomes too strong (risks of enterprises and individuals suffering disadvantage from the judgments of a few AI systems).
• Risks of infringement of privacy as a result of information sharing across fields and the concentration of information in one specific AI.
• Risks of unexpected actions of AI.

Published by Ministry of Internal Affairs and Communications (MIC), the Government of Japan in Draft AI Utilization Principles, Jul 17, 2018

10. Principle of accountability

AI service providers and business users should make efforts to fulfill their accountability to stakeholders, including consumer users and indirect users.

[Main points to discuss]

A) Efforts to fulfill accountability
In light of the characteristics of the AI to be used and its purpose, AI service providers and business users may be expected to make efforts to fulfill appropriate accountability to consumer users, indirect users, and third parties affected by the use of AI, in order to gain sufficient trust in AI from people and society.

B) Notification and publication of usage policy on AI systems or AI services
AI service providers and business users may be expected to notify or announce their usage policy on AI (the fact that they provide AI services, the scope and manner of proper AI utilization, the risks associated with the utilization, and the establishment of a consultation desk) in order to enable consumer users and indirect users to properly recognize the usage of AI. In light of the characteristics of the technologies to be used and their usage, it is necessary to consider in which cases the usage policy is expected to be notified or announced, as well as what content the usage policy is expected to include.

Published by Ministry of Internal Affairs and Communications (MIC), the Government of Japan in Draft AI Utilization Principles, Jul 17, 2018

(Preamble)

The AI Ethics Code (hereinafter referred to as the Code) establishes general ethical principles and standards of conduct to be followed by those involved in activities in the field of artificial intelligence (hereinafter referred to as AI Actors), as well as the mechanisms for implementing the Code’s provisions. The Code applies to relations covering the ethical aspects of the creation (design, construction, piloting), integration, and use of AI technologies at all stages that are not currently regulated by national legislation, international rules, and/or acts of technical regulation. The recommendations of this Code are designed for artificial intelligence systems (hereinafter referred to as AI systems) used exclusively for civil (non-military) purposes. The provisions of the Code may be expanded and/or specified for individual groups of AI Actors in sectoral or local documents on ethics in the field of AI, considering the development of technologies, the specifics of the tasks being solved, the class and purpose of AI systems, and the level of possible risks, as well as the specific context and environment in which AI systems are used.

Published by AI Alliance Russia in AI Ethics Code (revised version), Oct 21, 2022 (unconfirmed)

6. MAXIMUM TRANSPARENCY AND RELIABILITY OF INFORMATION CONCERNING THE LEVEL OF AI TECHNOLOGIES DEVELOPMENT, THEIR CAPABILITIES AND RISKS ARE CRUCIAL

6.1. Reliability of information about AI systems. AI Actors are encouraged to provide users of AI systems with reliable information about the AI systems and the most effective methods of their use, their harms and benefits, acceptable areas of their use, and existing limitations on their use.

6.2. Awareness raising in the field of ethical AI application. AI Actors are encouraged to carry out activities aimed at increasing the level of trust and awareness of citizens who use AI systems, and of society at large, regarding the technologies being developed, the specifics of the ethical use of AI systems, and other issues related to AI systems development, by all available means, inter alia by working on scientific and journalistic publications, organizing scientific and public conferences or seminars, and adding provisions about ethical behavior to the rules of AI systems operation for users and/or operators.

Published by AI Alliance Russia in AI Ethics Code (revised version), Oct 21, 2022 (unconfirmed)

1. THE BASICS OF THE CODE

1.1. Legal basis of the Code. The Code duly regards the national legislation of the AI Actors and international treaties.

1.2. Terminology. Terms and definitions in this Code are determined in accordance with applicable international regulatory legal acts and technical regulations in the field of AI.

1.3. AI Actors. For the purposes of this Code, AI Actors are defined as persons and entities involved in the life cycle of AI systems, including those involved in the provision of goods and services. These include, but are not limited to, the following:
• developers who create, train, or test AI models/systems and develop or implement such models/systems, software, and/or hardware systems, and take responsibility for their design;
• customers (individuals or organizations) who receive a product or a service;
• data providers and persons/entities engaged in the formation of datasets for their further use in AI systems;
• experts who measure and/or assess the parameters of the developed models/systems;
• manufacturers engaged in the production of AI systems;
• AI system operating entities who legally own the relevant systems, use them for their intended purpose, and directly solve practical tasks using AI systems;
• operators (individuals or organizations) who ensure the functioning of AI systems;
• persons/entities with a regulatory impact in the field of AI, including those who work on regulatory and technical documents, manuals, various regulations, requirements, and standards in the field of AI;
• other persons/entities whose actions can affect the results of the actions of AI systems or who make decisions using AI systems.

Published by AI Alliance Russia in AI Ethics Code (revised version), Oct 21, 2022 (unconfirmed)