“Perspectives on AI/Robotics and Work/Employment” Report is now available

The report “Perspectives on Artificial Intelligence/Robotics and Work/Employment,” translated by AIR, is now available on the following sites:

The original report on this topic was published in Japanese as part of the Research Materials series of the “Science and Technology Research Project” of the Research and Legislative Reference Bureau of the National Diet Library. The Bureau is responsible for providing the Diet with legislative research and information services and is an associate member of the European Parliamentary Technology Assessment (EPTA), an international network of parliamentary science and technology policy institutions.

The report, “Perspectives on Artificial Intelligence/Robotics and Work/Employment,” was conducted as a “Science and Technology Research Project” in FY2017 and was published in March 2018 in Japanese. In May 2018, a policy seminar on the report was held for Diet members, parliamentary staff, and others involved in the Diet.

The chairs of the report are members of a research group called Acceptable Intelligence with Responsibility (AIR), an ad hoc interdisciplinary network. The report was compiled through an ongoing exchange of opinions among twenty-three authors of varied specialties and affiliations. AIR activities include research and surveys, field studies, oral history projects, and organized events. Some of the case studies in this report are results of the AIR community’s work. The English translation is licensed by AIR, and AIR takes full responsibility for it. In the process of translation, authors were asked to replace Japanese references with English materials, as needed, for the convenience of readers. Comments from readers and further collaborative research opportunities, both international and interdisciplinary, are welcome.

(Cited from “Preface to the English translation”)

Perspectives on Artificial Intelligence/Robotics and Work/Employment

Preface to the English translation


Part 1 Trends in Research and Technology
Ⅰ Knowledge Processing and Machine Learning
Ⅱ Natural Language Processing
Ⅲ Image Acquisition and Recognition
Ⅳ Speech Interfaces
Ⅴ Human-Agent Interaction
Ⅵ Robots
Ⅶ Internet of Things
Ⅷ Multi-agent Systems
Ⅸ Crowdsourcing

Part 2 AI Trends by Domain
Ⅰ Healthcare
Ⅱ Elderly Care
Ⅲ Art and Design
Ⅳ Education
Ⅴ Hospitality
Ⅵ Transportation / Mobility
Ⅶ Agriculture
Ⅷ Public Order and Security
Column 1 AI applications for Defense and National Security Overseas
Column 2 Japanese Chess (Shōgi)

Part 3 AI and Employment Overseas, and in Development, Utilization and Management of Human Resources
Ⅰ AI, Robotics and Employment Policy Trends in the US
Ⅱ AI, Robotics and Employment Policy Trends in the EU and Germany
Ⅲ AI and Employment Issues in France
Ⅳ AI, Robotics, and Labor in the Chinese Workplace
Ⅴ Technological Innovation and Employment
Ⅵ Human Resources and Labor Management by IT and its Regulation: Japan and Overseas
Ⅶ Development and Recruitment of AI-related Human Resources


[Report] Appropriate distance between humans and machines: Robots, AI, and Enhancement

The relationship between human beings, robots, and artificial intelligence is becoming increasingly complicated.

At the same time, this poses compelling questions: What are human beings, and what is intelligence? Can people and machines have emotional connections? Do machines extend human functions? And do researchers’ responsibilities and technological innovation differ among countries?

At an event held on May 12, 2018, French, Russian, and Japanese researchers introduced research trends on the relationship between people and machines with various examples.

In her opening remarks, Arisa Ema of the University of Tokyo stated that this event was international (France, Russia, and Japan), interdisciplinary (fields ranging from philosophy to technology), and interactive (between human beings and machines, and between technology and society).

The first speaker, Prof. Laurence Devillers from Sorbonne University, gave a lecture titled “Affective and Social Dimensions in Spoken Interaction: Technological and Ethical Issues.” Prof. Devillers, whose background is in machine learning and speech recognition, pointed out the importance of “emotional interaction” between humans and machines in the field of elderly care, and introduced studies on systems and devices that recognize, interpret, process, and simulate human affect. At the same time, research on emotions is accompanied by the social and ethical challenges of inducing and manipulating human behavior. Prof. Devillers therefore leads the IEEE standardization working group “Standard for Ethically Driven Nudging for Robotic, Intelligent and Autonomous Systems” (IEEE-SA P7008 WG), which discusses the difference between ethical nudging and aggressive nudging, and the kind of transparency and traceability required in building nudging systems.

Assistant Professor Hirotaka Osawa from the University of Tsukuba gave a lecture titled “Human-Agent Interaction as Substitution of Emotional Labor.” Human-agent interaction is a research field that investigates human reactions toward agents and designs the relationship between human beings and machines. For example, some research reveals that an agent shared between two users can elicit feelings of jealousy in those users, suggesting that agent systems can influence relationships between humans. Conversely, he also presented research that uses human-agent interaction technology to reduce human beings’ emotional labor. In addition, Prof. Osawa introduced the “AI Wolf project,” in which agents deceive players in a werewolf/mafia game. He indicated that studies on machines that deceive should proceed with consideration for the social norms of our society.

Next, Elena Seredkina, a professor at Perm National Research Polytechnic University, reported the results of a questionnaire survey conducted in Russia. In the survey, which was based on one conducted by the research group AIR in Japan, various stakeholder groups (engineers, humanities and social science researchers, master’s students, science fiction writers, etc.) were asked to what extent they would rely on machines for tasks such as driving a car or nursing care. Comparative analysis of the Japanese and Russian results indicates that both countries show similar trends despite specific national features. For example, social norms that expect family members to take care of the elderly are ingrained in Russia. Cultural context is also important: in shaping technology policy, Russian society is split into two camps, with post-Soviet technological enthusiasm facing implicit technophobia among Orthodox Christians and the elderly.

Finally, in his lecture titled “What is Body-Conservatism,” Kojiro Honda, Associate Professor at Kanazawa Medical University, posed the fundamental question, “Why do human beings pursue humanoid robots?” Currently, robotic technologies that imitate human body functions are embedded in the body as substitutes for human organs. The “Transhumanist Declaration” calls for accelerating human evolution in this way. Transhumanists insist that people should respect the right of “morphological freedom,” that is, freedom from interference by others in modifying one’s own body. Against this claim, Prof. Honda advocated the position of “body-conservatism,” which would restrict excessive body remodeling. As one foundation for this position, he argued that body remodeling not only enhances our bodily functions but also raises the concern that the world could become divided, because people cannot share the same worldview before and after remodeling.

In closing, Prof. Hideaki Shiroyama of the University of Tokyo highlighted the importance of considering not only the “appropriate distance between humans and machines” but also the “appropriate distance between humans and humans.” This event discussed how machines intervene in relationships between humans, and how important ethical, social, and cultural aspects are in technology design. To expand the discussion, continuous international and interdisciplinary deliberation is essential.

[Written by Arisa Ema]


[Event] Privacy Protection, GDPR and Free Trade: An Appraisal of the EU-Japan Relations

In May 2018, the General Data Protection Regulation (GDPR) comes into force. How will the GDPR change privacy protection and free trade between the EU and Japan? We have invited Prof. Vigjilenca Abazi, an Emile Noël Fellow at NYU School of Law and Assistant Professor of European Law at Maastricht University, the Netherlands, to discuss the implications for data privacy and free trade.

Date: July 2nd, 2018 (Mon) 13:00-14:30
Venue: The University of Tokyo, Ito International Research Center, Seminar Room, 3rd floor 【Map】
Lecture: Vigjilenca Abazi, Emile Noël Fellow at NYU School of Law and Assistant Professor of European Law at Maastricht University, the Netherlands
Discussant: Takayuki Matsuo, Attorney at Law, Momo-o Matsuo & Namba, Lecturer (Part Time) at Keio University
Moderator: Arisa Ema, Policy Alternative Research Institute, The University of Tokyo
Capacity: 50 seats
Fee: Free
Language: English
Application form: Register here
Organizer: Policy Alternatives Research Institute
Co-Organizers: AIR, Beneficial AI Japan, STIG (Science, Technology, and Innovation Governance) Education and Research Unit

Privacy Protection, GDPR and Free Trade: An Appraisal of the EU-Japan Relations

Negotiations for the world’s largest free trade agreement were recently concluded by Japan and the European Union. As of March 2018, Japan is also a signatory of the Comprehensive and Progressive Agreement for Trans-Pacific Partnership (CPTPP), joining 11 countries after the US withdrew from the original Trans-Pacific Partnership (TPP). These trade agreements aim to mark a ‘new era’ in cooperation and to have a global impact through trade partnerships. Yet another key aspect of these agreements, although somewhat hidden from public view and debate, is the role of free data flows and data privacy. On the European side, in January 2018 the European Commission announced that it would endorse provisions for data flows and data protection in EU trade agreements. Furthermore, as of 25 May 2018, the General Data Protection Regulation (GDPR) will enter into force in the EU, with direct implications for businesses globally, including in Japan. The EU and Japan, in a joint statement, stressed the importance of ensuring a high level of data privacy, acknowledging it as a fundamental right and as a central factor of consumer trust in the digital economy. Experts have argued that the EU would use the GDPR as a condition for free data flows, which would in turn imply complementing the existing adequacy assessment procedures and promoting the GDPR as the global standard. Hence, questions arise as to whether these legal regimes collide and what the implications are for the EU-Japan partnership. Will European privacy rules indeed dominate, and is the economic partnership with Japan a first step towards a larger trend in this direction? This lecture addresses these issues and questions with a focus on EU-Japan relations, particularly through the lens of data privacy and its implications for free trade.
Mapping the recent developments, the applicable legal regimes as well as noting the key features of the GDPR, the lecture aims to provide an understanding of the issues and dynamics at stake relevant both for privacy and free trade in the EU and Japan relations.


Bio of Vigjilenca Abazi, PhD

Vigjilenca Abazi is an Emile Noël Fellow at NYU School of Law and Assistant Professor of European Law at Maastricht University, the Netherlands. Dr Abazi obtained her PhD at the University of Amsterdam and was a Fulbright Scholar at Columbia Law School. She has written extensively on issues of secrecy, whistleblower protection, and privacy in the European context, including a monograph published by Oxford University Press (forthcoming, 2018). On issues of whistleblower and privacy protection, Vigjilenca has advised European Union institutions and the Council of Europe, including drafting a legislative bill on protecting whistleblowers in the European Union. She is a board member of several leading European academic journals and has more than twenty scientific publications. Upon invitation, Dr Abazi has given numerous lectures, including at Harvard Law School, Oxford University, and Columbia University. She has also been invited to present her research at the European Parliament and the Dutch Parliament.

Policy Alternatives Research Institute, The University of Tokyo



[Event] Appropriate distance between humans and machines: Robot, AI, and Enhancement

The relationship between people, robots, and artificial intelligence is getting increasingly complicated.
At the same time, this poses compelling questions: What are human beings, and what is intelligence?
Can people and machines have emotional bonds?
Do machines extend human functions?
And how do researchers’ responsibilities and technological innovation differ among countries?
At this event, French, Russian, and Japanese researchers will introduce research trends on the relationship between people and machines with various examples.

■Date:May 12, 2018 (Sat)  10:00-12:30 (Open 9:30)
■Venue:Seihoku Gallery, The University of Tokyo【Map】
10:00-10:05    Opening remarks:Arisa Ema(The University of Tokyo, Japan)

10:05-10:40    Lecture 1:Laurence Devillers (Paris-Sorbonne University, France)
  Affective and Social Dimensions in Spoken Interaction: Technological and Ethical Issues

L. Devillers is a Full Professor of Computer Science at Paris-Sorbonne University, and she leads a research team on ‘Affective and Social Dimensions of Spoken Interactions’ at the CNRS. Her background is in machine learning, speech recognition, spoken dialog systems, and evaluation. Since 2001, she has been working on affective computing and participated in the BPI ROMEO and ROMEO2 projects, whose main goal is building a social humanoid robot. She leads the European CHIST-ERA project JOKER: JOKe and Empathy of a Robot. She has (co-)authored more than 150 publications (h-index=36). She is a member of AAAC (board), IEEE, ACL, ISCA, and AFCP. She is involved in the euRobotics Topic Groups “Natural Interaction with Social Robots” and “Socially Intelligent Robots.” She is a member of the working group on the ethics of research in robotics (CERNA) and heads its Machine Learning/AI and Ethics WG. She is also involved in the Affective Computing Committee of the IEEE Global Initiative for Ethical Considerations in the Design of Autonomous Systems (2016) and leads the P7008 working group on nudging.


10:40-11:15    Lecture 2:Hirotaka Osawa (University of Tsukuba, Japan)
   Human-Agent Interaction as Substitution of Emotional Labor

Dr. Hirotaka Osawa is an assistant professor at the University of Tsukuba. His research field is human-agent interaction, particularly the anthropomorphization of objects. His research focuses on how human-like appearance and attitude improve interaction between users and machines. He wants to create universal user interface experiences that draw on our innate responses to the world. His research interests include improving today’s complex household appliances. Dr. Osawa received his PhD in Engineering and his MS and BS in Computer Science from Keio University.


11:15-11:50    Lecture 3:Elena Seredkina (Perm National Research Polytechnic University, Russia)
  Stakeholder Opinion Survey in Japan and Russia on AI/robots: Global and national perspectives

Dr. Elena Seredkina gained her PhD at Saint Petersburg State University in the field of History of Philosophy (2005). She has been working on the philosophy of science and technology at the Philosophy and Law Department of Perm National Research Polytechnic University since 2003, and was Head of the Youth Department of the Union of Artificial Intelligence at the Russian Academy of Sciences in Moscow (2006-2007). As Head of the research laboratory RRI_Lab (since 2014), she has been one of the main leaders on technology assessment in Russia at the global and national levels, and constantly works in an interdisciplinary team on TA/RRI projects. Her research focuses on philosophical and methodological problems of technology, national models of technology assessment, and the “participatory turn” as an RRI approach.


11:50-12:25    Lecture 4:Kojiro Honda (Kanazawa Medical University)
  What is Body-conservatism 

He started his academic career as an assistant professor at Kanazawa Institute of Technology, where he worked toward the launch of an engineering ethics course as a member of the Applied Ethics Center for Engineering and Science in 2004-2006. He then moved to Doshisha University, where he worked toward the launch of an academic writing course in 2007-2010. He then studied the history of Japanese science policy as a research coordinator of ITEC (Institute of Technology, Enterprise, and Competitiveness in Doshisha Business School) in 2011. Since 2012 he has taught medical ethics at Kanazawa Medical University, where he has been an associate professor since 2016. His main research subject is the philosophy of technology, especially the philosophy and ethics of “Transhumanism.” He was one of the founding members of the Society for Applied Philosophy of Robotics in 2011.

12:25-12:30   Closing remarks:Hideaki Shiroyama (The University of Tokyo)

■Capacity:50 seats
■Fee: Free
■Language: English
■Application form: Register
■Organizer: Policy Alternatives Research Institute
■Co-Organizers: AIR, Beneficial AI Japan, Next Generation Artificial Intelligence Research Center

Policy Alternatives Research Institute, The University of Tokyo




Regional Reports on AI Ethics: JAPAN

Reprinted from “EADv2 Regional Reports on A/IS Ethics: JAPAN,” in Regional Reports on A/IS Ethics, December 12, 2017, pp. 2-10

Discussions on artificial intelligence and ethics/society are carried out within and between various academic, government, and NPO/network institutions in Japan. Each organization has its own focus, such as “AI and professionalism,” “AI/robots and law,” “AI network systems,” “ALife,” “Deep learning,” and “Whole brain architecture.” The organizations/institutions introduced here lead the discussion on AI ethics, while many other communities remain active.

Most of the reports introduced below can be read in English, and the summaries here are extracted from those reports. Readers who want further information are therefore welcome to visit the relevant websites and read the original documents.



The Ethics committee of the Japanese Society for Artificial Intelligence (JSAI)

The committee consists of 9 members and 3 observers, most of whom are AI researchers, but the group also includes a science fiction writer, an STS researcher, and a journalist. Since its establishment in 2014, the Ethics Committee of the Japanese Society for Artificial Intelligence (JSAI) has been exploring the relationship between artificial intelligence research/technology and society. In February 2017, it released the “Japanese Society for Artificial Intelligence Ethical Guidelines”[1], prioritizing “contribution to humanity (article 1)” as its most important objective.

The Guidelines were first created as a Code of Ethics and as such include professional ethical guidance such as “act with integrity (article 6)” and “communication with society and self-development (article 8).” Above all, article 9, “abidance of ethics guidelines by AI,” emphasizes the reflexive nature of the Guidelines and reflects their unique character. The Guidelines are not intended to be put into practice immediately; rather, they are meant to raise questions that deepen discussions between researchers and society, and to foster a conversation leading to the efficient use of artificially intelligent technology in society.

The ethics committee held an open discussion in June 2017, inviting Danit Gal, chair of the Outreach Committee at the IEEE Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems, to participate in the discussion[2]. Video messages from John C. Havens, executive director of the IEEE Global Initiative, introducing “Ethically Aligned Design version 1,” and Richard Mallah, director of AI Projects at the Future of Life Institute, introducing the “Asilomar AI Principles,” were also shown at the event. The committee confirmed its intention to collaborate with the IEEE and the Future of Life Institute in the future. JSAI also collaborates with the AI Initiative organized by The Future Society and encourages Japanese people to join the AI Initiative[3].


JSAI Ethical Guidelines

  1. Contribution to humanity
  2. Abidance of laws and regulations
  3. Respect for the privacy of others
  4. Fairness
  5. Security
  6. Act with integrity
  7. Accountability and social responsibility
  8. Communication with society and self-development
  9. Abidance of ethics guidelines by AI


RIKEN Center for AIP

The RIKEN Center for Advanced Intelligence Project[4] was launched in April 2016 under the Ministry of Education, Culture, Sports, Science and Technology, with an operational scope including the “Advanced Integrated Intelligence Platform Project (AIP),” which focuses on artificial intelligence, big data, the Internet of Things, and cybersecurity. The center aims to achieve scientific breakthroughs and contribute to the welfare of society and humanity by developing innovative technologies. It also conducts research on the ethical, legal, and social issues raised by the spread of AI technology, and develops human resources[5].

Within the Center, the Artificial Intelligence in Society Research Group, consisting of 8 teams (as of October 2017), deals with matters such as privacy and social systems, artificial intelligence ethics and society, and information law.


Robot Law study group, The Information Network Law Association

The study group was established in 2016 as a subcommittee of the Information Network Law Association, and discusses the legal issues around realizing a society where human beings and robots coexist[6].



The Conference toward AI Network Society, Institute for Information and Communications Policy (IICP), the Ministry of Internal Affairs and Communications (MIC)

IICP is one of the institutes of the Ministry of Internal Affairs and Communications, aimed at promoting basic research on information and communications policy. In 2015, IICP released a report titled “Study Group concerning the Vision of the Future Society Brought by Accelerated Advancement of Intelligence in ICT.” The study group consists of 12 stakeholders from industry and academia in a wide range of fields. The report discusses: (1) how changes in human society will alter the relationship between humans and machines, as well as relationships among humans, and thereby what kind of changes will occur in human society; and (2) what humankind should do in order to make good use of the newly emerging technologies and systems (termed “Intelligent ICT” in the report)[7].

In 2016, the “Conference on Networking among AIs” was organized with about 37 participants, mainly academic researchers in a wide range of fields. The concept of “AI networking” refers to networking among AI systems. The conference also generated the concept of a “Wisdom Network Society” as a desirable society to build. The draft AI R&D Guidelines released as a result of the conference consists of 8 articles. The Final Report was released in 2016, and the draft Guidelines were presented at the G7 ICT Ministers’ Meeting in Takamatsu, Kagawa. This led to the formulation of the G7 ICT Ministers’ Meeting statement in Turin, Italy[8].

In 2017, the “Conference toward AI Network Society” was organized with about 33 participants. The conference has two subcommittees: the committee on AI R&D Principles (37 members) and the committee on Impact and Risk Assessment (34 members)[9]. The members of the conference and subcommittees are from academia, industry, and civil society. They released their report in July 2017, which consisted of draft AI R&D guidelines with 9 principles for international discussion. It also includes an impact assessment of the AI network society based on case-study scenarios in mobility, education, healthcare, etc.

The draft guidelines aim to protect the interests of users and deter the spread of risks, thus realizing a human-centered “Wisdom Network Society” by increasing the benefits and mitigating the risks of AI systems through the sound progress of AI networks. The first principle is the “principle of collaboration,” which concerns the development of AI networking and the promotion of the benefits of AI systems. Principles 2 to 7 mainly deal with the mitigation of risks associated with AI systems, such as the “principle of transparency,” the “principle of controllability,” and the “principle of privacy.” Principles 8 and 9 emphasize improvements in acceptance by users[10]. The MIC also held an International Forum toward AI Network Society on March 13 and 14, 2017[11]. As a next step, the MIC is considering drafting “Utilization Guidelines.”


AI R&D Principles

(Principles mainly concerning the sound development of AI networking and the promotion of the benefits of AI systems)

  1. Principle of collaboration―Developers should pay attention to the interconnectivity and interoperability of AI systems.

(Principles mainly concerning the mitigation of risks associated with AI systems)

  2. Principle of transparency―Developers should pay attention to the verifiability of inputs/outputs of AI systems and the explainability of their judgments.
  3. Principle of controllability―Developers should pay attention to the controllability of AI systems.
  4. Principle of safety―Developers should take into consideration that AI systems will not harm the life, body, or property of users or third parties through actuators or other devices.
  5. Principle of security―Developers should pay attention to the security of AI systems.
  6. Principle of privacy―Developers should take into consideration that AI systems will not infringe the privacy of users or third parties.
  7. Principle of ethics―Developers should respect human dignity and individual autonomy in R&D of AI systems.

(Principles mainly concerning improvements in acceptance by users et al.)

  8. Principle of user assistance―Developers should take into consideration that AI systems will support users and make it possible to give them opportunities for choice in appropriate manners.
  9. Principle of accountability―Developers should make efforts to fulfill their accountability to stakeholders, including AI systems’ users.


The Advisory Board on Artificial Intelligence and Human Society, the Cabinet Office

The Japanese government introduced a new concept, “Society 5.0,” in its 5th Science and Technology Basic Plan (2016–2020), as a way of guiding and mobilizing action in science, technology, and innovation to achieve a prosperous, sustainable, and inclusive future. This future sits within the context of ever-growing digitalization and connectivity and is empowered by the advancement of AI. On the one hand, AI technologies are expected to bring tremendous benefits to human society. On the other hand, they raise Ethical, Legal, and Social Implications (ELSI). In May 2016, the Advisory Board on Artificial Intelligence and Human Society was set up under the initiative of the Minister of State for Science and Technology Policy, with the aim of assessing the societal issues that could be raised by the development and deployment of AI and of discussing its implications for society[12]. The advisory board consists of 12 members with various backgrounds in fields such as engineering, philosophy, law, economics, and social sciences. The final report on Artificial Intelligence and Human Society was published on March 24th, 2017[13].

The Advisory Board focused on realistic and significant examples that are current or foreseeable for the near future. The Board’s objective was to clarify what benefits are expected, what issues are to be considered, what issues are to be resolved, and what attitudes are beneficial. Digitalization processes that cannot be dissociated from AI technologies were included in the discussions. Given that AI technologies are being applied in various fields, the Advisory Board took a case-based approach that dealt with various cases in four representative categories: mobility, manufacturing, personal services, and conversation/communication. The Advisory Board aimed to clarify common key issues around AI technologies in these four categories from six points of view: ethical, legal, economic, educational, social, and research and development issues. A matrix was created in which the columns indicated the four categories and the rows represented the six points of view[14].


Strategic Council for AI Technology

The Council was established under instructions issued by the Prime Minister in the “Public-Private Dialogue towards Investment for the Future” in 2016. Its report was published in March 2017[15].

The Council, acting as a control tower, manages five National Research and Development Agencies under the jurisdiction of the Ministry of Internal Affairs and Communications; the Ministry of Education, Culture, Sports, Science and Technology; and the Ministry of Economy, Trade and Industry. In addition to promoting the research and development of AI technology, the Council coordinates with industries related to those that utilize AI (so-called “exit industries”) and is moving forward with the social implementation of AI technology. The Council aims to create R&D goals and a roadmap for industrialization. The roadmap focuses on priority areas such as “productivity,” “health, medical care, and welfare,” and “mobility.” It also emphasizes the importance of coordination with three research centers:

  1. Center for Information and Neural Networks (CiNet) and Universal Communication Research Institute (UCRI) of the National Institute of Information and Communications Technology (NICT)
  2. RIKEN Center for Advanced Intelligence Project (AIP) of the Institute of Physical and Chemical Research (RIKEN)
  3. Artificial Intelligence Research Center (AIRC) of the National Institute of Advanced Industrial Science and Technology (AIST)


AI and Society

A two-day symposium (October 10-11) on AI and society, organized by the Next Generation Artificial Intelligence Research Center of the University of Tokyo, took place in Tokyo this fall[16]. The IEEE Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems is listed as a partner. The symposium invited speakers from industry and academia to exchange ideas about the practical applications of AI technologies and possible future developments. It aimed to create an opportunity for international discourse on the social impacts of new AI technologies within Japanese industry and academia[17].

On October 12th, the event Beneficial AI Tokyo was held to discuss effective ways to build cooperation for beneficial AI. The conference was organized by the Leverhulme Centre for the Future of Intelligence (CFI), the Centre for the Study of Existential Risk (CSER), the Next Generation Artificial Intelligence Research Center of the University of Tokyo (AI Center), and Araya, Inc.



ALIFE Lab.

ALIFE Lab. is a platform for accelerating co-creation between ALife (Artificial Life) scientists and people in other “creative” fields such as art, games, design, music, and fashion. Founded on the basis of the Japanese Society for Artificial Intelligence and the International Society for Artificial Life, it aims to establish a community to discuss the future of society, from the Anthropocene to an AI/ALIFE-pocene[18]. ALIFE Lab. sits on the organizing committee of the 2018 Conference on Artificial Life (ALIFE 2018), a hybrid of the European Conference on Artificial Life (ECAL) and the International Conference on the Synthesis and Simulation of Living Systems (ALife). The conference will be held in Japan[19].


Japan Deep Learning Association (JDLA)

Established in June 2017, the association aims to enhance the competitiveness of Japanese industry through innovative technologies based on Deep Learning[20]. By introducing Deep Learning certification systems and guaranteeing that certified individuals meet the necessary requirements, the Association will contribute to the development of human resources able to apply Deep Learning technologies correctly. It also aims to assist other organizations in creating guidelines, considering the ethical aspects of Deep Learning technologies, and encouraging effective communication with society.


The Whole Brain Architecture Initiative (WBAI)

A nonprofit organization, the Whole Brain Architecture Initiative was established in August 2015 with the mission of creating and engineering a human-like artificial general intelligence (AGI) by learning from the architecture of the entire brain[21]. It presents prospects for the development process[22] and supports the development of AGI by open R&D communities that contribute to society on a long-term basis[23].



[1] The creation and aims of the guidelines are introduced on the web in English (, and the guidelines can be read in English ( and in Korean (인공지능학회-윤리지침-20170303-KoNIBP.pdf).

[2] The report ( and the summary report ( are available in English.

[3] The AI Initiative has a Japanese page (, and the ethics committee has opened a special website introducing director Cyrus Hodes's message (




[7] An English abstract ( and a Japanese website ( are available.


[9] Refer to the list in the report (

[10] The English report ( and the full Japanese report ( are available.


[12] The final report in English is available (

[13] The report in Japanese and its appendix are available (

[14] A matrix for deriving common issues across cases is available in English (

[15] The report is available in English (; the Japanese website (










Written by: Arisa EMA
Assistant Professor, The University of Tokyo
Visiting Researcher, RIKEN AIP Center


IEEE Ethically Aligned Design Version 2 Workshop Series

The IEEE Ethically Aligned Design (EAD) document has been created by the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems. It aims to move beyond the excessive fear of, and expectations for, AI, and to increase innovation through ethically aligned AI design.

The goal of our workshops is to understand the content of EAD Version 2 and to create a network among AI/IT researchers, social science and humanities researchers, industry, policy makers, and other stakeholders. We also send feedback from our discussions to the IEEE Global Initiative.

Go to the website for detailed information.


Beneficial AI Tokyo Report

Beneficial AI Tokyo was held in association with a symposium on AI and Society on October 12, 2017, to explore the challenge of ensuring that AI is beneficial, with a particular focus on Japan.

The participating academic groups and NGOs, corporate groups, and others presented their thoughts on effective ways to build cooperation for beneficial AI, and the participants discussed topics such as:

  1. Corporate voices for beneficial AI – What can corporations do to encourage cooperation on the challenges of ensuring that AI is beneficial?
  2. Basic research and beneficial AI – What can scientists doing basic research on AI and related subjects do to help ensure that AI is beneficial?
  3. Near-term choice points for long-term risks and benefits – Are there choices about the direction of development of AI in the near-term which may have long-term consequences?
  4. Building Japan’s beneficial AI community – What are the next steps to encourage cooperation for beneficial AI in Japan?
  5. Working across borders for beneficial AI – How can we encourage international cooperation and collaboration for beneficial AI?
  6. Technical AI safety – What are the technical challenges of ensuring that AI is safe and reliable and aligned with human values?
  7. Technical issues of bias and privacy – How can we ensure fairness and respect for privacy in the application of machine learning for making decisions about people?
  8. Regulation for beneficial AI? – Do we need regulation for the development of AI? If so, should it be at a national or international level?
  9. AI ethics: who gets a voice? – AI is likely to affect the lives of everybody; who decides how it should be applied?
  10. Avoiding an AI arms race – Are there dangers if the development becomes an ‘arms race’ between global powers? If so, how can they be avoided?
  11. Visions of a positive AI future – If the development of AI goes well over the next 50+ years, what sort of world will our grandchildren be living in? What are the long-term benefits?
  12. AI for the bottom billion – The world’s poorest people often have least access to new technologies. What can be done to ensure that they get the benefits of AI?
  13. Community-building for beneficial AI – What can be done at a grass-roots level to encourage widespread interest and involvement in the challenges of beneficial AI?


Tokyo statement in Chinese

合作开发对人类有益的人工智能 — 东京宣言 ("Cooperation for the Development of AI Beneficial to Humanity — Tokyo Statement," simplified Chinese)





合作開發對人類有益的人工智能 — 東京宣言 (the same statement in traditional Chinese)