AI-Josyu: Thinking Support System in Class by Real-time Speech Recognition and Keyword Extraction

  • Kyohei Matsumoto GLOCOM, International University of Japan / University of Tsukuba, Japan
  • Takafumi Nakanishi GLOCOM, International University of Japan / Musashino University, Japan
  • Toshitada Sakawa Sakawa Co., Ltd, Japan
  • Kengo Onodera TerraLin Create Co., Ltd, Japan
  • Shinichiro Orimo Hirayama Elementary School, Japan
  • Hiroyuki Kobayashi Hirayama Elementary School, Japan
Keywords: thinking support system, class support, interconnection, media-driven, content management

Abstract

In this paper, we present AI-Josyu, a thinking support system. The system also operates as a class support system that helps lighten teachers' workload. AI-Josyu is implemented on a media-driven real-time content management framework that links real-world media and legacy media contents together. In recent years, it has become easier to collect large amounts of various kinds of data created by sensors in the real world. The system realizes the interconnection and utilization of legacy media contents, which are generated and scattered across the Internet. The framework consists of four modules, called "acquisition," "extraction," "selection," and "retrieval." These modules interconnect real-world media and legacy media contents, and the interconnection includes semantic components. The system records the teacher's voice during a lecture in real time and presents retrieved legacy media contents corresponding to the subject of the lecture, so teachers need not prepare legacy contents in advance: the system retrieves and shows them automatically. This presentation helps students understand the contents of the lecture and also encourages the expansion of ideas. We constructed the system and conducted a demonstration in class, which showed that the system helps both the teacher and the students expand their thinking.
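The four-module pipeline described in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation: the content index, stopword list, and scoring are hypothetical stand-ins, keyword extraction is reduced to simple term frequency, and a fixed transcript stands in for the real-time "acquisition" module.

```python
from collections import Counter

# Hypothetical legacy media contents indexed by descriptive keywords
# (in the real system these would be gathered from the Internet).
LEGACY_CONTENTS = {
    "volcano_diagram.png": {"volcano", "eruption", "magma"},
    "water_cycle.mp4": {"water", "rain", "evaporation"},
    "solar_system.jpg": {"planet", "sun", "orbit"},
}

STOPWORDS = {"the", "a", "an", "is", "of", "and", "to", "about", "when"}

def extract_keywords(transcript: str, top_n: int = 3) -> list[str]:
    """'Extraction': pick the most frequent non-stopword terms
    from the recognized lecture speech."""
    words = [w.lower().strip(".,") for w in transcript.split()]
    counts = Counter(w for w in words if w and w not in STOPWORDS)
    return [w for w, _ in counts.most_common(top_n)]

def retrieve_contents(keywords: list[str]) -> list[str]:
    """'Selection' and 'retrieval': rank legacy contents by how many
    lecture keywords overlap their descriptive keywords."""
    scored = [
        (len(set(keywords) & tags), name)
        for name, tags in LEGACY_CONTENTS.items()
    ]
    return [name for score, name in sorted(scored, reverse=True) if score > 0]

# 'Acquisition' would stream recognized speech here in real time;
# a fixed transcript stands in for it.
transcript = "Today we talk about the volcano. A volcano erupts when magma rises."
print(retrieve_contents(extract_keywords(transcript)))
# → ['volcano_diagram.png']
```

In the paper's framework, the extraction and selection steps additionally involve semantic components (e.g. word-vector similarity rather than exact keyword overlap), which this sketch omits for brevity.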



Published
2019-06-15
How to Cite
Matsumoto, K., Nakanishi, T., Sakawa, T., Onodera, K., Orimo, S., & Kobayashi, H. (2019). AI-Josyu: Thinking Support System in Class by Real-time Speech Recognition and Keyword Extraction. EMITTER International Journal of Engineering Technology, 7(1), 366-383. https://doi.org/10.24003/emitter.v7i1.373
Section
Articles