Presentation Sensei

Presentation Sensei: A Presentation Training System using Speech and Image Processing

  1. Kazutaka Kurihara
  2. Masataka Goto
  3. Jun Ogata
  4. Yosuke Matsusaka
  5. Takeo Igarashi



In this paper we present a presentation training system that observes a presentation rehearsal and provides the speaker with recommendations for improving delivery, such as speaking more slowly or looking at the audience. Our system, “Presentation Sensei,” is equipped with a microphone and a camera and analyzes a presentation by combining speech and image processing techniques. Based on the analysis results, the system gives the speaker instant feedback on speaking rate, eye contact with the audience, and timing, and alerts the speaker when any of these indices exceeds a predefined warning threshold. After the presentation, the system generates visual summaries of the analysis results for the speaker’s self-examination. Our goal is not to improve the content at the semantic level, but to improve its delivery by reducing inappropriate basic behavior patterns. We asked a few test users to try the system, and they found it very useful for improving their presentations. We also compared the system’s output with the observations of a human evaluator; the results show that the system successfully detected some inappropriate behavior. The contribution of this work is to introduce a practical recognition-based human training system and to show its feasibility despite the limitations of state-of-the-art speech and video recognition technologies.
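The threshold-based alerting described above can be illustrated with a minimal sketch. This is not the authors' implementation; the class, the sliding-window approach, and the 6.0 syllables-per-second threshold are all illustrative assumptions, standing in for whichever speaking-rate index and warning threshold the real system uses.

```python
from collections import deque

class SpeakingRateMonitor:
    """Illustrative sketch (not the paper's code): track a running
    speaking rate over a sliding window and warn when it exceeds
    a predefined threshold, as the abstract describes."""

    def __init__(self, window_sec=5.0, warn_threshold=6.0):
        self.window_sec = window_sec          # sliding window length (s)
        self.warn_threshold = warn_threshold  # syllables per second (assumed unit)
        self.events = deque()                 # timestamps of recognized syllables

    def add_syllable(self, t):
        """Record a syllable recognized at time t (seconds)."""
        self.events.append(t)
        # drop timestamps that have fallen out of the window
        while self.events and t - self.events[0] > self.window_sec:
            self.events.popleft()

    def rate(self):
        """Current speaking rate in syllables per second."""
        if len(self.events) < 2:
            return 0.0
        span = self.events[-1] - self.events[0]
        return (len(self.events) - 1) / span if span > 0 else 0.0

    def should_warn(self):
        """True when the index exceeds the warning threshold."""
        return self.rate() > self.warn_threshold


# Usage: 8 syllables/s (too fast) triggers a warning, 4 syllables/s does not.
fast = SpeakingRateMonitor()
for i in range(40):
    fast.add_syllable(i * 0.125)   # one syllable every 125 ms
print(fast.should_warn())  # True
```

The same pattern would apply to the other indices (eye contact ratio, elapsed time), each compared against its own warning threshold to produce the instant feedback the system gives during rehearsal.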


  • Kazutaka Kurihara, Masataka Goto, Jun Ogata, Yosuke Matsusaka, and Takeo Igarashi, "Presentation Sensei: A Presentation Training System using Speech and Image Processing," Proc. of the ACM International Conference on Multimodal Interfaces (ICMI 2007), pp. 358-365, 2007.

© Kazutaka Kurihara