The Handbook of Multimodal-Multisensor Interfaces, Volume 1. Sharon Oviatt


      Figure 8.5 (right) Video courtesy of Cosmin Munteanu & Albert Ali Salah. Used with permission.

      Figure 8.6 From: C. G. Pires, F. Pinto, V. D. Teixeira, J. Freitas, and M. S. Dias. 2012. Living home center – a personal assistant with multimodal interaction for elderly and mobility impaired e-inclusion. In Computational Processing of the Portuguese Language: 10th International Conference, PROPOR 2012, Coimbra, Portugal, April 17–20, 2012, Proceedings. Copyright © 2012 Springer-Verlag Berlin Heidelberg. Used with permission.

      Figure 8.7 From: L.-P. Morency, G. Stratou, D. DeVault, A. Hartholt, M. Lhommet, G. M. Lucas, F. Morbini, K. Georgila, S. Scherer, J. Gratch, and S. Marsella. 2015. SimSensei demonstration: A perceptive virtual human interviewer for healthcare applications. In Proceedings of the AAAI Conference on Artificial Intelligence, pp. 4307–4308. Copyright © 2015 AAAI Press. Used with permission.

      Figure 8.7 (video) Courtesy of USC Institute for Creative Technologies. Principal Investigators: Albert (Skip) Rizzo and Louis-Philippe Morency.

      Figure 8.8 From: F. Ferreira, N. Almeida, A. F. Rosa, A. Oliveira, J. Casimiro, S. Silva, and A. Teixeira. 2014. Elderly centered design for interaction – the case of the S4S medication assistant. Procedia Computer Science, 27: 398–408. Copyright © 2014 Elsevier. Used with permission.

      Figure 8.9 Courtesy of Jocelyn Ford.

      Figure 8.11 Courtesy of © Toyota Motor Sales, U.S.A., Inc.

      Figure 8.12 Courtesy of © 2016 ANSA.

      Figure 8.12 (video) Courtesy of Robot-Era Project, The BioRobotics Institute, Scuola Superiore Sant’Anna, Italy.

      Figure 8.13 From: R. Shilkrot, J. Huber, W. Meng Ee, P. Maes, and S. C. Nanayakkara. 2015. FingerReader: a wearable device to explore printed text on the go. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI ’15), pp. 2363–2372. Copyright © 2015 ACM. Used with permission.

      Figure 8.14 Adapted from: B. Görer, A. A. Salah, and H. L. Akin. 2016. An autonomous robotic exercise tutor for elderly people. Autonomous Robots.

      Figure 8.14 (video) Courtesy of Binnur Görer.

      Figure 8.15 From: B. Görer, A. A. Salah, and H. L. Akin. 2016. An autonomous robotic exercise tutor for elderly people. Autonomous Robots. Copyright © 2016 Springer Science+Business Media New York. Used with permission.

      Figure 9.3 From: P. Qvarfordt and S. Zhai. 2009. Gaze-aided human-computer and human-human dialogue. In B. Whitworth and A. de Moor, eds., Handbook of Research on Socio-Technical Design and Social Networking Systems, chapter 35, pp. 529–543. Copyright © 2009 IGI Global. Reprinted by permission of the copyright holder.

      Figure 9.4 From: P. Qvarfordt and S. Zhai. 2009. Gaze-aided human-computer and human-human dialogue. In B. Whitworth and A. de Moor, eds., Handbook of Research on Socio-Technical Design and Social Networking Systems, chapter 35, pp. 529–543. Copyright © 2009 IGI Global. Reprinted by permission of the copyright holder.

      Figure 9.7 From: P. Qvarfordt and S. Zhai. 2005. Conversing with the user based on eye-gaze patterns. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI ’05), pp. 221–230. Copyright © 2005 ACM. Used with permission.

      Figure 10.1 From: S. Oviatt, R. Lunsford, and R. Coulston. 2005. Individual differences in multimodal integration patterns: What are they and why do they exist? In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI), pp. 241–249. Copyright © 2005 ACM. Used with permission.

      Figure 10.5 (left) From: P. R. Cohen, M. Johnston, D. R. McGee, S. L. Oviatt, J. Pittman, I. Smith, L. Chen, and J. Clow. 1997. QuickSet: Multimodal interaction for distributed applications. In Proceedings of the Fifth ACM International Conference on Multimedia, pp. 31–40. Copyright © 1997 ACM. Used with permission.

      Figure 10.6 Video courtesy of Phil Cohen. Used with permission.

      Figure 10.7 From: P. R. Cohen, D. R. McGee, and J. Clow. 2000. The efficiency of multimodal interaction for a map-based task. In Proceedings of the Sixth Conference on Applied Natural Language Processing, Association for Computational Linguistics, pp. 331–338. Copyright © 2000 Association for Computational Linguistics. Used with permission.

      Figure 10.7 (video) Courtesy of Phil Cohen. Used with permission.

      Figure 10.8 Video courtesy of Phil Cohen. Used with permission.

      Figure 10.9 From: D. R. McGee, P. R. Cohen, M. Wesson, and S. Horman. 2002. Comparing paper and tangible, multimodal tools. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI), pp. 407–414. Copyright © 2002 ACM. Used with permission.

      Figure 10.10 Video courtesy of Phil Cohen. Used with permission.

      Figure 10.11 From: P. Ehlen and M. Johnston. 2012. Multimodal interaction patterns in mobile local search. In Proceedings of ACM Conference on Intelligent User Interfaces, pp. 21–24. Copyright © 2012 ACM. Used with permission.

      Figure 10.12 Based on: M. Johnston, J. Chen, P. Ehlen, H. Jung, J. Lieske, A. Reddy, E. Selfridge, S. Stoyanchev, B. Vasilieff, and J. Wilpon. 2014. MVA: The Multimodal Virtual Assistant. In Proceedings of the 15th Annual Meeting of the Special Interest Group on Discourse and Dialogue (SIGDIAL), Association for Computational Linguistics, pp. 257–259.

      Figure 10.13 Based on: W. Wahlster. 2002. SmartKom: Fusion and fission of speech, gestures, and facial expressions. In Proceedings of the 1st International Workshop on Man-Machine Symbiotic Systems, Kyoto, Japan, pp. 213–225. Used with permission.

      Figure 10.14 Courtesy of openstream.com. Used with permission.

      Figure 10.15 (right) From: P. R. Cohen, E. C. Kaiser, M. C. Buchanan, S. Lind, M. J. Corrigan, and R. M. Wesson. 2015. Sketch-thru-plan: a multimodal interface for command and control. Communications of the ACM, 58(4):56–65. Copyright © 2015 ACM. Used with permission.

      Figure 10.16 From: P. R. Cohen, E. C. Kaiser, M. C. Buchanan, S. Lind, M. J. Corrigan, and R. M. Wesson. 2015. Sketch-thru-plan: a multimodal interface for command and control. Communications of the ACM, 58(4):56–65. Copyright © 2015 ACM. Used with permission.

      Figure 11.1 From: R. A. Bolt. 1980. Put-that-there: Voice and gesture at the graphics interface. ACM SIGGRAPH Computer Graphics, 14(3): 262–270. Copyright © 1980 ACM. Used with permission.

      Figure 11.1 (video) Courtesy of Chris Schmandt, MIT Media Lab Speech Interface group.

      Figure 11.3 From: P. Maragos, V. Pitsikalis, A. Katsamanis, G. Pavlakos, and S. Theodorakis. 2016. On shape recognition and language. In M. Breuss, A. Bruckstein, P. Maragos, and S. Wuhrer, eds., Perspectives in Shape Analysis. Springer. Copyright © 2016 Springer International Publishing Switzerland. Used with permission.

      Figure 11.4a (video) Courtesy of Botsquare.

      Figure 11.4b (video) Courtesy of Leap Motion.

      Figure 11.5 Based on: N. Krahnstoever, S. Kettebekov, M. Yeasin, and R. Sharma. 2002. A real-time framework for natural multimodal interaction with large screen displays. In Proceedings of the International Conference on Multimodal Interfaces, p. 349.

      Figure 11.6 Based on: L. Pigou, S. Dieleman, P.-J. Kindermans, and B. Schrauwen. 2015. Sign language recognition using convolutional neural networks. In L. Agapito, M. M. Bronstein, and C. Rother, eds., Computer Vision—ECCV 2014 Workshops, volume LNCS 8925, pp. 572–578.

