The Handbook of Multimodal-Multisensor Interfaces, Volume 1. Sharon Oviatt
Courtesy of A. Roudaut, M. Baglioni, and E. Lecolinet. Used with permission.
Figure 4.16 (video) From: L.-P. Cheng, H.-S. Liang, C.-Y. Wu, and M. Y. Chen. 2013b. iGrasp: grasp-based adaptive keyboard for mobile devices. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI ’13). Copyright © 2013 ACM. Used with permission.
Figure 4.17 (video) From: M. F. M. Noor, A. Ramsay, S. Hughes, S. Rogers, J. Williamson, and R. Murray-Smith. 2014. 28 frames later: predicting screen touches from back-of-device grip changes. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI ’14). Copyright © 2014 ACM. Used with permission.
Figure 4.18 From: K. Hinckley and B. Buxton. 2016. Inking outside the box: How context sensing affords more natural pen (and touch) computing. In T. Hammond, editor. Revolutionizing Education with Digital Ink. Springer International Publishing, Switzerland. Copyright © 2016 Springer. Used with permission.
Figure 4.19 (video) Courtesy of D. Yoon, K. Hinckley, H. Benko, F. Guimbretière, P. Irani, M. Pahud, and M. Gavriliu. Used with permission.
Figure 4.20 Based on: K. Hinckley. 1997. Haptic issues for virtual manipulation. Department of Computer Science, University of Virginia, Charlottesville, VA.
Figure 4.21 (video) Courtesy of Bill Buxton.
Figure 4.22 (video) Courtesy of Bill Buxton.
Figure 4.24 (video) Courtesy of K. Hinckley, M. Pahud, N. Coddington, J. Rodenhouse, A. Wilson, H. Benko, and B. Buxton. Used with permission.
Figure 4.26 Based on: K. Hinckley, M. Pahud, H. Benko, P. Irani, F. Guimbretière, M. Gavriliu, X. Chen, F. Matulic, B. Buxton, and A. Wilson. 2014. Sensing techniques for tablet+stylus interaction. In Proceedings of the 27th Annual ACM Symposium on User Interface Software and Technology (UIST ’14), Honolulu, HI. ACM, New York.
Figure 6.1 Based on: Wagner, P., Malisz, Z., & Kopp, S. (2014). Gesture and Speech in Interaction: An Overview. Speech Communication, 57(Special Iss.), 209–232.
Figure 6.2 Based on: Wagner, P., Malisz, Z., & Kopp, S. (2014). Gesture and Speech in Interaction: An Overview. Speech Communication, 57(Special Iss.), 209–232.
Figure 6.3 Based on: Kopp, S., Bergmann, K., & Kahl, S. (2013). A spreading-activation model of the semantic coordination of speech and gesture. Proceedings of the 35th Annual Meeting of the Cognitive Science Society (pp. 823–828). Austin, TX, USA: Cognitive Science Society.
Figure 6.4 Based on: Kopp, S., Bergmann, K., & Kahl, S. (2013). A spreading-activation model of the semantic coordination of speech and gesture. Proceedings of the 35th Annual Meeting of the Cognitive Science Society (pp. 823–828). Austin, TX, USA: Cognitive Science Society.
Figure 6.7 From: Bergmann, K., Kahl, S., & Kopp, S. (2013). Modeling the semantic coordination of speech and gesture under cognitive and linguistic constraints. In R. Aylett, B. Krenn, C. Pelachaud, & H. Shimodaira (Eds.), Proceedings of the 13th International Conference on Intelligent Virtual Agents (pp. 203–216). Copyright © 2013, Springer-Verlag Berlin Heidelberg. Used with permission.
Figure 6.8 From: Bergmann, K., Kahl, S., & Kopp, S. (2013). Modeling the semantic coordination of speech and gesture under cognitive and linguistic constraints. In R. Aylett, B. Krenn, C. Pelachaud, & H. Shimodaira (Eds.), Proceedings of the 13th International Conference on Intelligent Virtual Agents (pp. 203–216). Copyright © 2013, Springer-Verlag Berlin Heidelberg. Used with permission.
Figure 7.3 (video) From: Wilson, G., Davidson, G., & Brewster, S. (2015). In the Heat of the Moment: Subjective Interpretations of Thermal Feedback During Interaction. Proceedings CHI ’15, 2063–2072. Copyright © 2015 ACM. Used with permission.
Figure 7.4 From: David K. McGookin and Stephen A. Brewster. 2006. SoundBar: exploiting multiple views in multimodal graph browsing. In Proceedings of the 4th Nordic conference on Human-computer interaction: changing roles (NordiCHI ’06), Anders Mørch, Konrad Morgan, Tone Bratteteig, Gautam Ghosh, and Dag Svanaes (Eds.), 145–154. Copyright © 2006 ACM. Used with permission.
Figure 7.5 (video) From: Ross McLachlan, Daniel Boland, and Stephen Brewster. 2014. Transient and transitional states: pressure as an auxiliary input modality for bimanual interaction. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI ’14). Copyright © 2014 ACM. Used with permission.
Figure 7.6 (video) Courtesy of David McGookin, Euan Robertson, and Stephen Brewster. Used with permission.
Figure 7.7 (left) Video courtesy of David McGookin and Stephen Brewster. Used with permission.
Figure 7.7 (right) Video courtesy of David McGookin and Stephen Brewster. Used with permission.
Figure 7.8 (left) From: Plimmer, B., Reid, P., Blagojevic, R., Crossan, A., & Brewster, S. (2011). Signing on the tactile line. ACM Transactions on Computer-Human Interaction, 18(3), 1–29. Copyright © 2011 ACM. Used with permission.
Figure 7.8 (right) From: Yu, W., & Brewster, S. (2002). Comparing two haptic interfaces for multimodal graph rendering. Proceedings HAPTICS ’02, 3–9. Copyright © 2002 IEEE. Used with permission.
Figure 7.9 (video) From: Beryl Plimmer, Peter Reid, Rachel Blagojevic, Andrew Crossan, and Stephen Brewster. 2011. Signing on the tactile line: A multimodal system for teaching handwriting to blind children. ACM Transactions on Computer-Human Interaction 18, 3, Article 17 (August 2011), 29 pages. Copyright © 2011 ACM. Used with permission.
Figure 7.10 (right) From: Euan Freeman, Stephen Brewster, and Vuokko Lantz. 2014. Tactile Feedback for Above-Device Gesture Interfaces: Adding Touch to Touchless Interactions. In Proceedings of the 16th International Conference on Multimodal Interaction (ICMI ’14), 419–426. Copyright © 2014 ACM. Used with permission.
Figure 7.11 (left) Video courtesy of Euan Freeman. Used with permission.
Figure 7.11 (right) Video courtesy of Euan Freeman. Used with permission.
Figure 7.12 (video) From: Ioannis Politis, Stephen A. Brewster, and Frank Pollick. 2014. Evaluating multimodal driver displays under varying situational urgency. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI ’14). New York, NY, USA, 4067–4076. Copyright © 2014 ACM. Used with permission.
Figure 8.1 Based on: A. H. Maslow. 1954. Motivation and personality. Harper and Row.
Figure 8.2 (left) From: D. McColl, W.-Y. G. Louie, and G. Nejat. 2013. Brian 2.1: A socially assistive robot for the elderly and cognitively impaired. IEEE Robotics & Automation Magazine, 20(1): 74–83. Copyright © 2013 IEEE. Used with permission.
Figure 8.2 (right) From: P. Bovbel and G. Nejat, 2014. Casper: An Assistive Kitchen Robot to Promote Aging in Place. Journal of Medical Devices, 8(3), p.030945. Copyright © 2014 ASME. Used with permission.
Figure 8.2 (video) Courtesy of the Autonomous Systems and Biomechatronics Laboratory (ASBLab) at the University of Toronto.
Figure 8.3 From: M. Nilsson, J. Ingvast, J. Wikander, and H. von Holst. 2012. The soft extra muscle system for improving the grasping capability in neurological rehabilitation. In 2012 IEEE EMBS Conference on Biomedical Engineering and Sciences (IECBES), pp. 412–417. Copyright © 2012 IEEE. Used with permission.
Figure 8.4 From: T. Visser, M. Vastenburg, and D. Keyson. 2010. Snowglobe: the development of a prototype awareness system for longitudinal field studies. In Proc. 8th ACM Conference on Designing Interactive