Device specifications such as actuator frequency range determine the rendering limitations and provide an upper bound on the number of perceptually distinct stimuli. These specifications can be used to compare expressive capability among hardware elements (e.g., VT actuators).
Connection characteristics—body location, prototype assembly and materials, and contact mode (orientation, grip, tightness)—impact sensation distinguishability [Gallace et al. 2007]. Karuei et al. [2011] report differences in vibration detection thresholds across 13 body locations and two bodily states (walking vs. sitting).
Differences in perceptual and processing capabilities due to age, visual acuity, profession, and simply genetics (Section 3.1.4) also impact signal distinguishability [Goldreich and Kanics 2003]. Stevens and Choo [1996] report that the decline in tactile acuity with age affects all body locations, but has a larger impact on the fingers and toes than on more central locations such as the lips and tongue.
Context of use can impact haptic perception and processing capabilities through parameters such as environment, body state (running vs. resting), and sensory and cognitive load and involvement (listening to music, driving) [Karuei et al. 2011, Blum et al. 2015]. This in turn determines the effective set size for distinguishable stimuli. For example, the number of different vibration notifications an individual can discern while driving a car (with its environmental vibrations and high sensory and cognitive involvement) is smaller than when seated at an office desk.
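To make concrete how such limits bound a vocabulary, a standard psychophysical back-of-envelope estimate stacks just-noticeable differences (JNDs) multiplicatively across the actuator's operating range. The frequency range and Weber fraction below are illustrative assumptions, not measurements of any particular device.

```python
import math

def distinct_levels(lo: float, hi: float, weber_fraction: float) -> int:
    """Upper bound on perceptually distinct levels of a continuous
    parameter: step from lo to hi in multiplicative JND increments,
    each step (1 + weber_fraction) above the last."""
    return math.floor(math.log(hi / lo) / math.log(1.0 + weber_fraction)) + 1

# Illustrative numbers only: a 50-300 Hz actuator and an assumed ~20%
# Weber fraction for vibrotactile frequency discrimination.
print(distinct_levels(50.0, 300.0, 0.20))  # -> 10 levels, in lab conditions
```

Under the distractions of a context like driving, a designer would budget for only a fraction of such lab-condition levels.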
3.3.3 The Meaning
Sometimes haptic signals are able to directly represent a meaning, e.g., through adequately high fidelity representation of a real physical sensation. More often, abstraction is required: perhaps the sensation being represented is beyond the capacity of the haptic device to display, or the information itself is abstract (“speed up”). Mapping haptic sensations to intended meaning—encoding the information—is a crucial design task that needs to be done in a consistent and compatible way across the full vocabulary used in an application, and sometimes more broadly [MacLean 2008b].
In this section, we discuss users’ cognitive meaning-mapping frameworks, then present encoding and vocabulary-development approaches that have been used by haptic designers.
Interpretive Schemas and Facets
To interpret haptic signals, people employ a number of conceptual or translational schemas, often combining them. We might compare a haptic sensation to a natural one (“This is like a cat purring”), to emotions and feelings (“This is boring”), or consider its potential usage (when a quickening tactile pulse sequence is described as a “speed up”). The meaning someone chooses is typically influenced by the sensation itself but also by the context of use and the user’s background and past experiences [Seifi et al. 2015, Schneider and MacLean 2014, Obrist et al. 2013].
Facets, a concept originating in library and information retrieval, nicely capture the multiplicity and flexibility of users’ sense-making schemas for haptic sensations. A facet is a set of related properties or labels that describe an aspect of an object [Fagan 2010]. Five descriptive facets have been proposed and examined for vibrotactile stimuli (Figure 3.2, [Seifi et al. 2015]):
Figure 3.2 People use a variety of cognitive frameworks to make sense of haptic signals. Bottom left image (from Schneider et al. [2016]). Bottom right image courtesy of Anton Håkanson.
Physical properties that can be measured—such as duration, energy
Sensory properties—roughness, softness
Emotional connotations—pleasantness, urgency
Metaphors or familiar examples to describe a vibration’s feel—drumbeat, cat purring
Usage examples or types of events where a vibration fits—speed up, time’s up
If a designer neglects consistent consideration of these meaning-assignment facets, the result is likely to be confusion and a poor user experience. Leveraged properly, facet-driven mappings can lead to more intuitive, consistent results and highlight pathways for working around individual differences, for example through tools that allow users to efficiently customize their interfaces (Section 3.5.1).
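One lightweight way to operationalize the facets is to annotate every stimulus in a design library along all five, making it possible to audit a vocabulary for consistent meaning assignment. The record type and labels below are a hypothetical sketch, not a schema from the cited studies.

```python
from dataclasses import dataclass

@dataclass
class FacetAnnotation:
    """Hypothetical per-stimulus record covering the five facets
    of Seifi et al. [2015]."""
    physical: dict    # measurable properties, e.g., duration, energy
    sensory: list     # e.g., ["rough"], ["soft"]
    emotional: list   # e.g., ["pleasant"], ["urgent"]
    metaphors: list   # e.g., ["drumbeat"], ["cat purring"]
    usages: list      # e.g., ["speed up"], ["time's up"]

purr = FacetAnnotation(
    physical={"duration_ms": 800, "energy": "low"},
    sensory=["soft"],
    emotional=["pleasant", "calm"],
    metaphors=["cat purring"],
    usages=["connection is live"],
)
```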
Stimulus Complexity and Vocabulary Composition
Interpretive facets for haptics are not as developed as those for other modalities, either culturally or in research. Whereas there is a relative wealth of immediately reliable visual idioms (e.g., a graphical stop-sign icon), haptic designers typically need to devise custom vocabularies [MacLean 2008b]. These vary with application requirements, which dictate the size and complexity of the required set, as well as with the context of use and the hardware that will deliver it.
We can start with simple signals. Simple vocabularies are composed of just two or three haptic-meaning pairs: binary and ternary sets, common in current mobile and wearable notification systems, and easy to learn and adopt. The binary case can indicate the on/off state of a parameter (e.g., a message has/has not arrived). A ternary vocabulary can distinguish three states (such as below/within/above a target zone, three levels of a volume being monitored, or three categories of notification).
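A minimal sketch of such a ternary vocabulary; the state names, pattern names, and the choice to encode the “good” state as silence are all illustrative assumptions.

```python
def play(pattern: str) -> None:
    # Stand-in for a device call; a real system would drive the actuator.
    print(f"vibrate: {pattern}")

# Hypothetical ternary vocabulary for monitoring a target zone.
TERNARY_VOCAB = {
    "below_zone":  "single_short_pulse",
    "within_zone": None,                  # silence encodes the good state
    "above_zone":  "double_sharp_pulse",
}

def notify(state: str) -> None:
    pattern = TERNARY_VOCAB[state]
    if pattern is not None:
        play(pattern)
```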
Next, we have complex signals (a.k.a. icon design). More detailed encodings and vocabularies become possible when the hardware and context allow a larger set of distinct stimuli and the user can learn and process a larger mapping [MacLean 2008b]. One design approach is to map information elements to design and engineering parameters of the haptic sensation [Brewster and Brown 2004, Enriquez et al. 2006, Ternes and MacLean 2008]. For example, vibrotactile technologies allow control of frequency, amplitude, and waveform, plus temporal sequencing such as rhythm. In a vibrotactile message notification, amplitude can be mapped to urgency while rhythm encodes the sender group (family/friends vs. work). This approach has the hierarchical structure of a natural language (e.g., letters, words, sentences) [Enriquez et al. 2006].
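A sketch of this parameter-based encoding for the notification example: urgency selects amplitude, sender group selects rhythm, and the two dimensions compose independently. All names and values are illustrative assumptions.

```python
# Each information element maps to one controllable signal dimension.
AMPLITUDE = {"low_urgency": 0.3, "high_urgency": 0.9}    # 0..1 drive level
RHYTHM = {                                               # (on_ms, off_ms) pairs
    "family_friends": [(100, 100), (100, 100)],          # even, gentle
    "work":           [(300, 50), (50, 50), (300, 0)],   # long-short-long
}

def encode_notification(urgency: str, sender_group: str):
    """Compose one message by combining the two independent dimensions."""
    amp = AMPLITUDE[urgency]
    return [(on_ms, off_ms, amp) for on_ms, off_ms in RHYTHM[sender_group]]

print(encode_notification("high_urgency", "work"))
```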
An alternative approach uses metaphors to design individual signals and whole sets within a haptic vocabulary. Here the complete signal carries a meaning, but its individual components may not encode information; instead, the approach exploits users’ interpretive frameworks to produce more intuitive vocabularies. In Chan et al. [2008], a heartbeat indicates that a remote connection is live, using a metaphor framework.
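In contrast to the parameter mapping above, a metaphor-based signal is authored as a whole. The sketch below produces a generic heartbeat-like “lub-dub” timing envelope; its durations and amplitudes are illustrative guesses, not parameters from Chan et al. [2008].

```python
def heartbeat_pattern(bpm: float = 60, strong: float = 0.9, weak: float = 0.5):
    """One heartbeat cycle as (duration_ms, amplitude) segments:
    strong beat, short gap, weaker beat, then rest until the next cycle."""
    cycle_ms = 60_000.0 / bpm
    lub, gap, dub = 80.0, 120.0, 60.0
    rest = cycle_ms - (lub + gap + dub)
    return [(lub, strong), (gap, 0.0), (dub, weak), (rest, 0.0)]
```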
In both approaches, designers can use perceptual techniques such as multidimensional scaling (MDS) or psychophysical studies to prune and refine an initial stimulus set for salience and maximum recognizability [MacLean and Enriquez 2003, Lederman and Klatzky 2009], both prior to encoding and when adjusting the final set to optimize distinguishability [Chan et al. 2008].
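A minimal sketch of such perceptual pruning, assuming pairwise dissimilarity ratings have already been collected from users: MDS embeds the stimuli in a low-dimensional perceptual space, and the closest pair is flagged as a candidate for redesign or removal. The rating matrix is fabricated for illustration.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform
from sklearn.manifold import MDS

# Hypothetical user-rated dissimilarities (0 = identical) among 4 stimuli.
D = np.array([[0.0, 0.8, 0.3, 0.9],
              [0.8, 0.0, 0.7, 0.4],
              [0.3, 0.7, 0.0, 0.8],
              [0.9, 0.4, 0.8, 0.0]])

# Embed the stimuli in a 2D perceptual space from the rating matrix.
coords = MDS(n_components=2, dissimilarity="precomputed",
             random_state=0).fit_transform(D)

# Flag the perceptually closest pair as a pruning candidate.
gaps = squareform(pdist(coords))
np.fill_diagonal(gaps, np.inf)
i, j = np.unravel_index(np.argmin(gaps), gaps.shape)
print(f"stimuli {i} and {j} are the least distinguishable pair")
```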
More complex vocabularies must be learned. Haptic-meaning pairs composed into vocabularies can utilize users’ interpretive frameworks or rely on learning through practice and memory. In the former case, the user should be able to recognize the associated meaning with little or no practice (e.g., an accelerating pulse sequence signifies “speed up”), whereas in the latter, sensations are assigned arbitrarily, necessitating prior exposure and memorization. In Figure 3.3, directions are presented with two types of patterns, spatial and temporal: the spatial arrangement has a direct, recognizable perceptual association with the meaning, while the temporal pattern is arbitrary and must be learned.
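The contrast can be made concrete with a toy three-tactor wristband (a hypothetical layout, not the device in Figure 3.3): the spatial code for a direction is directly recognizable because meaning and location coincide, while the temporal code is an arbitrary rhythm that must be memorized.

```python
TACTORS = ["left", "center", "right"]          # hypothetical 3-tactor band

# Spatial code: the activated location is the meaning.
SPATIAL = {"go_left": ["left"], "go_right": ["right"]}

# Temporal code: arbitrary on/off rhythms (ms) on a single tactor,
# which users must learn through practice.
TEMPORAL = {"go_left": [200, 100, 200], "go_right": [100, 100, 100, 100]}
```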
Past studies suggest that users can learn large abstract vocabularies (56 pairs) with practice, but learning rate and performance vary considerably across individuals [Swerdfeger 2009]. Users’ performance on large vocabularies with intuitive meaning assignments has yet to be fully studied, in part because of the difficulty of designing them.