Included in the situation model are inferences that require knowledge from both the text and the reader’s general knowledge. Bridging inferences are often required to make a text coherent (see O’Brien et al., 2015). For example, in reading “The bright sun lit the field. Alfred’s snowman melted,” one maintains coherence by inferring that the sun’s heat caused the snow to melt (Singer et al., 1992). When related knowledge triggers elaborative inferences, which are not required for coherence, comprehension becomes referentially richer and more interpretive, although unwarranted inferences can lead to inaccuracies. Successful comprehension yields a situation model that is enriched by inferences and referentially specific, but also well aligned with the meaning of the text.
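To make concrete what a bridging inference adds, here is a minimal sketch in Python (our own toy illustration, not a model drawn from this literature): the two example sentences are reduced to propositions, and a fragment of invented general knowledge (“sunlight produces heat; heat melts a snowman”) supplies the causal bridge between them. Every name and rule in the sketch is hypothetical.

```python
# Toy sketch (not from the chapter): a bridging inference as a link between
# two text propositions supplied by the reader's general knowledge.
# The propositions and the knowledge rules below are invented for illustration.

TEXT_PROPOSITIONS = [
    ("sun", "lit", "field"),        # "The bright sun lit the field."
    ("snowman", "melted", None),    # "Alfred's snowman melted."
]

# A fragment of "general knowledge": sunlight produces heat; heat melts a snowman.
GENERAL_KNOWLEDGE = {
    ("sun", "lit"): "heat",
    ("heat", "snowman"): "melted",
}

def bridge(propositions, knowledge):
    """Return an inferred causal link that makes the two sentences cohere."""
    cause, effect = propositions
    mediator = knowledge.get((cause[0], cause[1]))            # sun + lit -> heat
    if mediator and knowledge.get((mediator, effect[0])) == effect[1]:
        return f"{cause[0]} -> {mediator} -> {effect[0]} {effect[1]}"
    return None  # no bridge found: the two sentences remain unconnected

print(bridge(TEXT_PROPOSITIONS, GENERAL_KNOWLEDGE))
# sun -> heat -> snowman melted
```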
Sentences.
In most text comprehension research, processing at the word and sentence levels is assumed more than studied or specified. In fact, an important component of the Reading Systems Framework is missing from most models of text comprehension: the parsing processes that configure words into phrases and phrases into syntactic structures with associated meanings (see Liversedge et al., this volume). Research on sentence comprehension has sought to identify the multiple influences on these structure‐building and repair processes: implicit knowledge of grammatical structures, computational pressures toward simplicity (Frazier & Rayner, 1982), statistical patterns of language use, and various lexical and contextual influences (Gibson & Pearlmutter, 1998). A major enduring issue is the relative influence of linguistic knowledge and knowledge of the world, two factors that are difficult (but possible) to separate (Warren & Dickey, 2021).
There is an intimate connection between building syntactic structure and building a situation model. To build a situation model from “The spy saw the cop with binoculars,” the reader must decide whether to attach “with binoculars” to “saw” or to “the cop.” There is no information within the sentence to favor one structure over the other. In the absence of other information, the choice is influenced by a simplicity strategy (e.g., assume “the” begins a minimal noun phrase, which favors attaching “with binoculars” to “saw”). However, when the preceding text has established that there were two cops, one of whom has binoculars, this preference is readily reversed (Britt et al., 1992). Readers generally wind up with the structure needed for the intended meaning, but this often follows an initial incorrect parse whose repair is revealed in reading measures (Frazier & Rayner, 1982).
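The two competing structures can be written out explicitly. The sketch below (a toy representation we constructed, not Frazier and Rayner’s parsing model) encodes both attachments of “with binoculars” as nested phrase trees and counts phrase nodes as a crude proxy for a simplicity preference; the verb attachment comes out structurally simpler.

```python
# Illustrative sketch (not from the chapter): the two attachment options for
# "The spy saw the cop with binoculars", written as nested (label, children)
# tuples. The node-counting heuristic is a crude stand-in for a simplicity
# strategy such as minimal attachment.

VP_ATTACHMENT = ("S", [("NP", ["the spy"]),
                       ("VP", [("V", ["saw"]),
                               ("NP", ["the cop"]),
                               ("PP", ["with binoculars"])])])   # instrument of seeing

NP_ATTACHMENT = ("S", [("NP", ["the spy"]),
                       ("VP", [("V", ["saw"]),
                               ("NP", [("NP", ["the cop"]),
                                       ("PP", ["with binoculars"])])])])  # the cop has binoculars

def count_nodes(tree):
    """Count phrase nodes; fewer nodes = the 'simpler' structure on this heuristic."""
    if isinstance(tree, str):
        return 0
    label, children = tree
    return 1 + sum(count_nodes(child) for child in children)

print(count_nodes(VP_ATTACHMENT), count_nodes(NP_ATTACHMENT))  # 6 7
```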
These structure‐building processes are in the fast current of reading supported by multiple knowledge sources and co‐occurring with semantic integration processes. The result is the continuous updating of the reader’s situation model.
Incremental comprehension: Integration and prediction.
To the extent possible, readers integrate the meaning of each word into their ongoing representation of the text. These incremental processes use information momentarily accessible from different knowledge sources (linguistic knowledge, prior text knowledge, general knowledge). The integration of word meaning with text meaning – word‐to‐text integration – is the connection point of the word‐identification and comprehension systems, supported by knowledge systems with the lexicon playing a special role (Perfetti & Stafura, 2014). The fast currents of reading benefit from the force of these inputs, which ordinarily combine for smooth comprehension.
Methods with high temporal resolution are needed to observe these rapid integration processes. Event‐Related Potentials (ERPs) can reflect the temporal unfolding of multiple processes during the reading of a single word in a text. Reading a word produces ERP indicators of visual attention (P1), orthographic processing (N170), text‐related word meaning processes (N400), and memory‐related text processes (P600 or Late Positivity component (LPC)) (Luck & Kappenman, 2011). Meaning‐retrieval and early integration processes are observed in the 300–500 ms time window spanning the N400, and additional integration and updating processes are observed in the 500–700 ms window of the P600. The N400 has been considered an indicator of semantic fit between a word and its context since the benchmark study of Kutas and Hillyard (1980). They found that in sentence contrasts such as “He spread the warm bread with butter/socks,” a more negative N400 occurred on the contextually inappropriate “socks.” Countless studies since have confirmed the N400 as an indicator of word meaning processing in relation to context (Kutas & Federmeier, 2011). A specific interpretation is that it is an early indicator of meaning‐based word‐to‐text integration (e.g., Nieuwland & Van Berkum, 2006; Stafura & Perfetti, 2014). An alternative proposal is that the N400 indicates only word meaning retrieval, while the word’s integration with text meaning occurs later, indexed by the P600 or Late Positivity component (Brouwer et al., 2012; Delogu et al., 2019).
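As a rough illustration of how such window‐based effects are quantified, the sketch below (an assumed analysis run on simulated data; the sampling rate, single channel, and amplitudes are invented) computes mean amplitudes in the 300–500 ms (N400) and 500–700 ms (P600/LPC) windows and takes the anomalous‐minus‐congruent difference.

```python
# Minimal sketch (assumed analysis, not described in the chapter): quantify an
# N400-like effect as the mean amplitude in the 300-500 ms window, and a
# P600/LPC-like effect in the 500-700 ms window, for simulated single-channel
# ERPs time-locked to word onset. All numbers are made up.
import numpy as np

SRATE = 500                                   # Hz; one sample every 2 ms
times = np.arange(-0.2, 0.8, 1 / SRATE)       # -200 ms to +800 ms around word onset

def window_mean(erp, lo, hi):
    """Mean amplitude (microvolts) of an ERP within a latency window (seconds)."""
    mask = (times >= lo) & (times < hi)
    return erp[mask].mean()

rng = np.random.default_rng(0)
erp_congruent = rng.normal(0.0, 1.0, times.size)      # e.g., "...bread with butter"
erp_anomalous = rng.normal(0.0, 1.0, times.size)      # e.g., "...bread with socks"
erp_anomalous[(times >= 0.3) & (times < 0.5)] -= 4.0  # inject a more negative N400

n400_effect = window_mean(erp_anomalous, 0.3, 0.5) - window_mean(erp_congruent, 0.3, 0.5)
p600_effect = window_mean(erp_anomalous, 0.5, 0.7) - window_mean(erp_congruent, 0.5, 0.7)
print(f"N400 effect: {n400_effect:.2f} uV, P600 effect: {p600_effect:.2f} uV")
```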
Most ERP results in text comprehension reflect within‐sentence effects, with measures taken on words at the end of sentences and sometimes in the middle. Examining words at the beginning of a sentence provides a clearer focus on text effects beyond within‐sentence effects. At the beginning of a sentence, the reader must open a new structure (e.g., a sentence, a noun phrase), where the only integration possible is with the prior text. The general conclusion from sentence‐beginning studies is that integration occurs only when the word being read prompts retrieval of a text memory (Perfetti & Helder, 2020). When they occur, these integration effects result from co‐referential binding with meanings from the preceding sentence (Stafura & Perfetti, 2014), with an additional boost possible from global text meaning (Helder et al., 2020). Finally, although prediction effects are often found on words within sentences, Calloway and Perfetti (2017) found no role for word prediction at sentence beginnings when the (rated) integrability of a word into the text was controlled.
Prediction has become a central idea in explanations of comprehension. At first glance, prediction and integration seem to be opposite mechanisms: prediction is an anticipatory, forward‐looking process, whereas integration is memory‐based. In theoretical treatments, however, prediction has lost its connection to everyday usage and has been given a much broader scope than the prediction of specific words. Kuperberg and Jaeger (2016) argued that predictive processes operate continuously during reading, influenced by multiple levels of linguistic units that pre‐activate meaning features at these different levels, rather than specific words. If we understand prediction in this broad sense, we can capture the complementary contributions of prediction and integration: The basic process is memory‐based integration occurring in overlapping phases. Reading a word can retrieve a text memory, initiating the integration processes that support coherence. This memory process is facilitated by the accessibility of meaning features that have been pre‐activated (“predicted”) by prior text meanings (Perfetti & Helder, 2020). This account removes prediction as a special process and appears consistent with a large‐scale replication study suggesting that incremental processing can be interpreted as a “cascade of processes” comprising activation and integration of word meanings in their context (Nieuwland et al., 2020), and with other attempts to reframe “prediction” (Ferreira & Chantavarin, 2019; for reviews see Hauk, 2016; Nieuwland, 2019).
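The toy sketch below (our illustration of this broad sense of prediction, not a formal model from Kuperberg and Jaeger, 2016, or Perfetti and Helder, 2020) treats pre‐activation as a weighted set of meaning features primed by the prior text; integrating the incoming word is easier the more its features overlap with that set. All features and weights are invented.

```python
# Toy sketch (illustrative only): "prediction" as graded pre-activation of
# meaning features, and integration ease as the overlap between those features
# and the features of the word actually read. Features and weights are invented.

PREACTIVATED = {"food": 0.9, "spreadable": 0.8, "breakfast": 0.6}  # primed by "spread the warm bread with ..."

WORD_FEATURES = {
    "butter": {"food", "spreadable", "dairy"},
    "socks":  {"clothing", "wearable"},
}

def integration_ease(word):
    """Sum the pre-activation of the word's features that the context already primed."""
    return sum(PREACTIVATED.get(feature, 0.0) for feature in WORD_FEATURES[word])

for word in ("butter", "socks"):
    print(word, round(integration_ease(word), 2))
# butter 1.7  -> high feature overlap, easy integration (reduced N400)
# socks 0.0   -> no overlap, harder integration (larger N400)
```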
What neuroimaging studies add to comprehension research.
Our conclusion on the contribution of neuroimaging results is brief: their contribution to comprehension theory has so far been limited, especially for the comprehension of texts longer than one or two sentences. Early neuroimaging studies identified brain regions associated with reading narrative texts (e.g., Xu et al., 2005; Yarkoni et al., 2008) and correlated brain activation with behavioral measures of comprehension – for example, detection of coherence breaks (e.g., Ferstl et al., 2005; Hasson et al., 2007) and inference generation (Kuperberg et al., 2006; Virtue et al., 2006). A general conclusion is that text comprehension, beyond sentence comprehension, involves an extension of the language network (Ferstl et al., 2008). This network includes the left‐lateralized language areas in the frontal and temporal lobes identified in sentence comprehension, plus extensions to the anterior temporal pole, the prefrontal area, and the right hemisphere. These additional areas are broadly associated with semantic processing, executive functioning and inferencing, and coherence building and non‐literal meaning, respectively (Ferstl et al., 2008).