appeal. There are deep concerns about literacy levels in the United States, United Kingdom, and many other countries, and great interest in using the “science of reading” to improve instruction and outcomes (Seidenberg et al., 2020). A theory of visual word recognition could contribute to improved educational practices but only if the theory is correct and speaks to relevant issues about how children learn.

      The dual‐route theory is an account of reading aloud. The two routes refer to procedures thought to be necessary and sufficient for generating pronunciations from print. The procedures involve orthography and phonology but not semantics, which is used in reading aloud only as a compensatory strategy in acquired dyslexia (Coltheart, 2006). Pritchard et al. (2018) also incorporated semantics in their account of learning to read.
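
      To make the architecture concrete, here is a minimal sketch of the two‐route idea in Python: a lexical route that looks up stored whole‐word pronunciations and a sublexical route that assembles a pronunciation from grapheme‐phoneme correspondence (GPC) rules, with a flag raised when the two outputs conflict. The toy lexicon, rule set, and pronunciations are invented for illustration; this is not the implemented DRC model.

```python
# Minimal sketch of the dual-route idea (illustrative only, not the DRC model).
# A lexical route looks up whole-word pronunciations; a sublexical route applies
# grapheme-phoneme correspondence (GPC) rules. Lexicon and rules are toy examples.

LEXICON = {            # lexical route: stored whole-word pronunciations
    "wade": "/weɪd/",
    "have": "/hæv/",   # exception: violates the -ave "rule"
    "wave": "/weɪv/",
}

GPC_RULES = [          # sublexical route: a few toy spelling-sound rules
    ("ave", "eɪv"), ("ade", "eɪd"),
    ("w", "w"), ("h", "h"), ("m", "m"), ("d", "d"), ("v", "v"),
]

def sublexical_route(spelling: str) -> str:
    """Assemble a pronunciation from GPC rules, left to right, longest rule first."""
    phonemes, i = [], 0
    rules = sorted(GPC_RULES, key=lambda r: -len(r[0]))
    while i < len(spelling):
        for graph, phon in rules:
            if spelling.startswith(graph, i):
                phonemes.append(phon)
                i += len(graph)
                break
        else:
            i += 1  # skip any letter the toy rule set cannot handle
    return "/" + "".join(phonemes) + "/"

def name_aloud(spelling: str) -> str:
    lexical = LEXICON.get(spelling)          # known words: whole-word lookup
    sublexical = sublexical_route(spelling)  # any letter string: assembled by rule
    if lexical is None:
        return f"{spelling}: nonword, sublexical output {sublexical}"
    if lexical == sublexical:
        return f"{spelling}: routes agree on {lexical}"
    return f"{spelling}: routes conflict ({lexical} vs. {sublexical}) -> slower naming"

for w in ["wade", "have", "mave"]:
    print(name_aloud(w))
```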

      Here, we focus on modeling results that illustrate four types of problems with DRC models:

       1. Simulations of “benchmark” studies that were said to reproduce an effect (e.g., the frequency × regularity interaction) deviated from the behavioral findings in important ways.

       2. The models consistently missimulated the results of other studies of the same phenomena, but these were not reported, creating a modeling version of a “file‐drawer problem” (Simmons et al., 2011).

       3. The models exhibited other anomalous behaviors that were not discussed.

       4. The models did not address prominent phenomena that contradict the approach.

       Regularity effects

      Early studies showed that even for skilled adult readers, exception words produce longer naming latencies than regular words (Baron & Strawson, 1976), termed the regularity effect. Seidenberg et al. (1984) discovered an important additional fact: regularity interacts with frequency. Whereas higher frequency regular and exception words are read equally rapidly (and accurately), lower frequency exception words take longer than lower frequency regular words. This is a well‐replicated effect. It occurs with other types of linguistic information (e.g., Juliano & Tanenhaus, 1994; Pearlmutter & MacDonald, 1995) and reflects a general fact about cognition: The impact of atypical structure can be overcome with sufficient experience, even as it continues to affect performance on less common forms.

      Seidenberg and McClelland (1989) simulated several studies that yielded this interaction, showing that it arises in a network in which repetitions of word and subword patterns create a continuum of spelling‐sound consistency; see Plaut et al. (1996) for a formal analysis and further predictions regarding the use of semantics. The DRC models were attempts to replicate these effects (Coltheart et al., 2001). Within this framework, the basic regularity effect occurs because both routes yield the same pronunciations for regular words, but conflicting pronunciations for exceptions, which slows processing. The frequency × regularity interaction is explained by assuming that higher frequency regular and exception words are processed equally rapidly by the lexical route, but for lower frequency exception words, the outputs of the two routes conflict, yielding longer latencies than for regular words.
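
      As a schematic rendering of this verbal account (not the cascaded, interactive DRC implementation), the toy calculation below assigns invented latencies to the two routes and charges a conflict penalty only when the sublexical output arrives in time to interfere; the numbers are arbitrary, but they reproduce the interaction pattern just described.

```python
# Schematic rendering of the verbal account above (not the cascaded DRC
# implementation). All latencies and the conflict penalty are invented values
# chosen only to show how the interaction pattern can fall out of the story.

def naming_latency(high_frequency: bool, exception: bool) -> int:
    lexical_time = 400 if high_frequency else 500  # lexical lookup is faster for frequent words
    sublexical_time = 480                          # rule-based assembly, insensitive to frequency
    conflict_penalty = 60                          # extra time to resolve conflicting pronunciations
    # An exception word is slowed only if the conflicting sublexical output
    # is ready before the lexical lookup has finished.
    if exception and sublexical_time <= lexical_time:
        return lexical_time + conflict_penalty
    return lexical_time

for high_frequency in (True, False):
    for exception in (False, True):
        kind = ("high" if high_frequency else "low") + "-frequency " + \
               ("exception" if exception else "regular")
        print(f"{kind:<25} {naming_latency(high_frequency, exception)} ms")
# high-frequency regular and exception words come out equal (400 ms);
# low-frequency exceptions (560 ms) are slower than low-frequency regulars (500 ms).
```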

      The DRC model also erroneously produces regularity effects for both high‐ and low‐frequency words when tested on materials from other studies (e.g., Taraban & McClelland, 1987; Seidenberg et al., 1984; Seidenberg, 1985). Simulations of these studies were not reported, however. Reporting the simulation of one “benchmark” study said to produce the desired outcome, while omitting the missimulations of other studies of the same phenomena, is similar to the “file‐drawer problem” in other types of research.

       Consistency effects

      In the dual‐route approach, words are either regular (rule‐governed) or exceptions (rule‐violating). In connectionist models, words differ in their degree of spelling‐sound consistency. This is a fundamental difference between the theories and it is important to establish which claim about the English writing system is correct.

      The theoretically critical comparisons were identified in Glushko’s original study. In the dual‐route theory, wade is regular and have is an exception. What is wave? In the dual‐route approach it is rule‐governed, allowing generalization to nonwords such as mave. However, wave has a close, high‐frequency, irregular neighbor have. ‐ave is therefore less consistent than ‐ade. Glushko found that, for skilled college‐student readers, “regular but inconsistent” words such as wave yielded longer naming latencies than regular and consistent words such as wade. This is a crucial finding. Both types of words are regular/rule‐governed according to DRC and should therefore behave alike, but they do not. Accounting for the unexpected impact of words such as have on rule‐governed words such as wave therefore presented a significant challenge. Later studies showed that consistency effects are modulated by word frequency and reading skill: for younger and weaker readers they occur for higher frequency words such as gave, but with increases in reading skill, the effects are limited to less common words (Backman et al., 1984; Seidenberg, 1985; Waters et al., 1984). Consistency effects also occur for nonwords (Glushko, 1979). The SM89 model accurately simulated several studies of consistency effects, including the effects of frequency and reading skill, and nonword consistency. The effects arise because the weights on connections in the network reflect the aggregate impact of exposure to words. Learning about have shifts the weights away from values that are optimal for pronouncing wave. That yields slower pronunciation times than for words and nonwords with highly consistent spelling patterns such as wade and nade. The effects are graded, varying in degree, rather than categorical, as in the rule‐governed versus exception distinction.
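
      The graded account can be caricatured in a few lines of Python: a single frequency‐weighted value per word body, updated by a delta rule, stands in for the distributed weights of the actual networks. The word list, frequencies, and learning parameters below are invented; the point is only that a high‐frequency exception such as have keeps the shared ‐ave value from converging on the regular pronunciation, whereas ‐ade, with no exceptions, does converge.

```python
# Minimal sketch of graded spelling-sound consistency (illustrative only; not
# the SM89 or Plaut et al. networks). One shared "support for the regular vowel"
# value per word body (rime) is nudged by a delta rule, with higher-frequency
# words nudging it more strongly. Words, frequencies, and rates are invented.

TRAINING = [                      # (word, rime, takes the regular vowel?, relative frequency)
    ("wade", "ade", True,  20),
    ("made", "ade", True,  60),
    ("have", "ave", False, 90),   # high-frequency exception pulls -ave away from the regular vowel
    ("wave", "ave", True,  30),
    ("gave", "ave", True,  50),
]

def train(epochs: int = 200, rate: float = 0.05) -> dict:
    weights = {"ade": 0.5, "ave": 0.5}        # shared consistency weight per rime
    for _ in range(epochs):
        for _, rime, regular, freq in TRAINING:
            target = 1.0 if regular else 0.0
            weights[rime] += rate * (freq / 100) * (target - weights[rime])
    return weights

for rime, support in train().items():
    print(f"-{rime}: support for the regular vowel = {support:.2f}")
# -ade converges near 1.0 (fully consistent), while -ave settles well below it,
# the graded analogue of slower naming for wave than for wade.
```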

