Vágvölgy, R., Bergström, K., Bulajic, A., Rüsseler, J., Fernandes, T., Grosche, M., Klatte, M., Huettig, F., & Lachmann, T. (in press). The cognitive profile of adults with low literacy skills in alphabetic orthographies: A systematic review and comparison with developmental dyslexia. Educational Research Review.
Abstract
Dealing with text is crucial in modern societies. However, not everyone acquires sufficient literacy skills during school education. This systematic review summarizes and synthesizes research on adults with low literacy skills (ALLS) in alphabetic writing systems, includes results from behavioral and neurobiological studies, and compares these findings with those on developmental dyslexia, given that this developmental disorder is one possible explanation for low literacy skills in adulthood. Twenty-seven studies focusing on the cognitive profile of ALLS met the three predefined criteria of reading level, age, and education. Results showed that ALLS performed worse than literate adults in various tasks at the skill and information processing levels, and exhibited structural and functional differences at the neurobiological level. The cognitive profile of ALLS was closer to that of primary school children than to that of literate adults. However, relative to children, ALLS’ literacy skills relied less on phonological and more on orthographic strategies. A narrative comparison of results with meta-analyses on developmental dyslexia showed large, though not complete, overlap in the cognitive profiles. The present results help to better understand the literacy skills and reading-related cognitive functions of ALLS and may support the development of tailored interventions targeting the specific cognitive difficulties ALLS have. -
Baths, V., Jartarkar, M., Sood, S., Lewis, A. G., Ostarek, M., & Huettig, F. (2024). Testing the involvement of low-level visual representations during spoken word processing with non-Western students and meditators practicing Sudarshan Kriya Yoga. Brain Research, 1838: 148993. doi:10.1016/j.brainres.2024.148993.
Abstract
Previous studies, using the Continuous Flash Suppression (CFS) paradigm, observed that (Western) university students are better able to detect otherwise invisible pictures of objects when they are presented with the corresponding spoken word shortly before the picture appears. Here we attempted to replicate this effect with non-Western university students in Goa (India). A second aim was to explore the performance of (non-Western) meditators practicing Sudarshan Kriya Yoga in Goa in the same task. Some previous literature suggests that meditators may excel in some tasks that tap visual attention, for example by exercising better endogenous and exogenous control of visual awareness than non-meditators. The present study replicated the finding that congruent spoken cue words lead to significantly higher detection sensitivity than incongruent cue words in non-Western university students. Our exploratory meditator group also showed this detection effect but both frequentist and Bayesian analyses suggest that the practice of meditation did not modulate it. Overall, our results provide further support for the notion that spoken words can activate low-level category-specific visual features that boost the basic capacity to detect the presence of a visual stimulus that has those features. Further research is required to conclusively test whether meditation can modulate visual detection abilities in CFS and similar tasks. -
Huettig, F., & Christiansen, M. H. (2024). Can large language models counter the recent decline in literacy levels? An important role for cognitive science. Cognitive Science, 48(8): e13487. doi:10.1111/cogs.13487.
Abstract
Literacy is in decline in many parts of the world, accompanied by drops in associated cognitive skills (including IQ) and an increasing susceptibility to fake news. It is possible that the recent explosive growth and widespread deployment of Large Language Models (LLMs) might exacerbate this trend, but there is also a chance that LLMs can help turn things around. We argue that cognitive science is ideally suited to help steer future literacy development in the right direction by challenging and informing current educational practices and policy. Cognitive scientists have the right interdisciplinary skills to study, analyze, evaluate, and change LLMs to facilitate their critical use, to encourage turn-taking that promotes rather than hinders literacy, to support literacy acquisition in diverse and equitable ways, and to scaffold potential future changes in what it means to be literate. We urge cognitive scientists to take up this mantle—the future impact of LLMs on human literacy skills is too important to be left to the large, predominantly US-based tech companies. -
Araujo, S., Narang, V., Misra, D., Lohagun, N., Khan, O., Singh, A., Mishra, R. K., Hervais-Adelman, A., & Huettig, F. (2023). A literacy-related color-specific deficit in rapid automatized naming: Evidence from neurotypical completely illiterate and literate adults. Journal of Experimental Psychology: General, 152(8), 2403-2409. doi:10.1037/xge0001376.
Abstract
There is a robust positive relationship between reading skills and the time taken to name aloud an array of letters, digits, objects, or colors as quickly as possible. A convincing and complete explanation for the direction and locus of this association remains, however, elusive. In this study we investigated rapid automatized naming (RAN) of everyday objects and basic color patches in neurotypical illiterate and literate adults. Literacy acquisition and education enhanced RAN performance for both conceptual categories, but this advantage was much larger for (abstract) colors than for everyday objects. This result suggests that (i) literacy/education may be causal for serial rapid naming ability of non-alphanumeric items, and (ii) differences in the lexical quality of conceptual representations can underlie the reading-related differential RAN performance.
Additional information: supplementary text -
Ferreira, F., & Huettig, F. (2023). Fast and slow language processing: A window into dual-process models of cognition. [Open Peer commentary on De Neys]. Behavioral and Brain Sciences, 46: e121. doi:10.1017/S0140525X22003041.
Abstract
Our understanding of dual-process models of cognition may benefit from a consideration of language processing, as language comprehension involves fast and slow processes analogous to those used for reasoning. More specifically, De Neys's criticisms of the exclusivity assumption and the fast-to-slow switch mechanism are consistent with findings from the literature on the construction and revision of linguistic interpretations.
-
Huettig, F., Voeten, C. C., Pascual, E., Liang, J., & Hintz, F. (2023). Do autistic children differ in language-mediated prediction? Cognition, 239: 105571. doi:10.1016/j.cognition.2023.105571.
Abstract
Prediction appears to be an important characteristic of the human mind. It has also been suggested that prediction is a core difference in autistic children. Past research exploring language-mediated anticipatory eye movements in autistic children, however, has been somewhat contradictory, with some studies finding normal anticipatory processing in autistic children with low levels of autistic traits but others observing weaker prediction effects in autistic children with weaker receptive language skills. Here we investigated language-mediated anticipatory eye movements in young children who differed in the severity of their autistic traits and were in professional institutional care in Hangzhou, China. We chose the same spoken sentences (translated into Mandarin Chinese) and visual stimuli as a previous study which observed robust prediction effects in young children (Mani & Huettig, 2012) and included a control group of typically developing children. Typically developing but not autistic children showed robust prediction effects. Most interestingly, autistic children with lower communication, motor, and (adaptive) behavior scores exhibited both less predictive and less non-predictive visual attention behavior. Our results raise the possibility that differences in language-mediated anticipatory eye movements in autistic children with higher levels of autistic traits may be differences in visual attention in disguise, a hypothesis that needs further investigation.
Additional information: Raw data and analysis code are available on OSF -
Huettig, F., & Ferreira, F. (2023). The myth of normal reading. Perspectives on Psychological Science, 18(4), 863-870. doi:10.1177/17456916221127226.
Abstract
We argue that the educational and psychological sciences must embrace the diversity of reading rather than chase the phantom of normal reading behavior. We critically discuss the research practice of asking participants in experiments to read “normally”. We then draw attention to the large cross-cultural and linguistic diversity around the world and consider the enormous diversity of reading situations and goals. Finally, we observe that people bring a huge diversity of brains and experiences to the reading task. This leads to certain implications. First, there are important lessons for how to conduct psycholinguistic experiments. Second, we need to move beyond Anglo-centric reading research and produce models of reading that reflect the large cross-cultural diversity of languages and types of writing systems. Third, we must acknowledge that there are multiple ways of reading and reasons for reading, and none of them is normal or better or a “gold standard”. Finally, we must stop stigmatizing individuals who read differently and for different reasons, and there should be increased focus on teaching the ability to extract information relevant to the person’s goals. What is important is not how well people decode written language and how fast people read but what people comprehend given their own stated goals. -
Hervais-Adelman, A., Kumar, U., Mishra, R., Tripathi, V., Guleria, A., Singh, J. P., & Huettig, F. (2022). How does literacy affect speech processing? Not by enhancing cortical responses to speech, but by promoting connectivity of acoustic-phonetic and graphomotor cortices. Journal of Neuroscience, 42(47), 8826-8841. doi:10.1523/JNEUROSCI.1125-21.2022.
Abstract
Previous research suggests that literacy, specifically learning alphabetic letter-to-phoneme mappings, modifies online speech processing and enhances brain responses, as indexed by the blood-oxygenation level dependent (BOLD) signal, to speech in auditory areas associated with phonological processing (Dehaene et al., 2010). However, alphabets are not the only orthographic systems in use in the world, and hundreds of millions of individuals speak languages that are not written using alphabets. In order to claim that literacy per se has broad and general consequences for brain responses to speech, one must seek confirmatory evidence from non-alphabetic literacy. To this end, we conducted a longitudinal fMRI study in India probing the effect of literacy in Devanagari, an abugida, on functional connectivity and cerebral responses to speech in 91 variously literate Hindi-speaking male and female human participants. Twenty-two completely illiterate participants underwent six months of reading and writing training. Devanagari literacy increases functional connectivity between acoustic-phonetic and graphomotor brain areas, but we find no evidence that literacy changes brain responses to speech, either in cross-sectional or longitudinal analyses. These findings show that a dramatic reconfiguration of the neurofunctional substrates of online speech processing may not be a universal result of learning to read, and suggest that the influence of writing on speech processing should also be investigated. -
Huettig, F., Audring, J., & Jackendoff, R. (2022). A parallel architecture perspective on pre-activation and prediction in language processing. Cognition, 224: 105050. doi:10.1016/j.cognition.2022.105050.
Abstract
A recent trend in psycholinguistic research has been to posit prediction as an essential function of language processing. The present paper develops a linguistic perspective on viewing prediction in terms of pre-activation. We describe what predictions are and how they are produced. Our basic premises are that (a) no prediction can be made without knowledge to support it; and (b) it is therefore necessary to characterize the precise form of that knowledge, as revealed by a suitable theory of linguistic representations. We describe the Parallel Architecture (PA: Jackendoff, 2002; Jackendoff and Audring, 2020), which makes explicit our commitments about linguistic representations, and we develop an account of processing based on these representations. Crucial to our account is that what have been traditionally treated as derivational rules of grammar are formalized by the PA as lexical items, encoded in the same format as words. We then present a theory of prediction in these terms: linguistic input activates lexical items whose beginning (or incipit) corresponds to the input encountered so far; and prediction amounts to pre-activation of the as yet unheard parts of those lexical items (the remainder). Thus the generation of predictions is a natural byproduct of processing linguistic representations. We conclude that the PA perspective on pre-activation provides a plausible account of prediction in language processing that bridges linguistic and psycholinguistic theorizing. -
Lee, R., Chambers, C. G., Huettig, F., & Ganea, P. A. (2022). Children’s and adults’ use of fictional discourse and semantic knowledge for prediction in language processing. PLoS One, 17(4): e0267297. doi:10.1371/journal.pone.0267297.
Abstract
Using real-time eye-movement measures, we asked how a fantastical discourse context competes with stored representations of real-world events to influence the moment-by-moment interpretation of a story by 7-year-old children and adults. Seven-year-olds were less effective at bypassing stored real-world knowledge during real-time interpretation than adults. Our results suggest that children privilege stored semantic knowledge over situation-specific information presented in a fictional story context. We suggest that 7-year-olds’ canonical semantic and conceptual relations are so strongly rooted in statistical patterns in language, consolidated over time, that they overwhelm new and unexpected information even when the latter is fantastical and highly salient.
Additional information: Data availability -
Liu, Y., Hintz, F., Liang, J., & Huettig, F. (2022). Prediction in challenging situations: Most bilinguals can predict upcoming semantically-related words in their L1 source language when interpreting. Bilingualism: Language and Cognition, 25(5), 801-815. doi:10.1017/S1366728922000232.
Abstract
Prediction is an important part of language processing. An open question is to what extent people predict language in challenging circumstances. Here we tested the limits of prediction by asking bilingual Dutch native speakers to interpret Dutch sentences into their English counterparts. In two visual world experiments, we recorded participants’ eye movements to co-present visual objects while they engaged in interpreting tasks (consecutive and simultaneous interpreting). Most participants showed anticipatory eye movements to semantically-related upcoming target words in their L1 source language during both consecutive and simultaneous interpretation. A quarter of participants during simultaneous interpretation however did not move their eyes, an extremely unusual participant behaviour in visual world studies. Overall, the findings suggest that most people predict in the source language under challenging interpreting situations. Further work is required to understand the causes of the absence of (anticipatory) eye movements during simultaneous interpretation in a substantial subset of individuals. -
Morey, R. D., Kaschak, M. P., Díez-Álamo, A. M., Glenberg, A. M., Zwaan, R. A., Lakens, D., Ibáñez, A., García, A., Gianelli, C., Jones, J. L., Madden, J., Alifano, F., Bergen, B., Bloxsom, N. G., Bub, D. N., Cai, Z. G., Chartier, C. R., Chatterjee, A., Conwell, E., Cook, S. W., Davis, J. D., Evers, E., Girard, S., Harter, D., Hartung, F., Herrera, E., Huettig, F., Humphries, S., Juanchich, M., Kühne, K., Lu, S., Lynes, T., Masson, M. E. J., Ostarek, M., Pessers, S., Reglin, R., Steegen, S., Thiessen, E. D., Thomas, L. E., Trott, S., Vandekerckhove, J., Vanpaemel, W., Vlachou, M., Williams, K., & Ziv-Crispel, N. (2022). A pre-registered, multi-lab non-replication of the Action-sentence Compatibility Effect (ACE). Psychonomic Bulletin & Review, 29, 613-626. doi:10.3758/s13423-021-01927-8.
Abstract
The Action-sentence Compatibility Effect (ACE) is a well-known demonstration of the role of motor activity in the comprehension of language. Participants are asked to make sensibility judgments on sentences by producing movements toward the body or away from the body. The ACE is the finding that movements are faster when the direction of the movement (e.g., toward) matches the direction of the action in the to-be-judged sentence (e.g., “Art gave you the pen” describes action toward you). We report on a pre-registered, multi-lab replication of one version of the ACE. The results show that none of the 18 labs involved in the study observed a reliable ACE, and that the meta-analytic estimate of the size of the ACE was essentially zero. -
Onnis, L., Lim, A., Cheung, S., & Huettig, F. (2022). Is the mind inherently predicting? Exploring forward and backward looking in language processing. Cognitive Science, 46(10): e13201. doi:10.1111/cogs.13201.
Abstract
Prediction is one characteristic of the human mind. But what does it mean to say the mind is a ‘prediction machine’ and inherently forward looking, as is frequently claimed? In natural languages, many contexts are not easily predictable in a forward fashion. In English, for example, many frequent verbs do not carry unique meaning on their own, but instead rely on another word or words that follow them to become meaningful. Upon reading “take a”, the processor often cannot easily predict “walk” as the next word. But the system can ‘look back’ and integrate “walk” more easily when it follows “take a” (e.g., as opposed to “make|get|have a walk”). In the present paper we provide further evidence for the importance of both forward and backward looking in language processing. In two self-paced reading tasks and an eye-tracking reading task, we found evidence that adult English native speakers’ sensitivity to words’ forward and backward conditional probability significantly explained variance in reading times over and above psycholinguistic predictors of reading latencies. We conclude that both forward looking and backward looking (prediction and integration) appear to be important characteristics of language processing. Our results thus suggest that it makes just as much sense to call the mind an ‘integration machine’, which is inherently backward looking.
Additional information: Open Data and Open Materials -
Araújo, S., Huettig, F., & Meyer, A. S. (2021). What underlies the deficit in rapid automatized naming (RAN) in adults with dyslexia? Evidence from eye movements. Scientific Studies of Reading, 25(6), 534-549. doi:10.1080/10888438.2020.1867863.
Abstract
This eye-tracking study explored how phonological encoding and speech production planning for successive words are coordinated in adult readers with dyslexia (N = 22) and control readers (N = 25) during rapid automatized naming (RAN). Using an object-RAN task, we orthogonally manipulated the word-form frequency and phonological neighborhood density of the object names and assessed the effects on speech and eye movements and their temporal coordination. In both groups, there was a significant interaction between word frequency and neighborhood density: shorter fixations for dense than for sparse neighborhoods were observed for low-, but not for high-frequency words. This finding does not suggest a specific difficulty in lexical phonological access in dyslexia. However, in readers with dyslexia only, these lexical effects percolated to the late processing stages, indicated by longer offset eye-speech lags. We close by discussing potential reasons for this finding, including suboptimal specification of phonological representations and deficits in attention control or in multi-item coordination.
-
Arunkumar, M., Van Paridon, J., Ostarek, M., & Huettig, F. (2021). Do illiterates have illusions? A conceptual (non)replication of Luria (1976). Journal of Cultural Cognitive Science, 5, 143-158. doi:10.1007/s41809-021-00080-x.
Abstract
Luria (1976) famously observed that people who never learnt to read and write do not perceive visual illusions. We conducted a conceptual replication of the Luria study of the effect of literacy on the processing of visual illusions. We designed two carefully controlled experiments with 161 participants with varying literacy levels, ranging from complete illiterates to high literates, in Chennai, India. Accuracy and reaction time in the identification of two types of visual shape and color illusions and the identification of appropriate control images were measured. Separate statistical analyses of Experiments 1 and 2 as well as pooled analyses of both experiments do not provide any support for the notion that literacy affects the perception of visual illusions. Our large-sample, carefully controlled study strongly suggests that literacy does not meaningfully affect the identification of visual illusions and raises some questions about other reports of cultural effects on illusion perception. -
Eviatar, Z., & Huettig, F. (Eds.). (2021). Literacy and writing systems [Special Issue]. Journal of Cultural Cognitive Science. -
Eviatar, Z., & Huettig, F. (2021). The literate mind. Journal of Cultural Cognitive Science, 5, 81-84. doi:10.1007/s41809-021-00086-5.
-
Favier, S., & Huettig, F. (2021). Are there core and peripheral syntactic structures? Experimental evidence from Dutch native speakers with varying literacy levels. Lingua, 251: 102991. doi:10.1016/j.lingua.2020.102991.
Abstract
Some theorists posit the existence of a ‘core’ grammar that virtually all native speakers acquire, and a ‘peripheral’ grammar that many do not. We investigated the viability of such a categorical distinction in the Dutch language. We first consulted linguists’ intuitions as to the ‘core’ or ‘peripheral’ status of a wide range of grammatical structures. We then tested a selection of core- and peripheral-rated structures on naïve participants with varying levels of literacy experience, using grammaticality judgment as a proxy for receptive knowledge. Overall, participants demonstrated better knowledge of ‘core’ structures than ‘peripheral’ structures, but the considerable variability within these categories was strongly suggestive of a continuum rather than a categorical distinction between them. We also hypothesised that individual differences in the knowledge of core and peripheral structures would reflect participants’ literacy experience. This was supported only by a small trend in our data. The results fit best with the notion that more frequent syntactic structures are mastered by more people than infrequent ones and challenge the received sense of a categorical core-periphery distinction. -
Favier, S., Meyer, A. S., & Huettig, F. (2021). Literacy can enhance syntactic prediction in spoken language processing. Journal of Experimental Psychology: General, 150(10), 2167-2174. doi:10.1037/xge0001042.
Abstract
Language comprehenders can use syntactic cues to generate predictions online about upcoming language. Previous research with reading-impaired adults and healthy, low-proficiency adult and child learners suggests that reading skills are related to prediction in spoken language comprehension. Here we investigated whether differences in literacy are also related to predictive spoken language processing in non-reading-impaired proficient adult readers with varying levels of literacy experience. Using the visual world paradigm enabled us to measure prediction based on syntactic cues in the spoken sentence, prior to the (predicted) target word. Literacy experience was found to be the strongest predictor of target anticipation, independent of general cognitive abilities. These findings suggest that (a) experience with written language can enhance syntactic prediction of spoken language in normal adult language users, and (b) processing skills can be transferred to related tasks (from reading to listening) if the domains involve similar processes (e.g., predictive dependencies) and representations (e.g., syntactic).
Additional information: Online supplementary material -
Favier, S., & Huettig, F. (2021). Long-term written language experience affects grammaticality judgments and usage but not priming of spoken sentences. Quarterly Journal of Experimental Psychology, 74(8), 1378-1395. doi:10.1177/17470218211005228.
Abstract
‘Book language’ offers a richer linguistic experience than typical conversational speech in terms of its syntactic properties. Here, we investigated the role of long-term syntactic experience on syntactic knowledge and processing. In a pre-registered study with 161 adult native Dutch speakers with varying levels of literacy, we assessed the contribution of individual differences in written language experience to offline and online syntactic processes. Offline syntactic knowledge was assessed as accuracy in an auditory grammaticality judgment task in which we tested violations of four Dutch grammatical norms. Online syntactic processing was indexed by syntactic priming of the Dutch dative alternation, using a comprehension-to-production priming paradigm with auditory presentation. Controlling for the contribution of non-verbal IQ, verbal working memory, and processing speed, we observed a robust effect of literacy experience on the detection of grammatical norm violations in spoken sentences, suggesting that exposure to the syntactic complexity and diversity of written language has specific benefits for general (modality-independent) syntactic knowledge. We replicated previous results by finding robust comprehension-to-production structural priming, both with and without lexical overlap between prime and target. Although literacy experience affected the usage of syntactic alternates in our large sample, it did not modulate their priming. We conclude that amount of experience with written language increases explicit awareness of grammatical norm violations and changes the usage of (PO vs. DO) dative spoken sentences but has no detectable effect on their implicit syntactic priming in proficient language users. These findings constrain theories about the effect of long-term experience on syntactic processing. -
Fernandes, T., Arunkumar, M., & Huettig, F. (2021). The role of the written script in shaping mirror-image discrimination: Evidence from illiterate, Tamil literate, and Tamil-Latin-alphabet bi-literate adults. Cognition, 206: 104493. doi:10.1016/j.cognition.2020.104493.
Abstract
Learning a script with mirrored graphs (e.g., d ≠ b) requires overcoming the evolutionary-old perceptual tendency to process mirror images as equivalent. Thus, breaking mirror invariance offers an important tool for understanding cultural re-shaping of evolutionarily ancient cognitive mechanisms. Here we investigated the role of script (i.e., presence vs. absence of mirrored graphs: Latin alphabet vs. Tamil) by revisiting mirror-image processing by illiterate, Tamil monoliterate, and Tamil-Latin-alphabet bi-literate adults. Participants performed two same-different tasks (one orientation-based, another shape-based) on Latin-alphabet letters. Tamil monoliterates were significantly better than illiterates and showed good explicit mirror-image discrimination. However, only bi-literate adults fully broke mirror invariance: slower shape-based judgments for mirrored than for identical pairs and a reduced disadvantage in orientation-based over shape-based judgments of mirrored pairs. These findings suggest that learning a script with mirrored graphs is the strongest force for breaking mirror invariance.
Additional information: supplementary material -
Karaca, F., Brouwer, S., Unsworth, S., & Huettig, F. (2021). Prediction in bilingual children: The missing piece of the puzzle. In E. Kaan, & T. Grüter (Eds.), Prediction in Second Language Processing and Learning (pp. 116-137). Amsterdam: Benjamins.
Abstract
A wealth of studies has shown that more proficient monolingual speakers are better at predicting upcoming information during language comprehension. Similarly, prediction skills of adult second language (L2) speakers in their L2 have also been argued to be modulated by their L2 proficiency. How exactly language proficiency and prediction are linked, however, is yet to be systematically investigated. One group of language users which has the potential to provide invaluable insights into this link is bilingual children. In this paper, we compare bilingual children’s prediction skills with those of monolingual children and adult L2 speakers, and show how investigating bilingual children’s prediction skills may contribute to our understanding of how predictive processing works. -
Onnis, L., & Huettig, F. (2021). Can prediction and retrodiction explain whether frequent multi-word phrases are accessed ’precompiled’ from memory or compositionally constructed on the fly? Brain Research, 1772: 147674. doi:10.1016/j.brainres.2021.147674.
Abstract
An important debate on the architecture of the language faculty concerns the extent to which it relies on a compositional system that constructs larger units from morphemes to words to phrases to utterances on the fly and in real time using grammatical rules; or a system that chunks large preassembled, stored units of language from memory; or some combination of both approaches. Good empirical evidence exists for both ‘computed’ and ‘large stored’ forms in language, but little is known about what shapes multi-word storage/access or compositional processing. Here we explored whether predictive and retrodictive processes are a likely determinant of multi-word storage/processing. Our results suggest that forward and backward predictability are independently informative in determining the lexical cohesiveness of multi-word phrases. In addition, our results call for a reevaluation of the role of retrodiction in contemporary language processing accounts (cf. Ferreira & Chantavarin, 2018). -
Hu, Y., Lv, Q., Pascual, E., Liang, J., & Huettig, F. (2021). Syntactic priming in illiterate and literate older Chinese adults. Journal of Cultural Cognitive Science, 5, 267-286. doi:10.1007/s41809-021-00082-9.
Abstract
Does life-long literacy experience modulate syntactic priming in spoken language processing? Such a postulated influence is compatible with usage-based theories of language processing that propose that all linguistic skills are a function of accumulated experience with language across life. Here we investigated the effect of literacy experience on syntactic priming in Mandarin in sixty Chinese older adults from Hebei province. Thirty participants were completely illiterate and thirty were literate Mandarin speakers of similar age and socioeconomic background. We first observed usage differences: literates produced robustly more prepositional object (PO) constructions than illiterates. This replicates, with a different sample, language, and cultural background, previous findings that literacy experience affects (baseline) usage of PO and DO transitive alternates. We also observed robust syntactic priming for double-object (DO), but not prepositional-object (PO) dative alternations for both groups. The magnitude of this DO priming however was higher in literates than in illiterates. We also observed that cumulative adaptation in syntactic priming differed as a function of literacy. Cumulative syntactic priming in literates appears to be related mostly to comprehending others, whereas in illiterates it is also associated with repeating self-productions. Further research is needed to confirm this interpretation. -
Smith, A. C., Monaghan, P., & Huettig, F. (2021). The effect of orthographic systems on the developing reading system: Typological and computational analyses. Psychological Review, 128(1), 125-159. doi:10.1037/rev0000257.
Abstract
Orthographic systems vary dramatically in the extent to which they encode a language’s phonological and lexico-semantic structure. Studies of the effects of orthographic transparency suggest that such variation is likely to have major implications for how the reading system operates. However, such studies have been unable to examine in isolation the contributory effect of transparency on reading due to co-varying linguistic or socio-cultural factors. We first investigated the phonological properties of languages using the range of the world’s orthographic systems (alphabetic; alphasyllabic; consonantal; syllabic; logographic), and found that, once geographical proximity is taken into account, phonological properties do not relate to orthographic system. We then explored the processing implications of orthographic variation by training a connectionist implementation of the triangle model of reading on the range of orthographic systems whilst controlling for phonological and semantic structure. We show that the triangle model is effective as a universal model of reading, able to replicate key behavioural and neuroscientific results. Importantly, the model also generates new predictions deriving from an explicit description of the effects of orthographic transparency on how reading is realised and defines the consequences of orthographic systems on reading processes.
-
Speed, L., Chen, J., Huettig, F., & Majid, A. (2021). Classifier categories reflect, but do not affect conceptual organization. Journal of Experimental Psychology: Learning, Memory, and Cognition, 47(4), 625-640. doi:10.1037/xlm0000967.
Abstract
Do we structure object-related conceptual information according to real-world sensorimotor experience, or can it also be shaped by linguistic information? This study investigates whether a feature of language coded in grammar—numeral classifiers—affects the conceptual representation of objects. We compared speakers of Mandarin (a classifier language) with speakers of Dutch (a language without classifiers) on how they judged object similarity in four studies. In the first three studies, participants had to rate how similar a target object was to four comparison objects, one of which shared a classifier with the target. Objects were presented as either words or pictures. Overall, the target object was always rated as most similar to the object with the shared classifier, but this was the case regardless of the language of the participant. In a final study employing a successive pile-sorting task, we also found that the underlying object concepts were similar for speakers of Mandarin and Dutch. Speakers of a non-classifier language are therefore sensitive to the same conceptual similarities that underlie classifier systems in a classifier language. Classifier systems may therefore reflect conceptual structure, rather than shape it. -
Vágvölgyi, R., Bergström, K., Bulajić, A., Klatte, M., Fernandes, T., Grosche, M., Huettig, F., Rüsseler, J., & Lachmann, T. (2021). Functional illiteracy and developmental dyslexia: Looking for common roots. A systematic review. Journal of Cultural Cognitive Science, 5, 159-179. doi:10.1007/s41809-021-00074-9.
Abstract
A considerable proportion of the population in more economically developed countries is functionally illiterate (i.e., low literate). Despite some years of schooling and basic reading skills, these individuals cannot properly read and write and, as a consequence, have difficulty understanding even short texts. An often-discussed approach (Greenberg et al., 1997) assumes weak phonological processing skills coupled with untreated developmental dyslexia as possible causes of functional illiteracy. Although there is some data suggesting commonalities between low literacy and developmental dyslexia, it is still not clear whether these reflect shared consequences (i.e., cognitive and behavioral profile) or shared causes. The present systematic review aims at exploring the similarities and differences identified in empirical studies investigating both functionally illiterate and developmentally dyslexic samples. Nine electronic databases were searched in order to identify all quantitative studies published in English or German. Although a broad search strategy and few limitations were applied, only 5 studies from the resulting 9269 references were identified as adequate. The results point to the lack of studies directly comparing functionally illiterate with developmentally dyslexic samples. Moreover, a huge variance was identified between the studies in how they approached the concept of functional illiteracy, particularly when it came to critical categories such as the applied definition, terminology, criteria for inclusion in the sample, research focus, and outcome measures. The available data highlight the need for more direct comparisons in order to understand to what extent functional illiteracy and dyslexia share common characteristics.
Additional information
supplementary materials -
Van Paridon, J., Ostarek, M., Arunkumar, M., & Huettig, F. (2021). Does neuronal recycling result in destructive competition? The influence of learning to read on the recognition of faces. Psychological Science, 32, 459-465. doi:10.1177/0956797620971652.
Abstract
Written language, a human cultural invention, is far too recent for dedicated neural infrastructure to have evolved in its service. Culturally newly acquired skills (e.g. reading) thus ‘recycle’ evolutionarily older circuits that originally evolved for different, but similar functions (e.g. visual object recognition). The destructive competition hypothesis predicts that this neuronal recycling has detrimental behavioral effects on the cognitive functions a cortical network originally evolved for. In a study with 97 literate, low-literate, and illiterate participants from the same socioeconomic background we find that even after adjusting for cognitive ability and test-taking familiarity, learning to read is associated with an increase, rather than a decrease, in object recognition abilities. These results are incompatible with the claim that neuronal recycling results in destructive competition and consistent with the possibility that learning to read instead fine-tunes general object recognition mechanisms, a hypothesis that needs further neuroscientific investigation.
Additional information
supplemental material -
Hintz, F., Meyer, A. S., & Huettig, F. (2020). Visual context constrains language-mediated anticipatory eye movements. Quarterly Journal of Experimental Psychology, 73(3), 458-467. doi:10.1177/1747021819881615.
Abstract
Contemporary accounts of anticipatory language processing assume that individuals predict upcoming information at multiple levels of representation. Research investigating language-mediated anticipatory eye gaze typically assumes that linguistic input restricts the domain of subsequent reference (visual target objects). Here, we explored the converse case: Can visual input restrict the dynamics of anticipatory language processing? To this end, we recorded participants’ eye movements as they listened to sentences in which an object was predictable based on the verb’s selectional restrictions (“The man peels a banana”). While listening, participants looked at different types of displays: The target object (banana) was either present or it was absent. On target-absent trials, the displays featured objects that had a similar visual shape as the target object (canoe) or objects that were semantically related to the concepts invoked by the target (monkey). Each trial was presented in a long preview version, where participants saw the displays for approximately 1.78 seconds before the verb was heard (pre-verb condition), and a short preview version, where participants saw the display approximately 1 second after the verb had been heard (post-verb condition), 750 ms prior to the spoken target onset. Participants anticipated the target objects in both conditions. Importantly, robust evidence for predictive looks to objects related to the (absent) target objects in visual shape and semantics was found in the post-verb but not in the pre-verb condition. These results suggest that visual information can restrict language-mediated anticipatory gaze and delineate theoretical accounts of predictive processing in the visual world.
Additional information
Supplemental Material -
Hintz, F., Meyer, A. S., & Huettig, F. (2020). Activating words beyond the unfolding sentence: Contributions of event simulation and word associations to discourse reading. Neuropsychologia, 141: 107409. doi:10.1016/j.neuropsychologia.2020.107409.
Abstract
Previous studies have shown that during comprehension readers activate words beyond the unfolding sentence. An open question concerns the mechanisms underlying this behavior. One proposal is that readers mentally simulate the described event and activate related words that might be referred to as the discourse further unfolds. Another proposal is that activation between words spreads in an automatic, associative fashion. The empirical support for these proposals is mixed. Therefore, theoretical accounts differ with regard to how much weight they place on the contributions of these sources to sentence comprehension. In the present study, we attempted to assess the contributions of event simulation and lexical associations to discourse reading, using event-related brain potentials (ERPs). Participants read target words, which were preceded by associatively related words either appearing in a coherent discourse event (Experiment 1) or in sentences that did not form a coherent discourse event (Experiment 2). Contextually unexpected target words that were associatively related to the described events elicited a reduced N400 amplitude compared to contextually unexpected target words that were unrelated to the events (Experiment 1). In Experiment 2, a similar but reduced effect was observed. These findings support the notion that during discourse reading event simulation and simple word associations jointly contribute to language comprehension by activating words that are beyond contextually congruent sentence continuations. -
Huettig, F., Guerra, E., & Helo, A. (2020). Towards understanding the task dependency of embodied language processing: The influence of colour during language-vision interactions. Journal of Cognition, 3(1): 41. doi:10.5334/joc.135.
Abstract
A main challenge for theories of embodied cognition is to understand the task dependency of embodied language processing. One possibility is that perceptual representations (e.g., typical colour of objects mentioned in spoken sentences) are not activated routinely, but that the influence of perceptual representations emerges only when context strongly supports their involvement in language. To explore this question, we tested the effects of colour representations during language processing in three visual-world eye-tracking experiments. On critical trials, participants listened to sentence-embedded words associated with a prototypical colour (e.g., ‘...spinach...’) while they inspected a visual display with four printed words (Experiment 1), coloured or greyscale line drawings (Experiment 2), and a ‘blank screen’ after a preview of coloured or greyscale line drawings (Experiment 3). The visual context always presented a word/object (e.g., frog) associated with the same prototypical colour (e.g., green) as the spoken target word, plus three distractors. When hearing spinach, participants did not prefer the written word frog compared to other distractor words (Experiment 1). In Experiment 2, colour competitors attracted more overt attention compared to average distractors, but only in the coloured condition and not on greyscale trials. Finally, when the display was removed at the onset of the sentence, and in contrast to the previous blank-screen experiments with semantic competitors, there was no evidence of colour competition in the eye-tracking record (Experiment 3). These results fit best with the notion that the main role of perceptual representations in language processing is to contextualize language in the immediate environment.
Additional information
Data files and script -
Nieuwland, M. S., Barr, D. J., Bartolozzi, F., Busch-Moreno, S., Darley, E., Donaldson, D. I., Ferguson, H. J., Fu, X., Heyselaar, E., Huettig, F., Husband, E. M., Ito, A., Kazanina, N., Kogan, V., Kohút, Z., Kulakova, E., Mézière, D., Politzer-Ahles, S., Rousselet, G., Rueschemeyer, S.-A., Segaert, K., Tuomainen, J., & Von Grebmer Zu Wolfsthurn, S. (2020). Dissociable effects of prediction and integration during language comprehension: Evidence from a large-scale study using brain potentials. Philosophical Transactions of the Royal Society of London, Series B: Biological Sciences, 375: 20180522. doi:10.1098/rstb.2018.0522.
Abstract
Composing sentence meaning is easier for predictable words than for unpredictable words. Are predictable words genuinely predicted, or simply more plausible and therefore easier to integrate with sentence context? We addressed this persistent and fundamental question using data from a recent, large-scale (N = 334) replication study, by investigating the effects of word predictability and sentence plausibility on the N400, the brain’s electrophysiological index of semantic processing. A spatiotemporally fine-grained mixed-effects multiple regression analysis revealed overlapping effects of predictability and plausibility on the N400, albeit with distinct spatiotemporal profiles. Our results challenge the view that the predictability-dependent N400 reflects the effects of either prediction or integration, and suggest that semantic facilitation of predictable words arises from a cascade of processes that activate and integrate word meaning with context into a sentence-level meaning. -
Araújo, S., Fernandes, T., & Huettig, F. (2019). Learning to read facilitates retrieval of phonological representations in rapid automatized naming: Evidence from unschooled illiterate, ex-illiterate, and schooled literate adults. Developmental Science, 22(4): e12783. doi:10.1111/desc.12783.
Abstract
Rapid automatized naming (RAN) of visual items is a powerful predictor of reading skills. However, the direction and locus of the association between RAN and reading is still largely unclear. Here we investigated whether literacy acquisition directly bolsters RAN efficiency for objects, adopting a strong methodological design, by testing three groups of adults matched in age and socioeconomic variables, who differed only in literacy/schooling: unschooled illiterate and ex-illiterate, and schooled literate adults. To investigate in a fine-grained manner whether and how literacy facilitates lexical retrieval, we orthogonally manipulated the word-form frequency (high vs. low) and phonological neighborhood density (dense vs. sparse) of the objects’ names. We observed that literacy experience enhances the automaticity with which visual stimuli (e.g., objects) can be retrieved and named: relative to readers (ex-illiterate and literate), illiterate adults performed worse on RAN. Crucially, the group difference was exacerbated and significant only for those items that were of low frequency and from sparse neighborhoods. These results thus suggest that, regardless of schooling and age at which literacy was acquired, learning to read facilitates the access to and retrieval of phonological representations, especially of difficult lexical items. -
Favier, S., Wright, A., Meyer, A. S., & Huettig, F. (2019). Proficiency modulates between- but not within-language structural priming. Journal of Cultural Cognitive Science, 3(suppl. 1), 105-124. doi:10.1007/s41809-019-00029-1.
Abstract
The oldest of the Celtic language family, Irish differs considerably from English, notably with respect to word order and case marking. In spite of differences in surface constituent structure, less restricted accounts of bilingual shared syntax predict that processing datives and passives in Irish should prime the production of their English equivalents. Furthermore, this cross-linguistic influence should be sensitive to L2 proficiency, if shared structural representations are assumed to develop over time. In Experiment 1, we investigated cross-linguistic structural priming from Irish to English in 47 bilingual adolescents who are educated through Irish. Testing took place in a classroom setting, using written primes and written sentence generation. We found that priming for prepositional-object (PO) datives was predicted by self-rated Irish (L2) proficiency, in line with previous studies. In Experiment 2, we presented translations of the materials to an English-educated control group (n=54). We found a within-language priming effect for PO datives, which was not modulated by English (L1) proficiency. Our findings are compatible with current theories of bilingual language processing and L2 syntactic acquisition. -
Hervais-Adelman, A., Kumar, U., Mishra, R. K., Tripathi, V. N., Guleria, A., Singh, J. P., Eisner, F., & Huettig, F. (2019). Learning to read recycles visual cortical networks without destruction. Science Advances, 5(9): eaax0262. doi:10.1126/sciadv.aax0262.
Abstract
Learning to read is associated with the appearance of an orthographically sensitive brain region known as the visual word form area. It has been claimed that development of this area proceeds by impinging upon territory otherwise available for the processing of culturally relevant stimuli such as faces and houses. In a large-scale functional magnetic resonance imaging study of a group of individuals of varying degrees of literacy (from completely illiterate to highly literate), we examined cortical responses to orthographic and nonorthographic visual stimuli. We found that literacy enhances responses to other visual input in early visual areas and enhances representational similarity between text and faces, without reducing the extent of response to nonorthographic input. Thus, acquisition of literacy in childhood recycles existing object representation mechanisms but without destructive competition.
Additional information
aax0262_SM.pdf -
Huettig, F., & Pickering, M. (2019). Literacy advantages beyond reading: Prediction of spoken language. Trends in Cognitive Sciences, 23(6), 464-475. doi:10.1016/j.tics.2019.03.008.
Abstract
Literacy has many obvious benefits—it exposes the reader to a wealth of new information and enhances syntactic knowledge. However, we argue that literacy has an additional, often overlooked, benefit: it enhances people’s ability to predict spoken language thereby aiding comprehension. Readers are under pressure to process information more quickly than listeners, and reading provides excellent conditions, in particular a stable environment, for training the predictive system. It also leads to increased awareness of words as linguistic units, and more fine-grained phonological and additional orthographic representations, which sharpen lexical representations and facilitate predicted representations to be retrieved. Thus, reading trains core processes and representations involved in language prediction that are common to both reading and listening. -
Huettig, F., & Guerra, E. (2019). Effects of speech rate, preview time of visual context, and participant instructions reveal strong limits on prediction in language processing. Brain Research, 1706, 196-208. doi:10.1016/j.brainres.2018.11.013.
Abstract
There is a consensus among language researchers that people can predict upcoming language. But do people always predict when comprehending language? Notions that “brains … are essentially prediction machines” certainly suggest so. In three eye-tracking experiments we tested this view. Participants listened to simple Dutch sentences (‘Look at the displayed bicycle’) while viewing four objects (a target, e.g. a bicycle, and three unrelated distractors). We used the identical visual stimuli and the same spoken sentences but varied speech rates, preview time, and participant instructions. Target nouns were preceded by definite gender-marked determiners, which allowed participants to predict the target object because only the targets but not the distractors agreed in gender with the determiner. In Experiment 1, participants had four seconds preview and sentences were presented either in a slow or a normal speech rate. Participants predicted the targets as soon as they heard the determiner in both conditions. Experiment 2 was identical except that participants were given only a one second preview. Participants predicted the targets only in the slow speech condition. Experiment 3 was identical to Experiment 2 except that participants were explicitly told to predict. This led only to a small prediction effect in the normal speech condition. Thus, a normal speech rate only afforded prediction if participants had an extensive preview. Even the explicit instruction to predict the target resulted in only a small anticipation effect with a normal speech rate and a short preview. These findings are problematic for theoretical proposals that assume that prediction pervades cognition. -
Mantegna, F., Hintz, F., Ostarek, M., Alday, P. M., & Huettig, F. (2019). Distinguishing integration and prediction accounts of ERP N400 modulations in language processing through experimental design. Neuropsychologia, 134: 107199. doi:10.1016/j.neuropsychologia.2019.107199.
Abstract
Prediction of upcoming input is thought to be a main characteristic of language processing (e.g. Altmann & Mirkovic, 2009; Dell & Chang, 2014; Federmeier, 2007; Ferreira & Chantavarin, 2018; Pickering & Gambi, 2018; Hale, 2001; Hickok, 2012; Huettig, 2015; Kuperberg & Jaeger, 2016; Levy, 2008; Norris, McQueen, & Cutler, 2016; Pickering & Garrod, 2013; Van Petten & Luka, 2012). One of the main pillars of experimental support for this notion comes from studies that have attempted to measure electrophysiological markers of prediction when participants read or listened to sentences ending in highly predictable words. The N400, a negative-going and centro-parietally distributed component of the ERP occurring approximately 400 ms after (target) word onset, has frequently been interpreted as indexing prediction of the word (or the semantic representations and/or the phonological form of the predicted word; see Kutas & Federmeier, 2011; Nieuwland, 2019; Van Petten & Luka, 2012; for review). A major difficulty for interpreting N400 effects in language processing, however, is that it has been difficult to establish whether N400 target word modulations conclusively reflect prediction rather than (at least partly) ease of integration. In the present exploratory study, we attempted to distinguish lexical prediction (i.e. ‘top-down’ activation) from lexical integration (i.e. ‘bottom-up’ activation) accounts of ERP N400 modulations in language processing.
Additional information
Supplementary data: all materials, data, and analysis scripts on the Open Science Framework (OSF) -
Nuthmann, A., De Groot, F., Huettig, F., & Olivers, C. L. N. (2019). Extrafoveal attentional capture by object semantics. PLoS One, 14(5): e0217051. doi:10.1371/journal.pone.0217051.
Abstract
There is ongoing debate on whether object meaning can be processed outside foveal vision, making semantics available for attentional guidance. Much of the debate has centred on whether objects that do not fit within an overall scene draw attention, in complex displays that are often difficult to control. Here, we revisited the question by reanalysing data from three experiments that used displays consisting of standalone objects from a carefully controlled stimulus set. Observers searched for a target object, as per auditory instruction. On the critical trials, the displays contained no target but objects that were semantically related to the target, visually related, or unrelated. Analyses using (generalized) linear mixed-effects models showed that, although visually related objects attracted most attention, semantically related objects were also fixated earlier in time than unrelated objects. Moreover, semantic matches affected the very first saccade in the display. The amplitudes of saccades that first entered semantically related objects were larger than 5° on average, confirming that object semantics is available outside foveal vision. Finally, there was no semantic capture of attention for the same objects when observers did not actively look for the target, confirming that it was not stimulus-driven. We discuss the implications for existing models of visual cognition. -
Ostarek, M., Joosen, D., Ishag, A., De Nijs, M., & Huettig, F. (2019). Are visual processes causally involved in “perceptual simulation” effects in the sentence-picture verification task? Cognition, 182, 84-94. doi:10.1016/j.cognition.2018.08.017.
Abstract
Many studies have shown that sentences implying an object to have a certain shape produce a robust reaction time advantage for shape-matching pictures in the sentence-picture verification task. Typically, this finding has been interpreted as evidence for perceptual simulation, i.e., that access to implicit shape information involves the activation of modality-specific visual processes. It follows from this proposal that disrupting visual processing during sentence comprehension should interfere with perceptual simulation and obliterate the match effect. Here we directly test this hypothesis. Participants listened to sentences while seeing either visual noise that was previously shown to strongly interfere with basic visual processing or a blank screen. Experiments 1 and 2 replicated the match effect but crucially visual noise did not modulate it. When an interference technique was used that targeted high-level semantic processing (Experiment 3) however the match effect vanished. Visual noise specifically targeting high-level visual processes (Experiment 4) only had a minimal effect on the match effect. We conclude that the shape match effect in the sentence-picture verification paradigm is unlikely to rely on perceptual simulation. -
Ostarek, M., & Huettig, F. (2019). Six challenges for embodiment research. Current Directions in Psychological Science, 28(6), 593-599. doi:10.1177/0963721419866441.
Abstract
Twenty years after Barsalou's seminal perceptual symbols paper (Barsalou, 1999), embodied cognition, the notion that cognition involves simulations of sensory, motor, or affective states, has moved in status from an outlandish proposal advanced by a fringe movement in psychology to a mainstream position adopted by large numbers of researchers in the psychological and cognitive (neuro)sciences. While it has generated highly productive work in the cognitive sciences as a whole, it has had a particularly strong impact on research into language comprehension. The view of a mental lexicon based on symbolic word representations, which are arbitrarily linked to sensory aspects of their referents, for example, had been generally accepted since the cognitive revolution in the 1950s. This has radically changed. Given the current status of embodiment as a main theory of cognition, it is somewhat surprising that a close look at the state of affairs in the literature reveals that the debate about the nature of the processes involved in language comprehension is far from settled and key questions remain unanswered. We present several suggestions for a productive way forward. -
Smalle, E., Szmalec, A., Bogaerts, L., Page, M. P. A., Narang, V., Misra, D., Araujo, S., Lohagun, N., Khan, O., Singh, A., Mishra, R. K., & Huettig, F. (2019). Literacy improves short-term serial recall of spoken verbal but not visuospatial items - Evidence from illiterate and literate adults. Cognition, 185, 144-150. doi:10.1016/j.cognition.2019.01.012.
Abstract
It is widely accepted that specific memory processes, such as serial-order memory, are involved in written language development and predictive of reading and spelling abilities. The reverse question, namely whether orthographic abilities also affect serial-order memory, has hardly been investigated. In the current study, we compared 20 illiterate people with a group of 20 literate matched controls on a verbal and a visuospatial version of the Hebb paradigm, measuring both short- and long-term serial-order memory abilities. We observed better short-term serial-recall performance for the literate compared with the illiterate people. This effect was stronger in the verbal than in the visuospatial modality, suggesting that the improved capacity of the literate group is a consequence of learning orthographic skills. The long-term consolidation of ordered information was comparable across groups, for both stimulus modalities. The implications of these findings for current views regarding the bi-directional interactions between memory and written language development are discussed. -
Huettig, F., Kolinsky, R., & Lachmann, T. (2018). The culturally co-opted brain: How literacy affects the human mind. Language, Cognition and Neuroscience, 33(3), 275-277. doi:10.1080/23273798.2018.1425803.
Abstract
Introduction to the special issue 'The Effects of Literacy on Cognition and Brain Functioning' -
Huettig, F., Kolinsky, R., & Lachmann, T. (Eds.). (2018). The effects of literacy on cognition and brain functioning [Special Issue]. Language, Cognition and Neuroscience, 33(3). -
Huettig, F., Lachmann, T., Reis, A., & Petersson, K. M. (2018). Distinguishing cause from effect - Many deficits associated with developmental dyslexia may be a consequence of reduced and suboptimal reading experience. Language, Cognition and Neuroscience, 33(3), 333-350. doi:10.1080/23273798.2017.1348528.
Abstract
The cause of developmental dyslexia is still unknown despite decades of intense research. Many causal explanations have been proposed, based on the range of impairments displayed by affected individuals. Here we draw attention to the fact that many of these impairments are also shown by illiterate individuals who have not received any or very little reading instruction. We suggest that this fact may not be coincidental and that the performance differences of both illiterates and individuals with dyslexia compared to literate controls are, to a substantial extent, secondary consequences of either reduced or suboptimal reading experience or a combination of both. The search for the primary causes of reading impairments will make progress if the consequences of quantitative and qualitative differences in reading experience are better taken into account and not mistaken for the causes of reading disorders. We close by providing four recommendations for future research. -
Mani, N., Mishra, R. K., & Huettig, F. (Eds.). (2018). The interactive mind: Language, vision and attention. Chennai: Macmillan Publishers India. -
Mani, N., Mishra, R. K., & Huettig, F. (2018). Introduction to 'The Interactive Mind: Language, Vision and Attention'. In N. Mani, R. K. Mishra, & F. Huettig (Eds.), The Interactive Mind: Language, Vision and Attention (pp. 1-2). Chennai: Macmillan Publishers India. -
Mitterer, H., Brouwer, S., & Huettig, F. (2018). How important is prediction for understanding spontaneous speech? In N. Mani, R. K. Mishra, & F. Huettig (Eds.), The Interactive Mind: Language, Vision and Attention (pp. 26-40). Chennai: Macmillan Publishers India. -
Nieuwland, M. S., Politzer-Ahles, S., Heyselaar, E., Segaert, K., Darley, E., Kazanina, N., Von Grebmer Zu Wolfsthurn, S., Bartolozzi, F., Kogan, V., Ito, A., Mézière, D., Barr, D. J., Rousselet, G., Ferguson, H. J., Busch-Moreno, S., Fu, X., Tuomainen, J., Kulakova, E., Husband, E. M., Donaldson, D. I., Kohút, Z., Rueschemeyer, S.-A., & Huettig, F. (2018). Large-scale replication study reveals a limit on probabilistic prediction in language comprehension. eLife, 7: e33468. doi:10.7554/eLife.33468.
Abstract
Do people routinely pre-activate the meaning and even the phonological form of upcoming words? The most acclaimed evidence for phonological prediction comes from a 2005 Nature Neuroscience publication by DeLong, Urbach and Kutas, who observed a graded modulation of electrical brain potentials (N400) to nouns and preceding articles by the probability that people use a word to continue the sentence fragment (‘cloze’). In our direct replication study spanning 9 laboratories (N=334), pre-registered replication-analyses and exploratory Bayes factor analyses successfully replicated the noun-results but, crucially, not the article-results. Pre-registered single-trial analyses also yielded a statistically significant effect for the nouns but not the articles. Exploratory Bayesian single-trial analyses showed that the article-effect may be non-zero but is likely far smaller than originally reported and too small to observe without very large sample sizes. Our results do not support the view that readers routinely pre-activate the phonological form of predictable words. -
Ostarek, M., Ishag, I., Joosen, D., & Huettig, F. (2018). Saccade trajectories reveal dynamic interactions of semantic and spatial information during the processing of implicitly spatial words. Journal of Experimental Psychology: Learning, Memory, and Cognition, 44(10), 1658-1670. doi:10.1037/xlm0000536.
Abstract
Implicit up/down words, such as bird and foot, systematically influence performance on visual tasks involving immediately following targets in compatible vs. incompatible locations. Recent studies have observed that the semantic relation between prime words and target pictures can strongly influence the size and even the direction of the effect: Semantically related targets are processed faster in congruent vs. incongruent locations (location-specific priming), whereas unrelated targets are processed slower in congruent locations. Here, we used eye-tracking to investigate the moment-to-moment processes underlying this pattern. Our reaction time results for related targets replicated the location-specific priming effect and showed a trend towards interference for unrelated targets. We then used growth curve analysis to test how up/down words and their match vs. mismatch with immediately following targets in terms of semantics and vertical location influences concurrent saccadic eye movements. There was a strong main effect of spatial association on linear growth with up words biasing changes in y-coordinates over time upwards relative to down words (and vice versa). Similar to the RT data, this effect was strongest for semantically related targets and reversed for unrelated targets. Intriguingly, all conditions showed a bias in the congruent direction in the initial stage of the saccade. Then, at around halfway into the saccade the effect kept increasing in the semantically related condition, and reversed in the unrelated condition. These results suggest that online processing of up/down words triggers direction-specific oculomotor processes that are dynamically modulated by the semantic relation between prime words and targets. -
De Groot, F., Huettig, F., & Olivers, C. N. L. (2017). Language-induced visual and semantic biases in visual search are subject to task requirements. Visual Cognition, 25, 225-240. doi:10.1080/13506285.2017.1324934.
Abstract
Visual attention is biased by both visual and semantic representations activated by words. We investigated to what extent language-induced visual and semantic biases are subject to task demands. Participants memorized a spoken word for a verbal recognition task, and performed a visual search task during the retention period. Crucially, while the word had to be remembered in all conditions, it was either relevant for the search (as it also indicated the target) or irrelevant (as it only served the memory test afterwards). On critical trials, displays contained objects that were visually or semantically related to the memorized word. When the word was relevant for the search, eye movement biases towards visually related objects arose earlier and more strongly than biases towards semantically related objects. When the word was irrelevant, there was still evidence for visual and semantic biases, but these biases were substantially weaker, and similar in strength and temporal dynamics, without a visual advantage. We conclude that language-induced attentional biases are subject to task requirements. -
Hintz, F., Meyer, A. S., & Huettig, F. (2017). Predictors of verb-mediated anticipatory eye movements in the visual world. Journal of Experimental Psychology: Learning, Memory, and Cognition, 43(9), 1352-1374. doi:10.1037/xlm0000388.
Abstract
Many studies have demonstrated that listeners use information extracted from verbs to guide anticipatory eye movements to objects in the visual context that satisfy the selection restrictions of the verb. An important question is what underlies such verb-mediated anticipatory eye gaze. Based on empirical and theoretical suggestions, we investigated the influence of five potential predictors of this behavior: functional associations and general associations between verb and target object, as well as the listeners’ production fluency, receptive vocabulary knowledge, and non-verbal intelligence. In three eye-tracking experiments, participants looked at sets of four objects and listened to sentences where the final word was predictable or not predictable (e.g., “The man peels/draws an apple”). On predictable trials only the target object, but not the distractors, were functionally and associatively related to the verb. In Experiments 1 and 2, objects were presented before the verb was heard. In Experiment 3, participants were given a short preview of the display after the verb was heard. Functional associations and receptive vocabulary were found to be important predictors of verb-mediated anticipatory eye gaze independent of the amount of contextual visual input. General word associations did not and non-verbal intelligence was only a very weak predictor of anticipatory eye movements. Participants’ production fluency correlated positively with the likelihood of anticipatory eye movements when participants were given the long but not the short visual display preview. These findings fit best with a pluralistic approach to predictive language processing in which multiple mechanisms, mediating factors, and situational context dynamically interact. -
Huettig, F., Mishra, R. K., & Padakannaya, P. (2017). Editorial. Journal of Cultural Cognitive Science, 1(1), 1. doi:10.1007/s41809-017-0006-2.
-
Lee, R., Chambers, C. G., Huettig, F., & Ganea, P. A. (2017). Children’s semantic and world knowledge overrides fictional information during anticipatory linguistic processing. In G. Gunzelmann, A. Howes, T. Tenbrink, & E. Davelaar (Eds.), Proceedings of the 39th Annual Meeting of the Cognitive Science Society (CogSci 2017) (pp. 730-735). Austin, TX: Cognitive Science Society.
Abstract
Using real-time eye-movement measures, we asked how a fantastical discourse context competes with stored representations of semantic and world knowledge to influence children's and adults' moment-by-moment interpretation of a story. Seven-year-olds were less effective at bypassing stored semantic and world knowledge during real-time interpretation than adults. Nevertheless, an effect of discourse context on comprehension was still apparent.
https://mindmodeling.org/cogsci2017/papers/0147/paper0147.pdf -
Martin, A. E., Huettig, F., & Nieuwland, M. S. (2017). Can structural priming answer the important questions about language? A commentary on Branigan and Pickering "An experimental approach to linguistic representation". Behavioral and Brain Sciences, 40: e304. doi:10.1017/S0140525X17000528.
Abstract
While structural priming makes a valuable contribution to psycholinguistics, it does not allow direct observation of representation, nor escape “source ambiguity.” Structural priming taps into implicit memory representations and processes that may differ from what is used online. We question whether implicit memory for language can and should be equated with linguistic representation or with language processing. -
Ostarek, M., & Huettig, F. (2017). Spoken words can make the invisible visible – Testing the involvement of low-level visual representations in spoken word processing. Journal of Experimental Psychology: Human Perception and Performance, 43, 499-508. doi:10.1037/xhp0000313.
Abstract
The notion that processing spoken (object) words involves activation of category-specific representations in visual cortex is a key prediction of modality-specific theories of representation that contrasts with theories assuming dedicated conceptual representational systems abstracted away from sensorimotor systems. In the present study, we investigated whether participants can detect otherwise invisible pictures of objects when they are presented with the corresponding spoken word shortly before the picture appears. Our results showed facilitated detection for congruent ("bottle" -> picture of a bottle) vs. incongruent ("bottle" -> picture of a banana) trials. A second experiment investigated the time-course of the effect by manipulating the timing of picture presentation relative to word onset and revealed that it arises as soon as 200-400ms after word onset and decays at 600ms after word onset. Together, these data strongly suggest that spoken words can rapidly activate low-level category-specific visual representations that affect the mere detection of a stimulus, i.e. what we see. More generally our findings fit best with the notion that spoken words activate modality-specific visual representations that are low-level enough to provide information related to a given token and at the same time abstract enough to be relevant not only for previously seen tokens but also for generalizing to novel exemplars one has never seen before. -
Ostarek, M., & Huettig, F. (2017). A task-dependent causal role for low-level visual processes in spoken word comprehension. Journal of Experimental Psychology: Learning, Memory, and Cognition, 43(8), 1215-1224. doi:10.1037/xlm0000375.
Abstract
It is well established that the comprehension of spoken words referring to object concepts relies on high-level visual areas in the ventral stream that build increasingly abstract representations. It is much less clear whether basic low-level visual representations are also involved. Here we asked in what task situations low-level visual representations contribute functionally to concrete word comprehension using an interference paradigm. We interfered with basic visual processing while participants performed a concreteness task (Experiment 1), a lexical decision task (Experiment 2), and a word class judgment task (Experiment 3). We found that visual noise interfered more with concrete vs. abstract word processing, but only when the task required visual information to be accessed. This suggests that basic visual processes can be causally involved in language comprehension, but that their recruitment is not automatic and rather depends on the type of information that is required in a given task situation. -
Skeide, M. A., Kumar, U., Mishra, R. K., Tripathi, V. N., Guleria, A., Singh, J. P., Eisner, F., & Huettig, F. (2017). Learning to read alters cortico-subcortical crosstalk in the visual system of illiterates. Science Advances, 3(5): e1602612. doi:10.1126/sciadv.1602612.
Abstract
Learning to read is known to result in a reorganization of the developing cerebral cortex. In this longitudinal resting-state functional magnetic resonance imaging study in illiterate adults we show that only 6 months of literacy training can lead to neuroplastic changes in the mature brain. We observed that literacy-induced neuroplasticity is not confined to the cortex but increases the functional connectivity between the occipital lobe and subcortical areas in the midbrain and the thalamus. Individual rates of connectivity increase were significantly related to the individual decoding skill gains. These findings crucially complement current neurobiological concepts of normal and impaired literacy acquisition. -
Smith, A. C., Monaghan, P., & Huettig, F. (2017). The multimodal nature of spoken word processing in the visual world: Testing the predictions of alternative models of multimodal integration. Journal of Memory and Language, 93, 276-303. doi:10.1016/j.jml.2016.08.005.
Abstract
Ambiguity in natural language is ubiquitous, yet spoken communication is effective due to integration of information carried in the speech signal with information available in the surrounding multimodal landscape. Language mediated visual attention requires visual and linguistic information integration and has thus been used to examine properties of the architecture supporting multimodal processing during spoken language comprehension. In this paper we test predictions generated by alternative models of this multimodal system. A model (TRACE) in which multimodal information is combined at the point of the lexical representations of words generated predictions of a stronger effect of phonological rhyme relative to semantic and visual information on gaze behaviour, whereas a model in which sub-lexical information can interact across modalities (MIM) predicted a greater influence of visual and semantic information, compared to phonological rhyme. Two visual world experiments designed to test these predictions offer support for sub-lexical multimodal interaction during online language processing.
http://www.sciencedirect.com/science/article/pii/S0749596X16301425 -
Bobb, S., Huettig, F., & Mani, N. (2016). Predicting visual information during sentence processing: Toddlers activate an object's shape before it is mentioned. Journal of Experimental Child Psychology, 151, 51-64. doi:10.1016/j.jecp.2015.11.002.
Abstract
We examined the contents of language-mediated prediction in toddlers by investigating the extent to which toddlers are sensitive to visual-shape representations of upcoming words. Previous studies with adults suggest limits to the degree to which information about the visual form of a referent is predicted during language comprehension in low constraint sentences. 30-month-old toddlers heard either contextually constraining sentences or contextually neutral sentences as they viewed images that were either identical or shape related to the heard target label. We observed that toddlers activate shape information of upcoming linguistic input in contextually constraining semantic contexts: Hearing a sentence context that was predictive of the target word activated perceptual information that subsequently influenced visual attention toward shape-related targets. Our findings suggest that visual shape is central to predictive language processing in toddlers. -
De Groot, F., Huettig, F., & Olivers, C. N. L. (2016). Revisiting the looking at nothing phenomenon: Visual and semantic biases in memory search. Visual Cognition, 24, 226-245. doi:10.1080/13506285.2016.1221013.
Abstract
When visual stimuli remain present during search, people spend more time fixating objects that are semantically or visually related to the target instruction than fixating unrelated objects. Are these semantic and visual biases also observable when participants search within memory? We removed the visual display prior to search while continuously measuring eye movements towards locations previously occupied by objects. The target absent trials contained objects that were either visually or semantically related to the target instruction. When the overall mean proportion of fixation time was considered, we found biases towards the location previously occupied by the target, but failed to find biases towards visually or semantically related objects. However, in two experiments, the pattern of biases towards the target over time provided a reliable predictor for biases towards the visually and semantically related objects. We therefore conclude that visual and semantic representations alone can guide eye movements in memory search, but that orienting biases are weak when the stimuli are no longer present. -
De Groot, F., Huettig, F., & Olivers, C. N. L. (2016). When meaning matters: The temporal dynamics of semantic influences on visual attention. Journal of Experimental Psychology: Human Perception and Performance, 42(2), 180-196. doi:10.1037/xhp0000102.
Abstract
An important question is to what extent visual attention is driven by the semantics of individual objects, rather than by their visual appearance. This study investigates the hypothesis that timing is a crucial factor in the occurrence and strength of semantic influences on visual orienting. To assess the dynamics of such influences, the target instruction was presented either before or after visual stimulus onset, while eye movements were continuously recorded throughout the search. The results show a substantial but delayed bias in orienting towards semantically related objects compared to visually related objects when target instruction is presented before visual stimulus onset. However, this delay can be completely undone by presenting the visual information before the target instruction (Experiment 1). Moreover, the absence or presence of visual competition does not change the temporal dynamics of the semantic bias (Experiment 2). Visual orienting is thus driven by priority settings that dynamically shift between visual and semantic representations, with each of these types of bias operating largely independently. The findings bridge the divide between the visual attention and the psycholinguistic literature. -
De Groot, F., Koelewijn, T., Huettig, F., & Olivers, C. N. L. (2016). A stimulus set of words and pictures matched for visual and semantic similarity. Journal of Cognitive Psychology, 28(1), 1-15. doi:10.1080/20445911.2015.1101119.
Abstract
Researchers in different fields of psychology have been interested in how vision and language interact, and what type of representations are involved in such interactions. We introduce a stimulus set that facilitates such research (available online). The set consists of 100 words each of which is paired with four pictures of objects: One semantically similar object (but visually dissimilar), one visually similar object (but semantically dissimilar), and two unrelated objects. Visual and semantic similarity ratings between corresponding items are provided for every picture for Dutch and for English. In addition, visual and linguistic parameters of each picture are reported. We thus present a stimulus set from which researchers can select, on the basis of various parameters, the items most optimal for their research question. -
Hintz, F., Meyer, A. S., & Huettig, F. (2016). Encouraging prediction during production facilitates subsequent comprehension: Evidence from interleaved object naming in sentence context and sentence reading. Quarterly Journal of Experimental Psychology, 69(6), 1056-1063. doi:10.1080/17470218.2015.1131309.
Abstract
Many studies have shown that a supportive context facilitates language comprehension. A currently influential view is that language production may support prediction in language comprehension. Experimental evidence for this, however, is relatively sparse. Here we explored whether encouraging prediction in a language production task encourages the use of predictive contexts in an interleaved comprehension task. In Experiment 1a, participants listened to the first part of a sentence and provided the final word by naming aloud a picture. The picture name was predictable or not predictable from the sentence context. Pictures were named faster when they could be predicted than when this was not the case. In Experiment 1b the same sentences, augmented by a final spill-over region, were presented in a self-paced reading task. No difference in reading times for predictive vs. non-predictive sentences was found. In Experiment 2, reading and naming trials were intermixed. In the naming task, the advantage for predictable picture names was replicated. More importantly, now reading times for the spill-over region were considerably faster for predictive vs. non-predictive sentences. We conjecture that these findings fit best with the notion that prediction in the service of language production encourages the use of predictive contexts in comprehension. Further research is required to identify the exact mechanisms by which production exerts its influence on comprehension. -
Huettig, F., & Janse, E. (2016). Individual differences in working memory and processing speed predict anticipatory spoken language processing in the visual world. Language, Cognition and Neuroscience, 31(1), 80-93. doi:10.1080/23273798.2015.1047459.
Abstract
It is now well established that anticipation of up-coming input is a key characteristic of spoken language comprehension. Several mechanisms of predictive language processing have been proposed. The possible influence of mediating factors such as working memory and processing speed however has hardly been explored. We sought to find evidence for such an influence using an individual differences approach. 105 participants from 32 to 77 years of age received spoken instructions (e.g., "Kijk naar de[COM] afgebeelde piano[COM]" - look at the displayed piano) while viewing four objects. Articles (Dutch “het” or “de”) were gender-marked such that the article agreed in gender only with the target. Participants could thus use gender information from the article to predict the upcoming target object. The average participant anticipated the target objects well in advance of the critical noun. Multiple regression analyses showed that working memory and processing speed had the largest mediating effects: Enhanced working memory abilities and faster processing speed supported anticipatory spoken language processing. These findings suggest that models of predictive language processing must take mediating factors such as working memory and processing speed into account. More generally, our results are consistent with the notion that working memory grounds language in space and time, linking linguistic and visual-spatial representations. -
Huettig, F., & Mani, N. (2016). Is prediction necessary to understand language? Probably not. Language, Cognition and Neuroscience, 31(1), 19-31. doi:10.1080/23273798.2015.1072223.
Abstract
Many psycholinguistic experiments suggest that prediction is an important characteristic of language processing. Some recent theoretical accounts in the cognitive sciences (e.g., Clark, 2013; Friston, 2010) and psycholinguistics (e.g., Dell & Chang, 2014) appear to suggest that prediction is even necessary to understand language. In the present opinion paper we evaluate this proposal. We first critically discuss several arguments that may appear to be in line with the notion that prediction is necessary for language processing. These arguments include that prediction provides a unified theoretical principle of the human mind and that it pervades cortical function. We discuss whether evidence of human abilities to detect statistical regularities is necessarily evidence for predictive processing and evaluate suggestions that prediction is necessary for language learning. Five arguments are then presented that question the claim that all language processing is predictive in nature. We point out that not all language users appear to predict language and that suboptimal input makes prediction often very challenging. Prediction, moreover, is strongly context-dependent and impeded by resource limitations. We also argue that it may be problematic that most experimental evidence for predictive language processing comes from 'prediction-encouraging' experimental set-ups. Finally, we discuss possible ways that may lead to a further resolution of this debate. We conclude that languages can be learned and understood in the absence of prediction. Claims that all language processing is predictive in nature are premature. -
Lai, V. T., & Huettig, F. (2016). When prediction is fulfilled: Insight from emotion processing. Neuropsychologia, 85, 110-117. doi:10.1016/j.neuropsychologia.2016.03.014.
Abstract
Research on prediction in language processing has focused predominantly on the function of predictive context and less on the potential contribution of the predicted word. The present study investigated how meaning that is not immediately prominent in the contents of predictions but is part of the predicted words influences sentence processing. We used emotional meaning to address this question. Participants read emotional and neutral words embedded in highly predictive and non-predictive sentential contexts, with the two sentential contexts rated similarly for their emotional ratings. Event Related Potential (ERP) effects of prediction and emotion both started at ~200 ms. Confirmed predictions elicited larger P200s than violated predictions when the target words were non-emotional (neutral), but such an effect was absent when the target words were emotional. Likewise, emotional words elicited larger P200s than neutral words when the target words were non-predictive, but such effects were absent when the contexts were predictive. We conjecture that the prediction and emotion effects at ~200 ms may share similar neural process(es). We suggest that such process(es) could be affective, where confirmed predictions and word emotion give rise to ‘aha’ or reward feelings, and/or cognitive, where both prediction and word emotion quickly engage attention. -
Mani, N., Daum, M., & Huettig, F. (2016). “Pro-active” in many ways: Developmental evidence for a dynamic pluralistic approach to prediction. Quarterly Journal of Experimental Psychology, 69(11), 2189-2201. doi:10.1080/17470218.2015.1111395.
Abstract
The anticipation of the forthcoming behaviour of social interaction partners is a useful ability supporting interaction and communication between social partners. Associations and prediction based on the production system (in line with views that listeners use the production system covertly to anticipate what the other person might be likely to say) are two potential factors, which have been proposed to be involved in anticipatory language processing. We examined the influence of both factors on the degree to which listeners predict upcoming linguistic input. Are listeners more likely to predict book as an appropriate continuation of the sentence “The boy reads a”, based on the strength of the association between the words read and book (strong association) and read and letter (weak association)? Do more proficient producers predict more? What is the interplay of these two influences on prediction? The results suggest that associations influence language-mediated anticipatory eye gaze in two-year-olds and adults only when two thematically appropriate target objects compete for overt attention but not when these objects are presented separately. Furthermore, children’s prediction abilities are strongly related to their language production skills when appropriate target objects are presented separately but not when presented together. Both influences on prediction in language processing thus appear to be context-dependent. We conclude that multiple factors simultaneously influence listeners’ anticipation of upcoming linguistic input and that only such a dynamic approach to prediction can capture listeners’ prowess at predictive language processing. -
Meyer, A. S., Huettig, F., & Levelt, W. J. M. (2016). Same, different, or closely related: What is the relationship between language production and comprehension? Journal of Memory and Language, 89, 1-7. doi:10.1016/j.jml.2016.03.002.
-
Meyer, A. S., & Huettig, F. (Eds.). (2016). Speaking and Listening: Relationships Between Language Production and Comprehension [Special Issue]. Journal of Memory and Language, 89. -
Smith, A. C., Monaghan, P., & Huettig, F. (2016). Complex word recognition behaviour emerges from the richness of the word learning environment. In K. Twomey, A. C. Smith, G. Westermann, & P. Monaghan (Eds.), Neurocomputational Models of Cognitive Development and Processing: Proceedings of the 14th Neural Computation and Psychology Workshop (pp. 99-114). Singapore: World Scientific. doi:10.1142/9789814699341_0007.
Abstract
Computational models can reflect the complexity of human behaviour by implementing multiple constraints within their architecture, and/or by taking into account the variety and richness of the environment to which the human is responding. We explore the second alternative in a model of word recognition that learns to map spoken words to visual and semantic representations of the words’ concepts. Critically, we employ a phonological representation utilising coarse-coding of the auditory stream, to mimic early stages of language development that are not dependent on individual phonemes to be isolated in the input, which may be a consequence of literacy development. The model was tested at different stages during training, and was able to simulate key behavioural features of word recognition in children: a developing effect of semantic information as a consequence of language learning, and a small but earlier effect of phonological information on word processing. We additionally tested the role of visual information in word processing, generating predictions for behavioural studies, showing that visual information could have a larger effect than semantics on children’s performance, but that again this affects recognition later in word processing than phonological information. The model also provides further predictions for performance of a mature word recognition system in the absence of fine-coding of phonology, such as in adults who have low literacy skills. The model demonstrated that such phonological effects may be reduced but are still evident even when multiple distractors from various modalities are present in the listener’s environment. The model demonstrates that complexity in word recognition can emerge from a simple associative system responding to the interactions between multiple sources of information in the language learner’s environment. -
Speed, L., Chen, J., Huettig, F., & Majid, A. (2016). Do classifier categories affect or reflect object concepts? In A. Papafragou, D. Grodner, D. Mirman, & J. Trueswell (Eds.), Proceedings of the 38th Annual Meeting of the Cognitive Science Society (CogSci 2016) (pp. 2267-2272). Austin, TX: Cognitive Science Society.
Abstract
We conceptualize objects based on sensory and motor information gleaned from real-world experience. But to what extent is such conceptual information structured according to higher level linguistic features too? Here we investigate whether classifiers, a grammatical category, shape the conceptual representations of objects. In three experiments native Mandarin speakers (speakers of a classifier language) and native Dutch speakers (speakers of a language without classifiers) judged the similarity of a target object (presented as a word or picture) with four objects (presented as words or pictures). One object shared a classifier with the target, the other objects did not, serving as distractors. Across all experiments, participants judged the target object as more similar to the object with the shared classifier than distractor objects. This effect was seen in both Dutch and Mandarin speakers, and there was no difference between the two languages. Thus, even speakers of a non-classifier language are sensitive to object similarities underlying classifier systems, and using a classifier system does not exaggerate these similarities. This suggests that classifier systems simply reflect, rather than affect, conceptual structure.
https://mindmodeling.org/cogsci2016/papers/0393/index.html -
Hintz, F., & Huettig, F. (2015). The complexity of the visual environment modulates language-mediated eye gaze. In R. Mishra, N. Srinivasan, & F. Huettig (Eds.), Attention and Vision in Language Processing (pp. 39-55). Berlin: Springer. doi:10.1007/978-81-322-2443-3_3.
Abstract
Three eye-tracking experiments investigated the impact of the complexity of the visual environment on the likelihood of word-object mapping taking place at phonological, semantic and visual levels of representation during language-mediated visual search. Dutch participants heard spoken target words while looking at four objects embedded in displays of different complexity and indicated the presence or absence of the target object. During filler trials the target objects were present, but during experimental trials they were absent and the display contained various competitor objects. For example, given the target word “beaker”, the display contained a phonological (a beaver, bever), a shape (a bobbin, klos), a semantic (a fork, vork) competitor, and an unrelated distractor (an umbrella, paraplu). When objects were presented in simple four-object displays (Experiment 2), there were clear attentional biases to all three types of competitors replicating earlier research (Huettig and McQueen, 2007). When the objects were embedded in complex scenes including four human-like characters or four meaningless visual shapes (Experiments 1, 3), there were biases in looks to visual and semantic but not to phonological competitors. In both experiments, however, we observed evidence for inhibition in looks to phonological competitors, which suggests that the phonological forms of the objects nevertheless had been retrieved. These findings suggest that phonological word-object mapping is contingent upon the nature of the visual environment and add to a growing body of evidence that the nature of our visual surroundings induces particular modes of processing during language-mediated visual search. -
Huettig, F., & Brouwer, S. (2015). Delayed anticipatory spoken language processing in adults with dyslexia - Evidence from eye-tracking. Dyslexia, 21(2), 97-122. doi:10.1002/dys.1497.
Abstract
It is now well-established that anticipation of up-coming input is a key characteristic of spoken language comprehension. It has also frequently been observed that literacy influences spoken language processing. Here we investigated whether anticipatory spoken language processing is related to individuals’ word reading abilities. Dutch adults with dyslexia and a control group participated in two eye-tracking experiments. Experiment 1 was conducted to assess whether adults with dyslexia show the typical language-mediated eye gaze patterns. Eye movements of both adults with and without dyslexia closely replicated earlier research: spoken language is used to direct attention to relevant objects in the environment in a closely time-locked manner. In Experiment 2, participants received instructions (e.g., "Kijk naar de[COM] afgebeelde piano[COM]", look at the displayed piano) while viewing four objects. Articles (Dutch “het” or “de”) were gender-marked such that the article agreed in gender only with the target and thus participants could use gender information from the article to predict the target object. The adults with dyslexia anticipated the target objects but much later than the controls. Moreover, participants' word reading scores correlated positively with their anticipatory eye movements. We conclude by discussing the mechanisms by which reading abilities may influence predictive language processing. -
Huettig, F. (2015). Four central questions about prediction in language processing. Brain Research, 1626, 118-135. doi:10.1016/j.brainres.2015.02.014.
Abstract
The notion that prediction is a fundamental principle of human information processing has been en vogue over recent years. The investigation of language processing may be particularly illuminating for testing this claim. Linguists traditionally have argued prediction plays only a minor role during language understanding because of the vast possibilities available to the language user as each word is encountered. In the present review I consider four central questions of anticipatory language processing: Why (i.e. what is the function of prediction in language processing)? What (i.e. what are the cues used to predict up-coming linguistic information and what type of representations are predicted)? How (what mechanisms are involved in predictive language processing and what is the role of possible mediating factors such as working memory)? When (i.e. do individuals always predict up-coming input during language processing)? I propose that prediction occurs via a set of diverse PACS (production-, association-, combinatorial-, and simulation-based prediction) mechanisms which are minimally required for a comprehensive account of predictive language processing. Models of anticipatory language processing must be revised to take multiple mechanisms, mediating factors, and situational context into account. Finally, I conjecture that the evidence considered here is consistent with the notion that prediction is an important aspect but not a fundamental principle of language processing. -
Huettig, F., Srinivasan, N., & Mishra, R. (2015). Introduction to 'Attention and vision in language processing'. In R. Mishra, N. Srinivasan, & F. Huettig (Eds.), Attention and vision in language processing (pp. V-IX). Berlin: Springer. -
Huettig, F. (2015). Literacy influences cognitive abilities far beyond the mastery of written language. In I. van de Craats, J. Kurvers, & R. van Hout (Eds.), Adult literacy, second language, and cognition. LESLLA Proceedings 2014. Nijmegen: Centre for Language Studies.
Abstract
Recent experimental evidence from cognitive psychology and cognitive neuroscience shows that reading acquisition has non-trivial consequences for cognitive processes other than reading per se. In the present chapter I present evidence from three areas of cognition: phonological processing, prediction in language processing, and visual search. These findings suggest that the influences of literacy on cognition are far-reaching. This implies that a good understanding of the dramatic impact of literacy acquisition on the human mind is an important prerequisite for successful education policy development and guidance of educational support. -
Mishra, R., Srinivasan, N., & Huettig, F. (Eds.). (2015). Attention and vision in language processing. Berlin: Springer. doi:10.1007/978-81-322-2443-3. -
Rommers, J., Meyer, A. S., & Huettig, F. (2015). Verbal and nonverbal predictors of language-mediated anticipatory eye movements. Attention, Perception & Psychophysics, 77(3), 720-730. doi:10.3758/s13414-015-0873-x.
Abstract
During language comprehension, listeners often anticipate upcoming information. This can draw listeners’ overt attention to visually presented objects before the objects are referred to. We investigated to what extent the anticipatory mechanisms involved in such language-mediated attention rely on specific verbal factors and on processes shared with other domains of cognition. Participants listened to sentences ending in a highly predictable word (e.g., “In 1969 Neil Armstrong was the first man to set foot on the moon”) while viewing displays containing three unrelated distractor objects and a critical object, which was either the target object (e.g., a moon), or an object with a similar shape (e.g., a tomato), or an unrelated control object (e.g., rice). Language-mediated anticipatory eye movements to targets and shape competitors were observed. Importantly, looks to the shape competitor were systematically related to individual differences in anticipatory attention, as indexed by a spatial cueing task: Participants whose responses were most strongly facilitated by predictive arrow cues also showed the strongest effects of predictive language input on their eye movements. By contrast, looks to the target were related to individual differences in vocabulary size and verbal fluency. The results suggest that verbal and nonverbal factors contribute to different types of language-mediated eye movement. The findings are consistent with multiple-mechanism accounts of predictive language processing. -
Guerra, E., Huettig, F., & Knoeferle, P. (2014). Assessing the time course of the influence of featural, distributional and spatial representations during reading. In P. Bello, M. Guarini, M. McShane, & B. Scassellati (Eds.), Proceedings of the 36th Annual Meeting of the Cognitive Science Society (CogSci 2014) (pp. 2309-2314). Austin, TX: Cognitive Science Society. Retrieved from https://mindmodeling.org/cogsci2014/papers/402/.
Abstract
What does semantic similarity between two concepts mean? How could we measure it? The way in which semantic similarity is calculated might differ depending on the theoretical notion of semantic representation. In an eye-tracking reading experiment, we investigated whether two widely used semantic similarity measures (based on featural or distributional representations) have distinctive effects on sentence reading times. In other words, we explored whether these measures of semantic similarity differ qualitatively. In addition, we examined whether visually perceived spatial distance interacts with either or both of these measures. Our results showed that the effect of featural and distributional representations on reading times can differ both in direction and in its time course. Moreover, both featural and distributional information interacted with spatial distance, yet in different sentence regions and reading measures. We conclude that featural and distributional representations are distinct components of semantic representation. -
Huettig, F. (2014). Role of prediction in language learning. In P. J. Brooks, & V. Kempe (Eds.), Encyclopedia of language development (pp. 479-481). London: Sage Publications. -
Huettig, F., & Mishra, R. K. (2014). How literacy acquisition affects the illiterate mind - A critical examination of theories and evidence. Language and Linguistics Compass, 8(10), 401-427. doi:10.1111/lnc3.12092.
Abstract
At present, more than one-fifth of humanity is unable to read and write. We critically examine experimental evidence and theories of how (il)literacy affects the human mind. In our discussion we show that literacy has significant cognitive consequences that go beyond the processing of written words and sentences. Thus, cultural inventions such as reading shape general cognitive processing in non-trivial ways. We suggest that this has important implications for educational policy and guidance as well as research into cognitive processing and brain functioning. -
Mani, N., & Huettig, F. (2014). Word reading skill predicts anticipation of upcoming spoken language input: A study of children developing proficiency in reading. Journal of Experimental Child Psychology, 126, 264-279. doi:10.1016/j.jecp.2014.05.004.
Abstract
Despite the efficiency with which language users typically process spoken language, a growing body of research finds substantial individual differences in both the speed and accuracy of spoken language processing potentially attributable to participants’ literacy skills. Against this background, the current study takes a look at the role of word reading skill in listeners’ anticipation of upcoming spoken language input in children at the cusp of learning to read: if reading skills impact predictive language processing, then children at this stage of literacy acquisition should be most susceptible to the effects of reading skills on spoken language processing. We tested 8-year-old children on their prediction of upcoming spoken language input in an eye-tracking task. While children, as in previous studies to date, were successfully able to anticipate upcoming spoken language input, there was a strong positive correlation between children’s word reading (but not their pseudo-word reading and meta-phonological awareness or their spoken word recognition) skills and their prediction skills. We suggest that these findings are most compatible with the notion that the process of learning orthographic representations during reading acquisition sharpens pre-existing lexical representations which in turn also supports anticipation of upcoming spoken words. -
McQueen, J. M., & Huettig, F. (2014). Interference of spoken word recognition through phonological priming from visual objects and printed words. Attention, Perception & Psychophysics, 76, 190-200. doi:10.3758/s13414-013-0560-8.
Abstract
Three cross-modal priming experiments examined the influence of pre-exposure to pictures and printed words on the speed of spoken word recognition. Targets for auditory lexical decision were spoken Dutch words and nonwords, presented in isolation (Experiments 1 and 2) or after a short phrase (Experiment 3). Auditory stimuli were preceded by primes which were pictures (Experiments 1 and 3) or those pictures’ printed names (Experiment 2). Prime-target pairs were phonologically onset-related (e.g., pijl-pijn, arrow-pain), were from the same semantic category (e.g., pijl-zwaard, arrow-sword), or were unrelated on both dimensions. Phonological interference and semantic facilitation were observed in all experiments. Priming magnitude was similar for pictures and printed words, and did not vary with picture viewing time or number of pictures in the display (either one or four). These effects arose even though participants were not explicitly instructed to name the pictures and where strategic naming would interfere with lexical decision-making. This suggests that, by default, processing of related pictures and printed words influences how quickly we recognize related spoken words. -
Olivers, C. N. L., Huettig, F., Singh, J. P., & Mishra, R. K. (2014). The influence of literacy on visual search. Visual Cognition, 21, 74-101. doi:10.1080/13506285.2013.875498.
Abstract
Currently one in five adults is still unable to read despite a rapidly developing world. Here we show that (il)literacy has important consequences for the cognitive ability of selecting relevant information from a visual display of non-linguistic material. In two experiments we compared low to high literacy observers on both an easy and a more difficult visual search task involving different types of chicken. Low literates were consistently slower (as indicated by overall RTs) in both experiments. More detailed analyses, including eye movement measures, suggest that the slowing is partly due to display wide (i.e. parallel) sensory processing but mainly due to post-selection processes, as low literates needed more time between fixating the target and generating a manual response. Furthermore, high and low literacy groups differed in the way search performance was distributed across the visual field. High literates performed relatively better when the target was presented in central regions, especially on the right. At the same time, high literacy was also associated with a more general bias towards the top and the left, especially in the more difficult search. We conclude that learning to read results in an extension of the functional visual field from the fovea to parafoveal areas, combined with some asymmetry in scan pattern influenced by the reading direction, both of which also influence other (e.g. non-linguistic) tasks such as visual search. -
Smith, A. C., Monaghan, P., & Huettig, F. (2014). Examining strains and symptoms of the ‘Literacy Virus’: The effects of orthographic transparency on phonological processing in a connectionist model of reading. In P. Bello, M. Guarini, M. McShane, & B. Scassellati (Eds.), Proceedings of the 36th Annual Meeting of the Cognitive Science Society (CogSci 2014). Austin, TX: Cognitive Science Society.
Abstract
The effect of literacy on phonological processing has been described in terms of a virus that “infects all speech processing” (Frith, 1998). Empirical data has established that literacy leads to changes to the way in which phonological information is processed. Harm & Seidenberg (1999) demonstrated that a connectionist network trained to map between English orthographic and phonological representations displays more componential phonological processing than a network trained only to stably represent the phonological forms of words. Within this study we use a similar model yet manipulate the transparency of orthographic-to-phonological mappings. We observe that networks trained on a transparent orthography are better at restoring phonetic features and phonemes. However, networks trained on non-transparent orthographies are more likely to restore corrupted phonological segments with legal, coarser linguistic units (e.g. onset, coda). Our study therefore provides an explicit description of how differences in orthographic transparency can lead to varying strains and symptoms of the ‘literacy virus’. -
Smith, A. C., Monaghan, P., & Huettig, F. (2014). A comprehensive model of spoken word recognition must be multimodal: Evidence from studies of language-mediated visual attention. In P. Bello, M. Guarini, M. McShane, & B. Scassellati (Eds.), Proceedings of the 36th Annual Meeting of the Cognitive Science Society (CogSci 2014). Austin, TX: Cognitive Science Society.
Abstract
When processing language, the cognitive system has access to information from a range of modalities (e.g. auditory, visual) to support language processing. Language-mediated visual attention studies have shown sensitivity of the listener to phonological, visual, and semantic similarity when processing a word. In a computational model of language-mediated visual attention that models spoken word processing as the parallel integration of information from phonological, semantic and visual processing streams, we simulate such effects of competition within modalities. Our simulations raised untested predictions about stronger and earlier effects of visual and semantic similarity compared to phonological similarity around the rhyme of the word. Two visual world studies confirmed these predictions. The model and behavioral studies suggest that, during spoken word comprehension, multimodal information can be recruited rapidly to constrain lexical selection to the extent that phonological rhyme information may exert little influence on this process. -
Smith, A. C., Monaghan, P., & Huettig, F. (2014). Modelling language – vision interactions in the hub and spoke framework. In J. Mayor, & P. Gomez (Eds.), Computational Models of Cognitive Processes: Proceedings of the 13th Neural Computation and Psychology Workshop (NCPW13) (pp. 3-16). Singapore: World Scientific Publishing.
Abstract
Multimodal integration is a central characteristic of human cognition. However, our understanding of the interaction between modalities and its influence on behaviour is still in its infancy. This paper examines the value of the Hub & Spoke framework (Plaut, 2002; Rogers et al., 2004; Dilkina et al., 2008; 2010) as a tool for exploring multimodal interaction in cognition. We present a Hub and Spoke model of language–vision information interaction and report the model’s ability to replicate a range of phonological, visual and semantic similarity word-level effects reported in the Visual World Paradigm (Cooper, 1974; Tanenhaus et al, 1995). The model provides an explicit connection between the percepts of language and the distribution of eye gaze and demonstrates the scope of the Hub-and-Spoke architectural framework by modelling new aspects of multimodal cognition. -
Smith, A. C., Monaghan, P., & Huettig, F. (2014). Literacy effects on language and vision: Emergent effects from an amodal shared resource (ASR) computational model. Cognitive Psychology, 75, 28-54. doi:10.1016/j.cogpsych.2014.07.002.
Abstract
Learning to read and write requires an individual to connect additional orthographic representations to pre-existing mappings between phonological and semantic representations of words. Past empirical results suggest that the process of learning to read and write (at least in alphabetic languages) elicits changes in the language processing system, by either increasing the cognitive efficiency of mapping between representations associated with a word, or by changing the granularity of phonological processing of spoken language, or through a combination of both. Behavioural effects of literacy have typically been assessed in offline explicit tasks that have addressed only phonological processing. However, a recent eye tracking study compared high and low literate participants on effects of phonology and semantics in processing measured implicitly using eye movements. High literates’ eye movements were more affected by phonological overlap in online speech than low literates, with only subtle differences observed in semantics. We determined whether these effects were due to cognitive efficiency and/or granularity of speech processing in a multimodal model of speech processing – the amodal shared resource model (ASR, Smith, Monaghan, & Huettig, 2013). We found that cognitive efficiency in the model had only a marginal effect on semantic processing and did not affect performance for phonological processing, whereas fine-grained versus coarse-grained phonological representations in the model simulated the high/low literacy effects on phonological processing, suggesting that literacy has a focused effect in changing the grain-size of phonological mappings. -
Brouwer, S., Mitterer, H., & Huettig, F. (2013). Discourse context and the recognition of reduced and canonical spoken words. Applied Psycholinguistics, 34, 519-539. doi:10.1017/S0142716411000853.
Abstract
In two eye-tracking experiments we examined whether wider discourse information helps the recognition of reduced pronunciations (e.g., 'puter') more than the recognition of canonical pronunciations of spoken words (e.g., 'computer'). Dutch participants listened to sentences from a casual speech corpus containing canonical and reduced target words. Target word recognition was assessed by measuring eye fixation proportions to four printed words on a visual display: the target, a "reduced form" competitor, a "canonical form" competitor and an unrelated distractor. Target sentences were presented in isolation or with a wider discourse context. Experiment 1 revealed that target recognition was facilitated by wider discourse information. Importantly, the recognition of reduced forms improved significantly when preceded by strongly rather than by weakly supportive discourse contexts. This was not the case for canonical forms: listeners' target word recognition was not dependent on the degree of supportive context. Experiment 2 showed that the differential context effects in Experiment 1 were not due to an additional amount of speaker information. Thus, these data suggest that in natural settings a strongly supportive discourse context is more important for the recognition of reduced forms than the recognition of canonical forms. -
Gauvin, H. S., Hartsuiker, R. J., & Huettig, F. (2013). Speech monitoring and phonologically-mediated eye gaze in language perception and production: A comparison using printed word eye-tracking. Frontiers in Human Neuroscience, 7: 818. doi:10.3389/fnhum.2013.00818.
Abstract
The Perceptual Loop Theory of speech monitoring assumes that speakers routinely inspect their inner speech. In contrast, Huettig and Hartsuiker (2010) observed that listening to one’s own speech during language production drives eye-movements to phonologically related printed words with a similar time-course as listening to someone else’s speech does in speech perception experiments. This suggests that speakers listen to their own overt speech, but not to their inner speech. However, a direct comparison between production and perception with the same stimuli and participants is lacking so far. The current printed word eye-tracking experiment therefore used a within-subjects design, combining production and perception. Displays showed four words, of which one, the target, either had to be named or was presented auditorily. Accompanying words were phonologically related, semantically related, or unrelated to the target. There were small increases in looks to phonological competitors with a similar time-course in both production and perception. Phonological effects in perception however lasted longer and had a much larger magnitude. We conjecture that this difference is related to a difference in predictability of one’s own and someone else’s speech, which in turn has consequences for lexical competition in other-perception and possibly suppression of activation in self-perception. -
Huettig, F. (2013). Young children’s use of color information during language-vision mapping. In B. R. Kar (Ed.), Cognition and brain development: Converging evidence from various methodologies (pp. 368-391). Washington, DC: American Psychological Association Press. -
Mani, N., & Huettig, F. (2013). Towards a complete multiple-mechanism account of predictive language processing [Commentary on Pickering & Garrod]. Behavioral and Brain Sciences, 36, 365-366. doi:10.1017/S0140525X12002646.
Abstract
Although we agree with Pickering & Garrod (P&G) that prediction-by-simulation and prediction-by-association are important mechanisms of anticipatory language processing, this commentary suggests that they: (1) overlook other potential mechanisms that might underlie prediction in language processing, (2) overestimate the importance of prediction-by-association in early childhood, and (3) underestimate the complexity and significance of several factors that might mediate prediction during language processing. -
Mani, N., Johnson, E., McQueen, J. M., & Huettig, F. (2013). How yellow is your banana? Toddlers' language-mediated visual search in referent-present tasks. Developmental Psychology, 49, 1036-1044. doi:10.1037/a0029382.
Abstract
What is the relative salience of different aspects of word meaning in the developing lexicon? The current study examines the time-course of retrieval of semantic and color knowledge associated with words during toddler word recognition: at what point do toddlers orient towards an image of a yellow cup upon hearing color-matching words such as “banana” (typically yellow) relative to unrelated words (e.g., “house”)? Do children orient faster to semantic matching images relative to color matching images, e.g., orient faster to an image of a cookie relative to a yellow cup upon hearing the word “banana”? The results strongly suggest a prioritization of semantic information over color information in children’s word-referent mappings. This indicates that, even for natural objects (e.g., food, animals that are more likely to have a prototypical color), semantic knowledge is a more salient aspect of toddlers’ word meaning than color knowledge. For 24-month-old Dutch toddlers, bananas are thus more edible than they are yellow. -
Mishra, R. K., Olivers, C. N. L., & Huettig, F. (2013). Spoken language and the decision to move the eyes: To what extent are language-mediated eye movements automatic? In V. S. C. Pammi, & N. Srinivasan (Eds.), Progress in Brain Research: Decision making: Neural and behavioural approaches (pp. 135-149). New York: Elsevier.
Abstract
Recent eye-tracking research has revealed that spoken language can guide eye gaze very rapidly (and closely time-locked to the unfolding speech) toward referents in the visual world. We discuss whether, and to what extent, such language-mediated eye movements are automatic rather than subject to conscious and controlled decision-making. We consider whether language-mediated eye movements adhere to four main criteria of automatic behavior, namely, whether they are fast and efficient, unintentional, unconscious, and overlearned (i.e., arrived at through extensive practice). Current evidence indicates that language-driven oculomotor behavior is fast but not necessarily always efficient. It seems largely unintentional though there is also some evidence that participants can actively use the information in working memory to avoid distraction in search. Language-mediated eye movements appear to be for the most part unconscious and have all the hallmarks of an overlearned behavior. These data are suggestive of automatic mechanisms linking language to potentially referred-to visual objects, but more comprehensive and rigorous testing of this hypothesis is needed. -
Rommers, J., Meyer, A. S., & Huettig, F. (2013). Object shape and orientation do not routinely influence performance during language processing. Psychological Science, 24, 2218-2225. doi:10.1177/0956797613490746.
Abstract
The role of visual representations during language processing remains unclear: They could be activated as a necessary part of the comprehension process, or they could be less crucial and influence performance in a task-dependent manner. In the present experiments, participants read sentences about an object. The sentences implied that the object had a specific shape or orientation. They then either named a picture of that object (Experiments 1 and 3) or decided whether the object had been mentioned in the sentence (Experiment 2). Orientation information did not reliably influence performance in any of the experiments. Shape representations influenced performance most strongly when participants were asked to compare a sentence with a picture or when they were explicitly asked to use mental imagery while reading the sentences. Thus, in contrast to previous claims, implied visual information often does not contribute substantially to the comprehension process during normal reading.
Additional information
DS_10.1177_0956797613490746.pdf -
Rommers, J., Meyer, A. S., Praamstra, P., & Huettig, F. (2013). The contents of predictions in sentence comprehension: Activation of the shape of objects before they are referred to. Neuropsychologia, 51(3), 437-447. doi:10.1016/j.neuropsychologia.2012.12.002.
Abstract
When comprehending concrete words, listeners and readers can activate specific visual information such as the shape of the words’ referents. In two experiments we examined whether such information can be activated in an anticipatory fashion. In Experiment 1, listeners’ eye movements were tracked while they were listening to sentences that were predictive of a specific critical word (e.g., “moon” in “In 1969 Neil Armstrong was the first man to set foot on the moon”). 500 ms before the acoustic onset of the critical word, participants were shown four-object displays featuring three unrelated distractor objects and a critical object, which was either the target object (e.g., moon), an object with a similar shape (e.g., tomato), or an unrelated control object (e.g., rice). In a time window before shape information from the spoken target word could be retrieved, participants already tended to fixate both the target and the shape competitors more often than they fixated the control objects, indicating that they had anticipatorily activated the shape of the upcoming word's referent. This was confirmed in Experiment 2, which was an ERP experiment without picture displays. Participants listened to the same lead-in sentences as in Experiment 1. The sentence-final words corresponded to the predictable target, the shape competitor, or the unrelated control object (yielding, for instance, “In 1969 Neil Armstrong was the first man to set foot on the moon/tomato/rice”). N400 amplitude in response to the final words was significantly attenuated in the shape-related compared to the unrelated condition. Taken together, these results suggest that listeners can activate perceptual attributes of objects before they are referred to in an utterance. -
Smith, A. C., Monaghan, P., & Huettig, F. (2013). An amodal shared resource model of language-mediated visual attention. Frontiers in Psychology, 4: 528. doi:10.3389/fpsyg.2013.00528.
Abstract
Language-mediated visual attention describes the interaction of two fundamental components of the human cognitive system, language and vision. Within this paper we present an amodal shared resource model of language-mediated visual attention that offers a description of the information and processes involved in this complex multimodal behavior and a potential explanation for how this ability is acquired. We demonstrate that the model is not only sufficient to account for the experimental effects of Visual World Paradigm studies but also that these effects are emergent properties of the architecture of the model itself, rather than requiring separate information processing channels or modular processing systems. The model provides an explicit description of the connection between the modality-specific input from language and vision and the distribution of eye gaze in language-mediated visual attention. The paper concludes by discussing future applications for the model, specifically its potential for investigating the factors driving observed individual differences in language-mediated eye gaze. -
Brouwer, S., Mitterer, H., & Huettig, F. (2012). Speech reductions change the dynamics of competition during spoken word recognition. Language and Cognitive Processes, 27(4), 539-571. doi:10.1080/01690965.2011.555268.
Abstract
Three eye-tracking experiments investigated how phonological reductions (e.g., "puter" for "computer") modulate phonological competition. Participants listened to sentences extracted from a spontaneous speech corpus and saw four printed words: a target (e.g., "computer"), a competitor similar to the canonical form (e.g., "companion"), one similar to the reduced form (e.g., "pupil"), and an unrelated distractor. In Experiment 1, we presented canonical and reduced forms in a syllabic and in a sentence context. Listeners directed their attention to a similar degree to both competitors independent of the target's spoken form. In Experiment 2, we excluded reduced forms and presented canonical forms only. In such a listening situation, participants showed a clear preference for the "canonical form" competitor. In Experiment 3, we presented canonical forms intermixed with reduced forms in a sentence context and replicated the competition pattern of Experiment 1. These data suggest that listeners penalize acoustic mismatches less strongly when listening to reduced speech than when listening to fully articulated speech. We conclude that flexibility to adjust to speech-intrinsic factors is a key feature of the spoken word recognition system. -
Brouwer, S., Mitterer, H., & Huettig, F. (2012). Can hearing puter activate pupil? Phonological competition and the processing of reduced spoken words in spontaneous conversations. Quarterly Journal of Experimental Psychology, 65, 2193-2220. doi:10.1080/17470218.2012.693109.
Abstract
In listeners' daily communicative exchanges, they most often hear casual speech, in which words are often produced with fewer segments, rather than the careful speech used in most psycholinguistic experiments. Three experiments examined phonological competition during the recognition of reduced forms such as [pjutər] for computer using a target-absent variant of the visual world paradigm. Listeners' eye movements were tracked upon hearing canonical and reduced forms as they looked at displays of four printed words. One of the words was phonologically similar to the canonical pronunciation of the target word, one word was similar to the reduced pronunciation, and two words served as unrelated distractors. When spoken targets were presented in isolation (Experiment 1) and in sentential contexts (Experiment 2), competition was modulated as a function of the target word form. When reduced targets were presented in sentential contexts, listeners were probabilistically more likely to first fixate reduced-form competitors before shifting their eye gaze to canonical-form competitors. Experiment 3, in which the original /p/ from [pjutər] was replaced with a “real” onset /p/, showed an effect of cross-splicing in the late time window. We conjecture that these results fit best with the notion that speech reductions initially activate competitors that are similar to the phonological surface form of the reduction, but that listeners nevertheless can exploit fine phonetic detail to reconstruct strongly reduced forms to their canonical counterparts.