Alagöz, G., Eising, E., Mekki, Y., Bignardi, G., Fontanillas, P., 23andMe Research Team, Nivard, M. G., Luciano, M., Cox, N. J., Fisher, S. E., & Gordon, R. L. (2025). The shared genetic architecture and evolution of human language and musical rhythm. Nature Human Behaviour, 9, 376-390. doi:10.1038/s41562-024-02051-y.
Abstract
Rhythm and language-related traits are phenotypically correlated, but their genetic overlap is largely unknown. Here, we leveraged two large-scale genome-wide association studies of rhythm (N=606,825) and dyslexia (N=1,138,870) to shed light on their shared genetics. Our results reveal an intricate shared genetic and neurobiological architecture, and lay groundwork for resolving longstanding debates about the potential co-evolution of human language and musical traits. -
Alcock, K., Meints, K., & Rowland, C. F. (2025). Gesture screening in young infants: Highly sensitive to risk factors for communication delay. International Journal of Language & Communication Disorders, 60(1): e13150. doi:10.1111/1460-6984.13150.
Abstract
Introduction
Children's early language and communication skills are efficiently measured using parent report, for example, communicative development inventories (CDIs). These have scalable potential to determine risk of later language delay, and associations between delay and risk factors such as prematurity and poverty. However, there may be measurement difficulties in parent reports, including anomalous directions of association between child age/socioeconomic status and reported language. Findings vary on whether parents may report older infants as having smaller vocabularies than younger infants, for example.
Methods
We analysed data from the UK Communicative Development Inventory (Words and Gestures); UK-CDI (W&G) to determine whether anomalous associations would be replicated in this population, and/or with gesture. In total 1204 families of children aged 8–18 months (598 girls, matched to UK population for income, parental education and ethnicity as far as possible) completed Vocabulary and Gesture scales of the UK-CDI (W&G).
Results
Overall scores on the Gesture scale showed more significant relationships with biological risk factors including prematurity than did Vocabulary scores. Gesture also showed more straightforward relationships with social risk factors including income. Relationships between vocabulary and social risk factors were less straightforward; some at-risk groups reported higher vocabulary scores than other groups.
Discussion
We conclude that vocabulary report may be less accurate than gesture report at this age. Parents have greater knowledge of language milestones than of gesture milestones, and hence may report expectations for vocabulary rather than observed vocabulary. We also conclude that gesture should be included in early language scales, partly because of its greater, more straightforward association with many risk factors for language delay.
-
Alibali, M. W., Kita, S., & Young, A. J. (2000). Gesture and the process of speech production: We think, therefore we gesture. Language and Cognitive Processes, 15(6), 593-613. doi:10.1080/016909600750040571.
Abstract
At what point in the process of speech production is gesture involved? According to the Lexical Retrieval Hypothesis, gesture is involved in generating the surface forms of utterances. Specifically, gesture facilitates access to items in the mental lexicon. According to the Information Packaging Hypothesis, gesture is involved in the conceptual planning of messages. Specifically, gesture helps speakers to 'package' spatial information into verbalisable units. We tested these hypotheses in 5-year-old children, using two tasks that required comparable lexical access, but different information packaging. In the explanation task, children explained why two items did or did not have the same quantity (Piagetian conservation). In the description task, children described how two items looked different. Children provided comparable verbal responses across tasks; thus, lexical access was comparable. However, the demands for information packaging differed. Participants' gestures also differed across the tasks. In the explanation task, children produced more gestures that conveyed perceptual dimensions of the objects, and more gestures that conveyed information that differed from the accompanying speech. The results suggest that gesture is involved in the conceptual planning of speech. -
Ameka, F. K. (1999). [Review of M. E. Kropp Dakubu: Korle meets the sea: a sociolinguistic history of Accra]. Bulletin of the School of Oriental and African Studies, 62, 198-199. doi:10.1017/S0041977X0001836X.
-
Ameka, F. K. (1999). Interjections. In K. Brown, & J. Miller (Eds.), Concise encyclopedia of grammatical categories (pp. 213-216). Oxford: Elsevier. -
Ameka, F. K. (1999). Partir c'est mourir un peu: Universal and culture specific features of leave taking. RASK International Journal of Language and Communication, 9/10, 257-283.
-
Ameka, F. K. (1999). Spatial information packaging in Ewe and Likpe: A comparative perspective. Frankfurter Afrikanistische Blätter, 11, 7-34.
-
Ameka, F. K. (1999). The typology and semantics of complex nominal duplication in Ewe. Anthropological Linguistics, 41, 75-106.
-
Ameka, F. K., De Witte, C., & Wilkins, D. (1999). Picture series for positional verbs: Eliciting the verbal component in locative descriptions. In D. Wilkins (Ed.), Manual for the 1999 Field Season (pp. 48-54). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.2573831.
Abstract
How do different languages encode location and position meanings? In conjunction with the BowPed picture series and Caused Positions task, this elicitation tool is designed to help researchers (i) identify a language’s resources for encoding topological relations; (ii) delimit the pragmatics of use of such resources; and (iii) determine the semantics of select spatial terms. The task focuses on the exploration of the predicative component of topological expressions (e.g., ‘the cassavas are lying in the basket’), especially the contrastive elicitation of positional verbs. The materials consist of a set of photographs of objects (e.g., bottles, cloths, sticks) in specific configurations with various ground items (e.g., basket, table, tree).
Additional information
1999_Positional_verbs_stimuli.zip -
Bastiaansen, M. C. M., & Knösche, T. R. (2000). MEG tangential derivative mapping applied to Event-Related Desynchronization (ERD) research. Clinical Neurophysiology, 111, 1300-1305.
Abstract
Objectives: A problem with the topographic mapping of MEG data recorded with axial gradiometers is that field extrema are measured at sensors located at either side of a neuronal generator instead of at sensors directly above the source. This is problematic for the computation of event-related desynchronization (ERD) on MEG data, since ERD relies on a correspondence between the signal maximum and the location of the neuronal generator. Methods: We present a new method based on computing spatial derivatives of the MEG data. The limitations of this method were investigated by means of forward simulations, and the method was applied to a 150-channel MEG dataset. Results: The simulations showed that the method has some limitations. (1) Fewer channels reduce accuracy and amplitude. (2) It is less suitable for deep or very extended sources. (3) Multiple sources can only be distinguished if they are not too close to each other. Applying the method in the calculation of ERD on experimental data led to a considerable improvement of the ERD maps. Conclusions: The proposed method offers a significant advantage over raw MEG signals, both for the topographic mapping of MEG and for the analysis of rhythmic MEG activity by means of ERD. -
Bastiaansen, M. C. M., Böcker, K. B. E., Cluitmans, P. J. M., & Brunia, C. H. M. (1999). Event-related desynchronization related to the anticipation of a stimulus providing knowledge of results. Clinical Neurophysiology, 110, 250-260.
Abstract
In the present paper, event-related desynchronization (ERD) in the alpha and beta frequency bands is quantified in order to investigate the processes related to the anticipation of a knowledge of results (KR) stimulus. In a time estimation task, 10 subjects were instructed to press a button 4 s after the presentation of an auditory stimulus. Two seconds after the response they received auditory or visual feedback on the timing of their response. Preceding the button press, a centrally maximal ERD is found. Preceding the visual KR stimulus, an ERD is present that has an occipital maximum. Contrary to expectation, preceding the auditory KR stimulus there are no signs of a modality-specific ERD. Results are related to a thalamo-cortical gating model which predicts a correspondence between negative slow potentials and ERD during motor preparation and stimulus anticipation. -
Bauer, B. L. M. (2000). Archaic syntax in Indo-European: The spread of transitivity in Latin and French. Berlin: Mouton de Gruyter.
Abstract
Several grammatical features in early Indo-European traditionally have not been understood. Although Latin, for example, was a nominative language, a number of its inherited characteristics do not fit that typology and are difficult to account for, such as stative mihi est constructions to express possession, impersonal verbs, or absolute constructions. With time these archaic features have been replaced by transitive structures (e.g. possessive ‘have’). This book presents an extensive comparative and historical analysis of archaic features in early Indo-European languages and their gradual replacement in the history of Latin and early Romance, showing that the new structures feature transitive syntax and fit the patterns of a nominative language. -
Bauer, B. L. M. (1999). Aspects of impersonal constructions in Late Latin. In H. Petersmann, & R. Kettelmann (Eds.), Latin vulgaire – latin tardif V (pp. 209-211). Heidelberg: Winter. -
Bauer, B. L. M. (2000). From Latin to French: The linear development of word order. In B. Bichakjian, T. Chernigovskaya, A. Kendon, & A. Müller (Eds.), Becoming Loquens: More studies in language origins (pp. 239-257). Frankfurt am Main: Lang. -
Bauer, B. L. M. (1999). Impersonal HABET constructions: At the cross-roads of Indo-European innovation. In E. Polomé, & C. Justus (Eds.), Language change and typological variation. Vol II. Grammatical universals and typology (pp. 590-612). Washington: Institute for the Study of Man. -
Bavin, E. L., & Kidd, E. (2000). Learning new verbs: Beyond the input. In C. Davis, T. J. Van Gelder, & R. Wales (Eds.), Cognitive Science in Australia, 2000: Proceedings of the Fifth Biennial Conference of the Australasian Cognitive Science Society. -
Bignardi, G., Wesseldijk, L. W., Mas-Herrero, E., Zatorre, R. J., Ullén, F., Fisher, S. E., & Mosing, M. A. (2025). Twin modelling reveals partly distinct genetic pathways to music enjoyment. Nature Communications, 16: 2904. doi:10.1038/s41467-025-58123-8.
Abstract
Humans engage with music for various reasons that range from emotional regulation and relaxation to social bonding. While there are large inter-individual differences in how much humans enjoy music, little is known about the origins of those differences. Here, we disentangle the genetic factors underlying such variation. We collect data on several facets of music reward sensitivity, as measured by the Barcelona Music Reward Questionnaire, plus music perceptual abilities and general reward sensitivity from a large sample of Swedish twins (N = 9169; 2305 complete pairs). We estimate that genetic effects contribute up to 54% of the variability in music reward sensitivity, with 70% of these effects being independent of music perceptual abilities and general reward sensitivity. Furthermore, multivariate analyses show that genetic and environmental influences on the different facets of music reward sensitivity are partly distinct, uncovering distinct pathways to music enjoyment and different patterns of genetic associations with objectively assessed music perceptual abilities. These results paint a complex picture in which partially distinct sources of variation contribute to different aspects of musical enjoyment. -
Böcker, K. B. E., Bastiaansen, M. C. M., Vroomen, J., Brunia, C. H. M., & de Gelder, B. (1999). An ERP correlate of metrical stress in spoken word recognition. Psychophysiology, 36, 706-720. doi:10.1111/1469-8986.3660706.
Abstract
Rhythmic properties of spoken language such as metrical stress, that is, the alternation of strong and weak syllables, are important in speech recognition of stress-timed languages such as Dutch and English. Nineteen subjects listened passively to or discriminated actively between sequences of bisyllabic Dutch words, which started with either a weak or a strong syllable. Weak-initial words, which constitute 12% of the Dutch lexicon, evoked more negativity than strong-initial words in the interval between the P2 and N400 components of the auditory event-related potential. This negativity was denoted as N325. The N325 was larger during stress discrimination than during passive listening. N325 was also larger when a weak-initial word followed a sequence of strong-initial words than when it followed words with the same stress pattern. The latter difference was larger for listeners who performed well on stress discrimination. It was concluded that the N325 is probably a manifestation of the extraction of metrical stress from the acoustic signal and its transformation into task requirements. -
Bohnemeyer, J. (1999). A questionnaire on event integration. In D. Wilkins (Ed.), Manual for the 1999 Field Season (pp. 87-95). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.3002691.
Abstract
How do we decide where events begin and end? Like the ECOM clips, this questionnaire is designed to investigate how a language divides and/or integrates complex scenarios into sub-events and macro-events. The questionnaire focuses on events of motion, caused state change (e.g., breaking), and transfer (e.g., giving). It provides a checklist of scenarios that give insight into where a language “draws the line” in event integration, based on known cross-linguistic differences. -
Bohnemeyer, J. (2000). Event order in language and cognition. Linguistics in the Netherlands, 17(1), 1-16. doi:10.1075/avt.17.04boh.
-
Bohnemeyer, J. (1999). Event representation and event complexity: General introduction. In D. Wilkins (Ed.), Manual for the 1999 Field Season (pp. 69-73). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.3002741.
Abstract
How do we decide where events begin and end? In some languages it makes sense to say something like Dan broke the plate, but in other languages it is necessary to treat this action as a complex scenario composed of separate stages (Dan dropped the plate and then the plate broke). This document introduces issues concerning the linguistic and cognitive representations of event complexity and integration, and provides an overview of tasks that are relevant to this topic, including the ECOM clips, the Questionnaire on Event integration, and the Questionnaire on motion lexicalisation and motion description. -
Bohnemeyer, J., & Caelen, M. (1999). The ECOM clips: A stimulus for the linguistic coding of event complexity. In D. Wilkins (Ed.), Manual for the 1999 Field Season (pp. 74-86). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.874627.
Abstract
How do we decide where events begin and end? In some languages it makes sense to say something like Dan broke the plate, but in other languages it is necessary to treat this action as a complex scenario composed of separate stages (Dan dropped the plate and then the plate broke). The “Event Complexity” (ECOM) clips are designed to explore how languages differ in dividing and/or integrating complex scenarios into sub-events and macro-events. The stimuli consist of animated clips of geometric shapes that participate in different scenarios (e.g., a circle “hits” a triangle and “breaks” it). Consultants are asked to describe the scenes, and then to comment on possible alternative descriptions.
Additional information
1999_The_ECOM_clips.zip -
Bohnemeyer, J. (2000). Where do pragmatic meanings come from? In W. Spooren, T. Sanders, & C. van Wijk (Eds.), Samenhang in Diversiteit; Opstellen voor Leo Noorman, aangeboden bij gelegenheid van zijn zestigste verjaardag (pp. 137-153). -
Bowerman, M. (2000). Where do children's word meanings come from? Rethinking the role of cognition in early semantic development. In L. Nucci, G. Saxe, & E. Turiel (Eds.), Culture, thought and development (pp. 199-230). Mahwah, NJ: Lawrence Erlbaum. -
Brehm, L., Kennis, N., & Bergmann, C. (2025). When is a ranana a banana? Disentangling the mechanisms of error repair and word learning. Language, Cognition and Neuroscience. Advance online publication. doi:10.1080/23273798.2025.2463082.
Abstract
When faced with an ambiguous novel word such as ‘ranana’, how do listeners decide whether they heard a mispronunciation of a familiar target (‘banana’) or a label for an unfamiliar novel item? We examined this question by combining visual-world eye-tracking with an offline forced-choice judgment paradigm. In two studies, we show evidence that participants entertain repair and novel label interpretations of novel words that were created by editing a familiar target word in multiple phonetic features (Experiment 1) or a single phonetic feature (Experiment 2). Repair (‘ranana’ = a banana) and learning (‘ranana’ = a novel referent) were both common interpretation strategies, and learning was strongly associated with visual attention to the novel image after it was referred to in a sentence. This indicates that repair and learning are both valid strategies for understanding novel words that depend upon a set of similar mechanisms, and suggests that attention during listening is causally related to whether one learns or repairs.
Additional information
appendices -
Brown, C. M., Hagoort, P., & Ter Keurs, M. (1999). Electrophysiological signatures of visual lexical processing: Open- and closed-class words. Journal of Cognitive Neuroscience, 11(3), 261-281.
Abstract
This paper presents evidence bearing on the disputed existence of an electrophysiological marker for the lexical-categorical distinction between open- and closed-class words. Event-related brain potentials were recorded from the scalp while subjects read a story. Separate waveforms were computed for open- and closed-class words. Two aspects of the waveforms could be reliably related to vocabulary class. The first was an early negativity in the 230- to 350-msec epoch, with a bilateral anterior predominance. This negativity was elicited by open- and closed-class words alike, was not affected by word frequency or word length, and had an earlier peak latency for closed-class words. The second was a frontal slow negative shift in the 350- to 500-msec epoch, largest over the left side of the scalp. This late negativity was only elicited by closed-class words. Although the early negativity cannot serve as a qualitative marker of the open- and closed-class distinction, it does reflect the earliest electrophysiological manifestation of the availability of categorical information from the mental lexicon. These results suggest that the brain honors the distinction between open- and closed-class words, in relation to the different roles that they play in on-line sentence processing. -
Brown, C. M., Van Berkum, J. J. A., & Hagoort, P. (2000). Discourse before gender: An event-related brain potential study on the interplay of semantic and syntactic information during spoken language understanding. Journal of Psycholinguistic Research, 29(1), 53-68. doi:10.1023/A:1005172406969.
Abstract
A study is presented on the effects of discourse–semantic and lexical–syntactic information during spoken sentence processing. Event-related brain potentials (ERPs) were registered while subjects listened to discourses that ended in a sentence with a temporary syntactic ambiguity. The prior discourse–semantic information biased toward one analysis of the temporary ambiguity, whereas the lexical-syntactic information allowed only for the alternative analysis. The ERP results show that discourse–semantic information can momentarily take precedence over syntactic information, even if this violates grammatical gender agreement rules. -
Brown, C. M., Hagoort, P., & Chwilla, D. J. (2000). An event-related brain potential analysis of visual word priming effects. Brain and Language, 72, 158-190. doi:10.1006/brln.1999.2284.
Abstract
Two experiments are reported that provide evidence on task-induced effects during visual lexical processing in a prime–target semantic priming paradigm. The research focuses on target expectancy effects by manipulating the proportion of semantically related and unrelated word pairs. In Experiment 1, a lexical decision task was used and reaction times (RTs) and event-related brain potentials (ERPs) were obtained. In Experiment 2, subjects silently read the stimuli, without any additional task demands, and ERPs were recorded. The RT and ERP results of Experiment 1 demonstrate that an expectancy mechanism contributed to the priming effect when a high proportion of related word pairs was presented. The ERP results of Experiment 2 show that in the absence of extraneous task requirements, an expectancy mechanism is not active. However, a standard ERP semantic priming effect was obtained in Experiment 2. The combined results show that priming effects due to relatedness proportion are induced by task demands and are not a standard aspect of online lexical processing. -
Brown, P. (1999). Anthropologie cognitive. Anthropologie et Sociétés, 23(3), 91-119.
Abstract
In reaction to the dominance of universalism in the 1970s and '80s, there have recently been a number of reappraisals of the relation between language and cognition, and the field of cognitive anthropology is flourishing in several new directions in both America and Europe. This is partly due to a renewal and re-evaluation of approaches to the question of linguistic relativity associated with Whorf, and partly to the inspiration of modern developments in cognitive science. This review briefly sketches the history of cognitive anthropology and surveys current research on both sides of the Atlantic. The focus is on assessing current directions, considering in particular, by way of illustration, recent work in cultural models and on spatial language and cognition. The review concludes with an assessment of how cognitive anthropology could contribute directly both to the broader project of cognitive science and to the anthropological study of how cultural ideas and practices relate to structures and processes of human cognition. -
Brown, P. (2000). ‘He descended legs-upwards’: Position and motion in Tzeltal frog stories. In E. V. Clark (Ed.), Proceedings of the 30th Stanford Child Language Research Forum (pp. 67-75). Stanford: CSLI.
Abstract
How are events framed in narrative? Speakers of English (a 'satellite-framed' language), when 'reading' Mercer Mayer's wordless picture book 'Frog, Where Are You?', find the story self-evident: a boy has a dog and a pet frog; the frog escapes and runs away; the boy and dog look for it across hill and dale, through woods and over a cliff, until they find it and return home with a baby frog child of the original pet frog. In Tzeltal, as spoken in a Mayan community in southern Mexico, the story is somewhat different, because the language structures event descriptions differently. Tzeltal is in part a 'verb-framed' language with a set of Path-encoding motion verbs, so that the bare bones of the Frog story can consist of verbs translating as 'go'/'pass by'/'ascend'/'descend'/'arrive'/'return'. But Tzeltal also has satellite-framing adverbials, grammaticized from the same set of motion verbs, which encode the direction of motion or the orientation of static arrays. Furthermore, vivid pictorial detail is provided by positional verbs which can describe the position of the Figure as an outcome of a motion event; motion and stasis are thereby combined in a single event description. (For example: jipot jawal "he has been thrown (by the deer) lying_face_upwards_spread-eagled".) This paper compares the use of these three linguistic resources in Frog Story narratives from Tzeltal adults and children, looks at their development in children's narratives, and considers the results in relation to those from Berman and Slobin's (1996) comparative study of adult and child Frog stories. -
Brown, P., & Levinson, S. C. (2000). Frames of spatial reference and their acquisition in Tenejapan Tzeltal. In L. Nucci, G. Saxe, & E. Turiel (Eds.), Culture, thought, and development (pp. 167-197). Mahwah, NJ: Erlbaum. -
Brown, C. M., & Hagoort, P. (2000). On the electrophysiology of language comprehension: Implications for the human language system. In M. W. Crocker, M. Pickering, & C. Clifton jr. (Eds.), Architectures and mechanisms for language processing (pp. 213-237). Cambridge University Press. -
Brown, P. (1999). Repetition [Encyclopedia entry for 'Lexicon for the New Millenium', ed. Alessandro Duranti]. Journal of Linguistic Anthropology, 9(2), 223-226. doi:10.1525/jlin.1999.9.1-2.223.
Abstract
This is an encyclopedia entry describing conversational and interactional uses of linguistic repetition. -
Brown, C. M., & Hagoort, P. (1999). The cognitive neuroscience of language: Challenges and future directions. In C. M. Brown, & P. Hagoort (Eds.), The neurocognition of language (pp. 3-14). Oxford: Oxford University Press. -
Brown, P., & Levinson, S. C. (1999). Politeness: Some universals in language usage [Reprint]. In A. Jaworski, & N. Coupland (Eds.), The discourse reader (pp. 321-335). London: Routledge.
Abstract
This article is a reprint of chapter 1, the introduction to Brown and Levinson, 1987, Politeness: Some universals in language usage (Cambridge University Press). -
Brown, C. M., Hagoort, P., & Kutas, M. (2000). Postlexical integration processes during language comprehension: Evidence from brain-imaging research. In M. S. Gazzaniga (Ed.), The new cognitive neurosciences (2nd ed., pp. 881-895). Cambridge, MA: MIT Press. -
Bruggeman, L., Kidd, E., Nordlinger, R., & Cutler, A. (2025). Incremental processing in a polysynthetic language (Murrinhpatha). Cognition, 257: 106075. doi:10.1016/j.cognition.2025.106075.
Abstract
Language processing is rapidly incremental, but evidence bearing upon this assumption comes from very few languages. In this paper we report on a study of incremental processing in Murrinhpatha, a polysynthetic Australian language, which expresses complex sentence-level meanings in a single verb, the full meaning of which is not clear until the final morph. Forty native Murrinhpatha speakers participated in a visual world eyetracking experiment in which they viewed two complex scenes as they heard a verb describing one of the scenes. The scenes were selected so that the verb describing the target scene had either no overlap with a possible description of the competitor image, or overlapped from the start (onset overlap) or at the end of the verb (rhyme overlap). The results showed that, despite meaning only being clear at the end of the verb, Murrinhpatha speakers made incremental predictions that differed across conditions. The findings demonstrate that processing in polysynthetic languages is rapid and incremental, yet unlike in commonly studied languages like English, speakers make parsing predictions based on information associated with bound morphs rather than discrete words. -
Bujok, R., Meyer, A. S., & Bosker, H. R. (2025). Audiovisual perception of lexical stress: Beat gestures and articulatory cues. Language and Speech, 68(1), 181-203. doi:10.1177/00238309241258162.
Abstract
Human communication is inherently multimodal. Auditory speech, but also visual cues can be used to understand another talker. Most studies of audiovisual speech perception have focused on the perception of speech segments (i.e., speech sounds). However, less is known about the influence of visual information on the perception of suprasegmental aspects of speech like lexical stress. In two experiments, we investigated the influence of different visual cues (e.g., facial articulatory cues and beat gestures) on the audiovisual perception of lexical stress. We presented auditory lexical stress continua of disyllabic Dutch stress pairs together with videos of a speaker producing stress on the first or second syllable (e.g., articulating VOORnaam or voorNAAM). Moreover, we combined and fully crossed the face of the speaker producing lexical stress on either syllable with a gesturing body producing a beat gesture on either the first or second syllable. Results showed that people successfully used visual articulatory cues to stress in muted videos. However, in audiovisual conditions, we were not able to find an effect of visual articulatory cues. In contrast, we found that the temporal alignment of beat gestures with speech robustly influenced participants' perception of lexical stress. These results highlight the importance of considering suprasegmental aspects of language in multimodal contexts. -
Carlsson, K., Petrovic, P., Skare, S., Petersson, K. M., & Ingvar, M. (2000). Tickling expectations: Neural processing in anticipation of a sensory stimulus. Journal of Cognitive Neuroscience, 12(4), 691-703. doi:10.1162/089892900562318.
-
Chalfoun, A., Rossi, G., & Stivers, T. (2025). The magic word? Face-work and the functions of 'please' in everyday requests. Social Psychology Quarterly, 88(1), 66-88. doi:10.1177/01902725241245141.
Abstract
Expressions of politeness such as 'please' are prominent elements of interactional conduct that are explicitly targeted in early socialization and are subject to cultural expectations around socially desirable behavior. Yet their specific interactional functions remain poorly understood. Using conversation analysis supplemented with systematic coding, this study investigates when and where interactants use 'please' in everyday requests. We find that 'please' is rare, occurring in only 7 percent of request attempts. Interactants use 'please' to manage face-threats when a request is ill fitted to its immediate interactional context. Within this, we identify two environments in which 'please' prototypically occurs. First, 'please' is used when the requestee has demonstrated unwillingness to comply. Second, 'please' is used when the request is intrusive due to its incompatibility with the requestee’s engagement in a competing action trajectory. Our findings advance research on politeness and extend Goffman’s theory of face-work, with particular salience for scholarship on request behavior. -
Choi, S., McDonough, L., Bowerman, M., & Mandler, J. M. (1999). Early sensitivity to language-specific spatial categories in English and Korean. Cognitive Development, 14, 241-268. doi:10.1016/S0885-2014(99)00004-0.
Abstract
This study investigates young children’s comprehension of spatial terms in two languages that categorize space strikingly differently. English makes a distinction between actions resulting in containment (put in) versus support or surface attachment (put on), while Korean makes a cross-cutting distinction between tight-fit relations (kkita) versus loose-fit or other contact relations (various verbs). In particular, the Korean verb kkita refers to actions resulting in a tight-fit relation regardless of containment or support. In a preferential looking study we assessed the comprehension of in by 20 English learners and kkita by 10 Korean learners, all between 18 and 23 months. The children viewed pairs of scenes while listening to sentences with and without the target word. The target word led children to gaze at different and language-appropriate aspects of the scenes. We conclude that children are sensitive to language-specific spatial categories by 18–23 months. -
Ciulkinyte, A., Mountford, H. S., Fontanillas, P., 23andMe Research Team, Bates, T. C., Martin, N. G., Fisher, S. E., & Luciano, M. (2025). Genetic neurodevelopmental clustering and dyslexia. Molecular Psychiatry, 30, 140-150. doi:10.1038/s41380-024-02649-8.
Abstract
Dyslexia is a learning difficulty with neurodevelopmental origins, manifesting as reduced accuracy and speed in reading and spelling. It is substantially heritable and frequently co-occurs with other neurodevelopmental conditions, particularly attention deficit-hyperactivity disorder (ADHD). Here, we investigate the genetic structure underlying dyslexia and a range of psychiatric traits using results from genome-wide association studies of dyslexia, ADHD, autism, anorexia nervosa, anxiety, bipolar disorder, major depressive disorder, obsessive compulsive disorder, schizophrenia, and Tourette syndrome. Genomic Structural Equation Modelling (GenomicSEM) showed heightened support for a model consisting of five correlated latent genomic factors described as: F1) compulsive disorders (including obsessive-compulsive disorder, anorexia nervosa, Tourette syndrome), F2) psychotic disorders (including bipolar disorder, schizophrenia), F3) internalising disorders (including anxiety disorder, major depressive disorder), F4) neurodevelopmental traits (including autism, ADHD), and F5) attention and learning difficulties (including ADHD, dyslexia). ADHD loaded more strongly on the attention and learning difficulties latent factor (F5) than on the neurodevelopmental traits latent factor (F4). The attention and learning difficulties latent factor (F5) was positively correlated with internalising disorders (.40), neurodevelopmental traits (.25) and psychotic disorders (.17) latent factors, and negatively correlated with the compulsive disorders (–.16) latent factor. These factor correlations are mirrored in genetic correlations observed between the attention and learning difficulties latent factor and other cognitive, psychological and wellbeing traits. We further investigated genetic variants underlying both dyslexia and ADHD, which implicated 49 loci (40 not previously found in GWAS of the individual traits) mapping to 174 genes (121 not found in GWAS of individual traits) as potential pleiotropic variants. Our study confirms the increased genetic relation between dyslexia and ADHD versus other psychiatric traits and uncovers novel pleiotropic variants affecting both traits. In future, analyses including additional co-occurring traits such as dyscalculia and dyspraxia will allow a clearer definition of the attention and learning difficulties latent factor, yielding further insights into factor structure and pleiotropic effects. -
Clifton, Jr., C., Cutler, A., McQueen, J. M., & Van Ooijen, B. (1999). The processing of inflected forms. [Commentary on H. Clahsen: Lexical entries and rules of language.]. Behavioral and Brain Sciences, 22, 1018-1019.
Abstract
Clahsen proposes two distinct processing routes, for regularly and irregularly inflected forms, respectively, and thus is apparently making a psychological claim. We argue that his position, which embodies a strictly linguistic perspective, does not constitute a psychological processing model. -
Coopmans, C. W., De Hoop, H., Tezcan, F., Hagoort, P., & Martin, A. E. (2025). Language-specific neural dynamics extend syntax into the time domain. PLOS Biology, 23: e3002968. doi:10.1371/journal.pbio.3002968.
Abstract
Studies of perception have long shown that the brain adds information to its sensory analysis of the physical environment. A touchstone example for humans is language use: to comprehend a physical signal like speech, the brain must add linguistic knowledge, including syntax. Yet, syntactic rules and representations are widely assumed to be atemporal (i.e., abstract and not bound by time), so they must be translated into time-varying signals for speech comprehension and production. Here, we test 3 different models of the temporal spell-out of syntactic structure against brain activity of people listening to Dutch stories: an integratory bottom-up parser, a predictive top-down parser, and a mildly predictive left-corner parser. These models build exactly the same structure but differ in when syntactic information is added by the brain—this difference is captured in the (temporal distribution of the) complexity metric “incremental node count.” Using temporal response function models with both acoustic and information-theoretic control predictors, node counts were regressed against source-reconstructed delta-band activity acquired with magnetoencephalography. Neural dynamics in left frontal and temporal regions most strongly reflect node counts derived by the top-down method, which postulates syntax early in time, suggesting that predictive structure building is an important component of Dutch sentence comprehension. The absence of strong effects of the left-corner model further suggests that its mildly predictive strategy does not represent Dutch language comprehension well, in contrast to what has been found for English. Understanding when the brain projects its knowledge of syntax onto speech, and whether this is done in language-specific ways, will inform and constrain the development of mechanistic models of syntactic structure building in the brain. -
Cutler, A., & Clifton, Jr., C. (1999). Comprehending spoken language: A blueprint of the listener. In C. M. Brown, & P. Hagoort (Eds.), The neurocognition of language (pp. 123-166). Oxford University Press. -
Cutler, A., Sebastian-Galles, N., Soler-Vilageliu, O., & Van Ooijen, B. (2000). Constraints of vowels and consonants on lexical selection: Cross-linguistic comparisons. Memory & Cognition, 28, 746-755.
Abstract
Languages differ in the constitution of their phonemic repertoire and in the relative distinctiveness of phonemes within the repertoire. In the present study, we asked whether such differences constrain spoken-word recognition, via two word reconstruction experiments, in which listeners turned non-words into real words by changing single sounds. The experiments were carried out in Dutch (which has a relatively balanced vowel-consonant ratio and many similar vowels) and in Spanish (which has many more consonants than vowels and high distinctiveness among the vowels). Both Dutch and Spanish listeners responded significantly faster and more accurately when required to change vowels as opposed to consonants; when allowed to change any phoneme, they more often altered vowels than consonants. Vowel information thus appears to constrain lexical selection less tightly (allow more potential candidates) than does consonant information, independent of language-specific phoneme repertoire and of relative distinctiveness of vowels.
Additional information
https://www.mpi.nl/world/persons/private/anne/wordrecon.html -
Cutler, A., & Van de Weijer, J. (2000). De ontdekking van de eerste woorden. Stem-, Spraak- en Taalpathologie, 9, 245-259.
Abstract
Speech is continuous; there are no reliable signals that tell the listener where one word ends and the next begins. For adult listeners, segmenting spoken language into individual words is therefore not unproblematic, but for a child who does not yet possess a vocabulary, the continuity of speech poses an even greater challenge. Nevertheless, most children produce their first recognizable words around the beginning of their second year of life. These early speech productions are preceded by a formidable perceptual achievement. During the first year of life, especially during its second half, speech perception develops from a general phonetic discrimination capacity into a selective sensitivity to the phonological contrasts that occur in the native language. Recent research has further shown that children, long before they can say even a single word, are able to distinguish words that are characteristic of their native language from words that are not. Moreover, they can recognize words first presented in isolation when these occur in a continuous speech context. The everyday language input to a child of this age does not make the task easy, for instance because most words do not occur in isolation. Yet the child is also offered some footholds, among other things because the vocabulary used is restricted. -
Cutler, A. (1999). Foreword. In Slips of the Ear: Errors in the perception of Casual Conversation (pp. xiii-xv). New York City, NY, USA: Academic Press.
-
Cutler, A. (2000). How the ear comes to hear. In New Trends in Modern Linguistics [Part of Annual catalogue series] (pp. 6-10). Tokyo, Japan: Maruzen Publishers.
-
Cutler, A. (2000). Hoe het woord het oor verovert. In Voordrachten uitgesproken tijdens de uitreiking van de SPINOZA-premies op 15 februari 2000 (pp. 29-41). The Hague, The Netherlands: Nederlandse Organisatie voor Wetenschappelijk Onderzoek (NWO).
-
Cutler, A., McQueen, J. M., & Zondervan, R. (2000). Proceedings of SWAP (Workshop on Spoken Word Access Processes). Nijmegen: MPI for Psycholinguistics.
-
Cutler, A. (1999). Prosodische Struktur und Worterkennung bei gesprochener Sprache. In A. D. Friederici (Ed.), Enzyklopädie der Psychologie: Sprachrezeption (pp. 49-83). Göttingen: Hogrefe. -
Cutler, A. (1999). Prosody and intonation, processing issues. In R. A. Wilson, & F. C. Keil (Eds.), MIT encyclopedia of the cognitive sciences (pp. 682-683). Cambridge, MA: MIT Press. -
Cutler, A. (2000). Real words, phantom words and impossible words. In D. Burnham, S. Luksaneeyanawin, C. Davis, & M. Lafourcade (Eds.), Interdisciplinary approaches to language processing: The international conference on human and machine processing of language and speech (pp. 32-42). Bangkok: NECTEC. -
Cutler, A., & Norris, D. (1999). Sharpening Ockham’s razor (Commentary on W.J.M. Levelt, A. Roelofs & A.S. Meyer: A theory of lexical access in speech production). Behavioral and Brain Sciences, 22, 40-41.
Abstract
Language production and comprehension are intimately interrelated; and models of production and comprehension should, we argue, be constrained by common architectural guidelines. Levelt et al.'s target article adopts as guiding principle Ockham's razor: the best model of production is the simplest one. We recommend adoption of the same principle in comprehension, with consequent simplification of some well-known types of models. -
Cutler, A. (1999). Spoken-word recognition. In R. A. Wilson, & F. C. Keil (Eds.), MIT encyclopedia of the cognitive sciences (pp. 796-798). Cambridge, MA: MIT Press. -
Cutler, A., & Koster, M. (2000). Stress and lexical activation in Dutch. In B. Yuan, T. Huang, & X. Tang (Eds.), Proceedings of the Sixth International Conference on Spoken Language Processing: Vol. 1 (pp. 593-596). Beijing: China Military Friendship Publish.
Abstract
Dutch listeners were slower to make judgements about the semantic relatedness between a spoken target word (e.g. atLEET, 'athlete') and a previously presented visual prime word (e.g. SPORT 'sport') when the spoken word was mis-stressed. The adverse effect of mis-stressing confirms the role of stress information in lexical recognition in Dutch. However, although the erroneous stress pattern was always initially compatible with a competing word (e.g. ATlas, 'atlas'), mis-stressed words did not produce high false alarm rates in unrelated pairs (e.g. SPORT - atLAS). This suggests that stress information did not completely rule out segmentally matching but suprasegmentally mismatching words, a finding consistent with spoken-word recognition models involving multiple activation and inter-word competition.
Additional information
https://www.mpi.nl/world/persons/private/anne/ICSLP00.html -
Cutler, A., & Otake, T. (1999). Pitch accent in spoken-word recognition in Japanese. Journal of the Acoustical Society of America, 105, 1877-1888.
Abstract
Three experiments addressed the question of whether pitch-accent information may be exploited in the process of recognizing spoken words in Tokyo Japanese. In a two-choice classification task, listeners judged from which of two words, differing in accentual structure, isolated syllables had been extracted (e.g., ka from baka HL or gaka LH); most judgments were correct, and listeners’ decisions were correlated with the fundamental frequency characteristics of the syllables. In a gating experiment, listeners heard initial fragments of words and guessed what the words were; their guesses overwhelmingly had the same initial accent structure as the gated word even when only the beginning CV of the stimulus (e.g., na- from nagasa HLL or nagashi LHH) was presented. In addition, listeners were more confident in guesses with the same initial accent structure as the stimulus than in guesses with different accent. In a lexical decision experiment, responses to spoken words (e.g., ame HL) were speeded by previous presentation of the same word (e.g., ame HL) but not by previous presentation of a word differing only in accent (e.g., ame LH). Together these findings provide strong evidence that accentual information constrains the activation and selection of candidates for spoken-word recognition. -
Cutler, A., Van Ooijen, B., & Norris, D. (1999). Vowels, consonants, and lexical activation. In J. Ohala, Y. Hasegawa, M. Ohala, D. Granville, & A. Bailey (Eds.), Proceedings of the Fourteenth International Congress of Phonetic Sciences: Vol. 3 (pp. 2053-2056). Berkeley: University of California.
Abstract
Two lexical decision studies examined the effects of single-phoneme mismatches on lexical activation in spoken-word recognition. One study was carried out in English, and involved spoken primes and visually presented lexical decision targets. The other study was carried out in Dutch, and primes and targets were both presented auditorily. Facilitation was found only for spoken targets preceded immediately by spoken primes; no facilitation occurred when targets were presented visually, or when intervening input occurred between prime and target. The effects of vowel mismatches and consonant mismatches were equivalent. -
Cutler, A., Norris, D., & McQueen, J. M. (2000). Tracking TRACE’s troubles. In A. Cutler, J. M. McQueen, & R. Zondervan (Eds.), Proceedings of SWAP (Workshop on Spoken Word Access Processes) (pp. 63-66). Nijmegen: Max-Planck-Institute for Psycholinguistics.
Abstract
Simulations explored the inability of the TRACE model of spoken-word recognition to model the effects on human listening of acoustic-phonetic mismatches in word forms. The source of TRACE's failure lay not in its interactive connectivity, not in the presence of interword competition, and not in the use of phonemic representations, but in the need for continuously optimised interpretation of the input. When an analogue of TRACE was allowed to cycle to asymptote on every slice of input, an acceptable simulation of the subcategorical mismatch data was achieved. Even then, however, the simulation was not as close as that produced by the Merge model. -
Dell, G. S., Reed, K. D., Adams, D. R., & Meyer, A. S. (2000). Speech errors, phonotactic constraints, and implicit learning: A study of the role of experience in language production. Journal of Experimental Psychology: Learning, Memory, and Cognition, 26, 1355-1367. doi:10.1037/0278-7393.26.6.1355.
Abstract
Speech errors follow the phonotactics of the language being spoken. For example, in English, if [n] is mispronounced as [ŋ], the [ŋ] will always appear in a syllable coda. The authors created an analogue to this phenomenon by having participants recite lists of consonant-vowel-consonant syllables in 4 sessions on different days. In the first 2 experiments, some consonants were always onsets, some were always codas, and some could be both. In a third experiment, the set of possible onsets and codas depended on vowel identity. In all 3 studies, the production errors that occurred respected the "phonotactics" of the experiment. The results illustrate the implicit learning of the sequential constraints present in the stimuli and show that the language production system adapts to recent experience. -
Dimroth, C., & Watorek, M. (2000). The scope of additive particles in basic learner languages. Studies in Second Language Acquisition, 22, 307-336. Retrieved from http://journals.cambridge.org/action/displayAbstract?aid=65981.
Abstract
Based on their longitudinal analysis of the acquisition of Dutch, English, French, and German, Klein and Perdue (1997) described a “basic learner variety” as valid cross-linguistically and comprising a limited number of shared syntactic patterns interacting with two types of constraints: (a) semantic—the NP whose referent has highest control comes first, and (b) pragmatic—the focus expression is in final position. These authors hypothesized that “the topic-focus structure also plays an important role in some other respects. . . . Thus, negation and (other) scope particles occur at the topic-focus boundary” (p. 318). This poses the problem of the interaction between the core organizational principles of the basic variety and optional items such as negative particles and scope particles, which semantically affect the whole or part of the utterance in which they occur. In this article, we test the validity of these authors' hypothesis for the acquisition of the additive scope particle also (and its translation equivalents). Our analysis is based on the European Science Foundation (ESF) data originally used to define the basic variety, but we also included some more advanced learner data from the same database. In doing so, we refer to the analyses of Dimroth and Klein (1996), which concern the interaction between scope particles and the part of the utterance they affect, and we make a distinction between maximal scope—that which is potentially affected by the particle—and the actual scope of a particle in relation to an utterance in a given discourse context. -
Donnelly, S., Kidd, E., Verkuilen, J., & Rowland, C. F. (2025). The separability of early vocabulary and grammar knowledge. Journal of Memory and Language, 141: 104586. doi:10.1016/j.jml.2024.104586.
Abstract
A long-standing question in language development concerns the nature of the relationship between early lexical and grammatical knowledge. The very strong correlation between the two has led some to argue that lexical and grammatical knowledge may be inseparable, consistent with psycholinguistic theories that eschew a distinction between the two systems. However, little research has explicitly examined whether early lexical and grammatical knowledge are statistically separable. Moreover, there are two under-appreciated methodological challenges in such research. First, the relationship between lexical and grammatical knowledge may change during development. Second, non-linear mappings between true and observed scores on scales of lexical and grammatical knowledge could lead to spurious multidimensionality. In the present study, we overcome these challenges by using vocabulary and grammar data from several developmental time points and a statistical method robust to such non-linear mappings. In Study 1, we examined item-level vocabulary and grammar data from two American English samples from a large online repository of data from studies employing a commonly used language development scale. We found clear evidence that vocabulary and grammar were separable by two years of age. In Study 2, we combined data from two longitudinal studies of language acquisition that used the same scale (at 18/19, 21, 24 and 30 months) and found evidence that vocabulary and grammar were, under some conditions, separable by 18 months. Results indicate that, while there is clearly a very strong relationship between vocabulary and grammar knowledge in early language development, the two are separable. Implications for the mechanisms underlying language development are discussed. -
Drijvers, L., Small, S. L., & Skipper, J. I. (2025). Language is widely distributed throughout the brain. Nature Reviews Neuroscience, 26: 189. doi:10.1038/s41583-024-00903-0.
-
Dulyan, L., Guzmán Chacón, E. G., & Forkel, S. J. (2025). Navigating neuroanatomy. In J. H. Grafman (Ed.), Encyclopedia of the human brain (2nd ed.). Amsterdam: Elsevier.
Abstract
This chapter introduces the origins and development of our current anatomical terminology. It scrutinizes the historical evolution and etymological significance of the over 1900 official anatomical terms in the current nomenclature, underscoring their impact on the contemporary comprehension of cognitive neuroanatomy. The chapter traces unification efforts from the Basel Nomina Anatomica in 1895 to the 1998 Terminologia Anatomica, noting challenges arising from outdated terminology in light of recent anatomical advancements.
Highlighting the influence of terminologies on interpretations of brain anatomy, the chapter explores several anatomical mapping methods such as surface, sectional, connectional, and functional anatomy. It illuminates discrepancies and controversies, exemplified by divergent interpretations of the number of brain lobes and the definitions of 'Broca' and 'Wernicke' areas.
The chapter explores anatomical terms' historical and cultural underpinnings, encompassing mythonyms, eponyms, and cultural influences on nomenclature. It critically examines the implications of these terminologies on contemporary research and shows that Large Language Models mirror these discrepancies. It underscores the need for more inclusive and culturally sensitive approaches in anatomical education.
Lastly, we advocate for updating anatomical nomenclature, suggesting that a deeper understanding of these terminologies could provide insights and aid in resolving ongoing debates in the field. This examination sheds light on historical knowledge and emphasizes the dynamic interplay between language, culture, and anatomy in shaping our comprehension of the neurobiology of the brain and how we navigate neuroanatomy in the 21st century. -
Dulyan, L., Bortolami, C., & Forkel, S. J. (2025). Asymmetries in the human brain. In C. Papagno, & P. Corballis (Eds.), Cerebral Asymmetries: Handbook of Clinical Neurology (pp. 15-36). Amsterdam: Elsevier.
Abstract
The human brain is an intricate network of cortical regions interconnected by white matter pathways, dynamically supporting cognitive functions. While cortical asymmetries have been consistently reported, the asymmetry of white matter connections remains less explored. This chapter provides a brief overview of asymmetries observed at the cortical, subcortical, cytoarchitectural, and receptor levels before exploring the detailed connectional anatomy of the human brain. It thoroughly examines the lateralization and interindividual variability of 56 distinct white matter tracts, offering a comprehensive review of their structural characteristics and interindividual variability. Additionally, we provide an extensive update on the asymmetry of a wide range of white matter tracts using high-resolution data from the Human Connectome Project (7T HCP www.humanconnectome.org). Future research and advanced quantitative analyses are crucial to understanding fully how asymmetry contributes to interindividual variability. This comprehensive exploration enhances our understanding of white matter organization and its potential implications for brain function. -
Dunn, M. (2000). Planning for failure: The niche of standard Chukchi. Current Issues in Language Planning, 1, 389-399. doi:10.1080/14664200008668013.
Abstract
This paper examines the effects of language standardisation and orthography design on the Chukchi linguistic ecology. The process of standardisation has not taken into consideration the gender-based sociolects of colloquial Chukchi and is based on a grammatical description which does not reflect actual Chukchi use; as a result standard Chukchi has not gained a place in the Chukchi language ecology. The Cyrillic orthography developed for Chukchi is also problematic as it is based on features of Russian phonology, rather than on Chukchi itself: this has meant that a knowledge of written Chukchi is dependent on a knowledge of the principles of Russian orthography. These aspects of language planning have had a large impact on the pre-existing Chukchi language ecology, which has contributed to the obsolescence of the colloquial language. -
Dylman, A. S., Champoux-Larsson, M.-F., & Frances, C. (2025). Prosody! When intonation helps and there is an effect… on listening comprehension in children. Educational Psychology, 45(1), 1-17. doi:10.1080/01443410.2024.2446778.
Abstract
We report four experiments investigating the effect of prosody on listening comprehension in 11-13-year-old children. Across all experiments, participants listened to short object descriptions and answered content-based questions about said objects. In Experiments 1-3, the descriptions were read in an emotionally positive or neutral tone of voice. In Experiment 4, the descriptions were read by a neutral human voice or by text-to-speech software. The results from Experiments 1-3 consistently showed higher accuracy (i.e. more correct answers to the questions) when the descriptions were read using positive prosody. Experiment 4 found higher accuracy for the human voice compared to the text-to-speech recordings. The human voice was also rated as more pleasant and easier to understand than the text-to-speech voice. In sum, this study found that positive, compared to neutral, prosody, and a human voice, compared to artificial speech synthesis, can improve listening comprehension, showcasing the role of prosody in listening comprehension. -
Edlinger, G., Bastiaansen, M. C. M., Brunia, C., Neuper, C., & Pfurtscheller, G. (1999). Cortical oscillatory activity assessed by combined EEG and MEG recordings and high resolution ERD methods. Biomedizinische Technik, 44(2), 131-134.
-
Eisenbeiss, S. (2000). The acquisition of Determiner Phrase in German child language. In M.-A. Friedemann, & L. Rizzi (Eds.), The Acquisition of Syntax (pp. 26-62). Harlow, UK: Pearson Education Ltd. -
Eisenbeiss, S., McGregor, B., & Schmidt, C. M. (1999). Story book stimulus for the elicitation of external possessor constructions and dative constructions ('the circle of dirt'). In D. Wilkins (Ed.), Manual for the 1999 Field Season (pp. 140-144). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.3002750.
Abstract
How involved in an event is a person that possesses one of the event participants? Some languages can treat such “external possessors” as very closely involved, even marking them on the verb along with core roles such as subject and object. Other languages only allow possessors to be expressed as non-core participants. This task explores possibilities for the encoding of possessors and other related roles such as beneficiaries. The materials consist of a sequence of thirty drawings designed to elicit target construction types.
Additional information
1999_Story_book_booklet.pdf -
Emmendorfer, A. K., & Holler, J. (2025). Facial signals shape predictions about the nature of upcoming conversational responses. Scientific Reports, 15: 1381. doi:10.1038/s41598-025-85192-y.
Abstract
Increasing evidence suggests that interlocutors use visual communicative signals to form predictions about unfolding utterances, but there is little data on the predictive potential of facial signals in conversation. In an online experiment with virtual agents, we examine whether facial signals produced by an addressee may allow speakers to anticipate the response to a question before it is given. Participants (n = 80) viewed videos of short conversation fragments between two virtual humans. Each fragment ended with the Questioner asking a question, followed by a pause during which the Responder looked either straight at the Questioner (baseline), or averted their gaze, or accompanied the straight gaze with one of the following facial signals: brow raise, brow frown, nose wrinkle, smile, squint, mouth corner pulled back (dimpler). Participants then indicated on a 6-point scale whether they expected a “yes” or “no” response. Analyses revealed that all signals received different ratings relative to the baseline: brow raises, dimplers, and smiles were associated with more positive responses; gaze aversions, brow frowns, nose wrinkles, and squints with more negative responses. Our findings show that interlocutors may form strong associations between facial signals and upcoming responses to questions, highlighting their predictive potential in face-to-face conversation.
Additional information
supplementary materials -
Enfield, N. J. (1999). Lao as a national language. In G. Evans (Ed.), Laos: Culture and society (pp. 258-290). Chiang Mai: Silkworm Books. -
Enfield, N. J. (2000). On linguocentrism. In M. Pütz, & M. H. Verspoor (Eds.), Explorations in linguistic relativity (pp. 125-157). Amsterdam: Benjamins. -
Enfield, N. J. (1999). On the indispensability of semantics: Defining the ‘vacuous’. Rask: internationalt tidsskrift for sprog og kommunikation, 9/10, 285-304.
-
Enfield, N. J. (2000). The theory of cultural logic: How individuals combine social intelligence with semiotics to create and maintain cultural meaning. Cultural Dynamics, 12(1), 35-64. doi:10.1177/092137400001200102.
Abstract
The social world is an ecological complex in which cultural meanings and knowledges (linguistic and non-linguistic) personally embodied by individuals are intercalibrated via common attention to commonly accessible semiotic structures. This interpersonal ecology bridges realms which are the subject matter of both anthropology and linguistics, allowing the public maintenance of a system of assumptions and counter-assumptions among individuals as to what is mutually known (about), in general and/or in any particular context. The mutual assumption of particular cultural ideas provides human groups with common premises for predictably convergent inferential processes. This process of people collectively using effectively identical assumptions in interpreting each other's actions—i.e. hypothesizing as to each other's motivations and intentions—may be termed cultural logic. This logic relies on the establishment of stereotypes and other kinds of precedents, catalogued in individuals’ personal libraries, as models and scenarios which may serve as reference in inferring and attributing motivations behind people's actions, and behind other mysterious phenomena. This process of establishing conceptual convention depends directly on semiotics, since groups of individuals rely on external signs as material for common focus and, thereby, agreement. Social intelligence binds signs in the world (e.g. speech sounds impressing upon eardrums), with individually embodied representations (e.g. word meanings and contextual schemas). The innate tendency for people to model the intentions of others provides an ultimately biological account for the logic behind culture. Ethnographic examples are drawn from Laos and Australia. -
Enfield, N. J., & Evans, G. (2000). Transcription as standardisation: The problem of Tai languages. In S. Burusphat (Ed.), Proceedings: the International Conference on Tai Studies, July 29-31, 1998 (pp. 201-212). Bangkok, Thailand: Institute of Language and Culture for Rural Development, Mahidol University. -
Esmer, Ş. C., Turan, E., Karadöller, D. Z., & Göksun, T. (2025). Sources of variation in preschoolers’ relational reasoning: The interaction between language use and working memory. Journal of Experimental Child Psychology, 252: 106149. doi:10.1016/j.jecp.2024.106149.
Abstract
Previous research has suggested the importance of relational language and working memory in children’s relational reasoning. The tendency to use language (e.g., using more relational than object-focused language, prioritizing focal objects over background in linguistic descriptions) could reflect children’s biases toward the relational versus object-based solutions in a relational match-to-sample (RMTS) task. In the absence of any apparent object match as a foil option, object-focused children might rely on other cognitive mechanisms (i.e., working memory) to choose a relational match in the RMTS task. The current study examined the interactive roles of language- and working memory-related sources of variation in Turkish-learning preschoolers’ relational reasoning. We collected data from 4- and 5-year-olds (N = 41) via Zoom in the RMTS task, a scene description task, and a backward word span task. Generalized binomial mixed effects models revealed that children who used more relational language and background-focused scene descriptions performed worse in the relational reasoning task. Furthermore, children with less frequent relational language use and focal object descriptions of the scenes benefited more from working memory to succeed in the relational reasoning task. These results suggest additional working memory demands for object-focused children to choose relational matches in the RMTS task, highlighting the importance of examining the interactive effects of different cognitive mechanisms on relational reasoning.
Additional information
supplementary material -
Essegbey, J. (1999). Inherent complement verbs revisited: Towards an understanding of argument structure in Ewe. PhD Thesis, Radboud University Nijmegen, Nijmegen. doi:10.17617/2.2057668.
-
Ferrari, A., & Hagoort, P. (2025). Beat gestures and prosodic prominence interactively influence language comprehension. Cognition, 256: 106049. doi:10.1016/j.cognition.2024.106049.
Abstract
Face-to-face communication is not only about ‘what’ is said but also ‘how’ it is said, both in speech and bodily signals. Beat gestures are rhythmic hand movements that typically accompany prosodic prominence in conversation. Yet, it is still unclear how beat gestures influence language comprehension. On the one hand, beat gestures may share the same functional role of focus markers as prosodic prominence. Accordingly, they would drive attention towards the concurrent speech and highlight its content. On the other hand, beat gestures may trigger inferences of high speaker confidence, generate the expectation that the sentence content is correct and thereby elicit the commitment to the truth of the statement. This study directly disentangled the two hypotheses by evaluating additive and interactive effects of prosodic prominence and beat gestures on language comprehension. Participants watched videos of a speaker uttering sentences and judged whether each sentence was true or false. Sentences sometimes contained a world knowledge violation that may go unnoticed (‘semantic illusion’). Combining beat gestures with prosodic prominence led to a higher degree of semantic illusion, making more world knowledge violations go unnoticed during language comprehension. These results challenge current theories proposing that beat gestures are visual focus markers. To the contrary, they suggest that beat gestures automatically trigger inferences of high speaker confidence and thereby elicit the commitment to the truth of the statement, in line with Grice’s cooperative principle in conversation. More broadly, our findings also highlight the influence of metacognition on language comprehension in face-to-face communication. -
Fisher, S. E., Stein, J. F., & Monaco, A. P. (1999). A genome-wide search strategy for identifying quantitative trait loci involved in reading and spelling disability (developmental dyslexia). European Child & Adolescent Psychiatry, 8(suppl. 3), S47-S51. doi:10.1007/PL00010694.
Abstract
Family and twin studies of developmental dyslexia have consistently shown that there is a significant heritable component for this disorder. However, any genetic basis for the trait is likely to be complex, involving reduced penetrance, phenocopy, heterogeneity and oligogenic inheritance. This complexity results in reduced power for traditional parametric linkage analysis, where specification of the correct genetic model is important. One strategy is to focus on large multigenerational pedigrees with severe phenotypes and/or apparent simple Mendelian inheritance, as has been successfully demonstrated for speech and language impairment. This approach is limited by the scarcity of such families. An alternative which has recently become feasible due to the development of high-throughput genotyping techniques is the analysis of large numbers of sib-pairs using allele-sharing methodology. This paper outlines our strategy for conducting a systematic genome-wide search for genes involved in dyslexia in a large number of affected sib-pair families from the UK. We use a series of psychometric tests to obtain different quantitative measures of reading deficit, which should correlate with different components of the dyslexia phenotype, such as phonological awareness and orthographic coding ability. This enables us to use QTL (quantitative trait locus) mapping as a powerful tool for localising genes which may contribute to reading and spelling disability. -
Fisher, S. E., Marlow, A. J., Lamb, J., Maestrini, E., Williams, D. F., Richardson, A. J., Weeks, D. E., Stein, J. F., & Monaco, A. P. (1999). A quantitative-trait locus on chromosome 6p influences different aspects of developmental dyslexia. American Journal of Human Genetics, 64(1), 146-156. doi:10.1086/302190.
Abstract
Recent application of nonparametric-linkage analysis to reading disability has implicated a putative quantitative-trait locus (QTL) on the short arm of chromosome 6. In the present study, we use QTL methods to evaluate linkage to the 6p25-21.3 region in a sample of 181 sib pairs from 82 nuclear families that were selected on the basis of a dyslexic proband. We have assessed linkage directly for several quantitative measures that should correlate with different components of the phenotype, rather than using a single composite measure or employing categorical definitions of subtypes. Our measures include the traditional IQ/reading discrepancy score, as well as tests of word recognition, irregular-word reading, and nonword reading. Pointwise analysis by means of sib-pair trait differences suggests the presence, in 6p21.3, of a QTL influencing multiple components of dyslexia, in particular the reading of irregular words (P=.0016) and nonwords (P=.0024). A complementary statistical approach involving estimation of variance components supports these findings (irregular words, P=.007; nonwords, P=.0004). Multipoint analyses place the QTL within the D6S422-D6S291 interval, with a peak around markers D6S276 and D6S105 consistently identified by approaches based on trait differences (irregular words, P=.00035; nonwords, P=.0035) and variance components (irregular words, P=.007; nonwords, P=.0038). Our findings indicate that the QTL affects both phonological and orthographic skills and is not specific to phoneme awareness, as has been previously suggested. Further studies will be necessary to obtain a more precise localization of this QTL, which may lead to the isolation of one of the genes involved in developmental dyslexia. -
Forkel, S. J., Bortolami, C., Dulyan, L., Barrett, R. L. C., & Beyh, A. (2025). Dissecting white matter pathways: A neuroanatomical approach. In F. Dell'Acqua, M. Descoteaux, & A. Leemans (Eds.), Handbook of Diffusion MR Tractography (pp. 397-421). Amsterdam: Elsevier.
Abstract
The brain is the most magnificent structure, and we are only at the cusp of unraveling some of its complexity. Neuroanatomy is the best tool to map the brain's structural complexity. As such, neuroanatomy is not just an academic exercise; it serves our fundamental understanding of the neurobiology of cognition and improves clinical practice. A deepened anatomical understanding has advanced our conceptual grasp of the evolution of the brain, interindividual variability of cognition in health and disease, and the conceptual shift toward the emergence of cognition. For the past 20 years, diffusion imaging tractography has dramatically facilitated these advances by enabling the study of the delicate networks that orchestrate brain processes (for review, see Thiebaut de Schotten and Forkel, 2022). Several steps are consistent across all studied populations and brain states (health/disease) when analyzing tractography data. We discuss various considerations for dissections across populations and give practical tips on common pitfalls and features to improve the visualization of the dissections. We briefly discuss specific considerations for manual dissections in nonhuman primates. Lastly, we provide an atlas of regions of interest (ROIs) for the most commonly delineated white matter connections in the human brain. -
Francks, C., Fisher, S. E., Marlow, A. J., Richardson, A. J., Stein, J. F., & Monaco, A. P. (2000). A sibling-pair based approach for mapping genetic loci that influence quantitative measures of reading disability. Prostaglandins, Leukotrienes and Essential Fatty Acids, 63(1-2), 27-31. doi:10.1054/plef.2000.0187.
Abstract
Family and twin studies consistently demonstrate a significant role for genetic factors in the aetiology of the reading disorder dyslexia. However, dyslexia is complex at both the genetic and phenotypic levels, and currently the nature of the core deficit or deficits remains uncertain. Traditional approaches for mapping disease genes, originally developed for single-gene disorders, have limited success when there is not a simple relationship between genotype and phenotype. Recent advances in high-throughput genotyping technology and quantitative statistical methods have made a new approach to identifying genes involved in complex disorders possible. The method involves assessing the genetic similarity of many sibling pairs along the lengths of all their chromosomes and attempting to correlate this similarity with that of their phenotypic scores. We are adopting this approach in an ongoing genome-wide search for genes involved in dyslexia susceptibility, and have already successfully applied the method by replicating results from previous studies suggesting that a quantitative trait locus at 6p21.3 influences reading disability. -
Pu, Y., Francks, C., & Kong, X. (2025). Global brain asymmetry. Trends in Cognitive Sciences, 29(2), 114-117. doi:10.1016/j.tics.2024.10.008.
Abstract
Lateralization is a defining characteristic of the human brain, often studied through localized approaches that focus on interhemispheric differences between homologous pairs of regions. It is also important to emphasize an integrative perspective of global brain asymmetry, in which hemispheric differences are understood through global patterns across the entire brain. -
Galke, L., & Raviv, L. (2025). Learning and communication pressures in neural networks: Lessons from emergent communication. Language Development Research, 5(1), 116-143. doi:10.34842/3vr5-5r49.
Abstract
Finding and facilitating commonalities between the linguistic behaviors of large language models and humans could lead to major breakthroughs in our understanding of the acquisition, processing, and evolution of language. However, most findings on human–LLM similarity can be attributed to training on human data. The field of emergent machine-to-machine communication provides an ideal testbed for discovering which pressures neural agents are naturally exposed to when learning to communicate in isolation, without any human language to start with. Here, we review three cases where mismatches between the emergent linguistic behavior of neural agents and humans were resolved by introducing theoretically-motivated inductive biases. By contrasting humans, large language models, and emergent communication agents, we then identify key pressures at play for language learning and emergence: communicative success, production effort, learnability, and other psycho-/sociolinguistic factors. We discuss their implications and relevance to the field of language evolution and acquisition. By mapping out the necessary inductive biases that make agents' emergent languages more human-like, we not only shed light on the underlying principles of human cognition and communication, but also inform and improve the very use of these models as valuable scientific tools for studying language learning, processing, use, and representation more broadly. -
Göksun, T., Aktan-Erciyes, A., Karadöller, D. Z., & Demir-Lira, Ö. E. (2025). Multifaceted nature of early vocabulary development: Connecting child characteristics with parental input types. Child Development Perspectives, 19(1), 30-37. doi:10.1111/cdep.12524.
Abstract
Children need to learn the demands of their native language in the early vocabulary development phase. In this dynamic process, parental multimodal input may shape neurodevelopmental trajectories while also being tailored by child-related factors. Moving beyond typically characterized group profiles, in this article, we synthesize growing evidence on the effects of parental multimodal input (amount, quality, or absence), domain-specific input (space and math), and language-specific input (causal verbs and sound symbols) on preterm, full-term, and deaf children's early vocabulary development, focusing primarily on research with children learning Turkish and Turkish Sign Language. We advocate for a theoretical perspective, integrating neonatal characteristics and parental input, and acknowledging the unique constraints of languages. -
Goral, M., Antolovic, K., Hejazi, Z., & Schulz, F. M. (2025). Using a translanguaging framework to examine language production in a trilingual person with aphasia. Clinical Linguistics & Phonetics, 39(1), 1-20. doi:10.1080/02699206.2024.2328240.
Abstract
When language abilities in aphasia are assessed in clinical and research settings, the standard practice is to examine each language of a multilingual person separately. But many multilingual individuals, with and without aphasia, mix their languages regularly when they communicate with other speakers who share their languages. We applied a novel approach to scoring language production of a multilingual person with aphasia. Our aim was to discover whether the assessment outcome would differ meaningfully when we count accurate responses in only the target language of the assessment session versus when we apply a translanguaging framework, that is, count all accurate responses, regardless of the language in which they were produced. The participant is a Farsi-German-English speaking woman with chronic moderate aphasia. We examined the participant’s performance on two picture-naming tasks, an answering wh-question task, and an elicited narrative task. The results demonstrated that scores in English, the participant’s third-learned and least-impaired language, did not differ between the two scoring methods. Performance in German, the participant’s moderately impaired second language, benefited from translanguaging-based scoring across the board. In Farsi, her weakest language post-CVA, the participant’s scores were higher under the translanguaging-based scoring approach in some but not all of the tasks. Our findings suggest that whether translanguaging-based scoring makes a difference in the results obtained depends on relative language abilities and on pragmatic constraints, with additional influence of the linguistic distances between the languages in question. -
Gray, R., & Jordan, F. (2000). Language trees support the express-train sequence of Austronesian expansion. Nature, 405, 1052-1055. doi:10.1038/35016575.
Abstract
Languages, like molecules, document evolutionary history. Darwin(1) observed that evolutionary change in languages greatly resembled the processes of biological evolution: inheritance from a common ancestor and convergent evolution operate in both. Despite many suggestions(2-4), few attempts have been made to apply the phylogenetic methods used in biology to linguistic data. Here we report a parsimony analysis of a large language data set. We use this analysis to test competing hypotheses - the "express-train"(5) and the "entangled-bank"(6,7) models - for the colonization of the Pacific by Austronesian-speaking peoples. The parsimony analysis of a matrix of 77 Austronesian languages with 5,185 lexical items produced a single most-parsimonious tree. The express-train model was converted into an ordered geographical character and mapped onto the language tree. We found that the topology of the language tree was highly compatible with the express-train model. -
Griffin, Z. M., & Bock, K. (2000). What the eyes say about speaking. Psychological Science, 11(4), 274-279. doi:10.1111/1467-9280.00255.
Abstract
To study the time course of sentence formulation, we monitored the eye movements of speakers as they described simple events. The similarity between speakers' initial eye movements and those of observers performing a nonverbal event-comprehension task suggested that response-relevant information was rapidly extracted from scenes, allowing speakers to select grammatical subjects based on comprehended events rather than salience. When speaking extemporaneously, speakers began fixating pictured elements less than a second before naming them within their descriptions, a finding consistent with incremental lexical encoding. Eye movements anticipated the order of mention despite changes in picture orientation, in who-did-what-to-whom, and in sentence structure. The results support Wundt's theory of sentence production. -
Gullberg, M., & Holmqvist, K. (1999). Keeping an eye on gestures: Visual perception of gestures in face-to-face communication. Pragmatics & Cognition, 7(1), 35-63. doi:10.1075/pc.7.1.04gul.
Abstract
Since listeners usually look at the speaker's face, gestural information has to be absorbed through peripheral visual perception. In the literature, it has been suggested that listeners look at gestures under certain circumstances: 1) when the articulation of the gesture is peripheral; 2) when the speech channel is insufficient for comprehension; and 3) when the speaker him- or herself indicates that the gesture is worthy of attention. The research here reported employs eye tracking techniques to study the perception of gestures in face-to-face interaction. The improved control over the listener's visual channel allows us to test the validity of the above claims. We present preliminary findings substantiating claims 1 and 3, and relate them to theoretical proposals in the literature and to the issue of how visual and cognitive attention are related. -
Gussenhoven, C., & Chen, A. (2000). Universal and language-specific effects in the perception of question intonation. In B. Yuan, T. Huang, & X. Tang (Eds.), Proceedings of the 6th International Conference on Spoken Language Processing (ICSLP) (pp. 91-94). Beijing: China Military Friendship Publish.
Abstract
Three groups of monolingual listeners, with Standard Chinese, Dutch and Hungarian as their native language, judged pairs of trisyllabic stimuli which differed only in their pitch pattern. The segmental structure of the stimuli was made up by the experimenters and presented to subjects as being taken from a little-known language spoken on a South Pacific island. Pitch patterns consisted of a single rise-fall located on or near the second syllable. By and large, listeners selected the stimulus with the higher peak, the later peak, and the higher end rise as the one that signalled a question, regardless of language group. The result is argued to reflect innate, non-linguistic knowledge of the meaning of pitch variation, notably Ohala’s Frequency Code. A significant difference between groups is explained as due to the influence of the mother tongue. -
Hagoort, P. (2000). De toekomstige eeuw der cognitieve neurowetenschap [The coming century of cognitive neuroscience] [inaugural lecture]. Katholieke Universiteit Nijmegen.
Abstract
Address delivered on 12 May 2000 upon acceptance of the professorship in neuropsychology at the Faculty of Social Sciences, Katholieke Universiteit Nijmegen (KUN). -
Hagoort, P. (1999). De toekomstige eeuw zonder psychologie [The coming century without psychology]. Psychologie Magazine, 18, 35-36.
-
Hagoort, P., & Brown, C. M. (2000). ERP effects of listening to speech compared to reading: the P600/SPS to syntactic violations in spoken sentences and rapid serial visual presentation. Neuropsychologia, 38, 1531-1549.
Abstract
In this study, event-related brain potential effects of speech processing are obtained and compared to similar effects in sentence reading. In two experiments sentences were presented that contained three different types of grammatical violations. In one experiment sentences were presented word by word at a rate of four words per second. The grammatical violations elicited a Syntactic Positive Shift (P600/SPS), 500 ms after the onset of the word that rendered the sentence ungrammatical. The P600/SPS consisted of two phases, an early phase with a relatively equal anterior-posterior distribution and a later phase with a strong posterior distribution. We interpret the first phase as an indication of structural integration complexity, and the second phase as an indication of failing parsing operations and/or an attempt at reanalysis. In the second experiment the same syntactic violations were presented in sentences spoken at a normal rate and with normal intonation. These violations elicited a P600/SPS with the same onset as was observed for the reading of these sentences. In addition two of the three violations showed a preceding frontal negativity, most clearly over the left hemisphere.
-
Hagoort, P., & Brown, C. M. (2000). ERP effects of listening to speech: semantic ERP effects. Neuropsychologia, 38, 1518-1530.
Abstract
In this study, event-related brain potential effects of speech processing are obtained and compared to similar effects in sentence reading. In two experiments spoken sentences were presented with semantic violations in sentence-final or mid-sentence positions. For these violations N400 effects were obtained that were very similar to N400 effects obtained in reading. However, the N400 effects in speech were preceded by an earlier negativity (N250). This negativity is not commonly observed with written input. The early effect is explained as a manifestation of a mismatch between the word forms expected on the basis of the context, and the actual cohort of activated word candidates that is generated on the basis of the speech signal. -
Hagoort, P., & Brown, C. M. (1999). Gender electrified: ERP evidence on the syntactic nature of gender processing. Journal of Psycholinguistic Research, 28(6), 715-728. doi:10.1023/A:1023277213129.
Abstract
The central issue of this study concerns the claim that the processing of gender agreement in online sentence comprehension is a syntactic rather than a conceptual/semantic process. This claim was tested for the grammatical gender agreement in Dutch between the definite article and the noun. Subjects read sentences in which the definite article and the noun had the same gender and sentences in which the gender agreement was violated. While subjects read these sentences, their electrophysiological activity was recorded via electrodes placed on the scalp. Earlier research has shown that semantic and syntactic processing events manifest themselves in different event-related brain potential (ERP) effects. Semantic integration modulates the amplitude of the so-called N400. The P600/SPS is an ERP effect that is more sensitive to syntactic processes. The violation of grammatical gender agreement was found to result in a P600/SPS. For violations in sentence-final position, an additional increase of the N400 amplitude was observed. This N400 effect is interpreted as resulting from the consequence of a syntactic violation for the sentence-final wrap-up. The overall pattern of results supports the claim that the on-line processing of gender agreement information is not a content-driven but a syntactic-form-driven process. -
Hagoort, P., & Brown, C. M. (1999). The consequences of the temporal interaction between syntactic and semantic processes for haemodynamic studies of language. NeuroImage, 9, S1024-S1024.
-
Hagoort, P., Brown, C. M., & Osterhout, L. (1999). The neurocognition of syntactic processing. In C. M. Brown, & P. Hagoort (Eds.), The neurocognition of language (pp. 273-317). Oxford: Oxford University Press.