Ronderos, C. R., Aparicio, H., Long, M., Shukla, V., Jara-Ettinger, J., & Rubio-Fernandez, P. (2024). Perceptual, semantic, and pragmatic factors affect the derivation of contrastive inferences. Open Mind: Discoveries in Cognitive Science, 8, 1213-1227. doi:10.1162/opmi_a_00165.
Abstract
People derive contrastive inferences when interpreting adjectives (e.g., inferring that ‘the short pencil’ is being contrasted with a longer one). However, classic eye-tracking studies revealed contrastive inferences with scalar and material adjectives, but not with color adjectives. This was explained as a difference in listeners’ informativity expectations, since color adjectives are often used descriptively (hence not warranting a contrastive interpretation). Here we hypothesized that, beyond these pragmatic factors, perceptual factors (i.e., the relative perceptibility of color, material and scalar contrast) and semantic factors (i.e., the difference between gradable and non-gradable properties) also affect the real-time derivation of contrastive inferences. We tested these predictions in three languages with prenominal modification (English, Hindi, and Hungarian) and found that people derive contrastive inferences for color and scalar adjectives, but not for material adjectives. In addition, the processing of scalar adjectives was more context dependent than that of color and material adjectives, confirming that pragmatic, perceptual and semantic factors affect the derivation of contrastive inferences.
Roos, N. M., Chauvet, J., & Piai, V. (2024). The Concise Language Paradigm (CLaP), a framework for studying the intersection of comprehension and production: Electrophysiological properties. Brain Structure and Function, 229, 2097-2113. doi:10.1007/s00429-024-02801-8.
Abstract
Studies investigating language commonly isolate one modality or process, focusing on comprehension or production. Here, we present a framework for a paradigm that combines both: the Concise Language Paradigm (CLaP), tapping into comprehension and production within one trial. The trial structure is identical across conditions, presenting a sentence followed by a picture to be named. We tested 21 healthy speakers with EEG to examine three time periods during a trial (sentence, pre-picture interval, picture onset), yielding contrasts of sentence comprehension, contextually and visually guided word retrieval, object recognition, and naming. In the CLaP, sentences are presented auditorily (constrained, unconstrained, reversed), and pictures appear as normal (constrained, unconstrained, bare) or scrambled objects. Imaging results revealed different evoked responses after sentence onset for normal and time-reversed speech. Further, we replicated the context effect of alpha-beta power decreases before picture onset for constrained relative to unconstrained sentences, and could clarify that this effect arises from power decreases following constrained sentences. Brain responses locked to picture-onset differed as a function of sentence context and picture type (normal vs. scrambled), and naming times were fastest for pictures in constrained sentences, followed by scrambled picture naming, and equally fast for bare and unconstrained picture naming. Finally, we also discuss the potential of the CLaP to be adapted to different focuses, using different versions of the linguistic content and tasks, in combination with electrophysiology or other imaging methods. These first results of the CLaP indicate that this paradigm offers a promising framework to investigate the language system.
Rowland, C. F., & Fletcher, S. L. (2006). The effect of sampling on estimates of lexical specificity and error rates. Journal of Child Language, 33(4), 859-877. doi:10.1017/S0305000906007537.
Abstract
Studies based on naturalistic data are a core tool in the field of language acquisition research and have provided thorough descriptions of children's speech. However, these descriptions are inevitably confounded by differences in the relative frequency with which children use words and language structures. The purpose of the present work was to investigate the impact of sampling constraints on estimates of the productivity of children's utterances, and on the validity of error rates. Comparisons were made between five different sized samples of wh-question data produced by one child aged 2;8. First, we assessed whether sampling constraints undermined the claim (e.g. Tomasello, 2000) that the restricted nature of early child speech reflects a lack of adultlike grammatical knowledge. We demonstrated that small samples were equally likely to under- as overestimate lexical specificity in children's speech, and that the reliability of estimates varies according to sample size. We argued that reliable analyses require a comparison with a control sample, such as that from an adult speaker. Second, we investigated the validity of estimates of error rates based on small samples. The results showed that overall error rates underestimate the incidence of error in some rarely produced parts of the system and that analyses on small samples were likely to substantially over- or underestimate error rates in infrequently produced constructions. We concluded that caution must be used when basing arguments about the scope and nature of errors in children's early multi-word productions on analyses of samples of spontaneous speech.
Rowland, C. F., Bidgood, A., Jones, G., Jessop, A., Stinson, P., Pine, J. M., Durrant, S., & Peter, M. S. (2024). Simulating the relationship between nonword repetition performance and vocabulary growth in 2-year-olds: Evidence from the Language 0–5 Project. Language Learning. Advance online publication. doi:10.1111/lang.12671.
Abstract
A strong predictor of children's language is performance on non-word repetition (NWR) tasks. However, the basis of this relationship remains unknown. Some suggest that NWR tasks measure phonological working memory, which then affects language growth. Others argue that children's knowledge of language/language experience affects NWR performance. A complicating factor is that most studies focus on school-aged children, who have already mastered key language skills. Here, we present a new NWR task for English-learning 2-year-olds, use it to assess the effect of NWR performance on concurrent and later vocabulary development, and compare the children's performance with that of an experience-based computational model (CLASSIC). The new NWR task produced reliable results, replicating the wordlikeness effects, word-length effects, and relationship with concurrent and later language ability that we see in older children. The model also simulated all effects, suggesting that the relationship between vocabulary and NWR performance can be explained by language experience-/knowledge-based theories.
Rubianes, M., Drijvers, L., Muñoz, F., Jiménez-Ortega, L., Almeida-Rivera, T., Sánchez-García, J., Fondevila, S., Casado, P., & Martín-Loeches, M. (2024). The self-reference effect can modulate language syntactic processing even without explicit awareness: An electroencephalography study. Journal of Cognitive Neuroscience, 36(3), 460-474. doi:10.1162/jocn_a_02104.
Abstract
Although it is well established that self-related information can rapidly capture our attention and bias cognitive functioning, whether this self-bias can affect language processing remains largely unknown. In addition, there is an ongoing debate as to the functional independence of language processes, notably regarding the syntactic domain. Hence, this study investigated the influence of self-related content on syntactic speech processing. Participants listened to sentences that could contain morphosyntactic anomalies while the masked face identity (self, friend, or unknown faces) was presented for 16 msec preceding the critical word. The language-related ERP components (left anterior negativity [LAN] and P600) appeared for all identity conditions. However, the largest LAN effect followed by a reduced P600 effect was observed for self-faces, whereas a larger LAN with no reduction of the P600 was found for friend faces compared with unknown faces. These data suggest that both early and late syntactic processes can be modulated by self-related content. In addition, alpha power was more suppressed over the left inferior frontal gyrus only when self-faces appeared before the critical word. This may reflect higher semantic demands concomitant to early syntactic operations (around 150–550 msec). Our data also provide further evidence of self-specific response, as reflected by the N250 component. Collectively, our results suggest that identity-related information is rapidly decoded from facial stimuli and may impact core linguistic processes, supporting an interactive view of syntactic processing. This study provides evidence that the self-reference effect can be extended to syntactic processing.
Rubio-Fernández, P. (2024). Cultural evolutionary pragmatics: Investigating the codevelopment and coevolution of language and social cognition. Psychological Review, 131(1), 18-35. doi:10.1037/rev0000423.
Abstract
Language and social cognition come together in communication, but their relation has been intensely contested. Here, I argue that these two distinctively human abilities are connected in a positive feedback loop, whereby the development of one cognitive skill boosts the development of the other. More specifically, I hypothesize that language and social cognition codevelop in ontogeny and coevolve in diachrony through the acquisition, mature use, and cultural evolution of reference systems (e.g., demonstratives: “this” vs. “that”; articles: “a” vs. “the”; pronouns: “I” vs. “you”). I propose to study the connection between reference systems and communicative social cognition across three parallel timescales—language acquisition, language use, and language change, as a new research program for cultural evolutionary pragmatics. Within that framework, I discuss the coevolution of language and communicative social cognition as cognitive gadgets, and introduce a new methodological approach to study how universals and cross-linguistic differences in reference systems may result in different developmental pathways to human social cognition.
Rubio-Fernandez, P., Long, M., Shukla, V., Bhatia, V., Mahapatra, A., Ralekar, C., Ben-Ami, S., & Sinha, P. (2024). Multimodal communication in newly sighted children: An investigation of the relation between visual experience and pragmatic development. In L. K. Samuelson, S. L. Frank, M. Toneva, A. Mackey, & E. Hazeltine (Eds.), Proceedings of the 46th Annual Meeting of the Cognitive Science Society (CogSci 2024) (pp. 2560-2567).
Abstract
We investigated the relationship between visual experience and pragmatic development by testing the socio-communicative skills of a unique population: the Prakash children of India, who received treatment for congenital cataracts after years of visual deprivation. Using two different referential communication tasks, our study investigated Prakash children's ability to produce sufficiently informative referential expressions (e.g., ‘the green pear' or ‘the small plate') and pay attention to their interlocutor's face during the task (Experiment 1), as well as their ability to recognize a speaker's referential intent through non-verbal cues such as head turning and pointing (Experiment 2). Our results show that Prakash children have strong pragmatic skills, but do not look at their interlocutor's face as often as neurotypical children do. However, longitudinal analyses revealed an increase in face fixations, suggesting that over time, Prakash children come to utilize their improved visual skills for efficient referential communication.
De Ruiter, J. P., Mitterer, H., & Enfield, N. J. (2006). Projecting the end of a speaker's turn: A cognitive cornerstone of conversation. Language, 82(3), 515-535.
Abstract
A key mechanism in the organization of turns at talk in conversation is the ability to anticipate or PROJECT the moment of completion of a current speaker’s turn. Some authors suggest that this is achieved via lexicosyntactic cues, while others argue that projection is based on intonational contours. We tested these hypotheses in an on-line experiment, manipulating the presence of symbolic (lexicosyntactic) content and intonational contour of utterances recorded in natural conversations. When hearing the original recordings, subjects can anticipate turn endings with the same degree of accuracy attested in real conversation. With intonational contour entirely removed (leaving intact words and syntax, with a completely flat pitch), there is no change in subjects’ accuracy of end-of-turn projection. But in the opposite case (with original intonational contour intact, but with no recognizable words), subjects’ performance deteriorates significantly. These results establish that the symbolic (i.e. lexicosyntactic) content of an utterance is necessary (and possibly sufficient) for projecting the moment of its completion, and thus for regulating conversational turn-taking. By contrast, and perhaps surprisingly, intonational contour is neither necessary nor sufficient for end-of-turn projection.
De Ruiter, J. P. (2006). Can gesticulation help aphasic people speak, or rather, communicate? Advances in Speech-Language Pathology, 8(2), 124-127. doi:10.1080/14417040600667285.
Abstract
As Rose (2006) discusses in the lead article, two camps can be identified in the field of gesture research: those who believe that gesticulation enhances communication by providing extra information to the listener, and on the other hand those who believe that gesticulation is not communicative, but rather that it facilitates speaker-internal word finding processes. I review a number of key studies relevant for this controversy, and conclude that the available empirical evidence supports the notion that gesture is a communicative device which can compensate for problems in speech by providing information in gesture. Following that, I discuss the finding by Rose and Douglas (2001) that making gestures does facilitate word production in some patients with aphasia. I argue that the gestures produced in the experiment by Rose and Douglas are not guaranteed to be of the same kind as the gestures that are produced spontaneously under naturalistic, communicative conditions, which makes it difficult to generalise from that particular study to general gesture behavior. As a final point, I encourage researchers in the area of aphasia to put more emphasis on communication in naturalistic contexts (e.g., conversation) in testing the capabilities of people with aphasia.
Sánchez-de la Vega, G., Gasca-Pineda, J., Martínez-Cárdenas, A., Vernes, S. C., Teeling, E. C., Mai, M., Aguirre-Planter, E., Eguiarte, L. E., Phillips, C. D., & Ortega, J. (2024). The genome sequence of the endemic Mexican common mustached bat, Pteronotus mexicanus Miller, 1902 [Mormoopidae; Pteronotus]. Gene, 929: 148821. doi:10.1016/j.gene.2024.148821.
Abstract
We describe here the first characterization of the genome of the bat Pteronotus mexicanus, an endemic species of Mexico, as part of the Mexican Bat Genome Project which focuses on the characterization and assembly of the genomes of endemic bats in Mexico. The genome was assembled from a liver tissue sample of an adult male from Jalisco, Mexico provided by the Texas Tech University Museum tissue collection. The assembled genome size was 1.9 Gb. The assembly of the genome was fitted in a framework of 110,533 scaffolds and 1,659,535 contigs. The ecological importance of bats such as P. mexicanus, and their diverse ecological roles, underscores the value of having complete genomes in addressing information gaps and facing challenges regarding their function in ecosystems and their conservation.
Sander, J., Çetinçelik, M., Zhang, Y., Rowland, C. F., & Harmon, Z. (2024). Why does joint attention predict vocabulary acquisition? The answer depends on what coding scheme you use. In L. K. Samuelson, S. L. Frank, M. Toneva, A. Mackey, & E. Hazeltine (Eds.), Proceedings of the 46th Annual Meeting of the Cognitive Science Society (CogSci 2024) (pp. 1607-1613).
Abstract
Despite decades of study, we still know less than we would like about the association between joint attention (JA) and language acquisition. This is partly because of disagreements on how to operationalise JA. In this study, we examine the impact of applying two different, influential JA operationalisation schemes to the same dataset of child-caregiver interactions, to determine which yields a better fit to children's later vocabulary size. Two coding schemes—one defining JA in terms of gaze overlap and one in terms of social aspects of shared attention—were applied to video-recordings of dyadic naturalistic toy-play interactions (N=45). We found that JA was predictive of later production vocabulary when operationalised as shared focus (study 1), but also that its operationalisation as shared social awareness increased its predictive power (study 2). Our results emphasise the critical role of methodological choices in understanding how and why JA is associated with vocabulary size.
Scharenborg, O., Wan, V., & Moore, R. K. (2006). Capturing fine-phonetic variation in speech through automatic classification of articulatory features. In Speech Recognition and Intrinsic Variation Workshop [SRIV2006] (pp. 77-82). ISCA Archive.
Abstract
The ultimate goal of our research is to develop a computational model of human speech recognition that is able to capture the effects of fine-grained acoustic variation on speech recognition behaviour. As part of this work we are investigating automatic feature classifiers that are able to create reliable and accurate transcriptions of the articulatory behaviour encoded in the acoustic speech signal. In the experiments reported here, we compared support vector machines (SVMs) with multilayer perceptrons (MLPs). MLPs have been widely (and rather successfully) used for the task of multi-value articulatory feature classification, while (to the best of our knowledge) SVMs have not. This paper compares the performances of the two classifiers and analyses the results in order to better understand the articulatory representations. It was found that the MLPs outperformed the SVMs, but it is concluded that both classifiers exhibit similar behaviour in terms of patterns of errors.
Schijven, D., Soheili-Nezhad, S., Fisher, S. E., & Francks, C. (2024). Exome-wide analysis implicates rare protein-altering variants in human handedness. Nature Communications, 15: 2632. doi:10.1038/s41467-024-46277-w.
Abstract
Handedness is a manifestation of brain hemispheric specialization. Left-handedness occurs at increased rates in neurodevelopmental disorders. Genome-wide association studies have identified common genetic effects on handedness or brain asymmetry, which mostly involve variants outside protein-coding regions and may affect gene expression. Implicated genes include several that encode tubulins (microtubule components) or microtubule-associated proteins. Here we examine whether left-handedness is also influenced by rare coding variants (frequencies ≤ 1%), using exome data from 38,043 left-handed and 313,271 right-handed individuals from the UK Biobank. The beta-tubulin gene TUBB4B shows exome-wide significant association, with a rate of rare coding variants 2.7 times higher in left-handers than right-handers. The TUBB4B variants are mostly heterozygous missense changes, but include two frameshifts found only in left-handers. Other TUBB4B variants have been linked to sensorineural and/or ciliopathic disorders, but not the variants found here. Among genes previously implicated in autism or schizophrenia by exome screening, DSCAM and FOXP1 show evidence for rare coding variant association with left-handedness. The exome-wide heritability of left-handedness due to rare coding variants was 0.91%. This study reveals a role for rare, protein-altering variants in left-handedness, providing further evidence for the involvement of microtubules and disorder-relevant genes.
Schiller, N. O., Schuhmann, T., Neyndorff, A. C., & Jansma, B. M. (2006). The influence of semantic category membership on syntactic decisions: A study using event-related brain potentials. Brain Research, 1082(1), 153-164. doi:10.1016/j.brainres.2006.01.087.
Abstract
An event-related brain potentials (ERP) experiment was carried out to investigate the influence of semantic category membership on syntactic decision-making. Native speakers of German viewed a series of words that were semantically marked or unmarked for gender and made go/no-go decisions about the grammatical gender of those words. The electrophysiological results indicated that participants could make a gender decision earlier when words were semantically gender-marked than when they were semantically gender-unmarked. Our data provide evidence for the influence of semantic category membership on the decision of the syntactic gender of a visually presented German noun. More specifically, our results support models of language comprehension in which semantic information processing of words is initiated before syntactic information processing is finalized.
Schiller, N. O., & Costa, A. (2006). Different selection principles of freestanding and bound morphemes in language production. Journal of Experimental Psychology: Learning, Memory, and Cognition, 32(5), 1201-1207. doi:10.1037/0278-7393.32.5.1201.
Abstract
Freestanding and bound morphemes differ in many (psycho)linguistic aspects. Some theorists have claimed that the representation and retrieval of freestanding and bound morphemes in the course of language production are governed by similar processing mechanisms. Alternatively, it has been proposed that both types of morphemes may be selected for production in different ways. In this article, the authors first review the available experimental evidence related to this topic and then present new experimental data pointing to the notion that freestanding and bound morphemes are retrieved following distinct processing principles: freestanding morphemes are subject to competition, whereas bound morphemes are not.
Schiller, N. O. (2006). Lexical stress encoding in single word production estimated by event-related brain potentials. Brain Research, 1112(1), 201-212. doi:10.1016/j.brainres.2006.07.027.
Abstract
An event-related brain potentials (ERPs) experiment was carried out to investigate the time course of lexical stress encoding in language production. Native speakers of Dutch viewed a series of pictures corresponding to bisyllabic names which were either stressed on the first or on the second syllable and made go/no-go decisions on the lexical stress location of those picture names. Behavioral results replicated a pattern that was observed earlier, i.e. faster button-press latencies to initial as compared to final stress targets. The electrophysiological results indicated that participants could make a lexical stress decision significantly earlier when picture names had initial than when they had final stress. Moreover, the present data suggest the time course of lexical stress encoding during single word form formation in language production. When word length is corrected for, the temporal interval for lexical stress encoding specified by the current ERP results falls into the time window previously identified for phonological encoding in language production.
Schiller, N. O., Jansma, B. M., Peters, J., & Levelt, W. J. M. (2006). Monitoring metrical stress in polysyllabic words. Language and Cognitive Processes, 21(1/2/3), 112-140. doi:10.1080/01690960400001861.
Abstract
This study investigated the monitoring of metrical stress information in internally generated speech. In Experiment 1, Dutch participants were asked to judge whether bisyllabic picture names had initial or final stress. Results showed significantly faster decision times for initially stressed targets (e.g., KAno ‘‘canoe’’) than for targets with final stress (e.g., kaNON ‘‘cannon’’; capital letters indicate stressed syllables). It was demonstrated that monitoring latencies are not a function of the picture naming or object recognition latencies to the same pictures. Experiments 2 and 3 replicated the outcome of the first experiment with trisyllabic picture names. These results are similar to the findings of Wheeldon and Levelt (1995) in a segment monitoring task. The outcome might be interpreted to demonstrate that phonological encoding in speech production is a rightward incremental process. Alternatively, the data might reflect the sequential nature of a perceptual mechanism used to monitor lexical stress.
Schiller, N. O., & Caramazza, A. (2006). Grammatical gender selection and the representation of morphemes: The production of Dutch diminutives. Language and Cognitive Processes, 21, 945-973. doi:10.1080/01690960600824344.
Abstract
In this study, we investigated grammatical feature selection during noun phrase production in Dutch. More specifically, we studied the conditions under which different grammatical genders select either the same or different determiners. Pictures of simple objects paired with a gender-congruent or a gender-incongruent distractor word were presented. Participants named the pictures using a noun phrase with the appropriate gender-marked determiner. Auditory (Experiment 1) or visual cues (Experiment 2) indicated whether the noun was to be produced in its standard or diminutive form. Results revealed a cost in naming latencies when target and distractor take different determiner forms independent of whether or not they have the same gender. This replicates earlier results showing that congruency effects are due to competition during the selection of determiner forms rather than gender features. The overall pattern of results supports the view that grammatical feature selection is an automatic consequence of lexical node selection and therefore not subject to interference from incongruent grammatical features. Selection of the correct determiner form, however, is a competitive process, implying that lexical node and grammatical feature selection operate with distinct principles.
Schreiner, M. S., Zettersten, M., Bergmann, C., Frank, M. C., Fritzsche, T., Gonzalez-Gomez, N., Hamlin, K., Kartushina, N., Kellier, D. J., Mani, N., Mayor, J., Saffran, J., Shukla, M., Silverstein, P., Soderstrom, M., & Lippold, M. (2024). Limited evidence of test-retest reliability in infant-directed speech preference in a large pre-registered infant experiment. Developmental Science, 27(6): e13551. doi:10.1111/desc.13551.
Abstract
Test-retest reliability—establishing that measurements remain consistent across multiple testing sessions—is critical to measuring, understanding, and predicting individual differences in infant language development. However, previous attempts to establish measurement reliability in infant speech perception tasks are limited, and reliability of frequently used infant measures is largely unknown. The current study investigated the test-retest reliability of infants’ preference for infant-directed speech over adult-directed speech in a large sample (N = 158) in the context of the ManyBabies1 collaborative research project. Labs were asked to bring in participating infants for a second appointment retesting infants on their preference for infant-directed speech. This approach allowed us to estimate test-retest reliability across three different methods used to investigate preferential listening in infancy: the head-turn preference procedure, central fixation, and eye-tracking. Overall, we found no consistent evidence of test-retest reliability in measures of infants’ speech preference (overall r = 0.09, 95% CI [−0.06,0.25]). While increasing the number of trials that infants needed to contribute for inclusion in the analysis revealed a numeric growth in test-retest reliability, it also considerably reduced the study’s effective sample size. Therefore, future research on infant development should take into account that not all experimental measures may be appropriate for assessing individual differences between infants.
Scott, S., & Sauter, D. (2006). Non-verbal expressions of emotion - acoustics, valence, and cross cultural factors. In Third International Conference on Speech Prosody 2006. ISCA.
Abstract
This presentation will address aspects of the expression of emotion in non-verbal vocal behaviour, specifically attempting to determine the roles of both positive and negative emotions, their acoustic bases, and the extent to which these are recognized in non-Western cultures.
Seidl, A., & Johnson, E. K. (2006). Infant word segmentation revisited: Edge alignment facilitates target extraction. Developmental Science, 9(6), 565-573.
Abstract
In a landmark study, Jusczyk and Aslin (1995) demonstrated that English-learning infants are able to segment words from continuous speech at 7.5 months of age. In the current study, we explored the possibility that infants segment words from the edges of utterances more readily than the middle of utterances. The same procedure was used as in Jusczyk and Aslin (1995); however, our stimuli were controlled for target word location and infants were given a shorter familiarization time to avoid ceiling effects. Infants were familiarized to one word that always occurred at the edge of an utterance (sentence-initial position for half of the infants and sentence-final position for the other half) and one word that always occurred in sentence-medial position. Our results demonstrate that infants segment words from the edges of an utterance more readily than from the middle of an utterance. In addition, infants segment words from utterance-final position just as readily as they segment words from utterance-initial position. Possible explanations for these results, as well as their implications for current models of the development of word segmentation, are discussed.
Seidlmayer, E., Melnychuk, T., Galke, L., Kühnel, L., Tochtermann, K., Schultz, C., & Förstner, K. U. (2024). Research topic displacement and the lack of interdisciplinarity: Lessons from the scientific response to COVID-19. Scientometrics, 129, 5141-5179. doi:10.1007/s11192-024-05132-x.
Abstract
Based on a large-scale computational analysis of scholarly articles, this study investigates the dynamics of interdisciplinary research in the first year of the COVID-19 pandemic. Thereby, the study also analyses the reorientation effects away from other topics that receive less attention due to the high focus on the COVID-19 pandemic. The study aims to examine what can be learned from the (failing) interdisciplinarity of coronavirus research and its displacing effects for managing potential similar crises at the scientific level. To explore our research questions, we run several analyses by using the COVID-19++ dataset, which contains scholarly publications, preprints from the field of life sciences, and their referenced literature including publications from a broad scientific spectrum. Our results show the high impact and topic-wise adoption of research related to the COVID-19 crisis. Based on the similarity analysis of scientific topics, which is grounded on the concept embedding learning in the graph-structured bibliographic data, we measured the degree of interdisciplinarity of COVID-19 research in 2020. Our findings reveal a low degree of research interdisciplinarity. The publications’ reference analysis indicates the major role of clinical medicine, but also the growing importance of psychiatry and social sciences in COVID-19 research. A social network analysis shows that an author’s high degree of centrality significantly increases his or her degree of interdisciplinarity.
Seijdel, N., Schoffelen, J.-M., Hagoort, P., & Drijvers, L. (2024). Attention drives visual processing and audiovisual integration during multimodal communication. The Journal of Neuroscience, 44(10): e0870232023. doi:10.1523/JNEUROSCI.0870-23.2023.
Abstract
During communication in real-life settings, our brain often needs to integrate auditory and visual information, and at the same time actively focus on the relevant sources of information, while ignoring interference from irrelevant events. The interaction between integration and attention processes remains poorly understood. Here, we use rapid invisible frequency tagging (RIFT) and magnetoencephalography (MEG) to investigate how attention affects auditory and visual information processing and integration, during multimodal communication. We presented human participants (male and female) with videos of an actress uttering action verbs (auditory; tagged at 58 Hz) accompanied by two movie clips of hand gestures on both sides of fixation (attended stimulus tagged at 65 Hz; unattended stimulus tagged at 63 Hz). Integration difficulty was manipulated by a lower-order auditory factor (clear/degraded speech) and a higher-order visual semantic factor (matching/mismatching gesture). We observed an enhanced neural response to the attended visual information during degraded speech compared to clear speech. For the unattended information, the neural response to mismatching gestures was enhanced compared to matching gestures. Furthermore, signal power at the intermodulation frequencies of the frequency tags, indexing non-linear signal interactions, was enhanced in left frontotemporal and frontal regions. Focusing on LIFG (Left Inferior Frontal Gyrus), this enhancement was specific for the attended information, for those trials that benefitted from integration with a matching gesture. Together, our results suggest that attention modulates audiovisual processing and interaction, depending on the congruence and quality of the sensory input.
Additional information
link to preprint -
Sekine, K. (2006). Developmental changes in spatial frame of reference among preschoolers: Spontaneous gestures and speech in route descriptions. The Japanese journal of developmental psychology, 17(3), 263-271.
Abstract
This research investigated how spontaneous gestures during speech represent “Frames of Reference” (FoR) among preschool children, and how their FoRs change with age. Four-, five-, and six-year-olds (N=55) described the route from the nursery school to their own homes. Analysis of children’s utterances and gestures showed that mean length of utterance, speech time, and use of landmarks or right/left terms to describe a route all increased with age. Most 4-year-olds made gestures in the direction of the actual route to their homes, and their hands tended to be raised above the shoulder. In contrast, 6-year-olds used gestures to give directions that did not match the actual route, as if they were creating a virtual space in front of the speaker. Some 5- and 6-year-olds produced gestures that represented survey mapping. These results indicated that the development of FoR in childhood may change from an egocentric FoR to a fixed FoR. Verbal encoding skills and commuting experience were also discussed as factors underlying the development of FoR. -
Sekine, K., & Özyürek, A. (2024). Children benefit from gestures to understand degraded speech but to a lesser extent than adults. Frontiers in Psychology, 14: 1305562. doi:10.3389/fpsyg.2023.1305562.
Abstract
The present study investigated to what extent children, compared to adults, benefit from gestures to disambiguate degraded speech by manipulating speech signals and manual modality. Dutch-speaking adults (N = 20) and 6- and 7-year-old children (N = 15) were presented with a series of video clips in which an actor produced a Dutch action verb with or without an accompanying iconic gesture. Participants were then asked to repeat what they had heard. The speech signal was either clear or altered into 4- or 8-band noise-vocoded speech. Children had more difficulty than adults in disambiguating degraded speech in the speech-only condition. However, when presented with both speech and gestures, children reached a comparable level of accuracy to that of adults in the degraded-speech-only condition. Furthermore, for adults, the enhancement of gestures was greater in the 4-band condition than in the 8-band condition, whereas children showed the opposite pattern. Gestures help children to disambiguate degraded speech, but children need more phonological information than adults to benefit from use of gestures. Children’s multimodal language integration needs to further develop to adapt flexibly to challenging situations such as degraded speech, as tested in our study, or instances where speech is heard with environmental noise or through a face mask.
Additional information
supplemental material -
Senft, G. (2006). Prolegomena to Kilivila grammar of space. In S. C. Levinson, & D. P. Wilkins (Eds.), Grammars of space: Explorations in cognitive diversity (pp. 206-229). Cambridge: Cambridge University Press.
Abstract
This paper presents preliminary remarks on some of the central linguistic means speakers of Kilivila use to express their conceptions of space and to refer to objects, persons, and events in space. After a brief characterisation of the language and its speakers, I sketch how specific topological relations are encoded, how motion events are described, and what frames of spatial reference are preferred in what contexts for what means and ends. -
Senft, G. (2006). Völkerkunde und Linguistik: Ein Plädoyer für interdisziplinäre Kooperation. Zeitschrift für Germanistische Linguistik, 34, 87-104.
Abstract
Starting with Hockett’s famous statement on the relationship between linguistics and anthropology - “Linguistics without anthropology is sterile; anthropology without linguistics is blind” - this paper first discusses the topic from a historical perspective. This discussion starts with Herder, Humboldt and Schleiermacher and ends with the present debate on the interrelationship of anthropology and linguistics. Then some excellent examples of interdisciplinary projects within anthropological linguistics (or linguistic anthropology) are presented. Finally, it is illustrated why Hockett is still right. -
Senft, G. (2006). A biography in the strict sense of the term [Review of the book Malinowski: Odyssey of an anthropologist 1884-1920, vol. 1 by Michael Young]. Journal of Pragmatics, 38(4), 610-637. doi:10.1016/j.pragma.2005.06.012.
-
Senft, G. (2006). [Review of the book Bilder aus der Deutschen Südsee by Hermann Joseph Hiery]. Paideuma: Mitteilungen zur Kulturkunde, 52, 304-308.
-
Senft, G. (2006). [Review of the book Narrative as social practice: Anglo-Western and Australian Aboriginal oral traditions by Danièle M. Klapproth]. Journal of Pragmatics, 38(8), 1326-1331. doi:10.1016/j.pragma.2005.11.001.
-
Senft, G. (2006). [Review of the book Pacific Pidgins and Creoles: Origins, growth and development by Darrell T. Tryon and Jean-Michel Charpentier]. Linguistics, 44(1), 195-200. doi:10.1515/LING.2006.006.
-
Senft, G. (2024). Die IPrA, Helmut und ich. Wiener Linguistische Gazette, 97, 35-49.
Abstract
This contribution describes the beginning and development of the professional and personal relationship between Helmut and the author, which has been strongly shaped by their joint membership in the International Pragmatics Association and by their activities in and for the IPrA. -
Serio, B., Hettwer, M. D., Wiersch, L., Bignardi, G., Sacher, J., Weis, S., Eickhoff, S. B., & Valk, S. L. (2024). Sex differences in functional cortical organization reflect differences in network topology rather than cortical morphometry. Nature Communications, 15: 7714. doi:10.1038/s41467-024-51942-1.
Abstract
Differences in brain size between the sexes are consistently reported. However, the consequences of this anatomical difference on sex differences in intrinsic brain function remain unclear. In the current study, we investigate whether sex differences in intrinsic cortical functional organization may be associated with differences in cortical morphometry, namely different measures of brain size, microstructure, and the geodesic distance of connectivity profiles. For this, we compute a low dimensional representation of functional cortical organization, the sensory-association axis, and identify widespread sex differences. Contrary to our expectations, sex differences in functional organization do not appear to be systematically associated with differences in total surface area, microstructural organization, or geodesic distance, despite these morphometric properties being per se associated with functional organization and differing between sexes. Instead, functional sex differences in the sensory-association axis are associated with differences in functional connectivity profiles and network topology. Collectively, our findings suggest that sex differences in functional cortical organization extend beyond sex differences in cortical morphometry.
Additional information
41467_2024_51942_MOESM1_ESM.pdf -
Seuren, P. A. M. (2006). Sentence-oriented semantic approaches in generative grammar. In S. Auroux, E. Koerner, H. J. Niederehe, & K. Versteegh (Eds.), History of the Language Sciences: An International Handbook on the Evolution of the Study of Language from the Beginnings to the Present (pp. 2201-2213). Berlin: Walter de Gruyter.
Abstract
Contents: 1. Introduction; 2. A generative grammar as an algorithm; 3. The semantic component; 4. Bibliography.
Throughout the 20th century up to the present day, grammar and semantics have been uneasy bedfellows. A look at the historical background will make it clear how this curious situation came about. 20th-century linguistics has been characterized by an almost exclusive concern with the structure of words, word groups, and sentences. This concern was reinforced, especially on the American side of the Atlantic, by the sudden rise and subsequent dominance of behaviorism during the 1920s. It started in psychology but quickly permeated all the human sciences, including linguistics, until the early 1960s, when it collapsed as suddenly as it had arisen. -
Seuren, P. A. M. (2006). Presupposition. In K. Brown (Ed.), Encyclopedia of Language and Linguistics (vol. 10) (pp. 80-87). Amsterdam: Elsevier.
Abstract
Presupposition is a semantic device built into natural language to make sentences fit for use in certain contexts but not in others. A sentence carrying a presupposition thus evokes a context in which that presupposition is fulfilled. The study of presupposition was triggered by the behavior of natural language negation, which tends to preserve presuppositions either as invited inferences or as entailments. As the role of discourse became more apparent in semantics, presupposition began to be seen increasingly as a discourse-semantic phenomenon with consequences for the logic of language. -
Seuren, P. A. M. (2006). Projection problem. In K. Brown (Ed.), Encyclopedia of Language and Linguistics (vol. 10) (pp. 128-131). Amsterdam: Elsevier.
Abstract
The property of presuppositions to be sometimes preserved through embeddings, albeit often in a weakened form, is called projection. The projection problem consists in formulating the conditions under which the presuppositions of an embedded clause (a) are kept as presuppositions of the superordinate structure, or (b) remain as an invited inference that can be overruled by context, or (c) are canceled. Over the past 25 years it has been recognized that the projection problem is to be solved in the context of a wider theory of presupposition and discourse incrementation. -
Seuren, P. A. M. (2006). Propositional and predicate logic: Linguistic aspects. In K. Brown (Ed.), Encyclopedia of Language and Linguistics (vol. 10) (pp. 146-153). Amsterdam: Elsevier.
Abstract
Logic was discovered by Aristotle when he saw that the semantic behavior of the negation word not is different in sentences with a definite and in those with a quantified subject term. Until the early 20th century, logic remained firmly language-based, but for the past century it has been mainly a tool in the hands of mathematicians, which has meant an alienation from linguistic reality. With the help of new techniques, it is now possible to revert to the logic of language, which is seen as based on a semantic analysis of the logical words (constants) involved. This new perspective, combined with much improved insights into the semantically defined discourse dependency of natural language sentences, leads to a novel and more functionally oriented approach to logic and to a reappraisal of traditional predicate calculus, whose main fault, undue existential import, evaporates when discourse dependency, in particular the presuppositional aspect, is brought into play. Traditional predicate calculus is seen to have a much greater logical power and a much greater functionality than modern predicate calculus. There is also full isomorphism, neglected in modern logic, between traditional predicate calculus and propositional calculus, which raises the question of any possible deeper causes. -
Seuren, P. A. M. (2006). The natural logic of language and cognition. Pragmatics, 16(1), 103-138.
-
Seuren, P. A. M. (2006). Virtual objects. In K. Brown (Ed.), Encyclopedia of Language and Linguistics (vol. 13) (pp. 438-441). Amsterdam: Elsevier.
Abstract
Virtual objects are objects thought up by a thinking individual. Although 20th-century philosophy has tried to ban them from ontology, they make it impossible to account for the truth of sentences such as Apollo was worshipped in the island of Delos, in which a property is assigned to the nonexisting, virtual entity Apollo. Such facts are the reason why virtual objects are slowly being recognized again. -
Seuren, P. A. M. (2006). Factivity. In K. Brown (Ed.), Encyclopedia of Language and Linguistics (vol. 4) (pp. 423-424). Amsterdam: Elsevier.
Abstract
Some predicates are ‘factive’ in that they induce the presupposition that what is said in their subordinate that clause is true. -
Seuren, P. A. M. (2006). Donkey sentences. In K. Brown (Ed.), Encyclopedia of Language and Linguistics (vol. 3) (pp. 763-766). Amsterdam: Elsevier.
Abstract
The term ‘donkey sentences’ derives from the medieval philosopher Walter Burleigh, whose example sentences contain mention of donkeys. The modern philosopher Peter Geach rediscovered Burleigh's sentences and the associated problem. The problem is that natural language anaphoric pronouns are sometimes used in a way that cannot be accounted for in terms of modern predicate calculus. The solution lies in establishing a separate category of anaphoric pronouns that refer via the intermediary of a contextually given antecedent, possibly an existentially quantified expression. -
Seuren, P. A. M. (2006). Early formalization tendencies in 20th-century American linguistics. In S. Auroux, E. Koerner, H.-J. Niederehe, & K. Versteegh (Eds.), History of the Language Sciences: An International Handbook on the Evolution of the Study of Language from the Beginnings to the Present (pp. 2026-2034). Berlin: Walter de Gruyter. -
Seuren, P. A. M. (2006). Discourse domain. In K. Brown (Ed.), Encyclopedia of Language and Linguistics (vol. 1) (pp. 638-639). Amsterdam: Elsevier.
Abstract
A discourse domain D is a form of middle-term memory for the storage of the information embodied in the discourse at hand. The information carried by a new utterance u is added to D (u is incremented to D). The processes involved and the specific structure of D are a matter of ongoing research. -
Seuren, P. A. M. (2006). Discourse semantics. In K. Brown (Ed.), Encyclopedia of Language and Linguistics (vol. 3) (pp. 669-677). Amsterdam: Elsevier.
Abstract
Discourse semantics (DSx) is based on the fact that the interpretation of uttered sentences is dependent on and co-determined by the information stored in a specialized middle-term cognitive memory called discourse domain (D). DSx studies the structure and dynamics of Ds and the conditions to be fulfilled by D for proper interpretation. It does so in the light of the truth-conditional criteria for semantics, with an emphasis on intensionality phenomena. It requires the assumption of virtual entities and virtual facts. Any model-theoretic interpretation holds between discourse structures and pre-established verification domains. -
Seuren, P. A. M. (2006). Aristotle and linguistics. In K. Brown (Ed.), Encyclopedia of Language and Linguistics (vol. 1) (pp. 469-471). Amsterdam: Elsevier.
Abstract
Aristotle's importance in the professional study of language consists first of all in the fact that he demythologized language and made it an object of rational investigation. In the context of his theory of truth as correspondence, he also provided the first semantic analysis of propositions in that he distinguished two main constituents, the predicate, which expresses a property, and the remainder of the proposition, referring to a substance to which the property is assigned. That assignment is either true or false. Later, the ‘remainder’ was called subject term, and the Aristotelian predicate was identified with the verb in the sentence. The Aristotelian predicate, however, is more like what is now called the ‘comment,’ whereas his remainder corresponds to the topic. Aristotle, furthermore, defined nouns and verbs as word classes. In addition, he introduced the term ‘case’ for paradigmatic morphological variation. -
Seuren, P. A. M. (2006). McCawley’s legacy [Review of the book Polymorphous linguistics: Jim McCawley's legacy ed. by Salikoko S. Mufwene, Elaine J. Francis and Rebecca S. Wheeler]. Language Sciences, 28(5), 521-526. doi:10.1016/j.langsci.2006.02.001.
-
Seuren, P. A. M. (2006). Meaning, the cognitive dependency of lexical meaning. In K. Brown (Ed.), Encyclopedia of Language and Linguistics (vol. 7) (pp. 575-577). Amsterdam: Elsevier.
Abstract
There is a growing awareness among theoretical linguists and philosophers of language that the linguistic definition of lexical meanings, which must be learned when one learns a language, underdetermines not only full utterance interpretation but also sentence meaning. The missing information must be provided by cognition – that is, either general encyclopedic or specific situational knowledge. This fact crucially shows the basic insufficiency of current standard model-theoretic semantics as a paradigm for the analysis and description of linguistic meaning. -
Seuren, P. A. M. (2006). Lexical conditions. In K. Brown (Ed.), Encyclopedia of Language and Linguistics (vol. 7) (pp. 77-79). Amsterdam: Elsevier.
Abstract
The lexical conditions, also known as satisfaction conditions, of a predicate P are the conditions that must be satisfied by the term referents of P for P applied to these term referents to yield a true sentence. In view of presupposition theory it makes sense to distinguish two categories of lexical conditions, the preconditions that must be satisfied for the sentence to be usable in any given discourse, and the update conditions which must be satisfied for the sentence to yield truth. -
Seuren, P. A. M. (2006). Multivalued logics. In K. Brown (Ed.), Encyclopedia of Language and Linguistics (vol. 8) (pp. 387-390). Amsterdam: Elsevier.
Abstract
The widely prevailing view that standard bivalent logic is the only possible sound logical system, imposed by metaphysical necessity, has been shattered by the development of multivalent logics during the 20th century. It is now clear that standard bivalent logic is merely the minimal representative of a wide variety of viable logics with any number of truth values. These viable logics can be subdivided into families. In this article, the Kleene family and the PPCn family are subjected to special examination, as they appear to be most relevant for the study of the logical properties of human language. -
Severijnen, G. G. A., Bosker, H. R., & McQueen, J. M. (2024). Your “VOORnaam” is not my “VOORnaam”: An acoustic analysis of individual talker differences in word stress in Dutch. Journal of Phonetics, 103: 101296. doi:10.1016/j.wocn.2024.101296.
Abstract
Different talkers speak differently, even within the same homogeneous group. These differences lead to acoustic variability in speech, causing challenges for correct perception of the intended message. Because previous descriptions of this acoustic variability have focused mostly on segments, talker variability in prosodic structures is not yet well documented. The present study therefore examined acoustic between-talker variability in word stress in Dutch. We recorded 40 native Dutch talkers from a participant sample with minimal dialectal variation and balanced gender, producing segmentally overlapping words (e.g., VOORnaam vs. voorNAAM; ‘first name’ vs. ‘respectable’, capitalization indicates lexical stress), and measured different acoustic cues to stress. Each individual participant’s acoustic measurements were analyzed using Linear Discriminant Analyses, which provide coefficients for each cue, reflecting the strength of each cue in a talker’s productions. On average, talkers primarily used mean F0, intensity, and duration. Moreover, each participant also employed a unique combination of cues, illustrating large prosodic variability between talkers. In fact, classes of cue-weighting tendencies emerged, differing in which cue was used as the main cue. These results offer the most comprehensive acoustic description, to date, of word stress in Dutch, and illustrate that large prosodic variability is present between individual talkers. -
Severijnen, G. G. A., Gärtner, V. M., Walther, R. F. E., & McQueen, J. M. (2024). Talker-specific perceptual learning about lexical stress: stability over time. In Y. Chen, A. Chen, & A. Arvaniti (Eds.), Proceedings of Speech Prosody 2024 (pp. 657-661). doi:10.21437/SpeechProsody.2024-133.
Abstract
Talkers vary in how they speak, resulting in acoustic variability in segments and prosody. Previous studies showed that listeners deal with segmental variability through perceptual learning and that these learning effects are stable over time. The present study examined whether this is also true for lexical stress variability. Listeners heard Dutch minimal pairs (e.g., VOORnaam vs. voorNAAM, ‘first name’ vs. ‘respectable’) spoken by two talkers. Half of the participants heard Talker 1 using only F0 to signal lexical stress and Talker 2 using only intensity. The other half heard the reverse. After a learning phase, participants were tested on words spoken by these talkers with conflicting stress cues (‘mixed items’; e.g., Talker 1 saying voornaam with F0 signaling initial stress and intensity signaling final stress). We found that, despite the conflicting cues, listeners perceived these items following what they had learned. For example, participants hearing the example mixed item described above who had learned that Talker 1 used F0 perceived initial stress (VOORnaam) but those who had learned that Talker 1 used intensity perceived final stress (voorNAAM). Crucially, this result was still present in a delayed test phase, showing that talker-specific learning about lexical stress is stable over time. -
Seyfeddinipur, M. (2006). Disfluency: Interrupting speech and gesture. PhD Thesis, Radboud University Nijmegen, Nijmegen. doi:10.17617/2.59337.
Additional information
full text via Radboud Repository -
Shan, W., Zhang, Y., Zhao, J., Wu, S., Zhao, L., Ip, P., Tucker, J. D., & Jiang, F. (2024). Positive parent–child interactions moderate certain maltreatment effects on psychosocial well-being in 6-year-old children. Pediatric Research, 95, 802-808. doi:10.1038/s41390-023-02842-5.
Abstract
Background: Positive parental interactions may buffer maltreated children from poor psychosocial outcomes. The study aims to evaluate the associations between various types of maltreatment and psychosocial outcomes in early childhood, and examine the moderating effect of positive parent-child interactions on them.
Methods: Data were from a representative Chinese 6-year-old children sample (n = 17,088). Caregivers reported the history of child maltreatment perpetrated by any individuals, completed the Strengths and Difficulties Questionnaire as a proxy for psychosocial well-being, and reported the frequency of their interactions with children by the Chinese Parent-Child Interaction Scale.
Results: Physical abuse, emotional abuse, neglect, and sexual abuse were all associated with higher odds of psychosocial problems (aOR = 1.90 [95% CI: 1.57-2.29], aOR = 1.92 [95% CI: 1.75-2.10], aOR = 1.64 [95% CI: 1.17-2.30], aOR = 2.03 [95% CI: 1.30-3.17]). Positive parent-child interactions were associated with lower odds of psychosocial problems after accounting for different types of maltreatment. The moderating effect of frequent parent-child interactions was found only in the association between occasional only physical abuse and psychosocial outcomes (interaction term: aOR = 0.34, 95% CI: 0.15-0.77).
Conclusions: Maltreatment and positive parent-child interactions have impacts on psychosocial well-being in early childhood. Positive parent-child interactions could only buffer the adverse effect of occasional physical abuse on psychosocial outcomes. More frequent parent-child interactions may be an important intervention opportunity among some children.
Impact: It provides the first data on the prevalence of different single types and combinations of maltreatment in early childhood in Shanghai, China by drawing on a city-level population-representative sample. It adds to evidence that different forms and degrees of maltreatment were all associated with a higher risk of psychosocial problems in early childhood. Among them, sexual abuse posed the highest risk, followed by emotional abuse. It innovatively found that higher frequencies of parent-child interactions may provide buffering effects only to children who are exposed to occasional physical abuse. It provides a potential intervention opportunity, especially for physically abused children. -
Shatzman, K. B. (2006). Sensitivity to detailed acoustic information in word recognition. PhD Thesis, Radboud University Nijmegen, Nijmegen. doi:10.17617/2.59331.
Additional information
full text via Radboud Repository -
Shatzman, K. B., & McQueen, J. M. (2006). Segment duration as a cue to word boundaries in spoken-word recognition. Perception & Psychophysics, 68(1), 1-16.
Abstract
In two eye-tracking experiments, we examined the degree to which listeners use acoustic cues to word boundaries. Dutch participants listened to ambiguous sentences in which stop-initial words (e.g., pot, jar) were preceded by eens (once); the sentences could thus also refer to cluster-initial words (e.g., een spot, a spotlight). The participants made fewer fixations to target pictures (e.g., a jar) when the target and the preceding [s] were replaced by a recording of the cluster-initial word than when they were spliced from another token of the target-bearing sentence (Experiment 1). Although acoustic analyses revealed several differences between the two recordings, only [s] duration correlated with the participants’ fixations (more target fixations for shorter [s]s). Thus, we found that listeners apparently do not use all available acoustic differences equally. In Experiment 2, the participants made more fixations to target pictures when the [s] was shortened than when it was lengthened. Utterance interpretation can therefore be influenced by individual segment duration alone. -
Shatzman, K. B., & McQueen, J. M. (2006). Prosodic knowledge affects the recognition of newly acquired words. Psychological Science, 17(5), 372-377. doi:10.1111/j.1467-9280.2006.01714.x.
Abstract
An eye-tracking study examined the involvement of prosodic knowledge—specifically, the knowledge that monosyllabic words tend to have longer durations than the first syllables of polysyllabic words—in the recognition of newly learned words. Participants learned new spoken words (by associating them to novel shapes): bisyllables and onset-embedded monosyllabic competitors (e.g., baptoe and bap). In the learning phase, the duration of the ambiguous sequence (e.g., bap) was held constant. In the test phase, its duration was longer than, shorter than, or equal to its learning-phase duration. Listeners’ fixations indicated that short syllables tended to be interpreted as the first syllables of the bisyllables, whereas long syllables generated more monosyllabic-word interpretations. Recognition of newly acquired words is influenced by prior prosodic knowledge and is therefore not determined solely on the basis of stored episodes of those words. -
Shatzman, K. B., & McQueen, J. M. (2006). The modulation of lexical competition by segment duration. Psychonomic Bulletin & Review, 13(6), 966-971.
Abstract
In an eye-tracking study, we examined how fine-grained phonetic detail, such as segment duration, influences the lexical competition process during spoken word recognition. Dutch listeners’ eye movements to pictures of four objects were monitored as they heard sentences in which a stop-initial target word (e.g., pijp “pipe”) was preceded by an [s]. The participants made more fixations to pictures of cluster-initial words (e.g., spijker “nail”) when they heard a long [s] (mean duration, 103 msec) than when they heard a short [s] (mean duration, 73 msec). Conversely, the participants made more fixations to pictures of the stop-initial words when they heard a short [s] than when they heard a long [s]. Lexical competition between stop- and cluster-initial words, therefore, is modulated by segment duration differences of only 30 msec. -
Shi, R., Werker, J. F., & Cutler, A. (2006). Recognition and representation of function words in English-learning infants. Infancy, 10(2), 187-198. doi:10.1207/s15327078in1002_5.
Abstract
We examined infants' recognition of functors and the accuracy of the representations that infants construct of the perceived word forms. Auditory stimuli were “Functor + Content Word” versus “Nonsense Functor + Content Word” sequences. Eight-, 11-, and 13-month-old infants heard both real functors and matched nonsense functors (prosodically analogous to their real counterparts but containing a segmental change). Results reveal that 13-month-olds recognized functors with attention to segmental detail. Eight-month-olds did not distinguish real versus nonsense functors. The performance of 11-month-olds fell in between that of the older and younger groups, consistent with an emerging recognition of real functors. The three age groups exhibited a clear developmental trend. We propose that in the earliest stages of vocabulary acquisition, function elements receive no segmentally detailed representations, but such representations are gradually constructed so that once vocabulary growth starts in earnest, fully specified functor representations are in place to support it. -
Shi, R., Cutler, A., Werker, J., & Cruickshank, M. (2006). Frequency and form as determinants of functor sensitivity in English-acquiring infants. Journal of the Acoustical Society of America, 119(6), EL61-EL67. doi:10.1121/1.2198947.
Abstract
High-frequency functors are arguably among the earliest perceived word forms and may assist extraction of initial vocabulary items. Canadian 11- and 8-month-olds were familiarized to pseudo-nouns following either a high-frequency functor the or a low-frequency functor her versus phonetically similar mispronunciations of each, kuh and ler, and then tested for recognition of the pseudo-nouns. A preceding the (but not kuh, her, ler) facilitated extraction of the pseudo-nouns for 11-month-olds; the is thus well-specified in form for these infants. However, both the and kuh (but not her, ler) facilitated segmentation for 8-month-olds, suggesting an initial underspecified representation of high-frequency functors. -
Silva-Nasser, C. G. A. d. (2024). An analysis on the notion of perspective of the conceptual metaphor MULHER É PIRANHA in song lyrics. Antares Letras e Humanidades, 16(37).
Abstract
This article discusses the specific metaphor MULHER É PIRANHA, which is part of the general metaphor SER HUMANO É ANIMAL, and the role of perspective in its use in Brazilian Portuguese song lyrics, within Conceptual Metaphor Theory. For this purpose, we discuss metaphor and perspective according to Lakoff and Johnson (1980), followed by the cultural model of The Great Chain of Being (LAKOFF; TURNER, 1989). We discuss animal metaphors and their use, highlighting the notion of perspective, especially in the treatment given to animal metaphors related to women. We then construct a metaphoric mapping and illustrate its points with popular song lyrics. When the metaphor "piranha" is used to describe a man, it conveys positive features, and we discuss the cognitive reasons for this within CMT. We argue that the conceptual metaphor MULHER É PIRANHA is modulated by perspective. -
Silverstein, P., Bergmann, C., & Syed, M. (Eds.). (2024). Open science and metascience in developmental psychology [Special Issue]. Infant and Child Development, 33(1). -
Silverstein, P., Bergmann, C., & Syed, M. (2024). Open science and metascience in developmental psychology: Introduction to the special issue. Infant and Child Development, 33(1): e2495. doi:10.1002/icd.2495.
-
Skiba, R. (2006). Computeranalyse/Computer Analysis. In U. Ammon, N. Dittmar, K. Mattheier, & P. Trudgill (Eds.), Sociolinguistics: An international handbook of the science of language and society [2nd completely revised and extended edition] (pp. 1187-1197). Berlin, New York: de Gruyter. -
Slaats, S. (2024). On the interplay between lexical probability and syntactic structure in language comprehension. PhD Thesis, Radboud University Nijmegen, Nijmegen.
Additional information
full text via Radboud Repository -
Slaats, S., Meyer, A. S., & Martin, A. E. (2024). Lexical surprisal shapes the time course of syntactic structure building. Neurobiology of Language, 5(4), 942-980. doi:10.1162/nol_a_00155.
Abstract
When we understand language, we recognize words and combine them into sentences. In this article, we explore the hypothesis that listeners use probabilistic information about words to build syntactic structure. Recent work has shown that lexical probability and syntactic structure both modulate the delta-band (<4 Hz) neural signal. Here, we investigated whether the neural encoding of syntactic structure changes as a function of the distributional properties of a word. To this end, we analyzed MEG data of 24 native speakers of Dutch who listened to three fairytales with a total duration of 49 min. Using temporal response functions and a cumulative model-comparison approach, we evaluated the contributions of syntactic and distributional features to the variance in the delta-band neural signal. This revealed that lexical surprisal values (a distributional feature), as well as bottom-up node counts (a syntactic feature), positively contributed to the model of the delta-band neural signal. Subsequently, we compared responses to the syntactic feature between words with high- and low-surprisal values. This revealed a delay in the response to the syntactic feature as a consequence of the surprisal value of the word: high-surprisal values were associated with a delayed response to the syntactic feature by 150–190 ms. The delay was not affected by word duration, and did not have a lexical origin. These findings suggest that the brain uses probabilistic information to infer syntactic structure, and highlight an importance for the role of time in this process.
Additional information
supplementary data -
Slim, M. S., Kandel, M., Yacovone, A., & Snedeker, J. (2024). Webcams as windows to the mind?: A direct comparison between in-lab and web-based eye-tracking methods. Open Mind: Discoveries in Cognitive Science, 8, 1369-1424. doi:10.1162/opmi_a_00171.
Abstract
There is a growing interest in the use of webcams to conduct eye-tracking experiments over the internet. We assessed the performance of two webcam-based eye-tracking techniques for behavioral research: manual annotation of webcam videos (manual eye-tracking) and the automated WebGazer eye-tracking algorithm. We compared these methods to a traditional infrared eye-tracker and assessed their performance in both lab and web-based settings. In both lab and web experiments, participants completed the same battery of five tasks, selected to trigger effects of various sizes: two visual fixation tasks and three visual world tasks testing real-time (psycholinguistic) processing effects. In the lab experiment, we simultaneously collected infrared eye-tracking, manual eye-tracking, and WebGazer data; in the web experiment, we simultaneously collected manual eye-tracking and WebGazer data. We found that the two webcam-based methods are suited to capture different types of eye-movement patterns. Manual eye-tracking, similar to infrared eye-tracking, detected both large and small effects. WebGazer, however, showed less accuracy in detecting short, subtle effects. There was no notable effect of setting for either method. We discuss the trade-offs researchers face when choosing eye-tracking methods and offer advice for conducting eye-tracking experiments over the internet.
Additional information
Data, analysis code, and Supplementary Materials -
Slonimska, A. (2024). The role of iconicity and simultaneity in efficient communication in the visual modality: Evidence from LIS (Italian Sign Language) [Dissertation Abstract]. Sign Language & Linguistics, 27(1), 116-124. doi:10.1075/sll.00084.slo.
-
Smits, R., Sereno, J., & Jongman, A. (2006). Categorization of sounds. Journal of Experimental Psychology: Human Perception and Performance, 32(3), 733-754. doi:10.1037/0096-1523.32.3.733.
Abstract
The authors conducted 4 experiments to test the decision-bound, prototype, and distribution theories for the categorization of sounds. They used as stimuli sounds varying in either resonance frequency or duration. They created different experimental conditions by varying the variance and overlap of 2 stimulus distributions used in a training phase and varying the size of the stimulus continuum used in the subsequent test phase. When resonance frequency was the stimulus dimension, the pattern of categorization-function slopes was in accordance with the decision-bound theory. When duration was the stimulus dimension, however, the slope pattern gave partial support for the decision-bound and distribution theories. The authors introduce a new categorization model combining aspects of decision-bound and distribution theories that gives a superior account of the slope patterns across the 2 stimulus dimensions. -
Soderstrom, M., Rocha-Hidalgo, J., Munoz, L. E., Bochynska, A., Werker, J. F., Skarabela, B., Seidl, A., Ryjova, Y., Rennels, J. L., Potter, C. E., Paulus, M., Ota, M., Olesen, N. M., Nave, K. M., Mayor, J., Martin, A., Machon, L. C., Lew-Williams, C., Ko, E.-S., Kim, H., Kartushina, N., Kammermeier, M., Jessop, A., Hay, J. F., Hannon, E. E., Hamlin, J. K., Havron, N., Gonzalez-Gomez, N., Gampe, A., Fritzsche, T., Frank, M. C., Durrant, S., Davies, C., Cashon, C., Byers-Heinlein, K., Black, A. K., Bergmann, C., Anderson, L., Alshakhori, M. K., Al-Hoorie, A. H., & Tsui, A. S. M. (2024). Testing the relationship between preferences for infant-directed speech and vocabulary development: A multi-lab study. Journal of Child Language. Advance online publication. doi:10.1017/S0305000924000254.
Abstract
From early on, infants show a preference for infant-directed speech (IDS) over adult-directed speech (ADS), and exposure to IDS has been correlated with language outcome measures such as vocabulary. The present multi-laboratory study explores this issue by investigating whether there is a link between early preference for IDS and later vocabulary size. Infants’ preference for IDS was tested as part of the ManyBabies 1 project, and follow-up CDI data were collected from a subsample of this dataset at 18 and 24 months. A total of 341 (18 months) and 327 (24 months) infants were tested across 21 laboratories. In neither preregistered analyses with North American and UK English, nor exploratory analyses with a larger sample did we find evidence for a relation between IDS preference and later vocabulary. We discuss implications of this finding in light of recent work suggesting that IDS preference measured in the laboratory has low test-retest reliability.
Additional information
supplementary material -
Soheili-Nezhad, S., Ibáñez-Solé, O., Izeta, A., Hoeijmakers, J. H. J., & Stoeger, T. (2024). Time is ticking faster for long genes in aging. Trends in Genetics, 40(4), 299-312. doi:10.1016/j.tig.2024.01.009.
Abstract
Recent studies of aging organisms have identified a systematic phenomenon, characterized by a negative correlation between the length of genes and their expression in various cell types, species, and diseases. We term this phenomenon gene-length-dependent transcription decline (GLTD) and suggest that it may represent a bottleneck in the transcription machinery and thereby significantly contribute to aging as an etiological factor. We review potential links between GLTD and key aging processes such as DNA damage and explore their potential in identifying disease modification targets. Notably, in Alzheimer’s disease, GLTD spotlights extremely long synaptic genes at chromosomal fragile sites (CFSs) and their vulnerability to postmitotic DNA damage. We suggest that GLTD is an integral element of biological aging. -
Soheili-Nezhad, S., Schijven, D., Mars, R. B., Fisher, S. E., & Francks, C. (2024). Distinct impact modes of polygenic disposition to dyslexia in the adult brain. Science Advances, 10(51): eadq2754. doi:10.1126/sciadv.adq2754.
Abstract
Dyslexia is a common condition that impacts reading ability. Identifying affected brain networks has been hampered by limited sample sizes of imaging case-control studies. We focused instead on brain structural correlates of genetic disposition to dyslexia in large-scale population data. In over 30,000 adults (UK Biobank), higher polygenic disposition to dyslexia was associated with lower head and brain size, and especially reduced volume and/or altered fiber density in networks involved in motor control, language and vision. However, individual genetic variants disposing to dyslexia often had quite distinct patterns of association with brain structural features. Independent component analysis applied to brain-wide association maps for thousands of dyslexia-disposing genetic variants revealed multiple impact modes on the brain, that corresponded to anatomically distinct areas with their own genomic profiles of association. Polygenic scores for dyslexia-related cognitive and educational measures, as well as attention-deficit/hyperactivity disorder, showed similarities to dyslexia polygenic disposition in terms of brain-wide associations, with microstructure of the internal capsule consistently implicated. In contrast, lower volume of the primary motor cortex was only associated with higher dyslexia polygenic disposition among all traits. These findings robustly reveal heterogeneous neurobiological aspects of dyslexia genetic disposition, and whether they are shared or unique with respect to other genetically correlated traits.
Additional information
link to preprint -
Sommers, R. P. (2024). Neurobiology of reference. PhD Thesis, Radboud University Nijmegen, Nijmegen.
Additional information
full text via Radboud Repository -
Sprenger, S. A., Levelt, W. J. M., & Kempen, G. (2006). Lexical access during the production of idiomatic phrases. Journal of Memory and Language, 54(2), 161-184. doi:10.1016/j.jml.2005.11.001.
Abstract
In three experiments we test the assumption that idioms have their own lexical entry, which is linked to its constituent lemmas (Cutting & Bock, 1997). Speakers produced idioms or literal phrases (Experiment 1), completed idioms (Experiment 2), or switched between idiom completion and naming (Experiment 3). The results of Experiment 1 show that identity priming speeds up idiom production more effectively than literal phrase production, indicating a hybrid representation of idioms. In Experiment 2, we find effects of both phonological and semantic priming. Thus, elements of an idiom can not only be primed via their wordform, but also via the conceptual level. The results of Experiment 3 show that preparing the last word of an idiom primes naming of both phonologically and semantically related targets, indicating that literal word meanings become active during idiom production. The results are discussed within the framework of the hybrid model of idiom representation. -
Stärk, K. (2024). The company language keeps: How distributional cues influence statistical learning for language. PhD Thesis, Radboud University Nijmegen, Nijmegen.
Additional information
full text via Radboud Repository -
Stehouwer, H. (2006). Cue phrase selection methods for textual classification problems. Master Thesis, Twente University, Enschede.
Abstract
The classification of texts and pieces of texts uses the occurrence of words, and of combinations of words, as an important indicator. Not every word or combination of words gives a clear indication of the classification of a piece of text. Research has been done on methods that select the words or combinations of words that are most indicative of the type of a piece of text. These words or combinations of words are selected from the words and word-groups as they occur in the texts; we call these more indicative words or combinations of words 'cue-phrases'. The goal of these methods is to select the most indicative cue-phrases first. The collection of selected words and/or combinations thereof can then be used for training the classification system. To test these selection methods, a number of experiments was carried out on a corpus of cookbook recipes and on a corpus of four-participant meetings, using a computer program written for this purpose. On the recipe corpus we looked at classifying the sentences into different types, such as 'requirement' and 'instruction'. On the four-person meeting corpus we tried to learn, using only lexical features, whether a sentence is addressed to an individual or a group. The experiments on the recipe corpus produced good results, showing that a number of the cue-phrase selection methods used are suitable for feature selection. The experiments on the four-person meeting corpus were less successful in terms of performance on the classification task. We did see comparable patterns across selection methods, and considering the results of Jovanovic we can conclude that different features are needed for this particular classification task. One of the original goals was to look at 'addressee' in discussions: are sentences more often addressed to individuals inside discussions than outside them? However, in order to accomplish this, we must first identify the segments of the text that are discussions. It proved hard to come to a reliable specification of discussions, and our initial definition wasn't sufficient. -
Stivers, T. (2006). Treatment decisions: negotiations between doctors and parents in acute care encounters. In J. Heritage, & D. W. Maynard (Eds.), Communication in medical care: Interaction between primary care physicians and patients (pp. 279-312). Cambridge: Cambridge University Press. -
Stivers, T., & Robinson, J. D. (2006). A preference for progressivity in interaction. Language in Society, 35(3), 367-392. doi:10.1017/S0047404506060179.
Abstract
This article investigates two types of preference organization in interaction: in response to a question that selects a next speaker in multi-party interaction, the preference for answers over non-answer responses as a category of response, and the preference for selected next speakers to respond. It is asserted that the turn allocation rule specified by Sacks, Schegloff & Jefferson (1974), which states that a response by the selected next speaker is relevant at the transition relevance place, is affected by these two preferences once beyond a normal transition space. It is argued that a “second-order” organization is present such that interactants prioritize a preference for answers over a preference for a response by the selected next speaker. This analysis reveals an observable preference for progressivity in interaction. -
Stivers, T., Chalfoun, A., & Rossi, G. (2024). To err is human but to persist is diabolical: Toward a theory of interactional policing. Frontiers in Sociology: Sociological Theory, 9: 1369776. doi:10.3389/fsoc.2024.1369776.
Abstract
Social interaction is organized around norms and preferences that guide our construction of actions and our interpretation of those of others, creating a reflexive moral order. Sociological theory suggests two possibilities for the type of moral order that underlies the policing of interactional norm and preference violations: a morality that focuses on the nature of violations themselves and a morality that focuses on the positioning of actors as they maintain their conduct comprehensible, even when they depart from norms and preferences. We find that actors are more likely to reproach interactional violations for which an account is not provided by the transgressor, and that actors weakly reproach or let pass first offenses while more strongly policing violators who persist in bad behavior. Based on these findings, we outline a theory of interactional policing that rests not on the nature of the violation but rather on actors' moral positioning. -
Takashima, A., Petersson, K. M., Rutters, F., Tendolkar, I., Jensen, O., Zwarts, M. J., McNaughton, B. L., & Fernández, G. (2006). Declarative memory consolidation in humans: A prospective functional magnetic resonance imaging study. Proceedings of the National Academy of Sciences of the United States of America [PNAS], 103(3), 756-761.
Abstract
Retrieval of recently acquired declarative memories depends on the hippocampus, but with time, retrieval is increasingly sustainable by neocortical representations alone. This process has been conceptualized as system-level consolidation. Using functional magnetic resonance imaging, we assessed over the course of three months how consolidation affects the neural correlates of memory retrieval. The duration of slow-wave sleep during a nap/rest period after the initial study session and before the first scan session on day 1 correlated positively with recognition memory performance for items studied before the nap and negatively with hippocampal activity associated with correct confident recognition. Over the course of the entire study, hippocampal activity for correct confident recognition continued to decrease, whereas activity in a ventral medial prefrontal region increased. These findings, together with data obtained in rodents, may prompt a revision of classical consolidation theory, incorporating a transfer of putative linking nodes from hippocampal to prelimbic prefrontal areas. -
Takashima, A., Carota, F., Schoots, V., Redmann, A., Jehee, J., & Indefrey, P. (2024). Tomatoes are red: The perception of achromatic objects elicits retrieval of associated color knowledge. Journal of Cognitive Neuroscience, 36(1), 24-45. doi:10.1162/jocn_a_02068.
Abstract
When preparing to name an object, semantic knowledge about the object and its attributes is activated, including perceptual properties. It is unclear, however, whether semantic attribute activation contributes to lexical access or is a consequence of activating a concept irrespective of whether that concept is to be named or not. In this study, we measured neural responses using fMRI while participants named objects that are typically green or red, presented in black line drawings. Furthermore, participants underwent two other tasks with the same objects, color naming and semantic judgment, to see if the activation pattern we observe during picture naming is (a) similar to that of a task that requires accessing the color attribute and (b) distinct from that of a task that requires accessing the concept but not its name or color. We used representational similarity analysis to detect brain areas that show similar patterns within the same color category, but show different patterns across the two color categories. In all three tasks, activation in the bilateral fusiform gyri (“Human V4”) correlated with a representational model encoding the red–green distinction weighted by the importance of color feature for the different objects. This result suggests that when seeing objects whose color attribute is highly diagnostic, color knowledge about the objects is retrieved irrespective of whether the color or the object itself have to be named. -
Tamaoka, K., Yu, S., Zhang, J., Otsuka, Y., Lim, H., Koizumi, M., & Verdonschot, R. G. (2024). Syntactic structures in motion: Investigating word order variations in verb-final (Korean) and verb-initial (Tongan) languages. Frontiers in Psychology, 15: 1360191. doi:10.3389/fpsyg.2024.1360191.
Abstract
This study explored sentence processing in two typologically distinct languages: Korean, a verb-final language, and Tongan, a verb-initial language. The first experiment revealed that in Korean, sentences arranged in the scrambled OSV (Object, Subject, Verb) order were processed more slowly than those in the canonical SOV order, highlighting a scrambling effect. It also found that sentences with subject topicalization in the SOV order were processed as swiftly as those in the canonical form, whereas sentences with object topicalization in the OSV order were processed with speeds and accuracy comparable to scrambled sentences. However, since topicalization and scrambling in Korean use the same OSV order, independently distinguishing the effects of topicalization is challenging. In contrast, Tongan allows for a clear separation of word orders for topicalization and scrambling, facilitating an independent evaluation of topicalization effects. The second experiment, employing a maze task, confirmed that Tongan’s canonical VSO order was processed more efficiently than the VOS scrambled order, thereby verifying a scrambling effect. The third experiment investigated the effects of both scrambling and topicalization in Tongan, finding that the canonical VSO order was processed most efficiently in terms of speed and accuracy, unlike the VOS scrambled and SVO topicalized orders. Notably, the OVS object-topicalized order was processed as efficiently as the VSO canonical order, while the SVO subject-topicalized order was slower than VSO but faster than VOS. By independently assessing the effects of topicalization apart from scrambling, this study demonstrates that both subject and object topicalization in Tongan facilitate sentence processing, contradicting the predictions based on movement-based anticipation.
Additional information
appendix 1-3 -
Tarakçı, B., Barış, C., & Ünal, E. (2024). Boundedness is represented in visual and auditory event cognition. In L. K. Samuelson, S. L. Frank, A. Mackey, & E. Hazeltine (Eds.), Proceedings of the 46th Annual Meeting of the Cognitive Science Society (CogSci 2024) (pp. 2612-2618).
Abstract
Viewers are sensitive to the distinction between visual events with an internal structure leading to a well-defined endpoint (bounded events) and events lacking this structure and a well-defined endpoint (unbounded events). Here, we asked whether boundedness could be represented in the auditory modality in a way similar to the visual modality. To investigate this question, we trained participants with visual and auditory events on bounded or unbounded event categories in a category identification task. Later, we tested whether they could abstract the internal temporal structure of events and extend the (un)boundedness category to new examples in the same modality. These findings suggest that the principles and constraints that apply to the basic units of human experience in the visual modality have their counterparts in the auditory modality.
Additional information
https://escholarship.org/uc/item/15x9f213 -
Ten Bosch, L., Baayen, R. H., & Ernestus, M. (2006). On speech variation and word type differentiation by articulatory feature representations. In Proceedings of Interspeech 2006 (pp. 2230-2233).
Abstract
This paper describes ongoing research aiming at the description of variation in speech as represented by asynchronous articulatory features. We will first illustrate how distances in the articulatory feature space can be used for event detection along speech trajectories in this space. The temporal structure imposed by the cosine distance in articulatory feature space coincides to a large extent with the manual segmentation on phone level. The analysis also indicates that the articulatory feature representation provides better such alignments than the MFCC representation does. Secondly, we will present first results that indicate that articulatory features can be used to probe for acoustic differences in the onsets of Dutch singulars and plurals. -
ten Bosch, L., Hämäläinen, A., Scharenborg, O., & Boves, L. (2006). Acoustic scores and symbolic mismatch penalties in phone lattices. In Proceedings of the 2006 IEEE International Conference on Acoustics, Speech and Signal Processing [ICASSP 2006]. IEEE.
Abstract
This paper builds on previous work that aims at unraveling the structure of the speech signal by means of using probabilistic representations. The context of this work is a multi-pass speech recognition system in which a phone lattice is created and used as a basis for a lexical search in which symbolic mismatches are allowed at certain costs. The focus is on the optimization of the costs of phone insertions, deletions and substitutions that are used in the lexical decoding pass. Two optimization approaches are presented, one related to a multi-pass computational model for human speech recognition, the other based on a decoding in which Bayes’ risks are minimized. In the final section, the advantages of these optimization methods are discussed and compared. -
Ten Oever, S., & Martin, A. E. (2024). Interdependence of “what” and “when” in the brain. Journal of Cognitive Neuroscience, 36(1), 167-186. doi:10.1162/jocn_a_02067.
Abstract
From a brain's-eye-view, when a stimulus occurs and what it is are interrelated aspects of interpreting the perceptual world. Yet in practice, the putative perceptual inferences about sensory content and timing are often dichotomized and not investigated as an integrated process. We here argue that neural temporal dynamics can influence what is perceived, and in turn, stimulus content can influence the time at which perception is achieved. This computational principle results from the highly interdependent relationship of what and when in the environment. Both brain processes and perceptual events display strong temporal variability that is not always modeled; we argue that understanding—and, minimally, modeling—this temporal variability is key for theories of how the brain generates unified and consistent neural representations and that we ignore temporal variability in our analysis practice at the peril of both data interpretation and theory-building. Here, we review what and when interactions in the brain, demonstrate via simulations how temporal variability can result in misguided interpretations and conclusions, and outline how to integrate and synthesize what and when in theories and models of brain computation. -
Ten Oever, S., Titone, L., te Rietmolen, N., & Martin, A. E. (2024). Phase-dependent word perception emerges from region-specific sensitivity to the statistics of language. Proceedings of the National Academy of Sciences of the United States of America, 121(3): e2320489121. doi:10.1073/pnas.2320489121.
Abstract
Neural oscillations reflect fluctuations in excitability, which biases the percept of ambiguous sensory input. Why this bias occurs is still not fully understood. We hypothesized that neural populations representing likely events are more sensitive, and thereby become active on earlier oscillatory phases, when the ensemble itself is less excitable. Perception of ambiguous input presented during less-excitable phases should therefore be biased toward frequent or predictable stimuli that have lower activation thresholds. Here, we show such a frequency bias in spoken word recognition using psychophysics, magnetoencephalography (MEG), and computational modelling. With MEG, we found a double dissociation, where the phase of oscillations in the superior temporal gyrus and medial temporal gyrus biased word-identification behavior based on phoneme and lexical frequencies, respectively. This finding was reproduced in a computational model. These results demonstrate that oscillations provide a temporal ordering of neural activity based on the sensitivity of separable neural populations. -
Ter Bekke, M., Drijvers, L., & Holler, J. (2024). Hand gestures have predictive potential during conversation: An investigation of the timing of gestures in relation to speech. Cognitive Science, 48(1): e13407. doi:10.1111/cogs.13407.
Abstract
During face-to-face conversation, transitions between speaker turns are incredibly fast. These fast turn exchanges seem to involve next speakers predicting upcoming semantic information, such that next turn planning can begin before a current turn is complete. Given that face-to-face conversation also involves the use of communicative bodily signals, an important question is how bodily signals such as co-speech hand gestures play into these processes of prediction and fast responding. In this corpus study, we found that hand gestures that depict or refer to semantic information started before the corresponding information in speech, which held both for the onset of the gesture as a whole, as well as the onset of the stroke (the most meaningful part of the gesture). This early timing potentially allows listeners to use the gestural information to predict the corresponding semantic information to be conveyed in speech. Moreover, we provided further evidence that questions with gestures got faster responses than questions without gestures. However, we found no evidence for the idea that how much a gesture precedes its lexical affiliate (i.e., its predictive potential) relates to how fast responses were given. The findings presented here highlight the importance of the temporal relation between speech and gesture and help to illuminate the potential mechanisms underpinning multimodal language processing during face-to-face conversation. -
Ter Bekke, M., Drijvers, L., & Holler, J. (2024). Gestures speed up responses to questions. Language, Cognition and Neuroscience, 39(4), 423-430. doi:10.1080/23273798.2024.2314021.
Abstract
Most language use occurs in face-to-face conversation, which involves rapid turn-taking. Seeing communicative bodily signals in addition to hearing speech may facilitate such fast responding. We tested whether this holds for co-speech hand gestures by investigating whether these gestures speed up button press responses to questions. Sixty native speakers of Dutch viewed videos in which an actress asked yes/no-questions, either with or without a corresponding iconic hand gesture. Participants answered the questions as quickly and accurately as possible via button press. Gestures did not impact response accuracy, but crucially, gestures sped up responses, suggesting that response planning may be finished earlier when gestures are seen. How much gestures sped up responses was not related to their timing in the question or their timing with respect to the corresponding information in speech. Overall, these results are in line with the idea that multimodality may facilitate fast responding during face-to-face conversation. -
Ter Bekke, M., Levinson, S. C., Van Otterdijk, L., Kühn, M., & Holler, J. (2024). Visual bodily signals and conversational context benefit the anticipation of turn ends. Cognition, 248: 105806. doi:10.1016/j.cognition.2024.105806.
Abstract
The typical pattern of alternating turns in conversation seems trivial at first sight. But a closer look quickly reveals the cognitive challenges involved, with much of it resulting from the fast-paced nature of conversation. One core ingredient to turn coordination is the anticipation of upcoming turn ends so as to be able to ready oneself for providing the next contribution. Across two experiments, we investigated two variables inherent to face-to-face conversation, the presence of visual bodily signals and preceding discourse context, in terms of their contribution to turn end anticipation. In a reaction time paradigm, participants anticipated conversational turn ends better when seeing the speaker and their visual bodily signals than when they did not, especially so for longer turns. Likewise, participants were better able to anticipate turn ends when they had access to the preceding discourse context than when they did not, and especially so for longer turns. Critically, the two variables did not interact, showing that visual bodily signals retain their influence even in the context of preceding discourse. In a pre-registered follow-up experiment, we manipulated the visibility of the speaker's head, eyes and upper body (i.e. torso + arms). Participants were better able to anticipate turn ends when the speaker's upper body was visible, suggesting a role for manual gestures in turn end anticipation. Together, these findings show that seeing the speaker during conversation may critically facilitate turn coordination in interaction. -
Terporten, R., Huizeling, E., Heidlmayr, K., Hagoort, P., & Kösem, A. (2024). The interaction of context constraints and predictive validity during sentence reading. Journal of Cognitive Neuroscience, 36(2), 225-238. doi:10.1162/jocn_a_02082.
Abstract
Words are not processed in isolation; instead, they are commonly embedded in phrases and sentences. The sentential context influences the perception and processing of a word. However, how this is achieved by brain processes and whether predictive mechanisms underlie this process remain a debated topic. Here, we employed an experimental paradigm in which we orthogonalized sentence context constraints and predictive validity, which was defined as the ratio of congruent to incongruent sentence endings within the experiment. While recording electroencephalography, participants read sentences with three levels of sentential context constraints (high, medium, and low). Participants were also separated into two groups that differed in their ratio of valid congruent to incongruent target words that could be predicted from the sentential context. For both groups, we investigated modulations of alpha power before, and N400 amplitude modulations after, target word onset. The results reveal that the N400 amplitude gradually decreased with higher context constraints and cloze probability. In contrast, alpha power was not significantly affected by context constraint. Neither the N400 nor alpha power was significantly affected by changes in predictive validity. -
Terrill, A., & Dunn, M. (2006). Semantic transference: Two preliminary case studies from the Solomon Islands. In C. Lefebvre, L. White, & C. Jourdan (Eds.), L2 acquisition and Creole genesis: Dialogues (pp. 67-85). Amsterdam: Benjamins. -
Terrill, A. (2006). Central Solomon languages. In K. Brown (Ed.), Encyclopedia of language and linguistics (vol. 2) (pp. 279-280). Amsterdam: Elsevier.
Abstract
The Papuan languages of the central Solomon Islands are a negatively defined areal grouping: They are those four or possibly five languages in the central Solomon Islands that do not belong to the Austronesian family. Bilua (Vella Lavella), Touo (Rendova), Lavukaleve (Russell Islands), Savosavo (Savo Island) and possibly Kazukuru (New Georgia) have been identified as non-Austronesian since the early 20th century. However, their affiliations both to each other and to other languages still remain a mystery. Heterogeneous and until recently largely undescribed, they present an interesting departure from what is known both of Austronesian languages in the region and of the Papuan languages of the mainland of New Guinea. -
Terrill, A. (2006). Body part terms in Lavukaleve, a Papuan language of the Solomon Islands. Language Sciences, 28(2-3), 304-322. doi:10.1016/j.langsci.2005.11.008.
Abstract
This paper explores body part terms in Lavukaleve, a Papuan isolate spoken in the Solomon Islands. The full set of body part terms collected so far is presented, and their grammatical properties are explained. It is argued that Lavukaleve body part terms do not enter into partonomic relations with each other, and that a hierarchical structure of body part terms does not apply for Lavukaleve. It is shown too that some universal claims which have been made about the expression of terms relating to limbs are contradicted in Lavukaleve, which has only one general term covering arm, hand, leg and (for some people) foot. -
Theakston, A. L., Lieven, E. V., Pine, J. M., & Rowland, C. F. (2006). Note of clarification on the coding of light verbs in ‘Semantic generality, input frequency and the acquisition of syntax’ (Journal of Child Language 31, 61–99). Journal of Child Language, 33(1), 191-197. doi:10.1017/S0305000905007178.
Abstract
In our recent paper, ‘Semantic generality, input frequency and the acquisition of syntax’ (Journal of Child Language, 31, 61–99), we presented data from two-year-old children to examine the question of whether the semantic generality of verbs contributed to their ease and stage of acquisition over and above the effects of their typically high frequency in the language to which children are exposed. We adopted two different categorization schemes to determine whether individual verbs should be considered to be semantically general, or ‘light’, or whether they encoded more specific semantics. These categorization schemes were based on previous work in the literature on the role of semantically general verbs in early verb acquisition, and were designed, in the first case, to be a conservative estimate of semantic generality, including only verbs designated as semantically general by a number of other researchers (e.g. Clark, 1978; Pinker, 1989; Goldberg, 1998), and, in the second case, to be a more inclusive estimate of semantic generality based on Ninio's (1999a,b) suggestion that grammaticalizing verbs encode the semantics associated with semantically general verbs. Under this categorization scheme, a much larger number of verbs were included as semantically general verbs. -
Thothathiri, M., Basnakova, J., Lewis, A. G., & Briand, J. M. (2024). Fractionating difficulty during sentence comprehension using functional neuroimaging. Cerebral Cortex, 34(2): bhae032. doi:10.1093/cercor/bhae032.
Abstract
Sentence comprehension is highly practiced and largely automatic, but this belies the complexity of the underlying processes. We used functional neuroimaging to investigate garden-path sentences that cause difficulty during comprehension, in order to unpack the different processes used to support sentence interpretation. By investigating garden-path and other types of sentences within the same individuals, we functionally profiled different regions within the temporal and frontal cortices in the left hemisphere. The results revealed that different aspects of comprehension difficulty are handled by left posterior temporal, left anterior temporal, ventral left frontal, and dorsal left frontal cortices. The functional profiles of these regions likely lie along a spectrum of specificity to generality, including language-specific processing of linguistic representations, more general conflict resolution processes operating over linguistic representations, and processes for handling difficulty in general.
Additional information
supplementary information -
Tınaz, B., & Ünal, E. (2024). Event segmentation in language and cognition. In L. K. Samuelson, S. L. Frank, A. Mackey, & E. Hazeltine (Eds.), Proceedings of the 46th Annual Meeting of the Cognitive Science Society (CogSci 2024) (pp. 184-191).
Abstract
We examine the relation between event segmentation in language and cognition in the domain of motion events, focusing on Turkish, a verb-framed language that segments motion paths in separate linguistic units (verb clauses). We compare motion events that have a path change to those that did not have a path change. In the linguistic task, participants were more likely to use multiple verb phrases when describing events that had a path change compared to those that did not have a path change. In the non-linguistic Dwell Time task, participants viewed self-paced slideshows of still images sampled from the motion event videos in the linguistic task. Dwell times for slides corresponding to path changes were not significantly longer than those for temporally similar slides in the events without a path change. These findings suggest that event units in language may not have strong and stable influences on event segmentation in cognition.
Additional information
https://escholarship.org/uc/item/6nm5b85t -
Titus, A., Dijkstra, T., Willems, R. M., & Peeters, D. (2024). Beyond the tried and true: How virtual reality, dialog setups, and a focus on multimodality can take bilingual language production research forward. Neuropsychologia, 193: 108764. doi:10.1016/j.neuropsychologia.2023.108764.
Abstract
Bilinguals possess the ability of expressing themselves in more than one language, and typically do so in contextually rich and dynamic settings. Theories and models have indeed long considered context factors to affect bilingual language production in many ways. However, most experimental studies in this domain have failed to fully incorporate linguistic, social, or physical context aspects, let alone combine them in the same study. Indeed, most experimental psycholinguistic research has taken place in isolated and constrained lab settings with carefully selected words or sentences, rather than under rich and naturalistic conditions. We argue that the most influential experimental paradigms in the psycholinguistic study of bilingual language production fall short of capturing the effects of context on language processing and control presupposed by prominent models. This paper therefore aims to enrich the methodological basis for investigating context aspects in current experimental paradigms and thereby move the field of bilingual language production research forward theoretically. After considering extensions of existing paradigms proposed to address context effects, we present three far-ranging innovative proposals, focusing on virtual reality, dialog situations, and multimodality in the context of bilingual language production. -
Titus, A., & Peeters, D. (2024). Multilingualism at the market: A pre-registered immersive virtual reality study of bilingual language switching. Journal of Cognition, 7(1), 24-35. doi:10.5334/joc.359.
Abstract
Bilinguals, by definition, are capable of expressing themselves in more than one language. But which cognitive mechanisms allow them to switch from one language to another? Previous experimental research using the cued language-switching paradigm supports theoretical models that assume that both transient, reactive and sustained, proactive inhibitory mechanisms underlie bilinguals’ capacity to flexibly and efficiently control which language they use. Here we used immersive virtual reality to test the extent to which these inhibitory mechanisms may be active when unbalanced Dutch-English bilinguals i) produce full sentences rather than individual words, ii) to a life-size addressee rather than only into a microphone, iii) using a message that is relevant to that addressee rather than communicatively irrelevant, iv) in a rich visual environment rather than in front of a computer screen. We observed a reversed language dominance paired with switch costs for the L2 but not for the L1 when participants were stand owners in a virtual marketplace and informed their monolingual customers in full sentences about the price of their fruits and vegetables. These findings strongly suggest that the subtle balance between the application of reactive and proactive inhibitory mechanisms that support bilingual language control may be different in the everyday life of a bilingual compared to in the (traditional) psycholinguistic laboratory. -
Trujillo, J. P. (2024). Motion-tracking technology for the study of gesture. In A. Cienki (Ed.), The Cambridge Handbook of Gesture Studies. Cambridge: Cambridge University Press. -
Trujillo, J. P., & Holler, J. (2024). Conversational facial signals combine into compositional meanings that change the interpretation of speaker intentions. Scientific Reports, 14: 2286. doi:10.1038/s41598-024-52589-0.
Abstract
Human language is extremely versatile, combining a limited set of signals in an unlimited number of ways. However, it is unknown whether conversational visual signals feed into the composite utterances with which speakers communicate their intentions. We assessed whether different combinations of visual signals lead to different intent interpretations of the same spoken utterance. Participants viewed a virtual avatar uttering spoken questions while producing single visual signals (i.e., head turn, head tilt, eyebrow raise) or combinations of these signals. After each video, participants classified the communicative intention behind the question. We found that composite utterances combining several visual signals conveyed different meaning compared to utterances accompanied by the single visual signals. However, responses to combinations of signals were more similar to the responses to related, rather than unrelated, individual signals, indicating a consistent influence of the individual visual signals on the whole. This study therefore provides first evidence for compositional, non-additive (i.e., Gestalt-like) perception of multimodal language.
Additional information
41598_2024_52589_MOESM1_ESM.docx