Norcliffe, E., Harris, A., & Jaeger, T. F. (2015). Cross-linguistic psycholinguistics and its critical role in theory development: early beginnings and recent advances. Language, Cognition and Neuroscience, 30(9), 1009-1032. doi:10.1080/23273798.2015.1080373.
Abstract
Recent years have seen a small but growing body of psycholinguistic research focused on typologically diverse languages. This represents an important development for the field, where theorising is still largely guided by the often implicit assumption of universality. This paper introduces a special issue of Language, Cognition and Neuroscience devoted to the topic of cross-linguistic and field-based approaches to the study of psycholinguistics. The papers in this issue draw on data from a variety of genetically and areally divergent languages, to address questions in the production and comprehension of phonology, morphology, words, and sentences. To contextualise these studies, we provide an overview of the field of cross-linguistic psycholinguistics, from its early beginnings to the present day, highlighting instances where cross-linguistic data have significantly contributed to psycholinguistic theorising. -
Norcliffe, E., Konopka, A. E., Brown, P., & Levinson, S. C. (2015). Word order affects the time course of sentence formulation in Tzeltal. Language, Cognition and Neuroscience, 30(9), 1187-1208. doi:10.1080/23273798.2015.1006238.
Abstract
The scope of planning during sentence formulation is known to be flexible, as it can be influenced by speakers' communicative goals and language production pressures (among other factors). Two eye-tracked picture description experiments tested whether the time course of formulation is also modulated by grammatical structure and thus whether differences in linear word order across languages affect the breadth and order of conceptual and linguistic encoding operations. Native speakers of Tzeltal [a primarily verb–object–subject (VOS) language] and Dutch [a subject–verb–object (SVO) language] described pictures of transitive events. Analyses compared speakers' choice of sentence structure across events with more accessible and less accessible characters as well as the time course of formulation for sentences with different word orders. Character accessibility influenced subject selection in both languages in subject-initial and subject-final sentences, ruling against a radically incremental formulation process. In Tzeltal, subject-initial word orders were preferred over verb-initial orders when event characters had matching animacy features, suggesting a possible role for similarity-based interference in influencing word order choice. Time course analyses revealed a strong effect of sentence structure on formulation: In subject-initial sentences, in both Tzeltal and Dutch, event characters were largely fixated sequentially, while in verb-initial sentences in Tzeltal, relational information received priority over encoding of either character during the earliest stages of formulation. The results show a tight parallelism between grammatical structure and the order of encoding operations carried out during sentence formulation. -
Norcliffe, E., & Konopka, A. E. (2015). Vision and language in cross-linguistic research on sentence production. In R. K. Mishra, N. Srinivasan, & F. Huettig (Eds.), Attention and vision in language processing (pp. 77-96). New York: Springer. doi:10.1007/978-81-322-2443-3_5.
Abstract
To what extent are the planning processes involved in producing sentences fine-tuned to grammatical properties of specific languages? In this chapter we survey the small body of cross-linguistic research that bears on this question, focusing in particular on recent evidence from eye-tracking studies. Because eye-tracking methods provide a very fine-grained temporal measure of how conceptual and linguistic planning unfold in real time, they serve as an important complement to standard psycholinguistic methods. Moreover, the advent of portable eye-trackers in recent years has, for the first time, allowed eye-tracking techniques to be used with language populations that are located far away from university laboratories. This has created the exciting opportunity to extend the typological base of vision-based psycholinguistic research and address key questions in language production with new language comparisons. -
Ohlerth, A.-K., Valentin, A., Vergani, F., Ashkan, K., & Bastiaanse, R. (2020). The verb and noun test for peri-operative testing (VAN-POP): Standardized language tests for navigated transcranial magnetic stimulation and direct electrical stimulation. Acta Neurochirurgica, (2), 397-406. doi:10.1007/s00701-019-04159-x.
Abstract
Background
Protocols for intraoperative language mapping with direct electrical stimulation (DES) often include various language tasks triggering both nouns and verbs in sentences. Such protocols are not readily available for navigated transcranial magnetic stimulation (nTMS), where only single word object naming is generally used. Here, we present the development, norming, and standardization of the verb and noun test for peri-operative testing (VAN-POP) that measures language skills more extensively.
Methods
The VAN-POP tests noun and verb retrieval in sentence context. Items are marked and balanced for several linguistic factors known to influence word retrieval. The VAN-POP was administered in English, German, and Dutch under conditions that are used for nTMS and DES paradigms. For each language, 30 speakers were tested.
Results
At least 50 items per task per language were named fluently and reached a high naming agreement.
Conclusion
The protocol proved to be suitable for pre- and intraoperative language mapping with nTMS and DES. -
Orfanidou, E., McQueen, J. M., Adam, R., & Morgan, G. (2015). Segmentation of British Sign Language (BSL): Mind the gap! Quarterly Journal of Experimental Psychology, 68, 641-663. doi:10.1080/17470218.2014.945467.
Abstract
This study asks how users of British Sign Language (BSL) recognize individual signs in connected sign sequences. We examined whether this is achieved through modality-specific or modality-general segmentation procedures. A modality-specific feature of signed languages is that, during continuous signing, there are salient transitions between sign locations. We used the sign-spotting task to ask if and how BSL signers use these transitions in segmentation. A total of 96 real BSL signs were preceded by nonsense signs which were produced in either the target location or another location (with a small or large transition). Half of the transitions were within the same major body area (e.g., head) and half were across body areas (e.g., chest to hand). Deaf adult BSL users (a group of natives and early learners, and a group of late learners) spotted target signs best when there was a minimal transition and worst when there was a large transition. When location changes were present, both groups performed better when transitions were to a different body area than when they were within the same area. These findings suggest that transitions do not provide explicit sign-boundary cues in a modality-specific fashion. Instead, we argue that smaller transitions help recognition in a modality-general way by limiting lexical search to signs within location neighbourhoods, and that transitions across body areas also aid segmentation in a modality-general way, by providing a phonotactic cue to a sign boundary. We propose that sign segmentation is based on modality-general procedures which are core language-processing mechanisms. -
Ortega, G., Ozyurek, A., & Peeters, D. (2020). Iconic gestures serve as manual cognates in hearing second language learners of a sign language: An ERP study. Journal of Experimental Psychology: Learning, Memory, and Cognition, 46(3), 403-415. doi:10.1037/xlm0000729.
Abstract
When learning a second spoken language, cognates, words overlapping in form and meaning with one’s native language, help learners break into the language they wish to acquire. But what happens when the to-be-acquired second language is a sign language? We tested whether hearing nonsigners rely on their gestural repertoire at first exposure to a sign language. Participants saw iconic signs with high and low overlap with the form of iconic gestures while electrophysiological brain activity was recorded. Upon first exposure, signs with low overlap with gestures elicited enhanced positive amplitude in the P3a component compared to signs with high overlap. This effect disappeared after a training session. We conclude that nonsigners generate expectations about the form of iconic signs never seen before based on their implicit knowledge of gestures, even without having to produce them. Learners thus draw from any available semiotic resources when acquiring a second language, and not only from their linguistic experience. -
Ortega, G., & Morgan, G. (2015). Input processing at first exposure to a sign language. Second Language Research, 19(10), 443-463. doi:10.1177/0267658315576822.
Abstract
There is growing interest in learners’ cognitive capacities to process a second language (L2) at first exposure to the target language. Evidence suggests that L2 learners are capable of processing novel words by exploiting phonological information from their first language (L1). Hearing adult learners of a sign language, however, cannot fall back on their L1 to process novel signs because the modality differences between speech (aural–oral) and sign (visual–manual) do not allow for direct cross-linguistic influence. Sign language learners might use alternative strategies to process input expressed in the manual channel. Learners may rely on iconicity, the direct relationship between a sign and its referent. Evidence up to now has shown that iconicity facilitates learning in non-signers, but it is unclear whether it also facilitates sign production. In order to fill this gap, the present study investigated how iconicity influenced articulation of the phonological components of signs. In Study 1, hearing non-signers viewed a set of iconic and arbitrary signs along with their English translations and repeated the signs as accurately as possible immediately after. The results show that participants imitated iconic signs significantly less accurately than arbitrary signs. In Study 2, a second group of hearing non-signers imitated the same set of signs but without the accompanying English translations. The same lower accuracy for iconic signs was observed. We argue that learners rely on iconicity to process manual input because it brings familiarity to the target (sign) language. However, this reliance comes at a cost as it leads to a more superficial processing of the signs’ full phonetic form. The present findings add to our understanding of learners’ cognitive capacities at first exposure to a signed L2, and raise new theoretical questions in the field of second language acquisition. -
Ortega, G., & Morgan, G. (2015). Phonological development in hearing learners of a sign language: The role of sign complexity and iconicity. Language Learning, 65(3), 660-668. doi:10.1111/lang.12123.
Abstract
The present study administered a sign-repetition task at two points in time to hearing adult learners of British Sign Language and explored how each phonological parameter, sign complexity, and iconicity affected sign production over an 11-week (22-hour) instructional period. The results show that training improves articulation accuracy and that some sign components are produced more accurately than others: Handshape was the most difficult, followed by movement, then orientation, and finally location. Iconic signs were articulated less accurately than arbitrary signs because the direct sign-referent mappings and perhaps their similarity with iconic co-speech gestures prevented learners from focusing on the exact phonological structure of the sign. This study shows that multiple phonological features pose greater demand on the production of the parameters of signs and that iconicity interferes in the exact articulation of their constituents. -
Ortega, G., & Morgan, G. (2015). The effect of sign iconicity in the mental lexicon of hearing non-signers and proficient signers: Evidence of cross-modal priming. Language, Cognition and Neuroscience, 30(5), 574-585. doi:10.1080/23273798.2014.959533.
Abstract
The present study investigated the priming effect of iconic signs in the mental lexicon of hearing adults. Non-signers and proficient British Sign Language (BSL) users took part in a cross-modal lexical decision task. The results indicate that iconic signs activated semantically related words in non-signers' lexicon. Activation occurred regardless of the type of referent because signs depicting actions and perceptual features of an object yielded the same response times. The pattern of activation was different in proficient signers because only action signs led to cross-modal activation. We suggest that non-signers process iconicity in signs in the same way as they do gestures, but after acquiring a sign language, there is a shift in the mechanisms used to process iconic manual structures. -
Ortega, G., & Ozyurek, A. (2020). Systematic mappings between semantic categories and types of iconic representations in the manual modality: A normed database of silent gesture. Behavior Research Methods, 52, 51-67. doi:10.3758/s13428-019-01204-6.
Abstract
An unprecedented number of empirical studies have shown that iconic gestures—those that mimic the sensorimotor attributes of a referent—contribute significantly to language acquisition, perception, and processing. However, there has been a lack of normed studies describing generalizable principles in gesture production and in comprehension of the mappings of different types of iconic strategies (i.e., modes of representation; Müller, 2013). In Study 1 we elicited silent gestures in order to explore the implementation of different types of iconic representation (i.e., acting, representing, drawing, and personification) to express concepts across five semantic domains. In Study 2 we investigated the degree of meaning transparency (i.e., iconicity ratings) of the gestures elicited in Study 1. We found systematicity in the gestural forms of 109 concepts across all participants, with different types of iconicity aligning with specific semantic domains: Acting was favored for actions and manipulable objects, drawing for nonmanipulable objects, and personification for animate entities. Interpretation of gesture–meaning transparency was modulated by the interaction between mode of representation and semantic domain, with some couplings being more transparent than others: Acting yielded higher ratings for actions, representing for object-related concepts, personification for animate entities, and drawing for nonmanipulable entities. This study provides mapping principles that may extend to all forms of manual communication (gesture and sign). This database includes a list of the most systematic silent gestures in the group of participants, a notation of the form of each gesture based on four features (hand configuration, orientation, placement, and movement), each gesture’s mode of representation, iconicity ratings, and professionally filmed videos that can be used for experimental and clinical endeavors. -
Ortega, G., & Ozyurek, A. (2020). Types of iconicity and combinatorial strategies distinguish semantic categories in silent gesture. Language and Cognition, 12(1), 84-113. doi:10.1017/langcog.2019.28.
Abstract
In this study we explore whether different types of iconic gestures (i.e., acting, drawing, representing) and their combinations are used systematically to distinguish between different semantic categories in production and comprehension. In Study 1, we elicited silent gestures from Mexican and Dutch participants to represent concepts from three semantic categories: actions, manipulable objects, and non-manipulable objects. Both groups favoured the acting strategy to represent actions and manipulable objects, while non-manipulable objects were represented through the drawing strategy. Actions elicited primarily single gestures, whereas objects elicited combinations of different types of iconic gestures as well as pointing. In Study 2, a different group of participants were shown gestures from Study 1 and were asked to guess their meaning. Single-gesture depictions for actions were more accurately guessed than for objects. Objects represented through two-gesture combinations (e.g., acting + drawing) were more accurately guessed than objects represented with a single gesture. We suggest iconicity is exploited to make direct links with a referent, but when it lends itself to ambiguity, individuals resort to combinatorial structures to clarify the intended referent. Iconicity and the need to communicate a clear signal shape the structure of silent gestures and this in turn supports comprehension. -
Ozyurek, A. (2020). From hands to brains: How does human body talk, think and interact in face-to-face language use? In K. Truong, D. Heylen, & M. Czerwinski (Eds.), ICMI '20: Proceedings of the 2020 International Conference on Multimodal Interaction (pp. 1-2). New York, NY, USA: Association for Computing Machinery. doi:10.1145/3382507.3419442. -
Ozyurek, A., Furman, R., & Goldin-Meadow, S. (2015). On the way to language: Event segmentation in homesign and gesture. Journal of Child Language, 42, 64-94. doi:10.1017/S0305000913000512.
Abstract
Languages typically express semantic components of motion events such as manner (roll) and path (down) in separate lexical items. We explore how these combinatorial possibilities of language arise by focusing on (i) gestures produced by deaf children who lack access to input from a conventional language (homesign); (ii) gestures produced by hearing adults and children while speaking; and (iii) gestures used by hearing adults without speech when asked to do so in elicited descriptions of motion events with simultaneous manner and path. Homesigners tended to conflate manner and path in one gesture, but also used a mixed form, adding a manner and/or path gesture to the conflated form sequentially. Hearing speakers, with or without speech, used the conflated form, gestured manner, or path, but rarely used the mixed form. Mixed form may serve as an intermediate structure on the way to the discrete and sequenced forms found in natural languages. -
Paplu, S. H., Mishra, C., & Berns, K. (2020). Pseudo-randomization in automating robot behaviour during human-robot interaction. In 2020 Joint IEEE 10th International Conference on Development and Learning and Epigenetic Robotics (ICDL-EpiRob) (pp. 1-6). Institute of Electrical and Electronics Engineers. doi:10.1109/ICDL-EpiRob48136.2020.9278115.
Abstract
Automating robot behavior in a specific situation is an active area of research. There are several approaches available in the robotics literature to cater for the automatic behavior of a robot. However, when it comes to humanoids or human-robot interaction in general, the area has been less explored. In this paper, a pseudo-randomization approach has been introduced to automatize the gestures and facial expressions of an interactive humanoid robot called ROBIN based on its mental state. A significant number of gestures and facial expressions have been implemented to allow the robot more options to perform a relevant action or reaction based on visual stimuli. The robot displays noticeable differences in behaviour for the same stimuli perceived from an interaction partner. This slight autonomous behavioural change in the robot clearly shows a notion of automation in behaviour. The results from experimental scenarios and a human-centered evaluation of the system help validate the approach. -
Peeters, D. (2020). Bilingual switching between languages and listeners: Insights from immersive virtual reality. Cognition, 195: 104107. doi:10.1016/j.cognition.2019.104107.
Abstract
Perhaps the main advantage of being bilingual is the capacity to communicate with interlocutors that have different language backgrounds. In the life of a bilingual, switching interlocutors hence sometimes involves switching languages. We know that the capacity to switch from one language to another is supported by control mechanisms, such as task-set reconfiguration. This study investigates whether similar neurophysiological mechanisms support bilingual switching between different listeners, within and across languages. A group of 48 unbalanced Dutch-English bilinguals named pictures for two monolingual Dutch and two monolingual English life-size virtual listeners in an immersive virtual reality environment. In terms of reaction times, switching languages came at a cost over and above the significant cost of switching from one listener to another. Analysis of event-related potentials showed similar electrophysiological correlates for switching listeners and switching languages. However, it was found that having to switch listeners and languages at the same time delays the onset of lexical processes more than a switch between listeners within the same language. Findings are interpreted in light of the interplay between proactive (sustained inhibition) and reactive (task-set reconfiguration) control in bilingual speech production. It is argued that a possible bilingual advantage in executive control may not be due to the process of switching per se. This study paves the way for the study of bilingual language switching in ecologically valid, naturalistic, experimental settings.
Additional information
Supplementary data -
Peeters, D. (2015). A social and neurobiological approach to pointing in speech and gesture. PhD Thesis, Radboud University, Nijmegen.
Additional information
full text via Radboud Repository -
Peeters, D., Chu, M., Holler, J., Hagoort, P., & Ozyurek, A. (2015). Electrophysiological and kinematic correlates of communicative intent in the planning and production of pointing gestures and speech. Journal of Cognitive Neuroscience, 27(12), 2352-2368. doi:10.1162/jocn_a_00865.
Abstract
In everyday human communication, we often express our communicative intentions by manually pointing out referents in the material world around us to an addressee, often in tight synchronization with referential speech. This study investigated whether and how the kinematic form of index finger pointing gestures is shaped by the gesturer's communicative intentions and how this is modulated by the presence of concurrently produced speech. Furthermore, we explored the neural mechanisms underpinning the planning of communicative pointing gestures and speech. Two experiments were carried out in which participants pointed at referents for an addressee while the informativeness of their gestures and speech was varied. Kinematic and electrophysiological data were recorded online. It was found that participants prolonged the duration of the stroke and poststroke hold phase of their gesture to be more communicative, in particular when the gesture was carrying the main informational burden in their multimodal utterance. Frontal and P300 effects in the ERPs suggested the importance of intentional and modality-independent attentional mechanisms during the planning phase of informative pointing gestures. These findings contribute to a better understanding of the complex interplay between action, attention, intention, and language in the production of pointing gestures, a communicative act core to human interaction. -
Peeters, D., Hagoort, P., & Ozyurek, A. (2015). Electrophysiological evidence for the role of shared space in online comprehension of spatial demonstratives. Cognition, 136, 64-84. doi:10.1016/j.cognition.2014.10.010.
Abstract
A fundamental property of language is that it can be used to refer to entities in the extra-linguistic physical context of a conversation in order to establish a joint focus of attention on a referent. Typological and psycholinguistic work across a wide range of languages has put forward at least two different theoretical views on demonstrative reference. Here we contrasted and tested these two accounts by investigating the electrophysiological brain activity underlying the construction of indexical meaning in comprehension. In two EEG experiments, participants watched pictures of a speaker who referred to one of two objects using speech and an index-finger pointing gesture. In contrast with separately collected native speakers’ linguistic intuitions, N400 effects showed a preference for a proximal demonstrative when speaker and addressee were in a face-to-face orientation and all possible referents were located in the shared space between them, irrespective of the physical proximity of the referent to the speaker. These findings reject egocentric proximity-based accounts of demonstrative reference, support a sociocentric approach to deixis, suggest that interlocutors construe a shared space during conversation, and imply that the psychological proximity of a referent may be more important than its physical proximity. -
Peeters, D., Snijders, T. M., Hagoort, P., & Ozyurek, A. (2015). The role of left inferior frontal gyrus in the integration of pointing gestures and speech. In G. Ferré, & M. Tutton (Eds.), Proceedings of the 4th GESPIN - Gesture & Speech in Interaction Conference. Nantes: Université de Nantes.
Abstract
Comprehension of pointing gestures is fundamental to human communication. However, the neural mechanisms that subserve the integration of pointing gestures and speech in visual contexts in comprehension are unclear. Here we present the results of an fMRI study in which participants watched images of an actor pointing at an object while they listened to her referential speech. The use of a mismatch paradigm revealed that the semantic unification of pointing gesture and speech in a triadic context recruits left inferior frontal gyrus. Complementing previous findings, this suggests that left inferior frontal gyrus semantically integrates information across modalities and semiotic domains. -
Perlman, M., Paul, J., & Lupyan, G. (2015). Congenitally deaf children generate iconic vocalizations to communicate magnitude. In D. C. Noelle, R. Dale, A. S. Warlaumont, J. Yoshimi, T. Matlock, C. D. Jennings, & P. R. Maglio (Eds.), Proceedings of the 37th Annual Cognitive Science Society Meeting (CogSci 2015) (pp. 315-320). Austin, TX: Cognitive Science Society.
Abstract
From an early age, people exhibit strong links between certain visual (e.g. size) and acoustic (e.g. duration) dimensions. Do people instinctively extend these crossmodal correspondences to vocalization? We examine the ability of congenitally deaf Chinese children and young adults (age M = 12.4 years, SD = 3.7 years) to generate iconic vocalizations to distinguish items with contrasting magnitude (e.g., big vs. small ball). Both deaf and hearing (M = 10.1 years, SD = 0.83 years) participants produced longer, louder vocalizations for greater magnitude items. However, only hearing participants used pitch—higher pitch for greater magnitude – which counters the hypothesized, innate size “frequency code”, but fits with Mandarin language and culture. Thus our results show that the translation of visible magnitude into the duration and intensity of vocalization transcends auditory experience, whereas the use of pitch appears more malleable to linguistic and cultural influence. -
Perlman, M., Clark, N., & Falck, M. J. (2015). Iconic prosody in story reading. Cognitive Science, 39(6), 1348-1368. doi:10.1111/cogs.12190.
Abstract
Recent experiments have shown that people iconically modulate their prosody corresponding with the meaning of their utterance (e.g., Shintel et al., 2006). This article reports findings from a story reading task that expands the investigation of iconic prosody to abstract meanings in addition to concrete ones. Participants read stories that contrasted along concrete and abstract semantic dimensions of speed (e.g., a fast drive, slow career progress) and size (e.g., a small grasshopper, an important contract). Participants read fast stories at a faster rate than slow stories, and big stories with a lower pitch than small stories. The effect of speed was distributed across the stories, including portions that were identical across stories, whereas the size effect was localized to size-related words. Overall, these findings enrich the documentation of iconicity in spoken language and bear on our understanding of the relationship between gesture and speech. -
Perlman, M., Dale, R., & Lupyan, G. (2015). Iconicity can ground the creation of vocal symbols. Royal Society Open Science, 2: 150152. doi:10.1098/rsos.150152.
Abstract
Studies of gestural communication systems find that they originate from spontaneously created iconic gestures. Yet, we know little about how people create vocal communication systems, and many have suggested that vocalizations do not afford iconicity beyond trivial instances of onomatopoeia. It is unknown whether people can generate vocal communication systems through a process of iconic creation similar to gestural systems. Here, we examine the creation and development of a rudimentary vocal symbol system in a laboratory setting. Pairs of participants generated novel vocalizations for 18 different meanings in an iterative ‘vocal’ charades communication game. The communicators quickly converged on stable vocalizations, and naive listeners could correctly infer their meanings in subsequent playback experiments. People's ability to guess the meanings of these novel vocalizations was predicted by how close the vocalization was to an iconic ‘meaning template’ we derived from the production data. These results strongly suggest that the meaningfulness of these vocalizations derived from iconicity. Our findings illuminate a mechanism by which iconicity can ground the creation of vocal symbols, analogous to the function of iconicity in gestural communication systems. -
Perlman, M., & Clark, N. (2015). Learned vocal and breathing behavior in an enculturated gorilla. Animal Cognition, 18(5), 1165-1179. doi:10.1007/s10071-015-0889-6.
Abstract
We describe the repertoire of learned vocal and breathing-related behaviors (VBBs) performed by the enculturated gorilla Koko. We examined a large video corpus of Koko and observed 439 VBBs spread across 161 bouts. Our analysis shows that Koko exercises voluntary control over the performance of nine distinctive VBBs, which involve variable coordination of her breathing, larynx, and supralaryngeal articulators like the tongue and lips. Each of these behaviors is performed in the context of particular manual action routines and gestures. Based on these and other findings, we suggest that vocal learning and the ability to exercise volitional control over vocalization, particularly in a multimodal context, might have figured relatively early into the evolution of language, with some rudimentary capacity in place at the time of our last common ancestor with great apes. -
Perniss, P. M., Zwitserlood, I., & Ozyurek, A. (2015). Does space structure spatial language? A comparison of spatial expression across sign languages. Language, 91(3), 611-641.
Abstract
The spatial affordances of the visual modality give rise to a high degree of similarity between sign languages in the spatial domain. This stands in contrast to the vast structural and semantic diversity in linguistic encoding of space found in spoken languages. However, the possibility and nature of linguistic diversity in spatial encoding in sign languages has not been rigorously investigated by systematic crosslinguistic comparison. Here, we compare locative expression in two unrelated sign languages, Turkish Sign Language (Türk İşaret Dili, TİD) and German Sign Language (Deutsche Gebärdensprache, DGS), focusing on the expression of figure-ground (e.g. cup on table) and figure-figure (e.g. cup next to cup) relationships in a discourse context. In addition to similarities, we report qualitative and quantitative differences between the sign languages in the formal devices used (i.e. unimanual vs. bimanual; simultaneous vs. sequential) and in the degree of iconicity of the spatial devices. Our results suggest that sign languages may display more diversity in the spatial domain than has been previously assumed, and in a way more comparable with the diversity found in spoken languages. The study contributes to a more comprehensive understanding of how space gets encoded in language. -
Perniss, P. M., Ozyurek, A., & Morgan, G. (2015). The Influence of the visual modality on language structure and conventionalization: Insights from sign language and gesture. Topics in Cognitive Science, 7(1), 2-11. doi:10.1111/tops.12127.
Abstract
For humans, the ability to communicate and use language is instantiated not only in the vocal modality but also in the visual modality. The main examples of this are sign languages and (co-speech) gestures. Sign languages, the natural languages of Deaf communities, use systematic and conventionalized movements of the hands, face, and body for linguistic expression. Co-speech gestures, though non-linguistic, are produced in tight semantic and temporal integration with speech and constitute an integral part of language together with speech. The articles in this issue explore and document how gestures and sign languages are similar or different and how communicative expression in the visual modality can change from being gestural to grammatical in nature through processes of conventionalization. As such, this issue contributes to our understanding of how the visual modality shapes language and the emergence of linguistic structure in newly developing systems. Studying the relationship between signs and gestures provides a new window onto the human ability to recruit multiple levels of representation (e.g., categorical, gradient, iconic, abstract) in the service of using or creating conventionalized communicative systems. -
Perniss, P. M., Ozyurek, A., & Morgan, G. (Eds.). (2015). The influence of the visual modality on language structure and conventionalization: Insights from sign language and gesture [Special Issue]. Topics in Cognitive Science, 7(1). doi:10.1111/tops.12113. -
Perniss, P. M., & Ozyurek, A. (2015). Visible cohesion: A comparison of reference tracking in sign, speech, and co-speech gesture. Topics in Cognitive Science, 7(1), 36-60. doi:10.1111/tops.12122.
Abstract
Establishing and maintaining reference is a crucial part of discourse. In spoken languages, differential linguistic devices mark referents occurring in different referential contexts, that is, introduction, maintenance, and re-introduction contexts. Speakers using gestures as well as users of sign languages have also been shown to mark referents differentially depending on the referential context. This article investigates the modality-specific contribution of the visual modality in marking referential context by providing a direct comparison between sign language (German Sign Language; DGS) and co-speech gesture with speech (German) in elicited narratives. Across all forms of expression, we find that referents in subject position are referred to with more marking material in re-introduction contexts compared to maintenance contexts. Furthermore, we find that spatial modification is used as a modality-specific strategy in both DGS and German co-speech gesture, and that the configuration of referent locations in sign space and gesture space corresponds in an iconic and consistent way to the locations of referents in the narrated event. However, we find that spatial modification is used in different ways for marking re-introduction and maintenance contexts in DGS and German co-speech gesture. The findings are discussed in relation to the unique contribution of the visual modality to reference tracking in discourse when it is used in a unimodal system with full linguistic structure (i.e., as in sign) versus in a bimodal system that is a composite of speech and gesture. -
Perry, L. K., Perlman, M., & Lupyan, G. (2015). Iconicity in English and Spanish and its relation to lexical category and age of acquisition. PLoS One, 10(9): e0137147. doi:10.1371/journal.pone.0137147.
Abstract
Signed languages exhibit iconicity (resemblance between form and meaning) across their vocabulary, and many non-Indo-European spoken languages feature sizable classes of iconic words known as ideophones. In comparison, Indo-European languages like English and Spanish are believed to be arbitrary outside of a small number of onomatopoeic words. In three experiments with English and two with Spanish, we asked native speakers to rate the iconicity of ~600 words from the English and Spanish MacArthur-Bates Communicative Developmental Inventories. We found that iconicity in the words of both languages varied in a theoretically meaningful way with lexical category. In both languages, adjectives were rated as more iconic than nouns and function words, and corresponding to typological differences between English and Spanish in verb semantics, English verbs were rated as relatively iconic compared to Spanish verbs. We also found that both languages exhibited a negative relationship between iconicity ratings and age of acquisition. Words learned earlier tended to be more iconic, suggesting that iconicity in early vocabulary may aid word learning. Altogether these findings show that iconicity is a graded quality that pervades vocabularies of even the most “arbitrary” spoken languages. The findings provide compelling evidence that iconicity is an important property of all languages, signed and spoken, including Indo-European languages.
Additional information
1536057.zip -
Perry, L., Perlman, M., & Lupyan, G. (2015). Iconicity in English vocabulary and its relation to toddlers’ word learning. In D. C. Noelle, R. Dale, A. S. Warlaumont, J. Yoshimi, T. Matlock, C. D. Jennings, & P. R. Maglio (Eds.), Proceedings of the 37th Annual Cognitive Science Society Meeting (CogSci 2015) (pp. 315-320). Austin, TX: Cognitive Science Society.
Abstract
Scholars have documented substantial classes of iconic vocabulary in many non-Indo-European languages. In comparison, Indo-European languages like English are assumed to be arbitrary outside of a small number of onomatopoeic words. In three experiments, we asked English speakers to rate the iconicity of words from the MacArthur-Bates Communicative Developmental Inventory. We found English—contrary to common belief—exhibits iconicity that correlates with age of acquisition and differs across lexical classes. Words judged as most iconic are learned earlier, in accord with findings that iconic words are easier to learn. We also find that adjectives and verbs are more iconic than nouns, supporting the idea that iconicity provides an extra cue in learning more difficult abstract meanings. Our results provide new evidence for a relationship between iconicity and word learning and suggest iconicity may be a more pervasive property of spoken languages than previously thought. -
Persson, J., Szalisznyó, K., Antoni, G., Wall, A., Fällmar, D., Zora, H., & Bodén, R. (2020). Phosphodiesterase 10A levels are related to striatal function in schizophrenia: a combined positron emission tomography and functional magnetic resonance imaging study. European Archives of Psychiatry and Clinical Neuroscience, 270(4), 451-459. doi:10.1007/s00406-019-01021-0.
Abstract
Pharmacological inhibition of phosphodiesterase 10A (PDE10A) is being investigated as a treatment option in schizophrenia. PDE10A acts postsynaptically on striatal dopamine signaling by regulating neuronal excitability through its inhibition of cyclic adenosine monophosphate (cAMP), and we recently found it to be reduced in schizophrenia compared to controls. Here, this finding of reduced PDE10A in schizophrenia was followed up in the same sample to investigate the effect of reduced striatal PDE10A on the neural and behavioral function of striatal and downstream basal ganglia regions. A positron emission tomography (PET) scan with the PDE10A ligand [11C]Lu AE92686 was performed, followed by a 6 min resting-state magnetic resonance imaging (MRI) scan in ten patients with schizophrenia. To assess the relationship between striatal function and neurophysiological and behavioral functioning, salience processing was assessed using a mismatch negativity paradigm, an auditory event-related electroencephalographic measure, episodic memory was assessed using the Rey auditory verbal learning test (RAVLT) and executive functioning using trail-making test B. Reduced striatal PDE10A was associated with increased amplitude of low-frequency fluctuations (ALFF) within the putamen and substantia nigra, respectively. Higher ALFF in the substantia nigra, in turn, was associated with lower episodic memory performance. The findings are in line with a role for PDE10A in striatal functioning, and suggest that reduced striatal PDE10A may contribute to cognitive symptoms in schizophrenia. -
Peter, M., Chang, F., Pine, J. M., Blything, R., & Rowland, C. F. (2015). When and how do children develop knowledge of verb argument structure? Evidence from verb bias effects in a structural priming task. Journal of Memory and Language, 81, 1-15. doi:10.1016/j.jml.2014.12.002.
Abstract
In this study, we investigated when children develop adult-like verb–structure links, and examined two mechanisms, associative and error-based learning, that might explain how these verb–structure links are learned. Using structural priming, we tested children’s and adults’ ability to use verb–structure links in production in three ways; by manipulating: (1) verb overlap between prime and target, (2) target verb bias, and (3) prime verb bias. Children (aged 3–4 and 5–6 years old) and adults heard and produced double object dative (DOD) and prepositional object dative (PD) primes with DOD- and PD-biased verbs. Although all age groups showed significant evidence of structural priming, only adults showed increased priming when there was verb overlap between prime and target sentences (the lexical boost). The effect of target verb bias also grew with development. Critically, however, the effect of prime verb bias on the size of the priming effect (prime surprisal) was larger in children than in adults, suggesting that verb–structure links are present at the earliest age tested. Taken as a whole, the results suggest that children begin to acquire knowledge about verb-argument structure preferences early in acquisition, but that the ability to use adult-like verb bias in production gradually improves over development. We also argue that this pattern of results is best explained by a learning model that uses an error-based learning mechanism. -
Pettigrew, K. A., Fajutrao Valles, S. F., Moll, K., Northstone, K., Ring, S., Pennell, C., Wang, C., Leavett, R., Hayiou-Thomas, M. E., Thompson, P., Simpson, N. H., Fisher, S. E., The SLI Consortium, Whitehouse, A. J., Snowling, M. J., Newbury, D. F., & Paracchini, S. (2015). Lack of replication for the myosin-18B association with mathematical ability in independent cohorts. Genes, Brain and Behavior, 14(4), 369-376. doi:10.1111/gbb.12213.
Abstract
Twin studies indicate that dyscalculia (or mathematical disability) is caused partly by a genetic component, which is yet to be understood at the molecular level. Recently, a coding variant (rs133885) in the myosin-18B gene was shown to be associated with mathematical abilities with a specific effect among children with dyslexia. This association represents one of the most significant genetic associations reported to date for mathematical abilities and the only one reaching genome-wide statistical significance.
We conducted a replication study in different cohorts to assess the effect of rs133885 on maths-related measures. The study was conducted primarily using the Avon Longitudinal Study of Parents and Children (ALSPAC; N = 3,819). We tested additional cohorts including the York Cohort, the Specific Language Impairment Consortium (SLIC) cohort and the Raine Cohort, and stratified them for a definition of dyslexia whenever possible.
We did not observe any associations between rs133885 in myosin-18B and mathematical abilities among individuals with dyslexia or in the general population. Our results suggest that the myosin-18B variant is unlikely to be a main factor contributing to mathematical abilities. -
Piai, V., Roelofs, A., Rommers, J., & Maris, E. (2015). Beta oscillations reflect memory and motor aspects of spoken word production. Human Brain Mapping, 36(7), 2767-2780. doi:10.1002/hbm.22806.
Abstract
Two major components form the basis of spoken word production: the access of conceptual and lexical/phonological information in long-term memory, and motor preparation and execution of an articulatory program. Whereas the motor aspects of word production have been well characterized as reflected in alpha-beta desynchronization, the memory aspects have remained poorly understood. Using magnetoencephalography, we investigated the neurophysiological signature of not only motor but also memory aspects of spoken-word production. Participants named or judged pictures after reading sentences. To probe the involvement of the memory component, we manipulated sentence context. Sentence contexts were either constraining or nonconstraining toward the final word, presented as a picture. In the judgment task, participants indicated with a left-hand button press whether the picture was expected given the sentence. In the naming task, they named the picture. Naming and judgment were faster with constraining than nonconstraining contexts. Alpha-beta desynchronization was found for constraining relative to nonconstraining contexts pre-picture presentation. For the judgment task, beta desynchronization was observed in left posterior brain areas associated with conceptual processing and in right motor cortex. For the naming task, in addition to the same left posterior brain areas, beta desynchronization was found in left anterior and posterior temporal cortex (associated with memory aspects), left inferior frontal cortex, and bilateral ventral premotor cortex (associated with motor aspects). These results suggest that memory and motor components of spoken word production are reflected in overlapping brain oscillations in the beta band.
Additional information
hbm22806-sup-0001-suppinfo1.docx -
Piai, V., Roelofs, A., & Roete, I. (2015). Semantic interference in picture naming during dual-task performance does not vary with reading ability. Quarterly Journal of Experimental Psychology, 68(9), 1758-1768. doi:10.1080/17470218.2014.985689.
Abstract
Previous dual-task studies examining the locus of semantic interference of distractor words in picture naming have obtained diverging results. In these studies, participants manually responded to tones and named pictures while ignoring distractor words (picture-word interference, PWI) with varying stimulus onset asynchrony (SOA) between tone and PWI stimulus. Whereas some studies observed no semantic interference at short SOAs, other studies observed effects of similar magnitude at short and long SOAs. The absence of semantic interference in some studies may perhaps be due to better reading skill of participants in these than in the other studies. According to such a reading-ability account, participants' reading skill should be predictive of the magnitude of their interference effect at short SOAs. To test this account, we conducted a dual-task study with tone discrimination and PWI tasks and measured participants' reading ability. The semantic interference effect was of similar magnitude at both short and long SOAs. Participants' reading ability was predictive of their naming speed but not of their semantic interference effect, contrary to the reading ability account. We conclude that the magnitude of semantic interference in picture naming during dual-task performance does not depend on reading skill. -
Plomp, G., Hervais-Adelman, A., Astolfi, L., & Michel, C. M. (2015). Early recurrence and ongoing parietal driving during elementary visual processing. Scientific Reports, 5: 18733. doi:10.1038/srep18733.
Abstract
Visual stimuli quickly activate a broad network of brain areas that often show reciprocal structural connections between them. Activity at short latencies (<100 ms) is thought to represent a feed-forward activation of widespread cortical areas, but fast activation combined with reciprocal connectivity between areas in principle allows for two-way, recurrent interactions to occur at short latencies after stimulus onset. Here we combined EEG source-imaging and Granger-causal modeling with high temporal resolution to investigate whether recurrent and top-down interactions between visual and attentional brain areas can be identified and distinguished at short latencies in humans. We investigated the directed interactions between widespread occipital, parietal and frontal areas that we localized within participants using fMRI. The connectivity results showed two-way interactions between area MT and V1 already at short latencies. In addition, the results suggested a large role for lateral parietal cortex in coordinating visual activity that may be understood as an ongoing top-down allocation of attentional resources. Our results support the notion that indirect pathways allow early, evoked driving from MT to V1 to highlight spatial locations of motion transients, while influence from parietal areas is continuously exerted around stimulus onset, presumably reflecting task-related attentional processes. -
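The connectivity analysis in Plomp et al. rests on Granger-causal modeling: a signal X is said to Granger-cause Y if past values of X improve the prediction of Y beyond what Y's own past provides. The sketch below illustrates a bivariate test of this general kind; it uses statsmodels and synthetic stand-in "MT"/"V1" series, not the study's source-reconstructed EEG data or its actual model.
```python
# Illustrative sketch only: a bivariate Granger-causality test of the general
# kind described above, run on synthetic stand-in signals.
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(0)
n = 500
mt = rng.standard_normal(n)                          # stand-in "MT" time series
v1 = np.roll(mt, 2) + 0.5 * rng.standard_normal(n)   # "V1" echoes MT at lag 2

# Column order matters: the test asks whether the SECOND column
# Granger-causes the FIRST (past MT improving the prediction of V1).
data = np.column_stack([v1, mt])
results = grangercausalitytests(data, maxlag=4)

# F-test p-value for the lag-2 model.
p_mt_to_v1 = results[2][0]["ssr_ftest"][1]
print(f"MT -> V1 Granger p-value at lag 2: {p_mt_to_v1:.3g}")
```
Running the test in both column orders distinguishes one-way driving from the two-way (recurrent) interactions the study reports.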
Postema, M., Carrion Castillo, A., Fisher, S. E., Vingerhoets, G., & Francks, C. (2020). The genetics of situs inversus without primary ciliary dyskinesia. Scientific Reports, 10: 3677. doi:10.1038/s41598-020-60589-z.
Abstract
Situs inversus (SI), a left-right mirror reversal of the visceral organs, can occur with recessive Primary Ciliary Dyskinesia (PCD). However, most people with SI do not have PCD, and the etiology of their condition remains poorly studied. We sequenced the genomes of 15 people with SI, of which six had PCD, as well as 15 controls. Subjects with non-PCD SI in this sample had an elevated rate of left-handedness (five out of nine), which suggested possible developmental mechanisms linking brain and body laterality. The six SI subjects with PCD all had likely recessive mutations in genes already known to cause PCD. Two non-PCD SI cases also had recessive mutations in known PCD genes, suggesting reduced penetrance for PCD in some SI cases. One non-PCD SI case had recessive mutations in PKD1L1, and another in CFAP52 (also known as WDR16). Both of these genes have previously been linked to SI without PCD. However, five of the nine non-PCD SI cases, including three of the left-handers in this dataset, had no obvious monogenic basis for their condition. Environmental influences, or possible random effects in early development, must be considered.
Additional information
Supplementary information -
Poulsen, M.-E. (Ed.). (2020). The Jerome Bruner Library: From New York to Nijmegen. Nijmegen: Max Planck Institute for Psycholinguistics.
Abstract
Published in September 2020 by the Max Planck Institute for Psycholinguistics to commemorate the arrival and the new beginning of the Jerome Bruner Library in Nijmegen. -
St Pourcain, B., Haworth, C. M. A., Davis, O. S. P., Wang, K., Timpson, N. J., Evans, D. M., Kemp, J. P., Ronald, A., Price, T., Meaburn, E., Ring, S. M., Golding, J., Hakonarson, H., Plomin, R., & Davey Smith, G. (2015). Heritability and genome-wide analyses of problematic peer relationships during childhood and adolescence. Human Genetics, 134(6), 539-551. doi:10.1007/s00439-014-1514-5.
Abstract
Peer behaviour plays an important role in the development of social adjustment, though little is known about its genetic architecture. We conducted a twin study combined with a genome-wide complex trait analysis (GCTA) and a genome-wide screen to characterise genetic influences on problematic peer behaviour during childhood and adolescence. This included a series of longitudinal measures (parent-reported Strengths-and-Difficulties Questionnaire) from a UK population-based birth-cohort (ALSPAC, 4–17 years), and a UK twin sample (TEDS, 4–11 years). Longitudinal twin analysis (TEDS; N ≤ 7,366 twin pairs) showed that peer problems in childhood are heritable (4–11 years, 0.60 < twin-h² ≤ 0.71) but genetically heterogeneous from age to age (4–11 years, twin-rg = 0.30). GCTA (ALSPAC: N ≤ 5,608, TEDS: N ≤ 2,691) furthermore provided little support for the contribution of measured common genetic variants during childhood (4–12 years, 0.02 < GCTA-h²(Meta) ≤ 0.11), though these influences become stronger in adolescence (13–17 years, 0.14 < GCTA-h²(ALSPAC) ≤ 0.27). A subsequent cross-sectional genome-wide screen in ALSPAC (N ≤ 6,000) focussed on peer problems with the highest GCTA-heritability (10, 13 and 17 years, 0.0002 < GCTA-P ≤ 0.03). Single-variant signals (P ≤ 10⁻⁵) were followed up in TEDS (N ≤ 2,835, 9 and 11 years) and, in search for autism quantitative trait loci, explored within two autism samples (AGRE: N = 793 pedigrees; ACC: N = 1,453 cases / 7,070 controls). There was, however, no evidence for association in TEDS and little evidence for an overlap with the autistic continuum. In summary, our findings suggest that problematic peer relationships are heritable but genetically complex and heterogeneous from age to age, with an increase in common measurable genetic variation during adolescence. -
Pouw, W., Paxton, A., Harrison, S. J., & Dixon, J. A. (2020). Reply to Ravignani and Kotz: Physical impulses from upper-limb movements impact the respiratory–vocal system. Proceedings of the National Academy of Sciences of the United States of America, 117(38), 23225-23226. doi:10.1073/pnas.2015452117.
Additional information
This article has a letter -
Pouw, W., Paxton, A., Harrison, S. J., & Dixon, J. A. (2020). Acoustic information about upper limb movement in voicing. Proceedings of the National Academy of Sciences of the United States of America, 117(21), 11364-11367. doi:10.1073/pnas.2004163117.
Abstract
We show that the human voice has complex acoustic qualities that are directly coupled to peripheral musculoskeletal tensioning of the body, such as subtle wrist movements. In this study, human vocalizers produced a steady-state vocalization while rhythmically moving the wrist or the arm at different tempos. Although listeners could only hear but not see the vocalizer, they were able to completely synchronize their own rhythmic wrist or arm movement with the movement of the vocalizer which they perceived in the voice acoustics. This study corroborates recent evidence suggesting that the human voice is constrained by bodily tensioning affecting the respiratory-vocal system. The current results show that the human voice contains a bodily imprint that is directly informative for the interpersonal perception of another’s dynamic physical states.
Additional information
This article has a letter by Ravignani and Kotz; this article has a reply to Ravignani and Kotz. -
Pouw, W., Wassenburg, S. I., Hostetter, A. B., De Koning, B. B., & Paas, F. (2020). Does gesture strengthen sensorimotor knowledge of objects? The case of the size-weight illusion. Psychological Research, 84(4), 966-980. doi:10.1007/s00426-018-1128-y.
Abstract
Co-speech gestures have been proposed to strengthen sensorimotor knowledge related to objects’ weight and manipulability. This pre-registered study (https://www.osf.io/9uh6q/) was designed to explore how gestures affect memory for sensorimotor information through the application of the visual-haptic size-weight illusion (i.e., objects weigh the same, but are experienced as different in weight). With this paradigm, a discrepancy can be induced between participants’ conscious illusory perception of objects’ weight and their implicit sensorimotor knowledge (i.e., veridical motor coordination). Depending on whether gestures reflect and strengthen either of these types of knowledge, gestures may respectively decrease or increase the magnitude of the size-weight illusion. Participants (N = 159) practiced a problem-solving task with small and large objects that were designed to induce a size-weight illusion, and then explained the task with or without co-speech gesture or completed a control task. Afterwards, participants judged the heaviness of objects from memory and then while holding them. Confirmatory analyses revealed an inverted size-weight illusion based on heaviness judgments from memory, and we found gesturing did not affect judgments. However, exploratory analyses showed reliable correlations between participants’ heaviness judgments from memory and (a) the number of gestures produced that simulated actions, and (b) the kinematics of the lifting phases of those gestures. These findings suggest that gestures emerge as sensorimotor imaginings that are governed by the agent’s conscious renderings about the actions they describe, rather than implicit motor routines. -
Pouw, W., Harrison, S. J., Esteve-Gibert, N., & Dixon, J. A. (2020). Energy flows in gesture-speech physics: The respiratory-vocal system and its coupling with hand gestures. The Journal of the Acoustical Society of America, 148(3): 1231. doi:10.1121/10.0001730.
Abstract
Expressive moments in communicative hand gestures often align with emphatic stress in speech. It has recently been found that acoustic markers of emphatic stress arise naturally during steady-state phonation when upper-limb movements impart physical impulses on the body, most likely affecting acoustics via respiratory activity. In this confirmatory study, participants (N = 29) repeatedly uttered consonant-vowel (/pa/) mono-syllables while moving in particular phase relations with speech, or not moving the upper limbs. This study shows that respiration-related activity is affected by (especially high-impulse) gesturing when vocalizations occur near peaks in physical impulse. This study further shows that gesture-induced moments of bodily impulses increase the amplitude envelope of speech, while not similarly affecting the Fundamental Frequency (F0). Finally, tight relations between respiration-related activity and vocalization were observed, even in the absence of movement, but even more so when upper-limb movement was present. The current findings expand a developing line of research showing that speech is modulated by functional biomechanical linkages between hand gestures and the respiratory system. This identification of gesture-speech biomechanics promises to provide an alternative phylogenetic, ontogenetic, and mechanistic explanatory route for why communicative upper-limb movements co-occur with speech in humans.
Additional information
Link to Preprint on OSF -
Pouw, W., & Dixon, J. A. (2020). Gesture networks: Introducing dynamic time warping and network analysis for the kinematic study of gesture ensembles. Discourse Processes, 57(4), 301-319. doi:10.1080/0163853X.2019.1678967.
Abstract
We introduce applications of established methods in time-series and network analysis that we jointly apply here for the kinematic study of gesture ensembles. We define a gesture ensemble as the set of gestures produced during discourse by a single person or a group of persons. Here we are interested in how gestures kinematically relate to one another. We use a bivariate time-series analysis called dynamic time warping to assess how similar each gesture is to other gestures in the ensemble in terms of their velocity profiles (as well as studying multivariate cases with gesture velocity and speech amplitude envelope profiles). By relating each gesture event to all other gesture events produced in the ensemble, we obtain a weighted matrix that essentially represents a network of similarity relationships. We can therefore apply network analysis that can gauge, for example, how diverse or coherent certain gestures are with respect to the gesture ensemble. We believe these analyses promise to be of great value for gesture studies, as we can come to understand how low-level gesture features (kinematics of gesture) relate to the higher-order organizational structures present at the level of discourse.
Additional information
Open Data OSF -
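For readers who want a concrete feel for the method, the following is a minimal sketch of the DTW-plus-network idea described in the abstract, using a textbook DTW implementation on hypothetical velocity profiles (it is not the authors' released code, which is linked above):

```python
# Sketch: pairwise dynamic time warping over a gesture ensemble, turned into
# a weighted similarity network. Velocity profiles here are random stand-ins.
import numpy as np

def dtw_distance(x: np.ndarray, y: np.ndarray) -> float:
    """Classic DTW distance between two 1-D series of possibly unequal length."""
    n, m = len(x), len(y)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(x[i - 1] - y[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

rng = np.random.default_rng(0)
gestures = [rng.random(rng.integers(20, 40)) for _ in range(5)]  # hypothetical ensemble

k = len(gestures)
dist = np.zeros((k, k))
for i in range(k):
    for j in range(i + 1, k):
        dist[i, j] = dist[j, i] = dtw_distance(gestures[i], gestures[j])

sim = 1.0 / (1.0 + dist)   # distances -> similarity weights
np.fill_diagonal(sim, 0.0)

# Weighted degree: a crude index of how coherent each gesture is with the ensemble.
print(sim.sum(axis=1))
```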
Pouw, W., Harrison, S. J., & Dixon, J. A. (2020). Gesture–speech physics: The biomechanical basis for the emergence of gesture–speech synchrony. Journal of Experimental Psychology: General, 149(2), 391-404. doi:10.1037/xge0000646.
Abstract
The phenomenon of gesture–speech synchrony involves tight coupling of prosodic contrasts in gesture movement (e.g., peak velocity) and speech (e.g., peaks in fundamental frequency; F0). Gesture–speech synchrony has been understood as completely governed by sophisticated neural-cognitive mechanisms. However, gesture–speech synchrony may have its original basis in the resonating forces that travel through the body. In the current preregistered study, movements with high physical impact affected phonation in line with gesture–speech synchrony as observed in natural contexts. Rhythmic beating of the arms entrained phonation acoustics (F0 and the amplitude envelope). Such effects were absent for a condition with low-impetus movements (wrist movements) and a condition without movement. Further, movement–phonation synchrony was more pronounced when participants were standing as opposed to sitting, indicating a mediating role for postural stability. We conclude that gesture–speech synchrony has a biomechanical basis, which will have implications for our cognitive, ontogenetic, and phylogenetic understanding of multimodal language.
Additional information
Data availability analysis scripts and pre-registration -
Pouw, W., & Looren de Jong, H. (2015). Rethinking situated and embodied social psychology. Theory and Psychology, 25(4), 411-433. doi:10.1177/0959354315585661.
Abstract
This article aims to explore the scope of a Situated and Embodied Social Psychology (ESP). At first sight, social cognition seems embodied cognition par excellence. Social cognition is first and foremost a supra-individual, interactive, and dynamic process (Semin & Smith, 2013). Radical approaches in Situated/Embodied Cognitive Science (Enactivism) claim that social cognition consists in an emergent pattern of interaction between a continuously coupled organism and the (social) environment; it rejects representationalist accounts of cognition (Hutto & Myin, 2013). However, mainstream ESP (Barsalou, 1999, 2008) still takes a rather representation-friendly approach that construes embodiment in terms of specific bodily formatted representations used (activated) in social cognition. We argue that mainstream ESP suffers from vestiges of theoretical solipsism, which may be resolved by going beyond the internalistic spirit that haunts mainstream ESP today. -
Pouw, W., Trujillo, J. P., & Dixon, J. A. (2020). The quantification of gesture–speech synchrony: A tutorial and validation of multimodal data acquisition using device-based and video-based motion tracking. Behavior Research Methods, 52, 723-740. doi:10.3758/s13428-019-01271-9.
Abstract
There is increasing evidence that hand gestures and speech synchronize their activity on multiple dimensions and timescales. For example, gesture’s kinematic peaks (e.g., maximum speed) are coupled with prosodic markers in speech. Such coupling operates on very short timescales at the level of syllables (200 ms), and therefore requires high-resolution measurement of gesture kinematics and speech acoustics. High-resolution speech analysis is common for gesture studies, given that field’s classic ties with (psycho)linguistics. However, the field has lagged behind in the objective study of gesture kinematics (e.g., as compared to research on instrumental action). Often kinematic peaks in gesture are measured by eye, where a “moment of maximum effort” is determined by several raters. In the present article, we provide a tutorial on more efficient methods to quantify the temporal properties of gesture kinematics, in which we focus on common challenges and possible solutions that come with the complexities of studying multimodal language. We further introduce and compare, using an actual gesture dataset (392 gesture events), the performance of two video-based motion-tracking methods (deep learning vs. pixel change) against a high-performance wired motion-tracking system (Polhemus Liberty). We show that the videography methods perform well in the temporal estimation of kinematic peaks, and thus provide a cheap alternative to expensive motion-tracking systems. We hope that the present article incites gesture researchers to embark on the widespread objective study of gesture kinematics and their relation to speech. -
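Of the two video-based methods compared in this article, the pixel-change approach is the simplest to convey in code. The sketch below illustrates the general idea on synthetic frames (real use would read video frames, e.g. via OpenCV's cv2.VideoCapture); it is an illustration of the technique, not the paper's pipeline:

```python
# Sketch of pixel-change motion measurement: mean absolute inter-frame
# difference as a movement time series, then a crude kinematic-peak pick.
import numpy as np

rng = np.random.default_rng(1)
frames = rng.normal(0, 1, (100, 64, 64))  # hypothetical grayscale frames

# Movement quantity per frame transition.
motion = np.abs(np.diff(frames, axis=0)).mean(axis=(1, 2))

# Kinematic "peaks": local maxima above the 90th percentile of the series.
thr = np.percentile(motion, 90)
peaks = [t for t in range(1, len(motion) - 1)
         if motion[t] > thr and motion[t] >= motion[t - 1] and motion[t] >= motion[t + 1]]
print(peaks)
```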
Preisig, B., Sjerps, M. J., Hervais-Adelman, A., Kösem, A., Hagoort, P., & Riecke, L. (2020). Bilateral gamma/delta transcranial alternating current stimulation affects interhemispheric speech sound integration. Journal of Cognitive Neuroscience, 32(7), 1242-1250. doi:10.1162/jocn_a_01498.
Abstract
Perceiving speech requires the integration of different speech cues, that is, formants. When the speech signal is split so that different cues are presented to the right and left ear (dichotic listening), comprehension requires the integration of binaural information. Based on prior electrophysiological evidence, we hypothesized that the integration of dichotically presented speech cues is enabled by interhemispheric phase synchronization between primary and secondary auditory cortex in the gamma frequency band. We tested this hypothesis by applying transcranial alternating current stimulation (TACS) bilaterally above the superior temporal lobe to induce or disrupt interhemispheric gamma-phase coupling. In contrast to initial predictions, we found that gamma TACS applied in-phase above the two hemispheres (interhemispheric lag 0°) perturbs interhemispheric integration of speech cues, possibly because the applied stimulation perturbs an inherent phase lag between the left and right auditory cortex. We also observed this disruptive effect when applying antiphasic delta TACS (interhemispheric lag 180°). We conclude that interhemispheric phase coupling plays a functional role in interhemispheric speech integration. The direction of this effect may depend on the stimulation frequency. -
Rasenberg, M., Ozyurek, A., & Dingemanse, M. (2020). Alignment in multimodal interaction: An integrative framework. Cognitive Science, 44(11): e12911. doi:10.1111/cogs.12911.
Abstract
When people are engaged in social interaction, they can repeat aspects of each other’s communicative behavior, such as words or gestures. This kind of behavioral alignment has been studied across a wide range of disciplines and has been accounted for by diverging theories. In this paper, we review various operationalizations of lexical and gestural alignment. We reveal that scholars have fundamentally different takes on when and how behavior is considered to be aligned, which makes it difficult to compare findings and draw uniform conclusions. Furthermore, we show that scholars tend to focus on one particular dimension of alignment (traditionally, whether two instances of behavior overlap in form), while other dimensions remain understudied. This hampers theory testing and building, which requires a well‐defined account of the factors that are central to or might enhance alignment. To capture the complex nature of alignment, we identify five key dimensions to formalize the relationship between any pair of behavior: time, sequence, meaning, form, and modality. We show how assumptions regarding the underlying mechanism of alignment (placed along the continuum of priming vs. grounding) pattern together with operationalizations in terms of the five dimensions. This integrative framework can help researchers in the field of alignment and related phenomena (including behavior matching, mimicry, entrainment, and accommodation) to formulate their hypotheses and operationalizations in a more transparent and systematic manner. The framework also enables us to discover unexplored research avenues and derive new hypotheses regarding alignment. -
Rasenberg, M., Rommers, J., & Van Bergen, G. (2020). Anticipating predictability: An ERP investigation of expectation-managing discourse markers in dialogue comprehension. Language, Cognition and Neuroscience, 35(1), 1-16. doi:10.1080/23273798.2019.1624789.
Abstract
In two ERP experiments, we investigated how the Dutch discourse markers eigenlijk “actually”, signalling expectation disconfirmation, and inderdaad “indeed”, signalling expectation confirmation, affect incremental dialogue comprehension. We investigated their effects on the processing of subsequent (un)predictable words, and on the quality of word representations in memory. Participants read dialogues with (un)predictable endings that followed a discourse marker (eigenlijk in Experiment 1, inderdaad in Experiment 2) or a control adverb. We found no strong evidence that discourse markers modulated online predictability effects elicited by subsequently read words. However, words following eigenlijk elicited an enhanced posterior post-N400 positivity compared with words following an adverb regardless of their predictability, potentially reflecting increased processing costs associated with pragmatically driven discourse updating. No effects of inderdaad were found on online processing, but inderdaad seemed to influence memory for (un)predictable dialogue endings. These findings nuance our understanding of how pragmatic markers affect incremental language comprehension.
Additional information
plcp_a_1624789_sm6686.docx -
Rasenberg, M., Dingemanse, M., & Ozyurek, A. (2020). Lexical and gestural alignment in interaction and the emergence of novel shared symbols. In A. Ravignani, C. Barbieri, M. Flaherty, Y. Jadoul, E. Lattenkamp, H. Little, M. Martins, K. Mudd, & T. Verhoef (
Eds. ), The Evolution of Language: Proceedings of the 13th International Conference (Evolang13) (pp. 356-358). Nijmegen: The Evolution of Language Conferences. -
Ravignani, A., & Kotz, S. (2020). Breathing, voice and synchronized movement. Proceedings of the National Academy of Sciences of the United States of America, 117(38), 23223-23224. doi:10.1073/pnas.2011402117.
Additional information
Pouw_etal_reply.pdf -
Ravignani, A., & Sonnweber, R. (2015). Measuring teaching through hormones and time series analysis: Towards a comparative framework. Behavioral and Brain Sciences, 38, 40-41. doi:10.1017/S0140525X14000806.
Abstract
In response to: How to learn about teaching: An evolutionary framework for the study of teaching behavior in humans and other animals
Arguments about the nature of teaching have depended principally on naturalistic observation and some experimental work. Additional measurement tools and physiological variations and manipulations can provide insights into the intrinsic structure and state of the participants better than verbal descriptions alone: namely, time-series analysis and examination of the role of hormones and neuromodulators on the behaviors of teacher and pupil. -
Ravignani, A., Westphal-Fitch, G., Aust, U., Schlumpp, M. M., & Fitch, W. T. (2015). More than one way to see it: Individual heuristics in avian visual computation. Cognition, 143, 13-24. doi:10.1016/j.cognition.2015.05.021.
Abstract
Comparative pattern learning experiments investigate how different species find regularities in sensory input, providing insights into cognitive processing in humans and other animals. Past research has focused either on one species’ ability to process pattern classes or different species’ performance in recognizing the same pattern, with little attention to individual and species-specific heuristics and decision strategies. We trained and tested two bird species, pigeons (Columba livia) and kea (Nestor notabilis, a parrot species), on visual patterns using touch-screen technology. Patterns were composed of several abstract elements and had varying degrees of structural complexity. We developed a model selection paradigm, based on regular expressions, that allowed us to reconstruct the specific decision strategies and cognitive heuristics adopted by a given individual in our task. Individual birds showed considerable differences in the number, type and heterogeneity of heuristic strategies adopted. Birds’ choices also exhibited consistent species-level differences. Kea adopted effective heuristic strategies, based on matching learned bigrams to stimulus edges. Individual pigeons, in contrast, adopted an idiosyncratic mix of strategies that included local transition probabilities and global string similarity. Although performance was above chance and quite high for kea, no individual of either species provided clear evidence of learning exactly the rule used to generate the training stimuli. Our results show that similar behavioral outcomes can be achieved using dramatically different strategies and highlight the dangers of combining multiple individuals in a group analysis. These findings, and our general approach, have implications for the design of future pattern learning experiments, and the interpretation of comparative cognition research more generally.
Additional information
Supplementary data -
Ravignani, A. (2015). Evolving perceptual biases for antisynchrony: A form of temporal coordination beyond synchrony. Frontiers in Neuroscience, 9: 339. doi:10.3389/fnins.2015.00339.
-
Ravignani, A., Barbieri, C., Flaherty, M., Jadoul, Y., Lattenkamp, E. Z., Little, H., Martins, M., Mudd, K., & Verhoef, T. (
Eds. ). (2020). The Evolution of Language: Proceedings of the 13th International Conference (Evolang13). Nijmegen: The Evolution of Language Conferences. doi:10.17617/2.3190925.
Additional information
Link to pdf on EvoLang Website -
Raviv, L. (2020). Language and society: How social pressures shape grammatical structure. PhD Thesis, Radboud University Nijmegen, Nijmegen.
Additional information
full text via Radboud Repository -
Raviv, L., Meyer, A. S., & Lev-Ari, S. (2020). Network structure and the cultural evolution of linguistic structure: A group communication experiment. In A. Ravignani, C. Barbieri, M. Flaherty, Y. Jadoul, E. Lattenkamp, H. Little, M. Martins, K. Mudd, & T. Verhoef (
Eds. ), The Evolution of Language: Proceedings of the 13th International Conference (Evolang13) (pp. 359-361). Nijmegen: The Evolution of Language Conferences. -
Raviv, L., Meyer, A. S., & Lev-Ari, S. (2020). The role of social network structure in the emergence of linguistic structure. Cognitive Science, 44(8): e12876. doi:10.1111/cogs.12876.
Abstract
Social network structure has been argued to shape the structure of languages, as well as affect the spread of innovations and the formation of conventions in the community. Specifically, theoretical and computational models of language change predict that sparsely connected communities develop more systematic languages, while tightly knit communities can maintain high levels of linguistic complexity and variability. However, the role of social network structure in the cultural evolution of languages has never been tested experimentally. Here, we present results from a behavioral group communication study, in which we examined the formation of new languages created in the lab by micro‐societies that varied in their network structure. We contrasted three types of social networks: fully connected, small‐world, and scale‐free. We examined the artificial languages created by these different networks with respect to their linguistic structure, communicative success, stability, and convergence. Results did not reveal any effect of network structure for any measure, with all languages becoming similarly more systematic, more accurate, more stable, and more shared over time. At the same time, small‐world networks showed the greatest variation in their convergence, stabilization, and emerging structure patterns, indicating that network structure can influence the community's susceptibility to random linguistic changes (i.e., drift). -
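The three network topologies contrasted in this study are standard objects in network science; a minimal sketch of how such micro-society structures can be generated with networkx follows (sizes and parameters here are hypothetical, not the experiment's):

```python
# Sketch: the three social-network topologies named in the abstract,
# generated with standard networkx builders.
import networkx as nx

n = 8  # hypothetical micro-society size
networks = {
    "fully connected": nx.complete_graph(n),
    "small-world": nx.watts_strogatz_graph(n, k=4, p=0.2, seed=0),
    "scale-free": nx.barabasi_albert_graph(n, m=2, seed=0),
}

# Edge counts and densities differ across topologies even at equal size.
for name, g in networks.items():
    print(name, g.number_of_edges(), round(nx.density(g), 2))
```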
de Reus, K., Carlson, D., Jadoul, Y., Lowry, A., Gross, S., Garcia, M., Salazar-Casals, A., Rubio-García, A., Haas, C. E., De Boer, B., & Ravignani, A. (2020). Relationships between vocal ontogeny and vocal tract anatomy in harbour seals (Phoca vitulina). In A. Ravignani, C. Barbieri, M. Flaherty, Y. Jadoul, E. Lattenkamp, H. Little, M. Martins, K. Mudd, & T. Verhoef (
Eds. ), The Evolution of Language: Proceedings of the 13th International Conference (Evolang13) (pp. 63-66). Nijmegen: The Evolution of Language Conferences. -
Ripperda, J., Drijvers, L., & Holler, J. (2020). Speeding up the detection of non-iconic and iconic gestures (SPUDNIG): A toolkit for the automatic detection of hand movements and gestures in video data. Behavior Research Methods, 52(4), 1783-1794. doi:10.3758/s13428-020-01350-2.
Abstract
In human face-to-face communication, speech is frequently accompanied by visual signals, especially communicative hand gestures. Analyzing these visual signals requires detailed manual annotation of video data, which is often a labor-intensive and time-consuming process. To facilitate this process, we here present SPUDNIG (SPeeding Up the Detection of Non-iconic and Iconic Gestures), a tool to automatize the detection and annotation of hand movements in video data. We provide a detailed description of how SPUDNIG detects hand movement initiation and termination, as well as open-source code and a short tutorial on an easy-to-use graphical user interface (GUI) of our tool. We then provide a proof-of-principle and validation of our method by comparing SPUDNIG’s output to manual annotations of gestures by a human coder. While the tool does not entirely eliminate the need for a human coder (e.g., for detecting false positives), our results demonstrate that SPUDNIG can detect both iconic and non-iconic gestures with very high accuracy, and could successfully detect all iconic gestures in our validation dataset. Importantly, SPUDNIG’s output can directly be imported into commonly used annotation tools such as ELAN and ANVIL. We therefore believe that SPUDNIG will be highly relevant for researchers studying multimodal communication because its annotations significantly accelerate the analysis of large video corpora.
Additional information
data and materials -
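The core step that SPUDNIG automates, marking movement initiation and termination, can be illustrated with a simple speed threshold over hand-keypoint coordinates. This is a hypothetical sketch of that general logic, not SPUDNIG's actual algorithm (its open-source code is linked above):

```python
# Sketch: onset/offset detection from a hypothetical wrist-keypoint track
# (e.g., OpenPose-style x,y coordinates per frame) via a speed threshold.
import numpy as np

rng = np.random.default_rng(2)
xy = np.cumsum(rng.normal(0, 1, (200, 2)), axis=0)  # hypothetical wrist trajectory

speed = np.linalg.norm(np.diff(xy, axis=0), axis=1)
moving = speed > 1.5 * np.median(speed)             # crude movement criterion

# Initiation = rest->moving transition; termination = moving->rest.
transitions = np.diff(moving.astype(int))
onsets = np.where(transitions == 1)[0] + 1
offsets = np.where(transitions == -1)[0] + 1
print(list(zip(onsets, offsets))[:5])
```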
Rivero, O., Selten, M. M., Sich, S., Popp, S., Bacmeister, L., Amendola, E., Negwer, M., Schubert, D., Proft, F., Kiser, D., Schmitt, A. G., Gross, C., Kolk, S. M., Strekalova, T., van den Hove, D., Resink, T. J., Nadif Kasri, N., & Lesch, K. P. (2015). Cadherin-13, a risk gene for ADHD and comorbid disorders, impacts GABAergic function in hippocampus and cognition. Translational Psychiatry, 5: e655. doi:10.1038/tp.2015.152.
Abstract
Cadherin-13 (CDH13), a unique glycosylphosphatidylinositol-anchored member of the cadherin family of cell adhesion molecules, has been identified as a risk gene for attention-deficit/hyperactivity disorder (ADHD) and various comorbid neurodevelopmental and psychiatric conditions, including depression, substance abuse, autism spectrum disorder and violent behavior, while the mechanism whereby CDH13 dysfunction influences pathogenesis of neuropsychiatric disorders remains elusive. Here we explored the potential role of CDH13 in the inhibitory modulation of brain activity by investigating synaptic function of GABAergic interneurons. Cellular and subcellular distribution of CDH13 was analyzed in the murine hippocampus and a mouse model with a targeted inactivation of Cdh13 was generated to evaluate how CDH13 modulates synaptic activity of hippocampal interneurons and behavioral domains related to psychopathologic (endo)phenotypes. We show that CDH13 expression in the cornu ammonis (CA) region of the hippocampus is confined to distinct classes of interneurons. Specifically, CDH13 is expressed by numerous parvalbumin and somatostatin-expressing interneurons located in the stratum oriens, where it localizes to both the soma and the presynaptic compartment. Cdh13−/− mice show an increase in basal inhibitory, but not excitatory, synaptic transmission in CA1 pyramidal neurons. Associated with these alterations in hippocampal function, Cdh13−/− mice display deficits in learning and memory. Taken together, our results indicate that CDH13 is a negative regulator of inhibitory synapses in the hippocampus, and provide insights into how CDH13 dysfunction may contribute to the excitatory/inhibitory imbalance observed in neurodevelopmental disorders, such as ADHD and autism. -
Roberts, S. G. (2015). Commentary: Large-scale psychological differences within China explained by rice vs. wheat agriculture. Frontiers in Psychology, 6: 950. doi:10.3389/fpsyg.2015.00950.
Abstract
Talhelm et al. (2014) test the hypothesis that activities which require more intensive collaboration foster more collectivist cultures. They demonstrate that a measure of collectivism correlates with the proportion of cultivated land devoted to rice paddies, which require more work to grow and maintain than other grains. The data come from individual measures of provinces in China. While the data is analyzed carefully, one aspect that is not directly controlled for is the historical relations between these provinces. Spurious correlations can occur between cultural traits that are inherited from ancestor cultures or borrowed through contact, what is commonly known as Galton's problem (Roberts and Winters, 2013). Effectively, Talhelm et al. treat the measures of each province as independent samples, while in reality both farming practices (e.g., Renfrew, 1997; Diamond and Bellwood, 2003; Lee and Hasegawa, 2011) and cultural values (e.g., Currie et al., 2010; Bulbulia et al., 2013) can be inherited or borrowed. This means that the data may be composed of non-independent points, inflating the apparent correlation between rice growing and collectivism. The correlation between farming practices and collectivism may be robust, but this cannot be known without an empirical control for the relatedness of the samples. Talhelm et al. do discuss this problem in the supplementary materials of their paper. They acknowledge that a phylogenetic analysis could be used to control for relatedness, but that at the time of publication there were no genetic or linguistic trees of descent which are detailed enough to suit this purpose. In this commentary I would like to make two points. First, since the original publication, researchers have created new linguistic trees that can provide the needed resolution. For example, the Glottolog phylogeny (Hammarström et al., 2015) has at least three levels of classification for the relevant varieties, though this does not have branch lengths (see also “reference” trees produced in List et al., 2014). Another recently published phylogeny uses lexical data to construct a phylogenetic tree for many language varieties within China (List et al., 2014). In this commentary I use these lexical data to estimate cultural contact between different provinces, and test whether these measures explain variation in rice farming practices. However, the second point is that Talhelm et al. focus on descent (vertical transmission), while it may be relevant to control for both descent and borrowing (horizontal transmission). In this case, all that is needed is some measure of cultural contact between groups, not necessarily a unified tree of descent. I use a second source of linguistic data to calculate simple distances between languages based directly on the lexicon. These distances reflect borrowing as well as descent. -
Roberts, S. G., Winters, J., & Chen, K. (2015). Future tense and economic decisions: Controlling for cultural evolution. PLoS One, 10(7): e0132145. doi:10.1371/journal.pone.0132145.
Abstract
A previous study by Chen demonstrates a correlation between languages that grammatically mark future events and their speakers' propensity to save, even after controlling for numerous economic and demographic factors. The implication is that languages which grammatically distinguish the present and the future may bias their speakers to distinguish them psychologically, leading to less future-oriented decision making. However, Chen's original analysis assumed languages are independent. This neglects the fact that languages are related, causing correlations to appear stronger than is warranted (Galton's problem). In this paper, we test the robustness of Chen's correlations to corrections for the geographic and historical relatedness of languages. While the question seems simple, the answer is complex. In general, the statistical correlation between the two variables is weaker when controlling for relatedness. When applying the strictest tests for relatedness, and when data is not aggregated across individuals, the correlation is not significant. However, the correlation did remain reasonably robust under a number of tests. We argue that any claims of synchronic patterns between cultural variables should be tested for spurious correlations, with the kinds of approaches used in this paper. However, experiments or case-studies would be more fruitful avenues for future research on this specific topic, rather than further large-scale cross-cultural correlational studies. -
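The relatedness controls discussed in these two Roberts et al. papers are commonly built from permutation tests over pairwise distance matrices. As a hedged illustration, the sketch below runs a simple Mantel-style permutation test on simulated trait data; a partial Mantel test would add a third matrix of linguistic or phylogenetic distances to control for Galton's problem:

```python
# Sketch: Mantel-style permutation test on two pairwise distance matrices,
# with simulated data standing in for cultural traits.
import numpy as np

def mantel_p(a: np.ndarray, b: np.ndarray, n_perm: int = 999, seed: int = 0) -> float:
    """Permutation p-value for the correlation of two square distance matrices."""
    rng = np.random.default_rng(seed)
    iu = np.triu_indices_from(a, k=1)
    obs = np.corrcoef(a[iu], b[iu])[0, 1]
    hits = 0
    for _ in range(n_perm):
        p = rng.permutation(len(a))
        if abs(np.corrcoef(a[p][:, p][iu], b[iu])[0, 1]) >= abs(obs):
            hits += 1
    return (hits + 1) / (n_perm + 1)

rng = np.random.default_rng(4)
k = 20                                   # hypothetical number of languages
x = rng.normal(size=k)                   # e.g., future-tense marking score
y = 0.5 * x + 0.8 * rng.normal(size=k)   # e.g., savings propensity
dx = np.abs(np.subtract.outer(x, x))     # pairwise trait distances
dy = np.abs(np.subtract.outer(y, y))
print(mantel_p(dx, dy))
```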
Roberts, S. G., Everett, C., & Blasi, D. (2015). Exploring potential climate effects on the evolution of human sound systems. In H. Little (
Ed. ), Proceedings of the 18th International Congress of Phonetic Sciences [ICPhS 2015] Satellite Event: The Evolution of Phonetic Capabilities: Causes, constraints, consequences (pp. 14-19). Glasgow: ICPHS.
Abstract
We suggest that it is now possible to conduct research on a topic which might be called evolutionary geophonetics. The main question is how the climate influences the evolution of language. This involves biological adaptations to the climate that may affect biases in production and perception; cultural evolutionary adaptations of the sounds of a language to climatic conditions; and influences of the climate on language diversity and contact. We discuss these ideas with special reference to a recent hypothesis that lexical tone is not adaptive in dry climates (Everett, Blasi & Roberts, 2015). -
Roberts, S. G., Torreira, F., & Levinson, S. C. (2015). The effects of processing and sequence organisation on the timing of turn taking: A corpus study. Frontiers in Psychology, 6: 509. doi:10.3389/fpsyg.2015.00509.
Abstract
The timing of turn taking in conversation is extremely rapid given the cognitive demands on speakers to comprehend, plan and execute turns in real time. Findings from psycholinguistics predict that the timing of turn taking is influenced by demands on processing, such as word frequency or syntactic complexity. An alternative view comes from the field of conversation analysis, which predicts that the rules of turn-taking and sequence organization may dictate the variation in gap durations (e.g. the functional role of each turn in communication). In this paper, we estimate the role of these two different kinds of factors in determining the speed of turn-taking in conversation. We use the Switchboard corpus of English telephone conversation, already richly annotated for syntactic structure, speech act sequences, and segmental alignment. To this we add further information including Floor Transfer Offset (the amount of time between the end of one turn and the beginning of the next), word frequency, concreteness, and surprisal values. We then apply a novel statistical framework ('random forests') to show that these two dimensions are interwoven together with indexical properties of the speakers as explanatory factors determining the speed of response. We conclude that an explanation of the timing of turn taking will require insights from both processing and sequence organisation. -
http://journal.frontiersin.org/article/10.3389/fpsyg.2015.00509/abstract -
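The 'random forests' framework named in this abstract is available in standard libraries; the sketch below shows the general shape of such an analysis on simulated data, with processing predictors (frequency, surprisal) and a sequence-organisation predictor of Floor Transfer Offset. Variable names and effect sizes are hypothetical, not the study's:

```python
# Sketch: random-forest regression of simulated floor transfer offsets (ms)
# on processing and sequence-organisation predictors.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(3)
n = 1000
word_freq = rng.normal(0, 1, n)    # standardised log word frequency
surprisal = rng.normal(0, 1, n)    # standardised surprisal
is_answer = rng.integers(0, 2, n)  # crude stand-in for a turn's sequential role

fto = 200 + 80 * surprisal - 40 * word_freq - 60 * is_answer + rng.normal(0, 50, n)

X = np.column_stack([word_freq, surprisal, is_answer])
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, fto)

# Relative importance of the two kinds of explanatory factors.
print(dict(zip(["word_freq", "surprisal", "is_answer"], model.feature_importances_)))
```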
Rodd, J., Bosker, H. R., Ernestus, M., Alday, P. M., Meyer, A. S., & Ten Bosch, L. (2020). Control of speaking rate is achieved by switching between qualitatively distinct cognitive ‘gaits’: Evidence from simulation. Psychological Review, 127(2), 281-304. doi:10.1037/rev0000172.
Abstract
That speakers can vary their speaking rate is evident, but how they accomplish this has hardly been studied. Consider this analogy: When walking, speed can be continuously increased, within limits, but to speed up further, humans must run. Are there multiple qualitatively distinct speech “gaits” that resemble walking and running? Or is control achieved by continuous modulation of a single gait? This study investigates these possibilities through simulations of a new connectionist computational model of the cognitive process of speech production, EPONA, that borrows from Dell, Burger, and Svec’s (1997) model. The model has parameters that can be adjusted to fit the temporal characteristics of speech at different speaking rates. We trained the model on a corpus of disyllabic Dutch words produced at different speaking rates. During training, different clusters of parameter values (regimes) were identified for different speaking rates. In a 1-gait system, the regimes used to achieve fast and slow speech are qualitatively similar, but quantitatively different. In a multiple gait system, there is no linear relationship between the parameter settings associated with each gait, resulting in an abrupt shift in parameter values to move from speaking slowly to speaking fast. After training, the model achieved good fits in all three speaking rates. The parameter settings associated with each speaking rate were not linearly related, suggesting the presence of cognitive gaits. Thus, we provide the first computationally explicit account of the ability to modulate the speech production system to achieve different speaking styles.
Additional information
Supplemental material -
Rodd, J. (2020). How speaking fast is like running: Modelling control of speaking rate. PhD Thesis, Radboud University Nijmegen, Nijmegen.
Additional information
full text via Radboud Repository -
Rodenas-Cuadrado, P., Chen, X. S., Wiegrebe, L., Firzlaff, U., & Vernes, S. C. (2015). A novel approach identifies the first transcriptome networks in bats: A new genetic model for vocal communication. BMC Genomics, 16: 836. doi:10.1186/s12864-015-2068-1.
Abstract
Background: Bats are able to employ an astonishingly complex vocal repertoire for navigating their environment and conveying social information. A handful of species also show evidence for vocal learning, an extremely rare ability shared only with humans and few other animals. However, despite their potential for the study of vocal communication, bats remain severely understudied at a molecular level. To address this fundamental gap we performed the first transcriptome profiling and genetic interrogation of molecular networks in the brain of a highly vocal bat species, Phyllostomus discolor. Results: Gene network analysis typically needs large sample sizes for correct clustering; this can be prohibitive where samples are limited, such as in this study. To overcome this, we developed a novel bioinformatics methodology for identifying robust co-expression gene networks using few samples (N = 6). Using this approach, we identified tissue-specific functional gene networks from the bat PAG, a brain region fundamental for mammalian vocalisation. The most highly connected network identified represented a cluster of genes involved in glutamatergic synaptic transmission. Glutamatergic receptors play a significant role in vocalisation from the PAG, suggesting that this gene network may be mechanistically important for vocal-motor control in mammals. Conclusion: We have developed an innovative approach to cluster co-expressing gene networks and show that it is highly effective in detecting robust functional gene networks with limited sample sizes. Moreover, this work represents the first gene network analysis performed in a bat brain and establishes bats as a novel, tractable model system for understanding the genetics of vocal mammalian communication.
Additional information
Raw reads from the RNA sequencing in NCBI bioproject repository -
Rojas-Berscia, L. M. (2015). Mayna, the lost Kawapanan language. LIAMES, 15, 393-407. Retrieved from http://revistas.iel.unicamp.br/index.php/liames/article/view/4549.
Abstract
The origins of the Mayna language, formerly spoken in northwest Peruvian Amazonia, remain a mystery for most scholars. Several discussions on it took place at the end of the 19th century and the beginning of the 20th; however, none arrived at a consensus. Apart from an article written by Taylor & Descola (1981), suggesting a relationship with the Jivaroan language family, little to nothing has been said about it for the last half of the 20th century and the last decades. In the present article, a summary of the principal accounts on the language and its people between the 19th and the 20th century will be given, followed by a corpus analysis in which the materials available in Mayna and Kawapanan, mainly prayers collected by Hervás (1787) and Teza (1868), will be analysed and compared for the first time in light of recent analyses in the new-born field called Kawapanan linguistics (Barraza de García 2005a,b; Valenzuela-Bismarck 2011a,b; Valenzuela 2013; Rojas-Berscia 2013, 2014; Madalengoitia-Barúa 2013; Farfán-Reto 2012), in order to test its affiliation to the Kawapanan language family, as claimed by Beuchat & Rivet (1909), and account for its place in the dialectology of this language family. -
Rojas-Berscia, L. M., & Ghavami Dicker, S. (2015). Teonimia en el Alto Amazonas, el caso de Kanpunama. Escritura y Pensamiento, 18(36), 117-146.
-
Rojas-Berscia, L. M., Napurí, A., & Wang, L. (2020). Shawi (Chayahuita). Journal of the International Phonetic Association, 50(3), 417-430. doi:10.1017/S0025100318000415.
Abstract
Shawi is the language of the indigenous Shawi/Chayahuita people in Northwestern Amazonia, Peru. It belongs to the Kawapanan language family, together with its moribund sister language, Shiwilu. It is spoken by about 21,000 speakers (see Rojas-Berscia 2013) in the provinces of Alto Amazonas and Datem del Marañón in the region of Loreto and in the northern part of the region of San Martín, being one of the most vital languages in the country (see Figure 1). Although Shawi groups in the Upper Amazon were contacted by Jesuit missionaries during colonial times, the maintenance of their customs and language is striking. To date, most Shawi children are monolingual and have their first contact with Spanish at school. Yet, due to globalisation and the construction of highways by the Peruvian government, many Shawi villages are progressively westernising. This may result in the imminent loss of their indigenous culture and language. -
Supplementary material -
Rommers, J., Meyer, A. S., & Huettig, F. (2015). Verbal and nonverbal predictors of language-mediated anticipatory eye movements. Attention, Perception & Psychophysics, 77(3), 720-730. doi:10.3758/s13414-015-0873-x.
Abstract
During language comprehension, listeners often anticipate upcoming information. This can draw listeners’ overt attention to visually presented objects before the objects are referred to. We investigated to what extent the anticipatory mechanisms involved in such language-mediated attention rely on specific verbal factors and on processes shared with other domains of cognition. Participants listened to sentences ending in a highly predictable word (e.g., “In 1969 Neil Armstrong was the first man to set foot on the moon”) while viewing displays containing three unrelated distractor objects and a critical object, which was either the target object (e.g., a moon), or an object with a similar shape (e.g., a tomato), or an unrelated control object (e.g., rice). Language-mediated anticipatory eye movements to targets and shape competitors were observed. Importantly, looks to the shape competitor were systematically related to individual differences in anticipatory attention, as indexed by a spatial cueing task: Participants whose responses were most strongly facilitated by predictive arrow cues also showed the strongest effects of predictive language input on their eye movements. By contrast, looks to the target were related to individual differences in vocabulary size and verbal fluency. The results suggest that verbal and nonverbal factors contribute to different types of language-mediated eye movement. The findings are consistent with multiple-mechanism accounts of predictive language processing. -
Romøren, A. S. H., & Chen, A. (2015). Quiet is the new loud: Pausing and focus in child and adult Dutch. Language and Speech, 58, 8-23. doi:10.1177/0023830914563589.
Abstract
In a number of languages, prosody is used to highlight new information (or focus). In Dutch, focus is marked by accentuation, whereby focal constituents are accented and post-focal constituents are de-accented. Even if pausing is not traditionally seen as a cue to focus in Dutch, several previous studies have pointed to a possible relationship between pausing and information structure. Considering that Dutch-speaking 4 to 5 year olds are not yet completely proficient in using accentuation for focus and that children generally pause more than adults, we asked whether pausing might be an available parameter for children to manipulate for focus. Sentences with varying focus structure were elicited from 10 Dutch-speaking 4 to 5 year olds and 9 Dutch-speaking adults by means of a picture-matching game. Comparing pause durations before focal and non-focal targets showed pre-target pauses to be significantly longer when the targets were focal than when they were not. Notably, the use of pausing was more robust in the children than in the adults, suggesting that children exploit pausing to mark focus more generally than adults do, at a stage where their mastery of the canonical cues to focus is still developing. -
Roos, N. M., & Piai, V. (2020). Across‐session consistency of context‐driven language processing: A magnetoencephalography study. European Journal of Neuroscience, 52, 3457-3469. doi:10.1111/ejn.14785.
Abstract
Changes in brain organization following damage are commonly observed, but they remain poorly understood. These changes are often studied with imaging techniques that overlook the temporal granularity at which language processes occur. By contrast, electrophysiological measures provide excellent temporal resolution. To test the suitability of magnetoencephalography (MEG) to track language-related neuroplasticity, the present study aimed at establishing the spectro-temporo-spatial across-session consistency of context-driven picture naming in healthy individuals, using MEG in two test–retest sessions. Spectro-temporo-spatial test–retest consistency in a healthy population is a prerequisite for studying neuronal changes in clinical populations over time. For this purpose, 15 healthy speakers were tested with MEG while performing a context-driven picture-naming task at two time points. Participants read a sentence missing the final word and named a picture completing the sentence. Sentences were constrained or unconstrained toward the picture, such that participants could either retrieve the picture name through sentence context (constrained sentences), or could only name it after the picture appeared (unconstrained sentences). The context effect (constrained versus unconstrained) in picture-naming times had a strong effect size and high across-session consistency. The context MEG results revealed alpha–beta power decreases (10–20 Hz) in the left temporal and inferior parietal lobule that were consistent across both sessions. As robust spectro-temporo-spatial findings in a healthy population are required for working toward longitudinal patient studies, we conclude that using context-driven language production and MEG is a suitable way to examine language-related neuroplasticity after brain damage. -
Rossi, G. (2015). Other-initiated repair in Italian. Open Linguistics, 1(1), 256-282. doi:10.1515/opli-2015-0002.
Abstract
This article describes the interactional patterns and linguistic structures associated with other-initiated repair, as observed in a corpus of video recorded conversation in the Italian language (Romance). The article reports findings specific to the Italian language from the comparative project that is the topic of this special issue. While giving an overview of all the major practices for other-initiation of repair found in this language, special attention is given to (i) the functional distinctions between different open strategies (interjection, question words, formulaic), and (ii) the role of intonation in discriminating alternative restricted strategies, with a focus on different contour types used to produce repetitions.
Additional information
http://www.degruyter.com/view/j/opli.2014.1.issue-1/opli-2015-0002/suppl/opli-2… -
Rossi, G. (2020). Other-repetition in conversation across languages: Bringing prosody into pragmatic typology. Language in Society, 49(4), 495-520. doi:10.1017/S0047404520000251.
Abstract
In this article, I introduce the aims and scope of a project examining other-repetition in natural conversation. This introduction provides the conceptual and methodological background for the five language-specific studies contained in this special issue, focussing on other-repetition in English, Finnish, French, Italian, and Swedish. Other-repetition is a recurrent conversational phenomenon in which a speaker repeats all or part of what another speaker has just said, typically in the next turn. Our project focusses particularly on other-repetitions that problematise what is being repeated and typically solicit a response. Previous research has shown that such repetitions can accomplish a range of conversational actions. But how do speakers of different languages distinguish these actions? In addressing this question, we put at centre stage the resources of prosody—the nonlexical acoustic-auditory features of speech—and bring its systematic analysis into the growing field of pragmatic typology—the comparative study of language use and conversational structure. -
Rossi, G. (2015). Responding to pre-requests: The organization of hai x ‘do you have x’ sequences in Italian. Journal of Pragmatics, 82, 5-22. doi:10.1016/j.pragma.2015.03.008.
Abstract
Among the strategies used by people to request others to do things, there is a particular family defined as pre-requests. The typical function of a pre-request is to check whether some precondition obtains for a request to be successfully made. A form like the Italian interrogative hai x ‘do you have x’, for example, is used to ask if an object is available — a requirement for the object to be transferred or manipulated. But what does it mean exactly to make a pre-request? What difference does it make compared to issuing a request proper? In this article, I address these questions by examining the use of hai x ‘do you have x’ interrogatives in a corpus of informal Italian interaction. Drawing on methods from conversation analysis and linguistics, I show that the status of hai x as a pre-request is reflected in particular properties in the domains of preference and sequence organisation, specifically in the design of blocking responses to the pre-request, and in the use of go-ahead responses, which lead to the expansion of the request sequence. This study contributes to current research on requesting as well as on sequence organisation by demonstrating the response affordances of pre-requests and by furthering our understanding of the processes of sequence expansion. -
Rossi, G. (2020). The prosody of other-repetition in Italian: A system of tunes. Language in Society, 49(4), 619-652. doi:10.1017/S0047404520000627.
Abstract
As part of the project reported on in this special issue, the present study provides an overview of the types of action accomplished by other-repetition in Italian, with particular reference to the variety of the language spoken in the northeastern province of Trento. The analysis surveys actions within the domain of initiating repair, actions that extend beyond initiating repair, and actions that are alternative to initiating repair. Pitch contour emerges as a central design feature of other-repetition in Italian, with six nuclear contours associated with distinct types of action, sequential trajectories, and response patterns. The study also documents the interplay of pitch contour with other prosodic features (pitch span and register) and visible behavior (head nods, eyebrow movements).
Additional information
Sound clips.zip -
Rossi, G. (2015). The request system in Italian interaction. PhD Thesis, Radboud University, Nijmegen.
Abstract
People across the world make requests every day. We constantly rely on others to get by in the small and big practicalities of everyday life, be it getting the salt, moving a sofa, or cooking a meal. It has long been noticed that when we ask others for help we use a wide range of forms drawing on various resources afforded by our language and body. To get another to pass the salt, for example, we may say ‘Pass the salt’, or ask ‘Can you pass me the salt?’, or simply point to the salt. What do different forms of requesting give us? The short answer is that they allow us to manage different social relations. But what kind of relations? While prior research has mostly emphasised the role of long-term asymmetries like people’s social distance and relative power, this thesis puts at centre stage social relations and dimensions emerging in the moment-by-moment flow of everyday interaction. These include how easy or hard the action requested is to anticipate for the requestee, whether the action requested contributes to a joint project or serves an individual one, whether the requestee may be unwilling to do it, and how obvious or equivocal it is that a certain person or another should be involved in the action. The study focuses on requests made in everyday informal interactions among speakers of Italian. It involves over 500 instances of requests sampled from a diverse corpus of video recordings, and draws on methods from conversation analysis, linguistics and multimodal analysis. A qualitative analysis of the data is supported by quantitative measures of the distribution of linguistic and interactional features, and by the use of inferential statistics to test the generalizability of some of the patterns observed. The thesis aims to contribute to our understanding of both language and social interaction by showing that forms of requesting constitute a system, organised by a set of recurrent social-interactional concerns.
Additional information
full text via Radboud Repository -
Rowbotham, S., Lloyd, D. M., Holler, J., & Wearden, A. (2015). Externalizing the private experience of pain: A role for co-speech gestures in pain communication? Health Communication, 30(1), 70-80. doi:10.1080/10410236.2013.836070.
Abstract
Despite the importance of effective pain communication, talking about pain represents a major challenge for patients and clinicians because pain is a private and subjective experience. Focusing primarily on acute pain, this article considers the limitations of current methods of obtaining information about the sensory characteristics of pain and suggests that spontaneously produced “co-speech hand gestures” may constitute an important source of information here. Although this is a relatively new area of research, we present recent empirical evidence that reveals that co-speech gestures contain important information about pain that can both add to and clarify speech. Following this, we discuss how these findings might eventually lead to a greater understanding of the sensory characteristics of pain, and to improvements in treatment and support for pain sufferers. We hope that this article will stimulate further research and discussion of this previously overlooked dimension of pain communication. -
Rowland, C. F., Theakston, A. L., Ambridge, B., & Twomey, K. E. (
Eds. ). (2020). Current Perspectives on Child Language Acquisition: How children use their environment to learn. Amsterdam: John Benjamins. doi:10.1075/tilar.27.
Abstract
In recent years the field has seen an increasing realisation that the full complexity of language acquisition demands theories that (a) explain how children integrate information from multiple sources in the environment, (b) build linguistic representations at a number of different levels, and (c) learn how to combine these representations in order to communicate effectively. These new findings have stimulated new theoretical perspectives that are more centered on explaining learning as a complex dynamic interaction between the child and her environment. This book is the first attempt to bring some of these new perspectives together in one place. It is a collection of essays written by a group of researchers who all take an approach centered on child-environment interaction, and all of whom have been influenced by the work of Elena Lieven, to whom this collection is dedicated. -
Rowland, C. F. (2020). Introduction. In M. E. Poulsen (
Ed. ), The Jerome Bruner Library: From New York to Nijmegen. Nijmegen: Max Planck Institute for Psycholinguistics. -
Rowland, C. F., & Peter, M. (2015). Up to speed? Nursery World Magazine, 15-28 June 2015, 18-20.
-
Rubio-Fernández, P., & Jara-Ettinger, J. (2020). Incrementality and efficiency shape pragmatics across languages. Proceedings of the National Academy of Sciences, 117, 13399-13404. doi:10.1073/pnas.1922067117.
Abstract
To correctly interpret a message, people must attend to the context in which it was produced. Here we investigate how this process, known as pragmatic reasoning, is guided by two universal forces in human communication: incrementality and efficiency, with speakers of all languages interpreting language incrementally and making the most efficient use of the incoming information. Crucially, however, the interplay between these two forces results in speakers of different languages having different pragmatic information available at each point in processing, including inferences about speaker intentions. In particular, the position of adjectives relative to nouns (e.g., “black lamp” vs. “lamp black”) makes visual context information available in reverse orders. In an eye-tracking study comparing four unrelated languages that have been understudied with regard to language processing (Catalan, Hindi, Hungarian, and Wolof), we show that speakers of languages with an adjective–noun order integrate context by first identifying properties (e.g., color, material, or size), whereas speakers of languages with a noun–adjective order integrate context by first identifying kinds (e.g., lamps or chairs). Most notably, this difference allows listeners of adjective–noun descriptions to infer the speaker’s intention when using an adjective (e.g., “the black…” as implying “not the blue one”) and anticipate the target referent, whereas listeners of noun–adjective descriptions are subject to temporary ambiguity when deriving the same interpretation. We conclude that incrementality and efficiency guide pragmatic reasoning across languages, with different word orders having different pragmatic affordances. -
Rubio-Fernández, P., Wearing, C., & Carston, R. (2015). Metaphor and hyperbole: Testing the continuity hypothesis. Metaphor and Symbol, 30(1), 24-40. doi:10.1080/10926488.2015.980699.
Abstract
In standard Relevance Theory, hyperbole and metaphor are categorized together as loose uses of language, on a continuum with approximations, category extensions and other cases of loosening/broadening of meaning. Specifically, it is claimed that there are no interesting differences (in either interpretation or processing) between hyperbolic and metaphorical uses (Sperber & Wilson, 2008). In recent work, we have set out to provide a more fine-grained articulation of the similarities and differences between hyperbolic and metaphorical uses and their relation to literal uses (Carston & Wearing, 2011, 2014). We have defended the view that hyperbolic use involves a shift of magnitude along a dimension which is intrinsic to the encoded meaning of the hyperbole vehicle, while metaphor involves a multi-dimensional qualitative shift away from the encoded meaning of the metaphor vehicle. In this article, we present three experiments designed to test the predictions of this analysis, using a variety of tasks (paraphrase elicitation, self-paced reading and sentence verification). The results of the study support the view that hyperbolic and metaphorical interpretations, despite their commonalities as loose uses of language, are significantly different. -
De Ruiter, L. E. (2015). Information status marking in spontaneous vs. read speech in story-telling tasks – Evidence from intonation analysis using GToBI. Journal of Phonetics, 48, 29-44. doi:10.1016/j.wocn.2014.10.008.
Abstract
Two studies investigated whether speaking mode influences the way German speakers mark the information status of discourse referents in nuclear position. In Study 1, speakers produced narrations spontaneously on the basis of picture stories in which the information status of referents (new, accessible and given) was systematically varied. In Study 2, speakers saw the same pictures, but this time accompanied by text to be read out. Clear differences were found depending on speaking mode: In spontaneous speech, speakers always accented new referents. They did not use different pitch accent types to differentiate between new and accessible referents, nor did they always deaccent given referents. In addition, speakers often made use of low pitch accents in combination with high boundary tones to indicate continuity. In contrast to this, read speech was characterized by low boundary tones, consistent deaccentuation of given referents and the use of H+L* and H+!H* accents for both new and accessible referents. The results are discussed in terms of the function of intonational features in communication. It is argued that reading intonation is not comparable to intonation in spontaneous speech, and that this also has important consequences for our choice of methodology in child language acquisition research. -
Samur, D., Lai, V. T., Hagoort, P., & Willems, R. M. (2015). Emotional context modulates embodied metaphor comprehension. Neuropsychologia, 78, 108-114. doi:10.1016/j.neuropsychologia.2015.10.003.
Abstract
Emotions are often expressed metaphorically, and both emotion and metaphor are ways through which abstract meaning can be grounded in language. Here we investigate specifically whether motion-related verbs, when used metaphorically, are differentially sensitive to a preceding emotional context, as compared to when they are used in a literal manner. Participants read stories that ended with ambiguous action/motion sentences (e.g., he got it), in which the action/motion could be interpreted metaphorically (he understood the idea) or literally (he caught the ball), depending on the preceding story. Orthogonal to the metaphorical manipulation, the stories were high or low in emotional content. The results showed that emotional context modulated the neural response in visual motion areas to the metaphorical interpretation of the sentences, but not to their literal interpretations. In addition, literal interpretations of the target sentences led to stronger activation in the visual motion areas than metaphorical readings of the sentences. We interpret our results as suggesting that emotional context specifically modulates mental simulation during metaphor processing. -
San Roque, L., & Bergqvist, H. (Eds.). (2015). Epistemic marking in typological perspective [Special Issue]. STUF - Language Typology and Universals, 68(2). -
San Roque, L. (2015). Using you to get to me: Addressee perspective and speaker stance in Duna evidential marking. STUF - Language Typology and Universals, 68(2), 187-210. doi:10.1515/stuf-2015-0010.
Abstract
Languages have complex and varied means for representing points of view, including constructions that can express multiple perspectives on the same event. This paper presents data on two evidential constructions in the language Duna (Papua New Guinea) that imply features of both speaker and addressee knowledge simultaneously. I discuss how talking about an addressee’s knowledge can occur in contexts of both coercion and co-operation, and, while apparently empathetic, can provide a covert way to both manipulate the addressee’s attention and express speaker stance. I speculate that ultimately, however, these multiple perspective constructions may play a pro-social role in building or repairing the interlocutors’ common ground. -
San Roque, L., Kendrick, K. H., Norcliffe, E., Brown, P., Defina, R., Dingemanse, M., Dirksmeyer, T., Enfield, N. J., Floyd, S., Hammond, J., Rossi, G., Tufvesson, S., Van Putten, S., & Majid, A. (2015). Vision verbs dominate in conversation across cultures, but the ranking of non-visual verbs varies. Cognitive Linguistics, 26, 31-60. doi:10.1515/cog-2014-0089.
Abstract
To what extent does perceptual language reflect universals of experience and cognition, and to what extent is it shaped by particular cultural preoccupations? This paper investigates the universality~relativity of perceptual language by examining the use of basic perception terms in spontaneous conversation across 13 diverse languages and cultures. We analyze the frequency of perception words to test two universalist hypotheses: that sight is always a dominant sense, and that the relative ranking of the senses will be the same across different cultures. We find that references to sight outstrip references to the other senses, suggesting a pan-human preoccupation with visual phenomena. However, the relative frequency of the other senses was found to vary cross-linguistically. Cultural relativity was conspicuous, as exemplified by the high ranking of smell in Semai, an Aslian language. Together, these results suggest a place for both universal constraints and cultural shaping of the language of perception. -
Schaefer, M., Haun, D. B., & Tomasello, M. (2015). Fair is not fair everywhere. Psychological Science, 26(8), 1252-1260. doi:10.1177/0956797615586188.
Abstract
Distributing the spoils of a joint enterprise on the basis of work contribution or relative productivity seems natural to the modern Western mind. But such notions of merit-based distributive justice may be culturally constructed norms that vary with the social and economic structure of a group. In the present research, we showed that children from three different cultures have very different ideas about distributive justice. Whereas children from a modern Western society distributed the spoils of a joint enterprise precisely in proportion to productivity, children from a gerontocratic pastoralist society in Africa did not take merit into account at all. Children from a partially hunter-gatherer, egalitarian African culture distributed the spoils more equally than did the other two cultures, with merit playing only a limited role. This pattern of results suggests that some basic notions of distributive justice are not universal intuitions of the human species but rather culturally constructed behavioral norms.
Additional information
http://pss.sagepub.com/content/by/supplemental-data -
Scharenborg, O., Ondel, L., Palaskar, S., Arthur, P., Ciannella, F., Du, M., Larsen, E., Merkx, D., Riad, R., Wang, L., Dupoux, E., Besacier, L., Black, A., Hasegawa-Johnson, M., Metze, F., Neubig, G., Stüker, S., Godard, P., & Müller, M. (2020). Speech technology for unwritten languages. IEEE/ACM Transactions on Audio, Speech and Language Processing, 28, 964-975. doi:10.1109/TASLP.2020.2973896.
Abstract
Speech technology plays an important role in our everyday life. Among other applications, speech is used for human-computer interaction, for instance for information retrieval and on-line shopping. For an unwritten language, however, speech technology is difficult to develop, because it cannot be built from the standard combination of pre-trained speech-to-text and text-to-speech subsystems. The research presented in this article takes the first steps towards speech technology for unwritten languages. Specifically, the aim of this work was 1) to learn speech-to-meaning representations without using text as an intermediate representation, and 2) to test the sufficiency of the learned representations to regenerate speech or translated text, or to retrieve images that depict the meaning of an utterance in an unwritten language. The results suggest that building systems that go directly from speech to meaning and from meaning to speech, bypassing the need for text, is possible. -
Scharenborg, O., Weber, A., & Janse, E. (2015). Age and hearing loss and the use of acoustic cues in fricative categorization. The Journal of the Acoustical Society of America, 138(3), 1408-1417. doi:10.1121/1.4927728.
Abstract
This study examined the use of fricative noise information and coarticulatory cues in the categorization of word-final fricatives [s] and [f] by younger and older Dutch listeners. In particular, the effect of information loss in the higher frequencies on the use of these two cues for fricative categorization was investigated. If information in the higher frequencies is less strongly available, fricative identification may be impaired, or listeners may learn to focus more on coarticulatory information. The present study investigates this second possibility. Phonetic categorization results showed that both younger and older Dutch listeners use the primary cue, fricative noise, and the secondary cue, coarticulatory information, to distinguish word-final [f] from [s]. Individual hearing sensitivity in the older listeners modified the use of fricative noise information, but did not modify the use of coarticulatory information. When high-frequency information was filtered out from the speech signal, fricative noise could no longer be used by the younger and older adults. Crucially, they also did not learn to rely more on coarticulatory information as a compensatory cue for fricative categorization. This suggests that listeners do not readily show compensatory use of this secondary cue to fricative identity when fricative categorization becomes difficult. -
Scharenborg, O., Weber, A., & Janse, E. (2015). The role of attentional abilities in lexically guided perceptual learning by older listeners. Attention, Perception & Psychophysics, 77(2), 493-507. doi:10.3758/s13414-014-0792-2.
Abstract
This study investigates two variables that may modify lexically-guided perceptual learning: individual hearing sensitivity and attentional abilities. Older Dutch listeners (aged 60+, varying from good hearing to mild-to-moderate high-frequency hearing loss) were tested on a lexically-guided perceptual learning task using the contrast [f]-[s]. This contrast mainly differentiates between the two consonants in the higher frequencies, and thus is supposedly challenging for listeners with hearing loss. The analyses showed that older listeners generally engage in lexically-guided perceptual learning. Hearing loss and selective attention did not modify perceptual learning in our participant sample, while attention-switching control did: listeners with poorer attention-switching control showed a stronger perceptual learning effect. We postulate that listeners with better attention-switching control may, in general, rely more strongly on bottom-up acoustic information compared to listeners with poorer attention-switching control, making them in turn less susceptible to lexically-guided perceptual learning effects. Our results, moreover, clearly show that lexically-guided perceptual learning is not lost when acoustic processing is less accurate. -
Schepens, J. (2015). Bridging linguistic gaps: The effects of linguistic distance on adult learnability of Dutch as an additional language. PhD Thesis, Radboud University Nijmegen, Nijmegen.
Additional information
full text via Radboud Repository -
Schijven, D., Stevelink, R., McCormack, M., van Rheenen, W., Luykx, J. J., Koeleman, B. P., Veldink, J. H., Project MinE ALS GWAS Consortium, & International League Against Epilepsy Consortium on Complex Epilepsies (2020). Analysis of shared common genetic risk between amyotrophic lateral sclerosis and epilepsy. Neurobiology of Aging, 92, 153.e1-153.e5. doi:10.1016/j.neurobiolaging.2020.04.011.
Abstract
Because hyper-excitability has been shown to be a shared pathophysiological mechanism, we used the latest and largest genome-wide studies in amyotrophic lateral sclerosis (n = 36,052) and epilepsy (n = 38,349) to determine the genetic overlap between these conditions. First, we found no significant genetic correlation, even when variants were binned by minor allele frequency. Second, we confirmed the absence of polygenic overlap using genomic risk score analysis. Finally, we did not identify pleiotropic variants in meta-analyses of the 2 diseases. Our findings indicate that amyotrophic lateral sclerosis and epilepsy do not share common genetic risk, showing that hyper-excitability in both disorders has distinct origins.
Additional information
1-s2.0-S0197458020301305-mmc1.docx -
Schijven, D., Veldink, J. H., & Luykx, J. J. (2020). Genetic cross-disorder analysis in psychiatry: from methodology to clinical utility. The British Journal of Psychiatry, 216(5), 246-249. doi:10.1192/bjp.2019.72.
Abstract
Genome-wide association studies have uncovered hundreds of loci associated with psychiatric disorders. Cross-disorder studies are among the prime ramifications of such research. Here, we discuss the methodology of the most widely used cross-disorder methods and their clinical utility with regard to diagnosis, prediction, disease aetiology and treatment in psychiatry. -
Schijven, D., Zinkstok, J. R., & Luykx, J. J. (2020). Van genetische bevindingen naar de klinische praktijk van de psychiater: Hoe genetica precisiepsychiatrie mogelijk kan maken [From genetic findings to the psychiatrist's clinical practice: How genetics may enable precision psychiatry]. Tijdschrift voor Psychiatrie, 62(9), 776-783. -
Schiller, N. O., & Verdonschot, R. G. (2015). Accessing words from the mental lexicon. In J. Taylor (Ed.), The Oxford handbook of the word (pp. 481-492). Oxford: Oxford University Press.
Abstract
This chapter describes how speakers access words from the mental lexicon. Lexical access is a crucial component in the process of transforming thoughts into speech. Some theories consider lexical access to be strictly serial and discrete, while others view this process as cascading or even interactive, i.e. the different sub-levels influence each other. We discuss some of the evidence for and against these viewpoints, and also present arguments regarding the ongoing debate on how words are selected for production. Another important issue concerns access to morphologically complex words such as derived and inflected words, as well as compounds. Are these accessed as whole entities from the mental lexicon, or are their parts assembled online? This chapter tries to provide an answer to that question as well. -
Schluessel, V., & Düngen, D. (2015). Irrespective of size, scales, color or body shape, all fish are just fish: object categorization in the gray bamboo shark Chiloscyllium griseum. Animal Cognition, 18, 497-507. doi:10.1007/s10071-014-0818-0.
Abstract
Object categorization is an important cognitive adaptation, quickly providing an animal with relevant and potentially life-saving information. It can be defined as the process whereby objects that are not the same are nonetheless grouped together according to some defining feature(s) and responded to as if they were the same. In this way, knowledge about one object, behavior or situation can be extrapolated onto another without much cost and effort. Many vertebrates, including humans, monkeys, birds and teleosts, have been shown to be able to categorize, with abilities varying between species and tasks. This study assessed object categorization skills in the gray bamboo shark Chiloscyllium griseum. Sharks learned to distinguish between the two categories, 'fish' versus 'snail', independently of image features and image type, i.e., black and white drawings, photographs, comics or negative images. Transfer tests indicated that sharks predominantly focused on and categorized the positive stimulus, while disregarding the negative stimulus.