Pouw, W., Paxton, A., Harrison, S. J., & Dixon, J. A. (2020). Acoustic information about upper limb movement in voicing. Proceedings of the National Academy of Sciences of the United States of America, 117(21), 11364-11367. doi:10.1073/pnas.2004163117.
Abstract
We show that the human voice has complex acoustic qualities that are directly coupled to peripheral musculoskeletal tensioning of the body, such as subtle wrist movements. In this study, human vocalizers produced a steady-state vocalization while rhythmically moving the wrist or the arm at different tempos. Although listeners could only hear but not see the vocalizer, they were able to completely synchronize their own rhythmic wrist or arm movement with the movement of the vocalizer which they perceived in the voice acoustics. This study corroborates recent evidence suggesting that the human voice is constrained by bodily tensioning affecting the respiratory-vocal system. The current results show that the human voice contains a bodily imprint that is directly informative for the interpersonal perception of another’s dynamic physical states.
Additional information
This article has a letter by Ravignani and Kotz. This article has a reply to Ravignani and Kotz. -
Pouw, W., Wassenburg, S. I., Hostetter, A. B., De Koning, B. B., & Paas, F. (2020). Does gesture strengthen sensorimotor knowledge of objects? The case of the size-weight illusion. Psychological Research, 84(4), 966-980. doi:10.1007/s00426-018-1128-y.
Abstract
Co-speech gestures have been proposed to strengthen sensorimotor knowledge related to objects’ weight and manipulability. This pre-registered study (https://www.osf.io/9uh6q/) was designed to explore how gestures affect memory for sensorimotor information through the application of the visual-haptic size-weight illusion (i.e., objects weigh the same, but are experienced as different in weight). With this paradigm, a discrepancy can be induced between participants’ conscious illusory perception of objects’ weight and their implicit sensorimotor knowledge (i.e., veridical motor coordination). Depending on whether gestures reflect and strengthen either of these types of knowledge, gestures may respectively decrease or increase the magnitude of the size-weight illusion. Participants (N = 159) practiced a problem-solving task with small and large objects that were designed to induce a size-weight illusion, and then explained the task with or without co-speech gesture or completed a control task. Afterwards, participants judged the heaviness of objects from memory and then while holding them. Confirmatory analyses revealed an inverted size-weight illusion based on heaviness judgments from memory, and we found gesturing did not affect judgments. However, exploratory analyses showed reliable correlations between participants’ heaviness judgments from memory and (a) the number of gestures produced that simulated actions, and (b) the kinematics of the lifting phases of those gestures. These findings suggest that gestures emerge as sensorimotor imaginings that are governed by the agent’s conscious renderings about the actions they describe, rather than implicit motor routines. -
Pouw, W., Harrison, S. J., Esteve-Gibert, N., & Dixon, J. A. (2020). Energy flows in gesture-speech physics: The respiratory-vocal system and its coupling with hand gestures. The Journal of the Acoustical Society of America, 148(3): 1231. doi:10.1121/10.0001730.
Abstract
Expressive moments in communicative hand gestures often align with emphatic stress in speech. It has recently been found that acoustic markers of emphatic stress arise naturally during steady-state phonation when upper-limb movements impart physical impulses on the body, most likely affecting acoustics via respiratory activity. In this confirmatory study, participants (N = 29) repeatedly uttered consonant-vowel (/pa/) mono-syllables while moving in particular phase relations with speech, or not moving the upper limbs. This study shows that respiration-related activity is affected by (especially high-impulse) gesturing when vocalizations occur near peaks in physical impulse. This study further shows that gesture-induced moments of bodily impulses increase the amplitude envelope of speech, while not similarly affecting the Fundamental Frequency (F0). Finally, tight relations between respiration-related activity and vocalization were observed, even in the absence of movement, but even more so when upper-limb movement is present. The current findings expand a developing line of research showing that speech is modulated by functional biomechanical linkages between hand gestures and the respiratory system. This identification of gesture-speech biomechanics promises to provide an alternative phylogenetic, ontogenetic, and mechanistic explanatory route of why communicative upper limb movements co-occur with speech in humans.
Additional information
Link to Preprint on OSF -
Pouw, W., & Dixon, J. A. (2020). Gesture networks: Introducing dynamic time warping and network analysis for the kinematic study of gesture ensembles. Discourse Processes, 57(4), 301-319. doi:10.1080/0163853X.2019.1678967.
Abstract
We introduce applications of established methods in time-series and network analysis that we jointly apply here for the kinematic study of gesture ensembles. We define a gesture ensemble as the set of gestures produced during discourse by a single person or a group of persons. Here we are interested in how gestures kinematically relate to one another. We use a bivariate time-series analysis called dynamic time warping to assess how similar each gesture is to other gestures in the ensemble in terms of their velocity profiles (as well as studying multivariate cases with gesture velocity and speech amplitude envelope profiles). By relating each gesture event to all other gesture events produced in the ensemble, we obtain a weighted matrix that essentially represents a network of similarity relationships. We can therefore apply network analysis that can gauge, for example, how diverse or coherent certain gestures are with respect to the gesture ensemble. We believe these analyses promise to be of great value for gesture studies, as we can come to understand how low-level gesture features (kinematics of gesture) relate to the higher-order organizational structures present at the level of discourse.
Additional information
Open Data OSF -
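The pipeline described in the abstract above (pairwise dynamic time warping over gesture velocity profiles, then network measures on the resulting weighted similarity matrix) can be sketched with a minimal toy example. This is not the authors' code; the similarity transform, the toy profiles, and the node-strength measure are illustrative assumptions.

```python
# Toy sketch: DTW-based gesture similarity network (illustrative only).

def dtw_distance(a, b):
    """Classic O(len(a)*len(b)) dynamic time warping with absolute-difference cost."""
    inf = float("inf")
    n, m = len(a), len(b)
    d = [[inf] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            d[i][j] = cost + min(d[i - 1][j], d[i][j - 1], d[i - 1][j - 1])
    return d[n][m]

def similarity_matrix(profiles):
    """Weighted adjacency matrix; similarity = 1 / (1 + DTW distance) is an assumption."""
    n = len(profiles)
    w = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i != j:
                w[i][j] = 1.0 / (1.0 + dtw_distance(profiles[i], profiles[j]))
    return w

# Three invented gesture velocity profiles: two similar strokes and one outlier.
gestures = [
    [0.0, 0.5, 1.0, 0.5, 0.0],
    [0.0, 0.4, 1.1, 0.6, 0.0],
    [1.0, 1.0, 0.0, 1.0, 1.0],
]
w = similarity_matrix(gestures)
# Node "strength" (summed edge weights) gauges how coherent a gesture is
# with respect to the rest of the ensemble.
strength = [sum(row) for row in w]
print(strength.index(min(strength)))  # → 2 (the outlier is least coherent)
```

In practice one would use optimized DTW implementations and graph libraries, but the structure (distances → weighted network → node-level measures) is the same.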
Pouw, W., Harrison, S. J., & Dixon, J. A. (2020). Gesture–speech physics: The biomechanical basis for the emergence of gesture–speech synchrony. Journal of Experimental Psychology: General, 149(2), 391-404. doi:10.1037/xge0000646.
Abstract
The phenomenon of gesture–speech synchrony involves tight coupling of prosodic contrasts in gesture movement (e.g., peak velocity) and speech (e.g., peaks in fundamental frequency; F0). Gesture–speech synchrony has been understood as completely governed by sophisticated neural-cognitive mechanisms. However, gesture–speech synchrony may have its original basis in the resonating forces that travel through the body. In the current preregistered study, movements with high physical impact affected phonation in line with gesture–speech synchrony as observed in natural contexts. Rhythmic beating of the arms entrained phonation acoustics (F0 and the amplitude envelope). Such effects were absent for a condition with low-impetus movements (wrist movements) and a condition without movement. Further, movement–phonation synchrony was more pronounced when participants were standing as opposed to sitting, indicating a mediating role for postural stability. We conclude that gesture–speech synchrony has a biomechanical basis, which will have implications for our cognitive, ontogenetic, and phylogenetic understanding of multimodal language.
Additional information
Data availability analysis scripts and pre-registration -
Pouw, W., Trujillo, J. P., & Dixon, J. A. (2020). The quantification of gesture–speech synchrony: A tutorial and validation of multimodal data acquisition using device-based and video-based motion tracking. Behavior Research Methods, 52, 723-740. doi:10.3758/s13428-019-01271-9.
Abstract
There is increasing evidence that hand gestures and speech synchronize their activity on multiple dimensions and timescales. For example, gesture’s kinematic peaks (e.g., maximum speed) are coupled with prosodic markers in speech. Such coupling operates on very short timescales at the level of syllables (200 ms), and therefore requires high-resolution measurement of gesture kinematics and speech acoustics. High-resolution speech analysis is common for gesture studies, given that field’s classic ties with (psycho)linguistics. However, the field has lagged behind in the objective study of gesture kinematics (e.g., as compared to research on instrumental action). Often kinematic peaks in gesture are measured by eye, where a “moment of maximum effort” is determined by several raters. In the present article, we provide a tutorial on more efficient methods to quantify the temporal properties of gesture kinematics, in which we focus on common challenges and possible solutions that come with the complexities of studying multimodal language. We further introduce and compare, using an actual gesture dataset (392 gesture events), the performance of two video-based motion-tracking methods (deep learning vs. pixel change) against a high-performance wired motion-tracking system (Polhemus Liberty). We show that the videography methods perform well in the temporal estimation of kinematic peaks, and thus provide a cheap alternative to expensive motion-tracking systems. We hope that the present article incites gesture researchers to embark on the widespread objective study of gesture kinematics and their relation to speech. -
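As a rough illustration of what objective estimation of a kinematic peak involves (this is not the tutorial's actual pipeline; the sampling rate, smoothing window, and toy trajectory are invented), one can differentiate tracked positions into a speed profile, smooth it to suppress tracking jitter, and take the time of the maximum:

```python
# Toy sketch of kinematic peak (maximum speed) estimation from motion tracking.

def speed_profile(xs, ys, fs):
    """Finite-difference speed (units/s) from x/y position samples at fs Hz."""
    return [((xs[i + 1] - xs[i]) ** 2 + (ys[i + 1] - ys[i]) ** 2) ** 0.5 * fs
            for i in range(len(xs) - 1)]

def smooth(v, k=3):
    """Simple moving average over a k-sample window (an assumed smoother)."""
    half = k // 2
    return [sum(v[max(0, i - half):i + half + 1]) /
            len(v[max(0, i - half):i + half + 1]) for i in range(len(v))]

def peak_time(xs, ys, fs):
    """Time (s) of maximum smoothed speed, i.e., the kinematic peak."""
    v = smooth(speed_profile(xs, ys, fs))
    return v.index(max(v)) / fs

# Invented trajectory: a stroke that accelerates then decelerates, sampled at 100 Hz.
fs = 100
xs = [0, 1, 3, 7, 13, 16, 17, 17, 17]
ys = [0] * len(xs)
print(round(peak_time(xs, ys, fs), 2))  # → 0.03
```

Peak timestamps obtained this way can then be compared against prosodic landmarks in the speech signal (e.g., F0 or envelope peaks) to quantify gesture–speech asynchrony.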
Preisig, B., Sjerps, M. J., Hervais-Adelman, A., Kösem, A., Hagoort, P., & Riecke, L. (2020). Bilateral gamma/delta transcranial alternating current stimulation affects interhemispheric speech sound integration. Journal of Cognitive Neuroscience, 32(7), 1242-1250. doi:10.1162/jocn_a_01498.
Abstract
Perceiving speech requires the integration of different speech cues, that is, formants. When the speech signal is split so that different cues are presented to the right and left ear (dichotic listening), comprehension requires the integration of binaural information. Based on prior electrophysiological evidence, we hypothesized that the integration of dichotically presented speech cues is enabled by interhemispheric phase synchronization between primary and secondary auditory cortex in the gamma frequency band. We tested this hypothesis by applying transcranial alternating current stimulation (TACS) bilaterally above the superior temporal lobe to induce or disrupt interhemispheric gamma-phase coupling. In contrast to initial predictions, we found that gamma TACS applied in-phase above the two hemispheres (interhemispheric lag 0°) perturbs interhemispheric integration of speech cues, possibly because the applied stimulation perturbs an inherent phase lag between the left and right auditory cortex. We also observed this disruptive effect when applying antiphasic delta TACS (interhemispheric lag 180°). We conclude that interhemispheric phase coupling plays a functional role in interhemispheric speech integration. The direction of this effect may depend on the stimulation frequency. -
Rasenberg, M., Ozyurek, A., & Dingemanse, M. (2020). Alignment in multimodal interaction: An integrative framework. Cognitive Science, 44(11): e12911. doi:10.1111/cogs.12911.
Abstract
When people are engaged in social interaction, they can repeat aspects of each other’s communicative behavior, such as words or gestures. This kind of behavioral alignment has been studied across a wide range of disciplines and has been accounted for by diverging theories. In this paper, we review various operationalizations of lexical and gestural alignment. We reveal that scholars have fundamentally different takes on when and how behavior is considered to be aligned, which makes it difficult to compare findings and draw uniform conclusions. Furthermore, we show that scholars tend to focus on one particular dimension of alignment (traditionally, whether two instances of behavior overlap in form), while other dimensions remain understudied. This hampers theory testing and building, which requires a well‐defined account of the factors that are central to or might enhance alignment. To capture the complex nature of alignment, we identify five key dimensions to formalize the relationship between any pair of behavior: time, sequence, meaning, form, and modality. We show how assumptions regarding the underlying mechanism of alignment (placed along the continuum of priming vs. grounding) pattern together with operationalizations in terms of the five dimensions. This integrative framework can help researchers in the field of alignment and related phenomena (including behavior matching, mimicry, entrainment, and accommodation) to formulate their hypotheses and operationalizations in a more transparent and systematic manner. The framework also enables us to discover unexplored research avenues and derive new hypotheses regarding alignment. -
Rasenberg, M., Rommers, J., & Van Bergen, G. (2020). Anticipating predictability: An ERP investigation of expectation-managing discourse markers in dialogue comprehension. Language, Cognition and Neuroscience, 35(1), 1-16. doi:10.1080/23273798.2019.1624789.
Abstract
In two ERP experiments, we investigated how the Dutch discourse markers eigenlijk “actually”, signalling expectation disconfirmation, and inderdaad “indeed”, signalling expectation confirmation, affect incremental dialogue comprehension. We investigated their effects on the processing of subsequent (un)predictable words, and on the quality of word representations in memory. Participants read dialogues with (un)predictable endings that followed a discourse marker (eigenlijk in Experiment 1, inderdaad in Experiment 2) or a control adverb. We found no strong evidence that discourse markers modulated online predictability effects elicited by subsequently read words. However, words following eigenlijk elicited an enhanced posterior post-N400 positivity compared with words following an adverb regardless of their predictability, potentially reflecting increased processing costs associated with pragmatically driven discourse updating. No effects of inderdaad were found on online processing, but inderdaad seemed to influence memory for (un)predictable dialogue endings. These findings nuance our understanding of how pragmatic markers affect incremental language comprehension.
Additional information
plcp_a_1624789_sm6686.docx -
Rasenberg, M., Dingemanse, M., & Ozyurek, A. (2020). Lexical and gestural alignment in interaction and the emergence of novel shared symbols. In A. Ravignani, C. Barbieri, M. Flaherty, Y. Jadoul, E. Lattenkamp, H. Little, M. Martins, K. Mudd, & T. Verhoef (Eds.), The Evolution of Language: Proceedings of the 13th International Conference (Evolang13) (pp. 356-358). Nijmegen: The Evolution of Language Conferences. -
Ravignani, A., & Kotz, S. (2020). Breathing, voice and synchronized movement. Proceedings of the National Academy of Sciences of the United States of America, 117(38), 23223-23224. doi:10.1073/pnas.2011402117.
Additional information
Pouw_etal_reply.pdf -
Ravignani, A., Barbieri, C., Flaherty, M., Jadoul, Y., Lattenkamp, E. Z., Little, H., Martins, M., Mudd, K., & Verhoef, T. (Eds.). (2020). The Evolution of Language: Proceedings of the 13th International Conference (Evolang13). Nijmegen: The Evolution of Language Conferences. doi:10.17617/2.3190925.
Additional information
Link to pdf on EvoLang Website -
Raviv, L. (2020). Language and society: How social pressures shape grammatical structure. PhD Thesis, Radboud University Nijmegen, Nijmegen.
Additional information
full text via Radboud Repository -
Raviv, L., Meyer, A. S., & Lev-Ari, S. (2020). Network structure and the cultural evolution of linguistic structure: A group communication experiment. In A. Ravignani, C. Barbieri, M. Flaherty, Y. Jadoul, E. Lattenkamp, H. Little, M. Martins, K. Mudd, & T. Verhoef (Eds.), The Evolution of Language: Proceedings of the 13th International Conference (Evolang13) (pp. 359-361). Nijmegen: The Evolution of Language Conferences. -
Raviv, L., Meyer, A. S., & Lev-Ari, S. (2020). The role of social network structure in the emergence of linguistic structure. Cognitive Science, 44(8): e12876. doi:10.1111/cogs.12876.
Abstract
Social network structure has been argued to shape the structure of languages, as well as affect the spread of innovations and the formation of conventions in the community. Specifically, theoretical and computational models of language change predict that sparsely connected communities develop more systematic languages, while tightly knit communities can maintain high levels of linguistic complexity and variability. However, the role of social network structure in the cultural evolution of languages has never been tested experimentally. Here, we present results from a behavioral group communication study, in which we examined the formation of new languages created in the lab by micro‐societies that varied in their network structure. We contrasted three types of social networks: fully connected, small‐world, and scale‐free. We examined the artificial languages created by these different networks with respect to their linguistic structure, communicative success, stability, and convergence. Results did not reveal any effect of network structure for any measure, with all languages becoming similarly more systematic, more accurate, more stable, and more shared over time. At the same time, small‐world networks showed the greatest variation in their convergence, stabilization, and emerging structure patterns, indicating that network structure can influence the community's susceptibility to random linguistic changes (i.e., drift). -
Reesink, G. (2002). The Eastern Bird's Head languages. In G. Reesink (Ed.), Languages of the Eastern Bird's Head (pp. 1-44). Canberra: Pacific Linguistics. -
Reesink, G. (2002). A grammar sketch of Sougb. In G. Reesink (Ed.), Languages of the Eastern Bird's Head (pp. 181-275). Canberra: Pacific Linguistics. -
Reesink, G. (2002). Clause-final negation, structure and interpretation. Functions of Language, 9(2), 239-268.
Abstract
Negation in a number of Austronesian and Papuan languages with SVO order is expressed by a rather rigid clause-final position of the negative adverb. Some typological generalizations for negation are reviewed and the distribution of this trait in languages of different stocks is discussed, arguing that it most likely originates in Papuan languages. Some proposals for different types of negation, such as whether it is a verbal (or VP) operator, a constituent operator or a sentential operator are considered. The problem of determining the scope of negation is discussed, with the conclusion that hard and fast semantic meanings for NEG at different structural levels cannot be posited, suggesting that perhaps a solution can be found in the application of some universal pragmatic principles. -
Reesink, G. (2002). Mansim, a lost language of the Bird's Head. In G. Reesink (Ed.), Languages of the Eastern Bird's Head (pp. 277-340). Canberra: Pacific Linguistics. -
de Reus, K., Carlson, D., Jadoul, Y., Lowry, A., Gross, S., Garcia, M., Salazar-Casals, A., Rubio-García, A., Haas, C. E., De Boer, B., & Ravignani, A. (2020). Relationships between vocal ontogeny and vocal tract anatomy in harbour seals (Phoca vitulina). In A. Ravignani, C. Barbieri, M. Flaherty, Y. Jadoul, E. Lattenkamp, H. Little, M. Martins, K. Mudd, & T. Verhoef (Eds.), The Evolution of Language: Proceedings of the 13th International Conference (Evolang13) (pp. 63-66). Nijmegen: The Evolution of Language Conferences. -
Ripperda, J., Drijvers, L., & Holler, J. (2020). Speeding up the detection of non-iconic and iconic gestures (SPUDNIG): A toolkit for the automatic detection of hand movements and gestures in video data. Behavior Research Methods, 52(4), 1783-1794. doi:10.3758/s13428-020-01350-2.
Abstract
In human face-to-face communication, speech is frequently accompanied by visual signals, especially communicative hand gestures. Analyzing these visual signals requires detailed manual annotation of video data, which is often a labor-intensive and time-consuming process. To facilitate this process, we here present SPUDNIG (SPeeding Up the Detection of Non-iconic and Iconic Gestures), a tool to automatize the detection and annotation of hand movements in video data. We provide a detailed description of how SPUDNIG detects hand movement initiation and termination, as well as open-source code and a short tutorial on an easy-to-use graphical user interface (GUI) of our tool. We then provide a proof-of-principle and validation of our method by comparing SPUDNIG’s output to manual annotations of gestures by a human coder. While the tool does not entirely eliminate the need for a human coder (e.g., for the detection of false positives), our results demonstrate that SPUDNIG can detect both iconic and non-iconic gestures with very high accuracy, and could successfully detect all iconic gestures in our validation dataset. Importantly, SPUDNIG’s output can directly be imported into commonly used annotation tools such as ELAN and ANVIL. We therefore believe that SPUDNIG will be highly relevant for researchers studying multimodal communication, as its annotations significantly accelerate the analysis of large video corpora.
Additional information
data and materials -
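The core idea behind automatic movement annotation of this kind, detecting initiation and termination wherever tracked hand speed crosses a threshold, can be sketched in a few lines. This is a toy illustration, not SPUDNIG itself; the threshold, the minimum-duration filter, and the frame data are invented for the example.

```python
# Toy sketch: threshold-based segmentation of movement episodes (not SPUDNIG).

def segment_movements(speeds, threshold, min_frames=2):
    """Return (start, end) frame indices of runs where speed > threshold."""
    segments, start = [], None
    for i, s in enumerate(speeds):
        if s > threshold and start is None:
            start = i                       # movement initiation
        elif s <= threshold and start is not None:
            if i - start >= min_frames:     # drop spurious one-frame blips
                segments.append((start, i)) # movement termination
            start = None
    if start is not None and len(speeds) - start >= min_frames:
        segments.append((start, len(speeds)))
    return segments

# Invented per-frame hand speed: rest, a gesture, a one-frame blip, rest.
speeds = [0, 0, 5, 6, 7, 4, 0, 0, 9, 0, 0]
print(segment_movements(speeds, threshold=1))  # → [(2, 6)]
```

The resulting frame intervals map straightforwardly onto annotation tiers of the kind that tools like ELAN import.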
Robinson, S. (2002). Constituent order in Tenejapa Tzeltal. International Journal of American Linguistics, 68(1), 51-81.
Abstract
Examines the basic constituent order (BCO) of transitive sentences in the Tenejapa dialect of Tzeltal. Covers the concept of basic word order, patterns of constituent order in transitive clauses, and the consideration of VOA as the BCO. -
Rodd, J., Bosker, H. R., Ernestus, M., Alday, P. M., Meyer, A. S., & Ten Bosch, L. (2020). Control of speaking rate is achieved by switching between qualitatively distinct cognitive ‘gaits’: Evidence from simulation. Psychological Review, 127(2), 281-304. doi:10.1037/rev0000172.
Abstract
That speakers can vary their speaking rate is evident, but how they accomplish this has hardly been studied. Consider this analogy: When walking, speed can be continuously increased, within limits, but to speed up further, humans must run. Are there multiple qualitatively distinct speech “gaits” that resemble walking and running? Or is control achieved by continuous modulation of a single gait? This study investigates these possibilities through simulations of a new connectionist computational model of the cognitive process of speech production, EPONA, that borrows from Dell, Burger, and Svec’s (1997) model. The model has parameters that can be adjusted to fit the temporal characteristics of speech at different speaking rates. We trained the model on a corpus of disyllabic Dutch words produced at different speaking rates. During training, different clusters of parameter values (regimes) were identified for different speaking rates. In a 1-gait system, the regimes used to achieve fast and slow speech are qualitatively similar, but quantitatively different. In a multiple gait system, there is no linear relationship between the parameter settings associated with each gait, resulting in an abrupt shift in parameter values to move from speaking slowly to speaking fast. After training, the model achieved good fits in all three speaking rates. The parameter settings associated with each speaking rate were not linearly related, suggesting the presence of cognitive gaits. Thus, we provide the first computationally explicit account of the ability to modulate the speech production system to achieve different speaking styles.
Additional information
Supplemental material -
Rodd, J. (2020). How speaking fast is like running: Modelling control of speaking rate. PhD Thesis, Radboud University Nijmegen, Nijmegen.
Additional information
full text via Radboud Repository -
Roelofs, A. (2002). Syllable structure effects turn out to be word length effects: Comment on Santiago et al. (2000). Language and Cognitive Processes, 17(1), 1-13. doi:10.1080/01690960042000139.
Abstract
Santiago, MacKay, Palma, and Rho (2000) report two picture naming experiments examining the role of syllable onset complexity and number of syllables in spoken word production. Experiment 1 showed that naming latencies are longer for words with two syllables (e.g., demon) than one syllable (e.g., duck), and longer for words beginning with a consonant cluster (e.g., drill) than a single consonant (e.g., duck). Experiment 2 replicated these findings and showed that the complexity of the syllable nucleus and coda has no effect. These results are taken to support MacKay's (1987) Node Structure theory and to refute models such as WEAVER++ (Roelofs, 1997a) that predict effects of word length but not of onset complexity and number of syllables per se. In this comment, I show that a re-analysis of the data of Santiago et al. that takes word length into account leads to the opposite conclusion. The observed effects of onset complexity and number of syllables appear to be length effects, supporting WEAVER++ and contradicting the Node Structure theory. -
Roelofs, A. (2002). Spoken language planning and the initiation of articulation. Quarterly Journal of Experimental Psychology, 55A(2), 465-483. doi:10.1080/02724980143000488.
Abstract
Minimalist theories of spoken language planning hold that articulation starts when the first speech segment has been planned, whereas non-minimalist theories assume larger units (e.g., Levelt, Roelofs, & Meyer, 1999a). Three experiments are reported, which were designed to distinguish between these views using a new hybrid task that factorially manipulated preparation and auditory priming of spoken language production. Minimalist theories predict no effect from priming of non-initial segments when the initial segment of an utterance is already prepared; observing such a priming effect would support non-minimalist theories. In all three experiments, preparation and priming yielded main effects, and together their effects were additive. Preparation of initial segments does not eliminate priming effects for later segments. These results challenge the minimalist view. The findings are simulated by WEAVER++ (Roelofs, 1997b), which employs the phonological word as the lower limit for articulation initiation. -
Roelofs, A. (2002). Storage and computation in spoken word production. In S. Nooteboom, F. Weerman, & F. Wijnen (Eds.), Storage and computation in the language faculty (pp. 183-216). Dordrecht: Kluwer. -
Roelofs, A., & Hagoort, P. (2002). Control of language use: Cognitive modeling of the hemodynamics of Stroop task performance. Cognitive Brain Research, 15(1), 85-97. doi:10.1016/S0926-6410(02)00218-5.
Abstract
The control of language use has in its simplest form perhaps been most intensively studied using the color–word Stroop task. The authors review chronometric and neuroimaging evidence on Stroop task performance to evaluate two prominent, implemented models of control in naming and reading: GRAIN and WEAVER++. Computer simulations are reported, which reveal that WEAVER++ offers a more satisfactory account of the data than GRAIN. In particular, we report WEAVER++ simulations of the BOLD response in anterior cingulate cortex during Stroop performance. Aspects of single-word production and perception in the Stroop task are discussed in relation to the wider problem of the control of language use. -
Roelofs, A. (2002). How do bilinguals control their use of languages? Bilingualism: Language and Cognition, 5(3), 214-215. doi:10.1017/S1366728902263014.
-
Roelofs, A. (2002). Modeling of lexical access in speech production: A psycholinguistic perspective on the lexicon. In L. Behrens, & D. Zaefferer (Eds.), The lexicon in focus: Competition and convergence in current lexicology (pp. 75-92). Frankfurt am Main: Lang. -
Roelofs, A., & Baayen, R. H. (2002). Morphology by itself in planning the production of spoken words. Psychonomic Bulletin & Review, 9(1), 132-138.
Abstract
The authors report a study in Dutch that used an on-line preparation paradigm to test the issue of semantic dependency versus morphological autonomy in the production of polymorphemic words. Semantically transparent complex words (like input in English) and semantically opaque complex words (like invoice) showed clear evidence of morphological structure in word-form encoding, since both exhibited an equally large preparation effect that was much greater than that for morphologically simple words (like insect). These results suggest that morphemes may be planning units in the production of complex words, without making a semantic contribution, thereby supporting the autonomy view. Language production establishes itself as a domain in which morphology may operate “by itself” (Aronoff, 1994) without recourse to meaning. -
Rojas-Berscia, L. M., Napurí, A., & Wang, L. (2020). Shawi (Chayahuita). Journal of the International Phonetic Association, 50(3), 417-430. doi:10.1017/S0025100318000415.
Abstract
Shawi is the language of the indigenous Shawi/Chayahuita people in Northwestern Amazonia, Peru. It belongs to the Kawapanan language family, together with its moribund sister language, Shiwilu. It is spoken by about 21,000 speakers (see Rojas-Berscia 2013) in the provinces of Alto Amazonas and Datem del Marañón in the region of Loreto and in the northern part of the region of San Martín, being one of the most vital languages in the country (see Figure 1). Although Shawi groups in the Upper Amazon were contacted by Jesuit missionaries during colonial times, the maintenance of their customs and language is striking. To date, most Shawi children are monolingual and have their first contact with Spanish at school. Yet, due to globalisation and the construction of highways by the Peruvian government, many Shawi villages are progressively westernising. This may result in the imminent loss of their indigenous culture and language.
Additional information
Supplementary material -
Roos, N. M., & Piai, V. (2020). Across‐session consistency of context‐driven language processing: A magnetoencephalography study. European Journal of Neuroscience, 52, 3457-3469. doi:10.1111/ejn.14785.
Abstract
Changes in brain organization following damage are commonly observed, but they remain poorly understood. These changes are often studied with imaging techniques that overlook the temporal granularity at which language processes occur. By contrast, electrophysiological measures provide excellent temporal resolution. To test the suitability of magnetoencephalography (MEG) to track language-related neuroplasticity, the present study aimed at establishing the spectro-temporo-spatial across-session consistency of context-driven picture naming in healthy individuals, using MEG in two test–retest sessions. Spectro-temporo-spatial test–retest consistency in a healthy population is a prerequisite for studying neuronal changes in clinical populations over time. For this purpose, 15 healthy speakers were tested with MEG while performing a context-driven picture-naming task at two time points. Participants read a sentence missing the final word and named a picture completing the sentence. Sentences were constrained or unconstrained toward the picture, such that participants could either retrieve the picture name through sentence context (constrained sentences), or could only name it after the picture appeared (unconstrained sentences). The context effect (constrained versus unconstrained) in picture-naming times had a strong effect size and high across-session consistency. The context MEG results revealed alpha–beta power decreases (10–20 Hz) in the left temporal and inferior parietal lobule that were consistent across both sessions. As robust spectro-temporo-spatial findings in a healthy population are required for working toward longitudinal patient studies, we conclude that using context-driven language production and MEG is a suitable way to examine language-related neuroplasticity after brain damage. -
Rossi, G. (2020). Other-repetition in conversation across languages: Bringing prosody into pragmatic typology. Language in Society, 49(4), 495-520. doi:10.1017/S0047404520000251.
Abstract
In this article, I introduce the aims and scope of a project examining other-repetition in natural conversation. This introduction provides the conceptual and methodological background for the five language-specific studies contained in this special issue, focussing on other-repetition in English, Finnish, French, Italian, and Swedish. Other-repetition is a recurrent conversational phenomenon in which a speaker repeats all or part of what another speaker has just said, typically in the next turn. Our project focusses particularly on other-repetitions that problematise what is being repeated and typically solicit a response. Previous research has shown that such repetitions can accomplish a range of conversational actions. But how do speakers of different languages distinguish these actions? In addressing this question, we put at centre stage the resources of prosody—the nonlexical acoustic-auditory features of speech—and bring its systematic analysis into the growing field of pragmatic typology—the comparative study of language use and conversational structure. -
Rossi, G. (2020). The prosody of other-repetition in Italian: A system of tunes. Language in Society, 49(4), 619-652. doi:10.1017/S0047404520000627.
Abstract
As part of the project reported on in this special issue, the present study provides an overview of the types of action accomplished by other-repetition in Italian, with particular reference to the variety of the language spoken in the northeastern province of Trento. The analysis surveys actions within the domain of initiating repair, actions that extend beyond initiating repair, and actions that are alternative to initiating repair. Pitch contour emerges as a central design feature of other-repetition in Italian, with six nuclear contours associated with distinct types of action, sequential trajectories, and response patterns. The study also documents the interplay of pitch contour with other prosodic features (pitch span and register) and visible behavior (head nods, eyebrow movements). -
Rowland, C. F., Theakston, A. L., Ambridge, B., & Twomey, K. E. (Eds.). (2020). Current Perspectives on Child Language Acquisition: How children use their environment to learn. Amsterdam: John Benjamins. doi:10.1075/tilar.27.
Abstract
In recent years the field has seen an increasing realisation that the full complexity of language acquisition demands theories that (a) explain how children integrate information from multiple sources in the environment, (b) build linguistic representations at a number of different levels, and (c) learn how to combine these representations in order to communicate effectively. These new findings have stimulated new theoretical perspectives that are more centered on explaining learning as a complex dynamic interaction between the child and her environment. This book is the first attempt to bring some of these new perspectives together in one place. It is a collection of essays written by a group of researchers who all take an approach centered on child-environment interaction, and all of whom have been influenced by the work of Elena Lieven, to whom this collection is dedicated. -
Rowland, C. F. (2020). Introduction. In M. E. Poulsen (Ed.), The Jerome Bruner Library: From New York to Nijmegen. Nijmegen: Max Planck Institute for Psycholinguistics. -
Rubio-Fernández, P., & Jara-Ettinger, J. (2020). Incrementality and efficiency shape pragmatics across languages. Proceedings of the National Academy of Sciences, 117, 13399-13404. doi:10.1073/pnas.1922067117.
Abstract
To correctly interpret a message, people must attend to the context in which it was produced. Here we investigate how this process, known as pragmatic reasoning, is guided by two universal forces in human communication: incrementality and efficiency, with speakers of all languages interpreting language incrementally and making the most efficient use of the incoming information. Crucially, however, the interplay between these two forces results in speakers of different languages having different pragmatic information available at each point in processing, including inferences about speaker intentions. In particular, the position of adjectives relative to nouns (e.g., “black lamp” vs. “lamp black”) makes visual context information available in reverse orders. In an eye-tracking study comparing four unrelated languages that have been understudied with regard to language processing (Catalan, Hindi, Hungarian, and Wolof), we show that speakers of languages with an adjective–noun order integrate context by first identifying properties (e.g., color, material, or size), whereas speakers of languages with a noun–adjective order integrate context by first identifying kinds (e.g., lamps or chairs). Most notably, this difference allows listeners of adjective–noun descriptions to infer the speaker’s intention when using an adjective (e.g., “the black…” as implying “not the blue one”) and anticipate the target referent, whereas listeners of noun–adjective descriptions are subject to temporary ambiguity when deriving the same interpretation. We conclude that incrementality and efficiency guide pragmatic reasoning across languages, with different word orders having different pragmatic affordances. -
Saito, H., & Kita, S. (2002). "Jesuchaa, kooi, imi" no hennshuu ni atatte [On the occasion of editing "Jesuchaa, kooi, imi"]. In H. Saito, & S. Kita (Eds.), Kooi, jesuchaa, imi [Action, gesture, meaning] (pp. v-xi). Tokyo: Kyooritsu Shuppan. -
Saito, H., & Kita, S. (Eds.). (2002). Jesuchaa, kooi, imi [Gesture, action, meaning]. Tokyo: Kyooritsu Shuppan. -
Sandberg, A., Lansner, A., Petersson, K. M., & Ekeberg, Ö. (2002). A Bayesian attractor network with incremental learning. Network: Computation in Neural Systems, 13(2), 179-194. doi:10.1088/0954-898X/13/2/302.
Abstract
A realtime online learning system with capacity limits needs to gradually forget old information in order to avoid catastrophic forgetting. This can be achieved by allowing new information to overwrite old, as in a so-called palimpsest memory. This paper describes an incremental learning rule based on the Bayesian confidence propagation neural network that has palimpsest properties when employed in an attractor neural network. The network does not suffer from catastrophic forgetting, has a capacity dependent on the learning time constant and exhibits faster convergence for newer patterns. -
Scharenborg, O., Ondel, L., Palaskar, S., Arthur, P., Ciannella, F., Du, M., Larsen, E., Merkx, D., Riad, R., Wang, L., Dupoux, E., Besacier, L., Black, A., Hasegawa-Johnson, M., Metze, F., Neubig, G., Stüker, S., Godard, P., & Müller, M. (2020). Speech technology for unwritten languages. IEEE/ACM Transactions on Audio, Speech and Language Processing, 28, 964-975. doi:10.1109/TASLP.2020.2973896.
Abstract
Speech technology plays an important role in our everyday life. Among others, speech is used for human-computer interaction, for instance for information retrieval and on-line shopping. In the case of an unwritten language, however, speech technology is unfortunately difficult to create, because it cannot be created by the standard combination of pre-trained speech-to-text and text-to-speech subsystems. The research presented in this article takes the first steps towards speech technology for unwritten languages. Specifically, the aim of this work was 1) to learn speech-to-meaning representations without using text as an intermediate representation, and 2) to test the sufficiency of the learned representations to regenerate speech or translated text, or to retrieve images that depict the meaning of an utterance in an unwritten language. The results suggest that building systems that go directly from speech-to-meaning and from meaning-to-speech, bypassing the need for text, is possible. -
Scharenborg, O., Boves, L., & de Veth, J. (2002). ASR in a human word recognition model: Generating phonemic input for Shortlist. In J. H. L. Hansen, & B. Pellom (Eds.), ICSLP 2002 - INTERSPEECH 2002 - 7th International Conference on Spoken Language Processing (pp. 633-636). ISCA Archive.
Abstract
The current version of the psycholinguistic model of human word recognition Shortlist suffers from two unrealistic constraints. First, the input of Shortlist must consist of a single string of phoneme symbols. Second, the current version of the search in Shortlist makes it difficult to deal with insertions and deletions in the input phoneme string. This research attempts to fully automatically derive a phoneme string from the acoustic signal that is as close as possible to the number of phonemes in the lexical representation of the word. We optimised an Automatic Phone Recogniser (APR) using two approaches, viz. varying the value of the mismatch parameter and optimising the APR output strings on the output of Shortlist. The approaches show that it will be very difficult to satisfy the input requirements of the present version of Shortlist with a phoneme string generated by an APR. -
Scharenborg, O., & Boves, L. (2002). Pronunciation variation modelling in a model of human word recognition. In Pronunciation Modeling and Lexicon Adaptation for Spoken Language Technology [PMLA-2002] (pp. 65-70).
Abstract
Due to pronunciation variation, many insertions and deletions of phones occur in spontaneous speech. The psycholinguistic model of human speech recognition Shortlist is not well able to deal with phone insertions and deletions and is therefore not well suited for dealing with real-life input. The research presented in this paper explains how Shortlist can benefit from pronunciation variation modelling in dealing with real-life input. Pronunciation variation was modelled by including variants into the lexicon of Shortlist. A series of experiments was carried out to find the optimal acoustic model set for transcribing the training material that was used as basis for the generation of the variants. The Shortlist experiments clearly showed that Shortlist benefits from pronunciation variation modelling. However, the performance of Shortlist stays far behind the performance of other, more conventional speech recognisers. -
Schijven, D., Stevelink, R., McCormack, M., van Rheenen, W., Luykx, J. J., Koeleman, B. P., Veldink, J. H., Project MinE ALS GWAS Consortium, & International League Against Epilepsy Consortium on Complex Epilepsies (2020). Analysis of shared common genetic risk between amyotrophic lateral sclerosis and epilepsy. Neurobiology of Aging, 92, 153.e1-153.e5. doi:10.1016/j.neurobiolaging.2020.04.011.
Abstract
Because hyper-excitability has been shown to be a shared pathophysiological mechanism, we used the latest and largest genome-wide studies in amyotrophic lateral sclerosis (n = 36,052) and epilepsy (n = 38,349) to determine genetic overlap between these conditions. First, we showed no significant genetic correlation, also when binned on minor allele frequency. Second, we confirmed the absence of polygenic overlap using genomic risk score analysis. Finally, we did not identify pleiotropic variants in meta-analyses of the 2 diseases. Our findings indicate that amyotrophic lateral sclerosis and epilepsy do not share common genetic risk, showing that hyper-excitability in both disorders has distinct origins. -
Schijven, D., Veldink, J. H., & Luykx, J. J. (2020). Genetic cross-disorder analysis in psychiatry: from methodology to clinical utility. The British Journal of Psychiatry, 216(5), 246-249. doi:10.1192/bjp.2019.72.
Abstract
Genome-wide association studies have uncovered hundreds of loci associated with psychiatric disorders. Cross-disorder studies are among the prime ramifications of such research. Here, we discuss the methodology of the most widespread methods and their clinical utility with regard to diagnosis, prediction, disease aetiology and treatment in psychiatry. Declaration of interest: None. -
Schijven, D., Zinkstok, J. R., & Luykx, J. J. (2020). Van genetische bevindingen naar de klinische praktijk van de psychiater: Hoe genetica precisiepsychiatrie mogelijk kan maken [From genetic findings to the psychiatrist's clinical practice: How genetics can enable precision psychiatry]. Tijdschrift voor Psychiatrie, 62(9), 776-783.
-
Schiller, N. O., Costa, A., & Colomé, A. (2002). Phonological encoding of single words: In search of the lost syllable. In C. Gussenhoven, & N. Warner (Eds.), Laboratory Phonology VII (pp. 35-59). Berlin: Mouton de Gruyter. -
Schiller, N. O., & Caramazza, A. (2002). The selection of grammatical features in word production: The case of plural nouns in German. Brain and Language, 81(1-3), 342-357. doi:10.1006/brln.2001.2529.
Abstract
Two experiments investigate the effect of number congruency using picture–word interference. Native German participants were required to name pictures of single objects (Nase ‘nose’) or two instances of the same object (Nasen ‘noses’) while ignoring simultaneously presented distractor words. Distractor words either had the same number or were different in number. In addition, the type of plural formation (same or different inflectional plural suffix) and the semantic relationship (same or different semantic category) between target and distractor were varied in Experiments 1 and 2. Results showed no effect of number congruency in either experiment. Furthermore, the type of inflectional suffix did not exert an influence on naming latencies in Experiment 1, but semantic relationship led to a significant interference effect in Experiment 2. The results indicate that selection of the number feature diacritic in noun production is not a competitive process. The implications of the results for models of lexical access are discussed. -
Schiller, N. O., Schmitt, B., Peters, J., & Levelt, W. J. M. (2002). 'BAnana' or 'baNAna'? Metrical encoding during speech production [Abstract]. In M. Baumann, A. Keinath, & J. Krems (Eds.), Experimentelle Psychologie: Abstracts der 44. Tagung experimentell arbeitender Psychologen (pp. 195). TU Chemnitz, Philosophische Fakultät.
Abstract
The time course of metrical encoding, i.e. stress, during speech production is investigated. In a first experiment, participants were presented with pictures whose bisyllabic Dutch names had initial or final stress (KAno 'canoe' vs. kaNON 'cannon'; capital letters indicate stressed syllables). Picture names were matched for frequency and object recognition latencies. When participants were asked to judge whether picture names had stress on the first or second syllable, they showed significantly faster decision times for initially stressed targets than for targets with final stress. Experiment 2 replicated this effect with trisyllabic picture names (faster RTs for penultimate stress than for ultimate stress). In our view, these results reflect the incremental phonological encoding process. Wheeldon and Levelt (1995) found that segmental encoding is a process running from the beginning to the end of words. Here, we present evidence that the metrical pattern of words, i.e. stress, is also encoded incrementally. -
Schiller, N. O. (2002). From phonetics to cognitive psychology: Psycholinguistics has it all. In A. Braun, & H. Masthoff (Eds.), Phonetics and its Applications. Festschrift for Jens-Peter Köster on the Occasion of his 60th Birthday. [Beihefte zur Zeitschrift für Dialektologie und Linguistik; 121] (pp. 13-24). Stuttgart: Franz Steiner Verlag. -
Schmiedtová, V., & Schmiedtová, B. (2002). The color spectrum in language: The case of Czech: Cognitive concepts, new idioms and lexical meanings. In H. Gottlieb, J. Mogensen, & A. Zettersten (Eds.), Proceedings of The 10th International Symposium on Lexicography (pp. 285-292). Tübingen: Max Niemeyer Verlag.
Abstract
The representative corpus SYN2000 in the Czech National Corpus (CNK) project contains 100 million word forms taken from different types of texts. I have tried to determine the extent and depth of the linguistic material in the corpus. First, I chose the adjectives indicating the basic colors of the spectrum and other parts of speech (names and adverbs) derived from these adjectives. An analysis of three examples - black, white and red - shows the extent of the linguistic wealth and diversity we are looking at: because of size limitations, no existing dictionary is capable of embracing all analyzed nuances. Currently, we can only hope that the next dictionary of contemporary Czech, built on the basis of the Czech National Corpus, will be electronic. Without the size limitations, we would be able to include many of the fine nuances of language. -
Schoenmakers, G.-J. (2020). Freedom in the Dutch middle-field: Deriving discourse structure at the syntax-pragmatics interface. Glossa: a journal of general linguistics, 5(1): 114. doi:10.5334/gjgl.1307.
Abstract
This paper experimentally explores the optionality of Dutch scrambling structures with a definite object and an adverb. Most researchers argue that such structures are not freely interchangeable, but are subject to a strict discourse template. Existing analyses are based primarily on intuitions of the researchers, while experimental support is scarce. This paper reports on two experiments to gauge the existence of a strict discourse template. The discourse status of definite objects in scrambling clauses is first probed in a fill-in-the-blanks experiment and subsequently manipulated in a speeded judgment experiment. The results of these experiments indicate that scrambling is not as restricted as is commonly claimed. Although mismatches between surface order and pragmatic interpretation lead to a penalty in judgment rates and a rise in reaction times, they nonetheless occur in production and yield fully acceptable structures. Crucially, the penalties and delays emerge only in scrambling clauses with an adverb that is sensitive to focus placement. This paper argues that scrambling does not map onto discourse structure in the strict way proposed in most literature. Instead, a more complex syntax of deriving discourse relations is proposed which submits that the Dutch scrambling pattern results from two familiar processes which apply at the syntax-pragmatics interface: reconstruction and covert raising. -
Schriefers, H., Meyer, A. S., & Levelt, W. J. M. (2002). Exploring the time course of lexical access in language production: Picture word interference studies. In G. Altmann (Ed.), Psycholinguistics: Critical Concepts in Psychology [vol. 5] (pp. 168-191). London: Routledge. -
Seidlmayer, E., Voß, J., Melnychuk, T., Galke, L., Tochtermann, K., Schultz, C., & Förstner, K. U. (2020). ORCID for Wikidata. Data enrichment for scientometric applications. In L.-A. Kaffee, O. Tifrea-Marciuska, E. Simperl, & D. Vrandečić (Eds.), Proceedings of the 1st Wikidata Workshop (Wikidata 2020). Aachen, Germany: CEUR Workshop Proceedings.
Abstract
Due to its numerous bibliometric entries of scholarly articles and connected information, Wikidata can serve as an open and rich source for deep scientometric analyses. However, there are currently certain limitations: while 31.5% of all Wikidata entries represent scientific articles, only 8.9% are entries describing a person, and the number of entries describing a researcher is accordingly even lower. Another issue is the frequent absence of established relations between the scholarly article item and the author item, although the author is already listed in Wikidata. To fill this gap and to improve the content of Wikidata in general, we established a workflow for matching authors and scholarly publications by integrating data from the ORCID (Open Researcher and Contributor ID) database. With this approach we were able to extend Wikidata by more than 12k author-publication relations, and the method can be transferred to other enrichments based on ORCID data. This extension is beneficial for Wikidata users performing bibliometric analyses or using such metadata for other purposes. -
Seifart, F. (2002). El sistema de clasificación nominal del miraña. Bogotá: CCELA/Universidad de los Andes.
-
Seifart, F. (2002). Shape-distinctions picture-object matching task, with 2002 supplement. In S. Kita (Ed.), 2002 Supplement (version 3) for the “Manual” for the field season 2001 (pp. 15-17). Nijmegen: Max Planck Institute for Psycholinguistics. -
Seijdel, N., Tsakmakidis, N., De Haan, E. H. F., Bohte, S. M., & Scholte, H. S. (2020). Depth in convolutional neural networks solves scene segmentation. PLOS Computational Biology, 16: e1008022. doi:10.1371/journal.pcbi.1008022.
Abstract
Feed-forward deep convolutional neural networks (DCNNs) are, under specific conditions, matching and even surpassing human performance in object recognition in natural scenes. This performance suggests that the analysis of a loose collection of image features could support the recognition of natural object categories, without dedicated systems to solve specific visual subtasks. Research in humans however suggests that while feedforward activity may suffice for sparse scenes with isolated objects, additional visual operations ('routines') that aid the recognition process (e.g. segmentation or grouping) are needed for more complex scenes. Linking human visual processing to performance of DCNNs with increasing depth, we here explored if, how, and when object information is differentiated from the backgrounds they appear on. To this end, we controlled the information in both objects and backgrounds, as well as the relationship between them by adding noise, manipulating background congruence and systematically occluding parts of the image. Results indicate that with an increase in network depth, there is an increase in the distinction between object- and background information. For more shallow networks, results indicated a benefit of training on segmented objects. Overall, these results indicate that, de facto, scene segmentation can be performed by a network of sufficient depth. We conclude that the human brain could perform scene segmentation in the context of object identification without an explicit mechanism, by selecting or “binding” features that belong to the object and ignoring other features, in a manner similar to a very deep convolutional neural network. -
Seijdel, N., Jahfari, S., Groen, I. I. A., & Scholte, H. S. (2020). Low-level image statistics in natural scenes influence perceptual decision-making. Scientific Reports, 10: 10573. doi:10.1038/s41598-020-67661-8.
Abstract
A fundamental component of interacting with our environment is gathering and interpretation of sensory information. When investigating how perceptual information influences decision-making, most researchers have relied on manipulated or unnatural information as perceptual input, resulting in findings that may not generalize to real-world scenes. Unlike simplified, artificial stimuli, real-world scenes contain low-level regularities that are informative about the structural complexity, which the brain could exploit. In this study, participants performed an animal detection task on low, medium or high complexity scenes as determined by two biologically plausible natural scene statistics, contrast energy (CE) or spatial coherence (SC). In experiment 1, stimuli were sampled such that CE and SC both influenced scene complexity. Diffusion modelling showed that the speed of information processing was affected by low-level scene complexity. Experiment 2a/b refined these observations by showing how isolated manipulation of SC resulted in weaker but comparable effects, with an additional change in response boundary, whereas manipulation of only CE had no effect. Overall, performance was best for scenes with intermediate complexity. Our systematic definition quantifies how natural scene complexity interacts with decision-making. We speculate that CE and SC serve as an indication to adjust perceptual decision-making based on the complexity of the input. -
Sekine, K., Schoechl, C., Mulder, K., Holler, J., Kelly, S., Furman, R., & Ozyurek, A. (2020). Evidence for children's online integration of simultaneous information from speech and iconic gestures: An ERP study. Language, Cognition and Neuroscience, 35(10), 1283-1294. doi:10.1080/23273798.2020.1737719.
Abstract
Children perceive iconic gestures along with the speech they hear. Previous studies have shown that children integrate information from both modalities. Yet it is not known whether children can integrate both types of information simultaneously as soon as they are available, as adults do, or process them separately initially and integrate them later. Using electrophysiological measures, we examined the online neurocognitive processing of gesture-speech integration in 6- to 7-year-old children. We focused on the N400 event-related potentials component, which is modulated by semantic integration load. Children watched video clips of matching or mismatching gesture-speech combinations, which varied the semantic integration load. The ERPs showed that the amplitude of the N400 was larger in the mismatching condition than in the matching condition. This finding provides the first neural evidence that by the age of 6 or 7, children integrate multimodal semantic information in an online fashion comparable to that of adults. -
Senft, G. (2002). What should the ideal online-archive documenting linguistic data of various (endangered) languages and cultures offer to interested parties? Some ideas of a technically naive linguistic field researcher and potential user. In P. Austin, H. Dry, & P. Wittenburg (Eds.), Proceedings of the international LREC workshop on resources and tools in field linguistics (pp. 11-15). Paris: European Language Resources Association. -
Senft, G. (2020). “.. to grasp the native's point of view..” — A plea for a holistic documentation of the Trobriand Islanders' language, culture and cognition. Russian Journal of Linguistics, 24(1), 7-30. doi:10.22363/2687-0088-2020-24-1-7-30.
Abstract
In his famous introduction to his monograph “Argonauts of the Western Pacific”, Bronislaw Malinowski (1922: 24f.) points out that a “collection of ethnographic statements, characteristic narratives, typical utterances, items of folk-lore and magical formulae has to be given as a corpus inscriptionum, as documents of native mentality”. This is one of the prerequisites to “grasp the native's point of view, his relation to life, to realize his vision of his world”. Malinowski managed to document a “Corpus Inscriptionum Agriculturae Quriviniensis” in his second volume of “Coral Gardens and their Magic” (1935 Vol II: 79-342). But he himself did not manage to come up with a holistic corpus inscriptionum for the Trobriand Islanders. One of the main aims I have been pursuing in my research on the Trobriand Islanders' language, culture, and cognition has been to fill this ethnolinguistic niche. In this essay, I report what I had to do to carry out this complex and ambitious project, what forms and kinds of linguistic and cultural competence I had to acquire, and how I planned my data collection during 16 long- and short-term field trips to the Trobriand Islands between 1982 and 2012. The paper ends with a critical assessment of my Trobriand endeavor. -
Senft, G. (2020). Kampfschild - vayola [Dance or war shield - vayola]. In T. Brüderlin, S. Schien, & S. Stoll (Eds.), Ausgepackt! 125 Jahre Geschichte[n] im Museum Natur und Mensch (pp. 58-59). Freiburg: Michael Imhof Verlag. -
Senft, G. (2020). 32 Kampfschild - dance or war shield - vayola. In T. Brüderlin, & S. Stoll (Eds.), Ausgepackt! 125 Jahre Geschichte[n] im Museum Natur und Mensch. Texte zur Ausstellung, Städtische Museen Freiburg, vom 20. Juni 2020 bis 10. Januar 2021 (pp. 76-77). Freiburg: Städtische Museen. -
Senft, G. (2002). Feldforschung in einer deutschen Fabrik - oder: Trobriand ist überall [Fieldwork in a German factory - or: Trobriand is everywhere]. In H. Fischer (Ed.), Feldforschungen. Erfahrungsberichte zur Einführung (Neufassung) (pp. 207-226). Berlin: Reimer. -
Senft, G. (2002). Aus dem Arbeitsalltag von Gunter Senft, MPI Nijmegen [From the everyday work of Gunter Senft, MPI Nijmegen]. Rundbrief - Forum für Mitglieder und Freunde des Pazifik-Netzwerkes e.V., 51(2), 24-26.
-
Senft, G. (2002). [Review of the book Die Deutsche Südsee 1884-1914. Ein Handbuch ed. by Hermann Joseph Hiery]. Paideuma, 48, 299-303.
-
Senft, G. (2002). Linguistische Feldforschung [Linguistic fieldwork]. In H. M. Müller (Ed.), Arbeitsbuch Linguistik (pp. 353-363). Paderborn: Schöningh UTB. -
Seuren, P. A. M. (2002). Pseudoarguments and pseudocomplements. In B. Nevin (Ed.), The legacy of Zellig Harris: Language and information into the 21st Century: 1 Philosophy of Science, Syntax, and Semantics (pp. 179-206). Amsterdam: John Benjamins. -
Seuren, P. A. M. (2002). The logic of thinking. Koninklijke Nederlandse Akademie van Wetenschappen, Mededelingen van de Afdeling Letterkunde, Nieuwe Reeks, 65(9), 5-35.
-
Seuren, P. A. M. (2002). Clitic clusters in French and Italian. In H. Jacobs, & L. Wetzels (Eds.), Liber Amicorum Bernard Bichakjian (pp. 217-233). Maastricht: Shaker. -
Seuren, P. A. M. (2002). Existential import. In D. De Jongh, M. Nilsenová, & H. Zeevat (Eds.), Proceedings of The 3rd and 4th International Symposium on Language, Logic and Computation. Amsterdam: ILLC Scientific Publ. U. of Amsterdam. -
Seuren, P. A. M. (2002). [Review of the book Indigenous Grammar Across Cultures ed. by Hannes Kniffka]. Linguistics, 40(5), 1090-1096.
-
Shao, Z., & Rommers, J. (2020). How a question context aids word production: Evidence from the picture–word interference paradigm. Quarterly Journal of Experimental Psychology, 73(2), 165-173. doi:10.1177/1747021819882911.
Abstract
Difficulties in saying the right word at the right time arise at least in part because multiple response candidates are simultaneously activated in the speaker’s mind. The word selection process has been simulated using the picture–word interference task, in which participants name pictures while ignoring a superimposed written distractor word. However, words are usually produced in context, in the service of achieving a communicative goal. Two experiments addressed the questions whether context influences word production, and if so, how. We embedded the picture–word interference task in a dialogue-like setting, in which participants heard a question and named a picture as an answer to the question while ignoring a superimposed distractor word. The conversational context was either constraining or nonconstraining towards the answer. Manipulating the relationship between the picture name and the distractor, we focused on two core processes of word production: retrieval of semantic representations (Experiment 1) and phonological encoding (Experiment 2). The results of both experiments showed that naming reaction times (RTs) were shorter when preceded by constraining contexts as compared with nonconstraining contexts. Critically, constraining contexts decreased the effect of semantically related distractors but not the effect of phonologically related distractors. This suggests that conversational contexts can help speakers with aspects of the meaning of to-be-produced words, but phonological encoding processes still need to be performed as usual. -
Sharoh, D. (2020). Advances in layer specific fMRI for the study of language, cognition and directed brain networks. PhD Thesis, Radboud University Nijmegen, Nijmegen.
-
Sharpe, V., Weber, K., & Kuperberg, G. R. (2020). Impairments in probabilistic prediction and Bayesian learning can explain reduced neural semantic priming in schizophrenia. Schizophrenia Bulletin, 46(6), 1558-1566. doi:10.1093/schbul/sbaa069.
Abstract
It has been proposed that abnormalities in probabilistic prediction and dynamic belief updating explain the multiple features of schizophrenia. Here, we used electroencephalography (EEG) to ask whether these abnormalities can account for the well-established reduction in semantic priming observed in schizophrenia under nonautomatic conditions. We isolated predictive contributions to the neural semantic priming effect by manipulating the prime’s predictive validity and minimizing retroactive semantic matching mechanisms. We additionally examined the link between prediction and learning using a Bayesian model that probed dynamic belief updating as participants adapted to the increase in predictive validity. We found that patients were less likely than healthy controls to use the prime to predictively facilitate semantic processing on the target, resulting in a reduced N400 effect. Moreover, the trial-by-trial output of our Bayesian computational model explained between-group differences in trial-by-trial N400 amplitudes as participants transitioned from conditions of lower to higher predictive validity. These findings suggest that, compared with healthy controls, people with schizophrenia are less able to mobilize predictive mechanisms to facilitate processing at the earliest stages of accessing the meanings of incoming words. This deficit may be linked to a failure to adapt to changes in the broader environment. This reciprocal relationship between impairments in probabilistic prediction and Bayesian learning/adaptation may drive a vicious cycle that maintains cognitive disturbances in schizophrenia. -
Shen, C., & Janse, E. (2020). Maximum speech performance and executive control in young adult speakers. Journal of Speech, Language, and Hearing Research, 63, 3611-3627. doi:10.1044/2020_JSLHR-19-00257.
Abstract
Purpose
This study investigated whether maximum speech performance, more specifically, the ability to rapidly alternate between similar syllables during speech production, is associated with executive control abilities in a nonclinical young adult population.
Method
Seventy-eight young adult participants completed two speech tasks, both operationalized as maximum performance tasks, to index their articulatory control: a diadochokinetic (DDK) task with nonword and real-word syllable sequences and a tongue-twister task. Additionally, participants completed three cognitive tasks, each covering one element of executive control (a Flanker interference task to index inhibitory control, a letter–number switching task to index cognitive switching, and an operation span task to index updating of working memory). Linear mixed-effects models were fitted to investigate how well maximum speech performance measures can be predicted by elements of executive control.
Results
Participants' cognitive switching ability was associated with their accuracy in both the DDK and tongue-twister speech tasks. Additionally, nonword DDK accuracy was more strongly associated with executive control than real-word DDK accuracy (which has to be interpreted with caution). None of the executive control abilities related to the maximum rates at which participants performed the two speech tasks.
Conclusion
These results underscore the association between maximum speech performance and executive control (cognitive switching in particular). -
Shin, J., Ma, S., Hofer, E., Patel, Y., Vosberg, D. E., Tilley, S., Roshchupkin, G. V., Sousa, A. M. M., Jian, X., Gottesman, R., Mosley, T. H., Fornage, M., Saba, Y., Pirpamer, L., Schmidt, R., Schmidt, H., Carrion Castillo, A., Crivello, F., Mazoyer, B., Bis, J. C., Li, S., Yang, Q., Luciano, M., Karama, S., Lewis, L., Bastin, M. E., Harris, M. A., Wardlaw, J. M., Deary, I. E., Scholz, M., Loeffler, M., Witte, A. V., Beyer, F., Villringer, A., Armstrong, N. F., Mather, K. A., Ames, D., Jiang, J., Kwok, J. B., Schofield, P. R., Thalamuthu, A., Trollor, J. N., Wright, M. J., Brodaty, H., Wen, W., Sachdev, P. S., Terzikhan, N., Evans, T. E., Adams, H. H. H. H., Ikram, M. A., Frenzel, S., Van der Auwera-Palitschka, S., Wittfeld, K., Bülow, R., Grabe, H. J., Tzourio, C., Mishra, A., Maingault, S., Debette, S., Gillespie, N. A., Franz, C. E., Kremen, W. S., Ding, L., Jahanshad, N., the ENIGMA Consortium, Sestan, N., Pausova, Z., Seshadri, S., Paus, T., & the neuroCHARGE Working Group (2020). Global and regional development of the human cerebral cortex: Molecular architecture and occupational aptitudes. Cerebral Cortex, 30(7), 4121-4139. doi:10.1093/cercor/bhaa035.
Abstract
We have carried out meta-analyses of genome-wide association studies (GWAS) (n = 23 784) of the first two principal components (PCs) that group together cortical regions with shared variance in their surface area. PC1 (global) captured variations of most regions, whereas PC2 (visual) was specific to the primary and secondary visual cortices. We identified a total of 18 (PC1) and 17 (PC2) independent loci, which were replicated in another 25 746 individuals. The loci of the global PC1 included those associated previously with intracranial volume and/or general cognitive function, such as MAPT and IGF2BP1. The loci of the visual PC2 included DAAM1, a key player in the planar-cell-polarity pathway. We then tested associations with occupational aptitudes and, as predicted, found that the global PC1 was associated with General Learning Ability, and the visual PC2 was associated with the Form Perception aptitude. These results suggest that interindividual variations in global and regional development of the human cerebral cortex (and its molecular architecture) cascade—albeit in a very limited manner—to behaviors as complex as the choice of one’s occupation. -
Sjerps, M. J., Decuyper, C., & Meyer, A. S. (2020). Initiation of utterance planning in response to pre-recorded and “live” utterances. Quarterly Journal of Experimental Psychology, 73(3), 357-374. doi:10.1177/1747021819881265.
Abstract
In everyday conversation, interlocutors often plan their utterances while listening to their conversational partners, thereby achieving short gaps between their turns. Important issues for current psycholinguistics are how interlocutors distribute their attention between listening and speech planning and how speech planning is timed relative to listening. Laboratory studies addressing these issues have used a variety of paradigms, some of which have involved using recorded speech to which participants responded, whereas others have involved interactions with confederates. This study investigated how this variation in the speech input affected the participants’ timing of speech planning. In Experiment 1, participants responded to utterances produced by a confederate, who sat next to them and looked at the same screen. In Experiment 2, they responded to recorded utterances of the same confederate. Analyses of the participants’ speech, their eye movements, and their performance in a concurrent tapping task showed that, compared with recorded speech, the presence of the confederate increased the processing load for the participants, but did not alter their global sentence planning strategy. These results have implications for the design of psycholinguistic experiments and theories of listening and speaking in dyadic settings. -
Slobin, D. I. (2002). Cognitive and communicative consequences of linguistic diversity. In S. Strömqvist (Ed.), The diversity of languages and language learning (pp. 7-23). Lund, Sweden: Lund University, Centre for Languages and Literature. -
Slonimska, A., Ozyurek, A., & Capirci, O. (2020). The role of iconicity and simultaneity for efficient communication: The case of Italian Sign Language (LIS). Cognition, 200: 104246. doi:10.1016/j.cognition.2020.104246.
Abstract
A fundamental assumption about language is that, regardless of language modality, it faces the linearization problem, i.e., an event that occurs simultaneously in the world has to be split in language to be organized on a temporal scale. However, the visual modality of signed languages allows its users not only to express meaning in a linear manner but also to use iconicity and multiple articulators together to encode information simultaneously. Accordingly, in cases when it is necessary to encode informatively rich events, signers can take advantage of simultaneous encoding in order to represent information about different referents and their actions simultaneously. This in turn would lead to more iconic and direct representation. Up to now, there has been no experimental study focusing on simultaneous encoding of information in signed languages and its possible advantage for efficient communication. In the present study, we assessed how many information units can be encoded simultaneously in Italian Sign Language (LIS) and whether the amount of simultaneously encoded information varies based on the amount of information that is required to be expressed. Twenty-three deaf adults participated in a director-matcher game in which they described 30 images of events that varied in amount of information they contained. Results revealed that as the information that had to be encoded increased, signers also increased use of multiple articulators to encode different information (i.e., kinematic simultaneity) and density of simultaneously encoded information in their production. Present findings show how the fundamental properties of signed languages, i.e., iconicity and simultaneity, are used for the purpose of efficient information encoding in Italian Sign Language (LIS).
Additional information
Supplementary data -
Smalley, S. L., Kustanovich, V., Minassian, S. L., Stone, J. L., Ogdie, M. N., McGough, J. J., McCracken, J. T., MacPhie, I. L., Francks, C., Fisher, S. E., Cantor, R. M., Monaco, A. P., & Nelson, S. F. (2002). Genetic linkage of Attention-Deficit/Hyperactivity Disorder on chromosome 16p13, in a region implicated in autism. American Journal of Human Genetics, 71(4), 959-963. doi:10.1086/342732.
Abstract
Attention-deficit/hyperactivity disorder (ADHD) is the most commonly diagnosed behavioral disorder in childhood and likely represents an extreme of normal behavior. ADHD significantly impacts learning in school-age children and leads to impaired functioning throughout the life span. There is strong evidence for a genetic etiology of the disorder, although putative alleles, principally in dopamine-related pathways suggested by candidate-gene studies, have very small effect sizes. We use affected-sib-pair analysis in 203 families to localize the first major susceptibility locus for ADHD to a 12-cM region on chromosome 16p13 (maximum LOD score 4.2; P=.000005), building upon an earlier genomewide scan of this disorder. The region overlaps that highlighted in three genome scans for autism, a disorder in which inattention and hyperactivity are common, and physically maps to a 7-Mb region on 16p13. These findings suggest that variations in a gene on 16p13 may contribute to common deficits found in both ADHD and autism.
-
Snijders, T. M., Benders, T., & Fikkert, P. (2020). Infants segment words from songs - an EEG study. Brain Sciences, 10(1): 39. doi:10.3390/brainsci10010039.
Abstract
Children’s songs are omnipresent and highly attractive stimuli in infants’ input. Previous work suggests that infants process linguistic–phonetic information from simplified sung melodies. The present study investigated whether infants learn words from ecologically valid children’s songs. Testing 40 Dutch-learning 10-month-olds in a familiarization-then-test electroencephalography (EEG) paradigm, this study asked whether infants can segment repeated target words embedded in songs during familiarization and subsequently recognize those words in continuous speech in the test phase. To replicate previous speech work and compare segmentation across modalities, infants participated in both song and speech sessions. Results showed a positive event-related potential (ERP) familiarity effect to the final compared to the first target occurrences during both song and speech familiarization. No evidence was found for word recognition in the test phase following either song or speech. Comparisons across the stimuli of the present and a comparable previous study suggested that acoustic prominence and speech rate may have contributed to the polarity of the ERP familiarity effect and its absence in the test phase. Overall, the present study provides evidence that 10-month-old infants can segment words embedded in songs, and it raises questions about the acoustic and other factors that enable or hinder infant word segmentation from songs and speech. -
Sønderby, I. E., Gústafsson, Ó., Doan, N. T., Hibar, D. P., Martin-Brevet, S., Abdellaoui, A., Ames, D., Amunts, K., Andersson, M., Armstrong, N. J., Bernard, M., Blackburn, N., Blangero, J., Boomsma, D. I., Bralten, J., Brattbak, H.-R., Brodaty, H., Brouwer, R. M., Bülow, R., Calhoun, V., Caspers, S., Cavalleri, G., Chen, C.-H., Cichon, S., Ciufolini, S., Corvin, A., Crespo-Facorro, B., Curran, J. E., Dale, A. M., Dalvie, S., Dazzan, P., De Geus, E. J. C., De Zubicaray, G. I., De Zwarte, S. M. C., Delanty, N., Den Braber, A., Desrivières, S., Donohoe, G., Draganski, B., Ehrlich, S., Espeseth, T., Fisher, S. E., Franke, B., Frouin, V., Fukunaga, M., Gareau, T., Glahn, D. C., Grabe, H., Groenewold, N. A., Haavik, J., Håberg, A., Hashimoto, R., Hehir-Kwa, J. Y., Heinz, A., Hillegers, M. H. J., Hoffmann, P., Holleran, L., Hottenga, J.-J., Hulshoff, H. E., Ikeda, M., Jahanshad, N., Jernigan, T., Jockwitz, C., Johansson, S., Jonsdottir, G. A., Jönsson, E. G., Kahn, R., Kaufmann, T., Kelly, S., Kikuchi, M., Knowles, E. E. M., Kolskår, K. K., Kwok, J. B., Le Hellard, S., Leu, C., Liu, J., Lundervold, A. J., Lundervold, A., Martin, N. G., Mather, K., Mathias, S. R., McCormack, M., McMahon, K. L., McRae, A., Milaneschi, Y., Moreau, C., Morris, D., Mothersill, D., Mühleisen, T. W., Murray, R., Nordvik, J. E., Nyberg, L., Olde Loohuis, L. M., Ophoff, R., Paus, T., Pausova, Z., Penninx, B., Peralta, J. M., Pike, B., Prieto, C., Pudas, S., Quinlan, E., Quintana, D. S., Reinbold, C. S., Reis Marques, T., Reymond, A., Richard, G., Rodriguez-Herreros, B., Roiz-Santiañez, R., Rokicki, J., Rucker, J., Sachdev, P., Sanders, A.-M., Sando, S. B., Schmaal, L., Schofield, P. R., Schork, A. J., Schumann, G., Shin, J., Shumskaya, E., Sisodiya, S., Steen, V. M., Stein, D. J., Steinberg, S., Strike, L., Teumer, A., Thalamuthu, A., Tordesillas-Gutierrez, D., Turner, J., Ueland, T., Uhlmann, A., Ulfarsson, M. O., Van 't Ent, D., Van der Meer, D., Van Haren, N. E. M., Vaskinn, A., Vassos, E., Walters, G. B., Wang, Y., Wen, W., Whelan, C. D., Wittfeld, K., Wright, M., Yamamori, H., Zayats, T., Agartz, I., Westlye, L. T., Jacquemont, S., Djurovic, S., Stefansson, H., Stefansson, K., Thompson, P., & Andreassen, O. A. (2020). Dose response of the 16p11.2 distal copy number variant on intracranial volume and basal ganglia. Molecular Psychiatry, 25, 584-602. doi:10.1038/s41380-018-0118-1.
Abstract
Carriers of large recurrent copy number variants (CNVs) have a higher risk of developing neurodevelopmental disorders. The 16p11.2 distal CNV predisposes carriers to e.g., autism spectrum disorder and schizophrenia. We compared subcortical brain volumes of 12 16p11.2 distal deletion and 12 duplication carriers to 6882 non-carriers from the large-scale brain Magnetic Resonance Imaging collaboration, ENIGMA-CNV. After stringent CNV calling procedures, and standardized FreeSurfer image analysis, we found negative dose-response associations with copy number on intracranial volume and on regional caudate, pallidum and putamen volumes (β = −0.71 to −1.37; P < 0.0005). In an independent sample, consistent results were obtained, with significant effects in the pallidum (β = −0.95, P = 0.0042). The two data sets combined showed significant negative dose-response for the accumbens, caudate, pallidum, putamen and ICV (P = 0.0032, 8.9 × 10−6, 1.7 × 10−9, 3.5 × 10−12 and 1.0 × 10−4, respectively). Full scale IQ was lower in both deletion and duplication carriers compared to non-carriers. This is the first brain MRI study of the impact of the 16p11.2 distal CNV, and we demonstrate a specific effect on subcortical brain structures, suggesting a neuropathological pattern underlying the neurodevelopmental syndromes. -
Speed, L. J., & Majid, A. (2020). Grounding language in the neglected senses of touch, taste, and smell. Cognitive Neuropsychology, 37(5-6), 363-392. doi:10.1080/02643294.2019.1623188.
Abstract
Grounded theories hold sensorimotor activation is critical to language processing. Such theories have focused predominantly on the dominant senses of sight and hearing. Relatively fewer studies have assessed mental simulation within touch, taste, and smell, even though they are critically implicated in communication for important domains, such as health and wellbeing. We review work that sheds light on whether perceptual activation from lesser studied modalities contribute to meaning in language. We critically evaluate data from behavioural, imaging, and cross-cultural studies. We conclude that evidence for sensorimotor simulation in touch, taste, and smell is weak. Comprehending language related to these senses may instead rely on simulation of emotion, as well as crossmodal simulation of the “higher” senses of vision and audition. Overall, the data suggest the need for a refinement of embodiment theories, as not all sensory modalities provide equally strong evidence for mental simulation. -
Spinelli, E., Cutler, A., & McQueen, J. M. (2002). Resolution of liaison for lexical access in French. Revue Française de Linguistique Appliquée, 7, 83-96.
Abstract
Spoken word recognition involves automatic activation of lexical candidates compatible with the perceived input. In running speech, words abut one another without intervening gaps, and syllable boundaries can mismatch with word boundaries. For instance, liaison in ’petit agneau’ creates a syllable beginning with a consonant although ’agneau’ begins with a vowel. In two cross-modal priming experiments we investigate how French listeners recognise words in liaison environments. These results suggest that the resolution of liaison in part depends on acoustic cues which distinguish liaison from non-liaison consonants, and in part on the availability of lexical support for a liaison interpretation. -
Stivers, T. (2002). 'Symptoms only' and 'Candidate diagnoses': Presenting the problem in pediatric encounters. Health Communication, 14(3), 299-338.
-
Stivers, T. (2002). Overt parent pressure for antibiotic medication in pediatric encounters. Social Science and Medicine, 54(7), 1111-1130.
-
Sumer, B., & Ozyurek, A. (2020). No effects of modality in development of locative expressions of space in signing and speaking children. Journal of Child Language, 47(6), 1101-1131. doi:10.1017/S0305000919000928.
Abstract
Linguistic expressions of locative spatial relations in sign languages are mostly visually-motivated representations of space involving mapping of entities and spatial relations between them onto the hands and the signing space. These are also morphologically complex forms. It is debated whether modality-specific aspects of spatial expressions modulate spatial language development differently in signing compared to speaking children. In a picture description task, we compared the use of locative expressions for containment, support and occlusion relations by deaf children acquiring Turkish Sign Language and hearing children acquiring Turkish (3;5-9;11 years). Unlike previous reports suggesting a boosting effect of iconicity, and/or a hindering effect of morphological complexity of the locative forms in sign languages, our results show similar developmental patterns for signing and speaking children's acquisition of these forms. Our results suggest the primacy of cognitive development guiding the acquisition of locative expressions by speaking and signing children. -
Swingley, D., & Fernald, A. (2002). Recognition of words referring to present and absent objects by 24-month-olds. Journal of Memory and Language, 46(1), 39-56. doi:10.1006/jmla.2001.2799.
Abstract
Three experiments tested young children's efficiency in recognizing words in speech referring to absent objects. Seventy-two 24-month-olds heard sentences containing target words denoting objects that were or were not present in a visual display. Children's eye movements were monitored as they heard the sentences. Three distinct patterns of response were shown. Children hearing a familiar word that was an appropriate label for the currently fixated picture maintained their gaze. Children hearing a familiar word that could not apply to the currently fixated picture rapidly shifted their gaze to the alternative picture, whether that alternative was the named target or not, and then continued to search for an appropriate referent. Finally, children hearing an unfamiliar word shifted their gaze slowly and irregularly. This set of outcomes is interpreted as evidence that by 24 months, rapid activation in word recognition does not depend on the presence of the words' referents. Rather, very young children are capable of quickly and efficiently interpreting words in the absence of visual supporting context. -
Swingley, D., & Aslin, R. N. (2002). Lexical neighborhoods and the word-form representations of 14-month-olds. Psychological Science, 13(5), 480-484. doi:10.1111/1467-9280.00485.
Abstract
The degree to which infants represent phonetic detail in words has been a source of controversy in phonology and developmental psychology. One prominent hypothesis holds that infants store words in a vague or inaccurate form until the learning of similar-sounding neighbors forces attention to subtle phonetic distinctions. In the experiment reported here, we used a visual fixation task to assess word recognition. We present the first evidence indicating that, in fact, the lexical representations of 14- and 15-month-olds are encoded in fine detail, even when this detail is not functionally necessary for distinguishing similar words in the infant’s vocabulary. Exposure to words is sufficient for well-specified lexical representations, even well before the vocabulary spurt. These results suggest developmental continuity in infants’ representations of speech: As infants begin to build a vocabulary and learn word meanings, they use the perceptual abilities previously demonstrated in tasks testing the discrimination and categorization of meaningless syllables. -
Takashima, A., Konopka, A. E., Meyer, A. S., Hagoort, P., & Weber, K. (2020). Speaking in the brain: The interaction between words and syntax in sentence production. Journal of Cognitive Neuroscience, 32(8), 1466-1483. doi:10.1162/jocn_a_01563.
Abstract
This neuroimaging study investigated the neural infrastructure of sentence-level language production. We compared brain activation patterns, as measured with BOLD-fMRI, during production of sentences that differed in verb argument structures (intransitives, transitives, ditransitives) and the lexical status of the verb (known verbs or pseudoverbs). The experiment consisted of 30 mini-blocks of six sentences each. Each mini-block started with an example for the type of sentence to be produced in that block. On each trial in the mini-blocks, participants were first given the (pseudo-)verb followed by three geometric shapes to serve as verb arguments in the sentences. Production of sentences with known verbs yielded greater activation compared to sentences with pseudoverbs in the core language network of the left inferior frontal gyrus, the left posterior middle temporal gyrus, and a more posterior middle temporal region extending into the angular gyrus, analogous to effects observed in language comprehension. Increasing the number of verb arguments led to greater activation in an overlapping left posterior middle temporal gyrus/angular gyrus area, particularly for known verbs, as well as in the bilateral precuneus. Thus, producing sentences with more complex structures using existing verbs leads to increased activation in the language network, suggesting some reliance on memory retrieval of stored lexical–syntactic information during sentence production. This study thus provides evidence from sentence-level language production in line with functional models of the language network that have so far been mainly based on single-word production, comprehension, and language processing in aphasia. -
Tan, Y., & Hagoort, P. (2020). Catecholaminergic modulation of semantic processing in sentence comprehension. Cerebral Cortex, 30(12), 6426-6443. doi:10.1093/cercor/bhaa204.
Abstract
Catecholamine (CA) function has been widely implicated in cognitive functions that are tied to the prefrontal cortex and striatal areas. The present study investigated the effects of methylphenidate, which is a CA agonist, on the electroencephalogram (EEG) response related to semantic processing using a double-blind, placebo-controlled, randomized, crossover, within-subject design. Forty-eight healthy participants read semantically congruent or incongruent sentences after receiving 20-mg methylphenidate or a placebo while their brain activity was monitored with EEG. To probe whether the catecholaminergic modulation is task-dependent, in one condition participants had to focus on comprehending the sentences, while in the other condition, they only had to attend to the font size of the sentence. The results demonstrate that methylphenidate has a task-dependent effect on semantic processing. Compared to placebo, when semantic processing was task-irrelevant, methylphenidate enhanced the detection of semantic incongruence as indexed by a larger N400 amplitude in the incongruent sentences; when semantic processing was task-relevant, methylphenidate induced a larger N400 amplitude in the semantically congruent condition, which was followed by a larger late positive complex effect. These results suggest that CA-related neurotransmitters influence language processing, possibly through the projections between the prefrontal cortex and the striatum, which contain many CA receptors. -
Ten Oever, S., Meierdierks, T., Duecker, F., De Graaf, T., & Sack, A. (2020). Phase-coded oscillatory ordering promotes the separation of closely matched representations to optimize perceptual discrimination. iScience, 23(7): 101282. doi:10.1016/j.isci.2020.101282.
Abstract
Low-frequency oscillations are proposed to be involved in separating neuronal representations belonging to different items. Although item-specific neuronal activity was found to cluster on different oscillatory phases, the influence of this mechanism on perception is unknown. Here, we investigated the perceptual consequences of neuronal item separation through oscillatory clustering. In an electroencephalographic experiment, participants categorized sounds parametrically varying in pitch, relative to an arbitrary pitch boundary. Pre-stimulus theta and alpha phase biased near-boundary sound categorization to one category or the other. Phase also modulated whether evoked neuronal responses contributed stronger to the fit of the sound envelope of one or another category. Intriguingly, participants with stronger oscillatory clustering (phase strongly biasing sound categorization) in the theta, but not alpha, range had steeper perceptual psychometric slopes (sharper sound category discrimination). These results indicate that neuronal sorting by phase directly influences subsequent perception and has a positive impact on discrimination performance.
Additional information
Supplemental Information -
Ten Oever, S., De Weerd, P., & Sack, A. T. (2020). Phase-dependent amplification of working memory content and performance. Nature Communications, 11: 1832. doi:10.1038/s41467-020-15629-7.
Abstract
Successful working memory performance has been related to oscillatory mechanisms operating in low-frequency ranges. Yet, their mechanistic interaction with the distributed neural activity patterns representing the content of the memorized information remains unclear. Here, we record EEG during a working memory retention interval, while a task-irrelevant, high-intensity visual impulse stimulus is presented to boost the read-out of distributed neural activity related to the content held in working memory. Decoding of this activity with a linear classifier reveals significant modulations of classification accuracy by oscillatory phase in the theta/alpha ranges at the moment of impulse presentation. Additionally, behavioral accuracy is highest at the phases showing maximized decoding accuracy. At those phases, behavioral accuracy is higher in trials with the impulse compared to no-impulse trials. This constitutes the first evidence in humans that working memory information is maximized within limited phase ranges, and that phase-selective, sensory impulse stimulation can improve working memory. -
Teng, X., Ma, M., Yang, J., Blohm, S., Cai, Q., & Tian, X. (2020). Constrained structure of ancient Chinese poetry facilitates speech content grouping. Current Biology, 30, 1299-1305. doi:10.1016/j.cub.2020.01.059.
Abstract
Ancient Chinese poetry is constituted by structured language that deviates from ordinary language usage [1, 2]; its poetic genres impose unique combinatory constraints on linguistic elements [3]. How does the constrained poetic structure facilitate speech segmentation when common linguistic [4, 5, 6, 7, 8] and statistical cues [5, 9] are unreliable to listeners in poems? We generated artificial Jueju, which arguably has the most constrained structure in ancient Chinese poetry, and presented each poem twice as an isochronous sequence of syllables to native Mandarin speakers while conducting magnetoencephalography (MEG) recording. We found that listeners deployed their prior knowledge of Jueju to build the line structure and to establish the conceptual flow of Jueju. Unprecedentedly, we found a phase precession phenomenon indicating predictive processes of speech segmentation—the neural phase advanced faster after listeners acquired knowledge of incoming speech. The statistical co-occurrence of monosyllabic words in Jueju negatively correlated with speech segmentation, which provides an alternative perspective on how statistical cues facilitate speech segmentation. Our findings suggest that constrained poetic structures serve as a temporal map for listeners to group speech contents and to predict incoming speech signals. Listeners can parse speech streams by using not only grammatical and statistical cues but also their prior knowledge of the form of language.
Additional information
Supplemental Information -
Ter Bekke, M., Drijvers, L., & Holler, J. (2020). The predictive potential of hand gestures during conversation: An investigation of the timing of gestures in relation to speech. In Proceedings of the 7th GESPIN - Gesture and Speech in Interaction Conference. Stockholm: KTH Royal Institute of Technology.
Abstract
In face-to-face conversation, recipients might use the bodily movements of the speaker (e.g. gestures) to facilitate language processing. It has been suggested that one way through which this facilitation may happen is prediction. However, for this to be possible, gestures would need to precede speech, and it is unclear whether this is true during natural conversation.
In a corpus of Dutch conversations, we annotated hand gestures that represent semantic information and occurred during questions, and the word(s) which corresponded most closely to the gesturally depicted meaning. Thus, we tested whether representational gestures temporally precede their lexical affiliates. Further, to see whether preceding gestures may indeed facilitate language processing, we asked whether the gesture-speech asynchrony predicts the response time to the question the gesture is part of.
Gestures and their strokes (most meaningful movement component) indeed preceded the corresponding lexical information, thus demonstrating their predictive potential. However, while questions with gestures got faster responses than questions without, there was no evidence that questions with larger gesture-speech asynchronies got faster responses. These results suggest that gestures indeed have the potential to facilitate predictive language processing, but further analyses on larger datasets are needed to test for links between asynchrony and processing advantages. -
Ter Hark, S. E., Jamain, S., Schijven, D., Lin, B. D., Bakker, M. K., Boland-Auge, A., Deleuze, J.-F., Troudet, R., Malhotra, A. K., Gülöksüz, S., Vinkers, C. H., Ebdrup, B. H., Kahn, R. S., Leboyer, M., & Luykx, J. J. (2020). A new genetic locus for antipsychotic-induced weight gain: A genome-wide study of first-episode psychosis patients using amisulpride (from the OPTiMiSE cohort). Journal of Psychopharmacology, 34(5), 524-531. doi:10.1177/0269881120907972.
Abstract
Background: Antipsychotic-induced weight gain is a common and debilitating side effect of antipsychotics. Although genome-wide association studies of antipsychotic-induced weight gain have been performed, few genome-wide loci have been discovered. Moreover, these genome-wide association studies have included a wide variety of antipsychotic compounds. Aims: We aim to gain more insight into the genomic loci affecting antipsychotic-induced weight gain. Given the variable pharmacological properties of antipsychotics, we hypothesized that targeting a single antipsychotic compound would provide new clues about genomic loci affecting antipsychotic-induced weight gain. Methods: All subjects included in this genome-wide association study (n=339) were first-episode schizophrenia spectrum disorder patients treated with amisulpride and were minimally medicated (defined as antipsychotic use <2 weeks in the previous year and/or <6 weeks lifetime). Weight gain was defined as the increase in body mass index from before until approximately 1 month after amisulpride treatment. Results: Our genome-wide association analyses for antipsychotic-induced weight gain yielded one genome-wide significant hit (rs78310016; β=1.05; p=3.66 × 10⁻⁸; n=206) in a locus not previously associated with antipsychotic-induced weight gain or body mass index. Minor allele carriers had an odds ratio of 3.98 (p=1.0 × 10⁻³) for clinically meaningful antipsychotic-induced weight gain (≥7% of baseline weight). In silico analysis elucidated a chromatin interaction with 3-Hydroxy-3-Methylglutaryl-CoA Synthase 1. In an attempt to replicate single-nucleotide polymorphisms previously associated with antipsychotic-induced weight gain, we found that none were associated with amisulpride-induced weight gain. Conclusion: Our findings suggest the involvement of rs78310016 and possibly 3-Hydroxy-3-Methylglutaryl-CoA Synthase 1 in antipsychotic-induced weight gain.
In line with the unique binding profile of this atypical antipsychotic, our findings furthermore hint that the biological mechanisms underlying amisulpride-induced weight gain differ from those underlying weight gain induced by other atypical antipsychotics.
Additional information
Supplementary_Figures_and_Tables_Optimise_GWAS.pdf -
Ter Keurs, M., Brown, C. M., & Hagoort, P. (2002). Lexical processing of vocabulary class in patients with Broca's aphasia: An event-related brain potential study on agrammatic comprehension. Neuropsychologia, 40(9), 1547-1561. doi:10.1016/S0028-3932(02)00025-8.
Abstract
This paper presents electrophysiological evidence of an impairment in the on-line processing of word class information in patients with Broca’s aphasia with agrammatic comprehension. Event-related brain potentials (ERPs) were recorded from the scalp while Broca patients and non-aphasic control subjects read open- and closed-class words that appeared one at a time on a PC screen. Separate waveforms were computed for open- and closed-class words. The non-aphasic control subjects showed a modulation of an early left anterior negativity in the 210–325 ms as a function of vocabulary class (VC), and a late left anterior negative shift to closed-class words in the 400–700 ms epoch. An N400 effect was present in both control subjects and Broca patients. We have taken the early electrophysiological differences to reflect the first availability of word-category information from the mental lexicon. The late differences can be related to post-lexical processing. In contrast to the control subjects, the Broca patients showed no early VC effect and no late anterior shift to closed-class words. The results support the view that an incomplete and/or delayed availability of word-class information might be an important factor in Broca’s agrammatic comprehension. -
Terband, H., Rodd, J., & Maas, E. (2020). Testing hypotheses about the underlying deficit of Apraxia of Speech (AOS) through computational neural modelling with the DIVA model. International Journal of Speech-Language Pathology, 22(4), 475-486. doi:10.1080/17549507.2019.1669711.
Abstract
Purpose: A recent behavioural experiment featuring a noise masking paradigm suggests that Apraxia of Speech (AOS) reflects a disruption of feedforward control, whereas feedback control is spared and plays a more prominent role in achieving and maintaining segmental contrasts. The present study set out to validate the interpretation of AOS as a possible feedforward impairment using computational neural modelling with the DIVA (Directions Into Velocities of Articulators) model.
Method: In a series of computational simulations with the DIVA model featuring a noise-masking paradigm mimicking the behavioural experiment, we investigated the effect of a feedforward, feedback, feedforward + feedback, and an upper motor neuron dysarthria impairment on average vowel spacing and dispersion in the production of six /bVt/ speech targets.
Result: The simulation results indicate that the output of the model with the simulated feedforward deficit best resembled the group findings for the human speakers with AOS.
Conclusion: These results provide support to the interpretation of the human observations, corroborating the notion that AOS can be conceptualised as a deficit in feedforward control.