Publications

  • Skirgård, H., Haynie, H. J., Blasi, D. E., Hammarström, H., Collins, J., Latarche, J. J., Lesage, J., Weber, T., Witzlack-Makarevich, A., Passmore, S., Chira, A., Maurits, L., Dinnage, R., Dunn, M., Reesink, G., Singer, R., Bowern, C., Epps, P. L., Hill, J., Vesakoski, O., Robbeets, M., Abbas, N. K., Auer, D., Bakker, N. A., Barbos, G., Borges, R. D., Danielsen, S., Dorenbusch, L., Dorn, E., Elliott, J., Falcone, G., Fischer, J., Ghanggo Ate, Y., Gibson, H., Göbel, H.-P., Goodall, J. A., Gruner, V., Harvey, A., Hayes, R., Heer, L., Herrera Miranda, R. E., Hübler, N., Huntington-Rainey, B. H., Ivani, J. K., Johns, M., Just, E., Kashima, E., Kipf, C., Klingenberg, J. V., König, N., Koti, A., Kowalik, R. G. A., Krasnoukhova, O., Lindvall, N. L. M., Lorenzen, M., Lutzenberger, H., Martins, T. R., Mata German, C., Van der Meer, S., Montoya Samamé, J., Müller, M., Muradoglu, S., Neely, K., Nickel, J., Norvik, M., Oluoch, C. A., Peacock, J., Pearey, I. O., Peck, N., Petit, S., Pieper, S., Poblete, M., Prestipino, D., Raabe, L., Raja, A., Reimringer, J., Rey, S. C., Rizaew, J., Ruppert, E., Salmon, K. K., Sammet, J., Schembri, R., Schlabbach, L., Schmidt, F. W., Skilton, A., Smith, W. D., De Sousa, H., Sverredal, K., Valle, D., Vera, J., Voß, J., Witte, T., Wu, H., Yam, S., Ye, J., Yong, M., Yuditha, T., Zariquiey, R., Forkel, R., Evans, N., Levinson, S. C., Haspelmath, M., Greenhill, S. J., Atkinson, Q., & Gray, R. D. (2023). Grambank reveals the importance of genealogical constraints on linguistic diversity and highlights the impact of language loss. Science Advances, 9(16): eadg6175. doi:10.1126/sciadv.adg6175.

    Abstract

    While global patterns of human genetic diversity are increasingly well characterized, the diversity of human languages remains less systematically described. Here, we outline the Grambank database. With over 400,000 data points and 2400 languages, Grambank is the largest comparative grammatical database available. The comprehensiveness of Grambank allows us to quantify the relative effects of genealogical inheritance and geographic proximity on the structural diversity of the world’s languages, evaluate constraints on linguistic diversity, and identify the world’s most unusual languages. An analysis of the consequences of language loss reveals that the reduction in diversity will be strikingly uneven across the major linguistic regions of the world. Without sustained efforts to document and revitalize endangered languages, our linguistic window into human history, cognition, and culture will be seriously fragmented.
  • Slaats, S., Weissbart, H., Schoffelen, J.-M., Meyer, A. S., & Martin, A. E. (2023). Delta-band neural responses to individual words are modulated by sentence processing. The Journal of Neuroscience, 43(26), 4867-4883. doi:10.1523/JNEUROSCI.0964-22.2023.

    Abstract

    To understand language, we need to recognize words and combine them into phrases and sentences. During this process, responses to the words themselves are changed. In a step towards understanding how the brain builds sentence structure, the present study concerns the neural readout of this adaptation. We ask whether low-frequency neural readouts associated with words change as a function of being in a sentence. To this end, we analyzed an MEG dataset by Schoffelen et al. (2019) of 102 human participants (51 women) listening to sentences and word lists, the latter lacking any syntactic structure and combinatorial meaning. Using temporal response functions and a cumulative model-fitting approach, we disentangled delta- and theta-band responses to lexical information (word frequency), from responses to sensory- and distributional variables. The results suggest that delta-band responses to words are affected by sentence context in time and space, over and above entropy and surprisal. In both conditions, the word frequency response spanned left temporal and posterior frontal areas; however, the response appeared later in word lists than in sentences. In addition, sentence context determined whether inferior frontal areas were responsive to lexical information. In the theta band, the amplitude was larger in the word list condition around 100 milliseconds in right frontal areas. We conclude that low-frequency responses to words are changed by sentential context. The results of this study speak to how the neural representation of words is affected by structural context, and as such provide insight into how the brain instantiates compositionality in language.
  • Slim, M. S., & Hartsuiker, R. J. (2023). Moving visual world experiments online? A web-based replication of Dijkgraaf, Hartsuiker, and Duyck (2017) using PCIbex and WebGazer.js. Behavior Research Methods, 55, 3786-3804. doi:10.3758/s13428-022-01989-z.

    Abstract

    The visual world paradigm is one of the most influential paradigms to study real-time language processing. The present study tested whether visual world studies can be moved online, using PCIbex software (Zehr & Schwarz, 2018) and the WebGazer.js algorithm (Papoutsaki et al., 2016) to collect eye-movement data. Experiment 1 was a fixation task in which the participants looked at a fixation cross in multiple positions on the computer screen. Experiment 2 was a web-based replication of a visual world experiment by Dijkgraaf et al. (2017). Firstly, both experiments revealed that the spatial accuracy of the data allowed us to distinguish looks across the four quadrants of the computer screen. This suggests that the spatial resolution of WebGazer.js is fine-grained enough for most visual world experiments (which typically involve a two-by-two quadrant-based set-up of the visual display). Secondly, both experiments revealed a delay of roughly 300 ms in the time course of the eye movements, possibly caused by the internal processing speed of the browser or WebGazer.js. This delay can be problematic in studying questions that require a fine-grained temporal resolution and requires further investigation.
  • Slim, M. S., Lauwers, P., & Hartsuiker, R. J. (2023). How abstract are logical representations? The role of verb semantics in representing quantifier scope. Glossa Psycholinguistics, 2(1): 9. doi:10.5070/G6011175.

    Abstract

    Language comprehension involves the derivation of the meaning of sentences by combining the meanings of their parts. In some cases, this can lead to ambiguity. A sentence like Every hiker climbed a hill allows two logical representations: One that specifies that every hiker climbed a different hill and one that specifies that every hiker climbed the same hill. The interpretations of such sentences can be primed: Exposure to a particular reading increases the likelihood that the same reading will be assigned to a subsequent similar sentence. Feiman and Snedeker (2016) observed that such priming is not modulated by overlap of the verb between prime and target. This indicates that mental logical representations specify the compositional structure of the sentence meaning without conceptual meaning content. We conducted a close replication of Feiman and Snedeker’s experiment in Dutch and found no verb-independent priming. Moreover, a comparison with a previous, within-verb priming experiment showed an interaction, suggesting stronger verb-specific than abstract priming. A power analysis revealed that both Feiman and Snedeker’s experiment and our Experiment 1 were underpowered. Therefore, we replicated our Experiment 1, using the sample size guidelines provided by our power analysis. This experiment again showed that priming was stronger if a prime-target pair contained the same verb. Together, our experiments show that logical representation priming is enhanced if the prime and target sentence contain the same verb. This suggests that logical representations specify compositional structure and meaning features in an integrated manner.
  • Sloetjes, H. (2013). The ELAN annotation tool. In H. Lausberg (Ed.), Understanding body movement: A guide to empirical research on nonverbal behaviour with an introduction to the NEUROGES coding system (pp. 193-198). Frankfurt a/M: Lang.
  • Sloetjes, H. (2013). Step by step introduction in NEUROGES coding with ELAN. In H. Lausberg (Ed.), Understanding body movement: A guide to empirical research on nonverbal behaviour with an introduction to the NEUROGES coding system (pp. 201-212). Frankfurt a/M: Lang.
  • Smith, A. C., Monaghan, P., & Huettig, F. (2013). An amodal shared resource model of language-mediated visual attention. Frontiers in Psychology, 4: 528. doi:10.3389/fpsyg.2013.00528.

    Abstract

    Language-mediated visual attention describes the interaction of two fundamental components of the human cognitive system, language and vision. Within this paper we present an amodal shared resource model of language-mediated visual attention that offers a description of the information and processes involved in this complex multimodal behavior and a potential explanation for how this ability is acquired. We demonstrate that the model is not only sufficient to account for the experimental effects of Visual World Paradigm studies but also that these effects are emergent properties of the architecture of the model itself, rather than requiring separate information processing channels or modular processing systems. The model provides an explicit description of the connection between the modality-specific input from language and vision and the distribution of eye gaze in language-mediated visual attention. The paper concludes by discussing future applications for the model, specifically its potential for investigating the factors driving observed individual differences in language-mediated eye gaze.
  • Smith, A. C., Monaghan, P., & Huettig, F. (2013). Modelling the effects of formal literacy training on language mediated visual attention. In M. Knauff, M. Pauen, N. Sebanz, & I. Wachsmuth (Eds.), Proceedings of the 35th Annual Meeting of the Cognitive Science Society (CogSci 2013) (pp. 3420-3425). Austin, TX: Cognitive Science Society.

    Abstract

    Recent empirical evidence suggests that language-mediated eye gaze is partly determined by level of formal literacy training. Huettig, Singh and Mishra (2011) showed that high-literate individuals' eye gaze was closely time-locked to phonological overlap between a spoken target word and items presented in a visual display. In contrast, low-literate individuals' eye gaze was not related to phonological overlap, but was instead strongly influenced by semantic relationships between items. Our present study tests the hypothesis that this behavior is an emergent property of an increased ability to extract phonological structure from the speech signal, as in the case of high-literates, with low-literates more reliant on coarser-grained structure. This hypothesis was tested using a neural network model that integrates linguistic information extracted from the speech signal with visual and semantic information within a central resource. We demonstrate that contrasts in fixation behavior similar to those observed between high and low literates emerge when models are trained on speech signals of contrasting granularity.
  • Snijders, T. M., Milivojevic, B., & Kemner, C. (2013). Atypical excitation-inhibition balance in autism captured by the gamma response to contextual modulation. NeuroImage: Clinical, 3, 65-72. doi:10.1016/j.nicl.2013.06.015.

    Abstract

    Atypical visual perception in people with autism spectrum disorders (ASD) is hypothesized to stem from an imbalance in excitatory and inhibitory processes in the brain. We used neuronal oscillations in the gamma frequency range (30–90 Hz), which emerge from a balanced interaction of excitation and inhibition in the brain, to assess contextual modulation processes in early visual perception. Electroencephalography was recorded in 12 high-functioning adults with ASD and 12 age- and IQ-matched control participants. Oscillations in the gamma frequency range were analyzed in response to stimuli consisting of small line-like elements. Orientation-specific contextual modulation was manipulated by parametrically increasing the amount of homogeneously oriented elements in the stimuli. The stimuli elicited a strong steady-state gamma response around the refresh rate of 60 Hz, which was larger for controls than for participants with ASD. The amount of orientation homogeneity (contextual modulation) influenced the gamma response in control subjects, while for subjects with ASD this was not the case. The atypical steady-state gamma response to contextual modulation in subjects with ASD may capture the link between an imbalance in excitatory and inhibitory neuronal processing and atypical visual processing in ASD.
  • Snijders Blok, L., Verseput, J., Rots, D., Venselaar, H., Innes, A. M., Stumpel, C., Õunap, K., Reinson, K., Seaby, E. G., McKee, S., Burton, B., Kim, K., Van Hagen, J. M., Waisfisz, Q., Joset, P., Steindl, K., Rauch, A., Li, D., Zackai, E. H., Sheppard, S. E., Keena, B., Hakonarson, H., Roos, A., Kohlschmidt, N., Cereda, A., Iascone, M., Rebessi, E., Kernohan, K. D., Campeau, P. M., Millan, F., Taylor, J. A., Lochmüller, H., Higgs, M. R., Goula, A., Bernhard, B., Velasco, D. J., Schmanski, A. A., Stark, Z., Gallacher, L., Pais, L., Marcogliese, P. C., Yamamoto, S., Raun, N., Jakub, T. E., Kramer, J. M., Den Hoed, J., Fisher, S. E., Brunner, H. G., & Kleefstra, T. (2023). A clustering of heterozygous missense variants in the crucial chromatin modifier WDR5 defines a new neurodevelopmental disorder. Human Genetics and Genomics Advances, 4(1): 100157. doi:10.1016/j.xhgg.2022.100157.

    Abstract

    WDR5 is a broadly studied, highly conserved key protein involved in a wide array of biological functions. Among these functions, WDR5 is a part of several protein complexes that affect gene regulation via post-translational modification of histones. We collected data from 11 unrelated individuals with six different rare de novo germline missense variants in WDR5; one identical variant was found in five individuals, and another variant in two individuals. All individuals had neurodevelopmental disorders including speech/language delays (N=11), intellectual disability (N=9), epilepsy (N=7) and autism spectrum disorder (N=4). Additional phenotypic features included abnormal growth parameters (N=7), heart anomalies (N=2) and hearing loss (N=2). Three-dimensional protein structures indicate that all the residues affected by these variants are located at the surface of one side of the WDR5 protein. It is predicted that five out of the six amino acid substitutions disrupt interactions of WDR5 with RbBP5 and/or KMT2A/C, as part of the COMPASS (complex proteins associated with Set1) family complexes. Our experimental approaches in Drosophila melanogaster and human cell lines show normal protein expression, localization and protein-protein interactions for all tested variants. These results, together with the clustering of variants in a specific region of WDR5 and the absence of truncating variants so far, suggest that dominant-negative or gain-of-function mechanisms might be at play. All in all, we define a neurodevelopmental disorder associated with missense variants in WDR5 and a broad range of features. This finding highlights the important role of genes encoding COMPASS family proteins in neurodevelopmental disorders.
  • Söderström, P., & Cutler, A. (2023). Early neuro-electric indication of lexical match in English spoken-word recognition. PLOS ONE, 18(5): e0285286. doi:10.1371/journal.pone.0285286.

    Abstract

    We investigated early electrophysiological responses to spoken English words embedded in neutral sentence frames, using a lexical decision paradigm. As words unfold in time, similar-sounding lexical items compete for recognition within 200 milliseconds after word onset. A small number of studies have previously investigated event-related potentials in this time window in English and French, with results differing in direction of effects as well as component scalp distribution. Investigations of spoken-word recognition in Swedish have reported an early left-frontally distributed event-related potential that increases in amplitude as a function of the probability of a successful lexical match as the word unfolds. Results from the present study indicate that the same process may occur in English: we propose that increased certainty of a ‘word’ response in a lexical decision task is reflected in the amplitude of an early left-anterior brain potential beginning around 150 milliseconds after word onset. This in turn is proposed to be connected to the probabilistically driven activation of possible upcoming word forms.
  • Soheili-Nezhad, S., Sprooten, E., Tendolkar, I., & Medici, M. (2023). Exploring the genetic link between thyroid dysfunction and common psychiatric disorders: A specific hormonal or a general autoimmune comorbidity. Thyroid, 33(2), 159-168. doi:10.1089/thy.2022.0304.

    Abstract

    Background: The hypothalamus-pituitary-thyroid axis coordinates brain development and postdevelopmental function. Thyroid hormone (TH) variations, even within the normal range, have been associated with the risk of developing common psychiatric disorders, although the underlying mechanisms remain poorly understood.

    Methods: To get new insight into the potentially shared mechanisms underlying thyroid dysfunction and psychiatric disorders, we performed a comprehensive analysis of multiple phenotypic and genotypic databases. We investigated the relationship of thyroid disorders with depression, bipolar disorder (BIP), and anxiety disorders (ANXs) in 497,726 subjects from the UK Biobank. We subsequently investigated genetic correlations between thyroid disorders, thyrotropin (TSH), and free thyroxine (fT4) levels, with the genome-wide factors that predispose to psychiatric disorders. Finally, the observed global genetic correlations were pinpointed to specific local genomic regions.

    Results: Hypothyroidism was associated with an increased risk of major depressive disorder (MDD; OR = 1.31, p = 5.29 × 10⁻⁸⁹), BIP (OR = 1.55, p = 0.0038), and ANX (OR = 1.16, p = 6.22 × 10⁻⁸). Hyperthyroidism was associated with MDD (OR = 1.11, p = 0.0034) and ANX (OR = 1.34, p = 5.99 × 10⁻⁶). Genetically, strong coheritability was observed between thyroid disease and both MDD (rg = 0.17, p = 2.7 × 10⁻⁴) and ANXs (rg = 0.17, p = 6.7 × 10⁻⁶). This genetic correlation was particularly strong at the major histocompatibility complex locus on chromosome 6 (p < 10⁻⁵), but further analysis showed that other parts of the genome also contributed to this global effect. Importantly, neither TSH nor fT4 levels were genetically correlated with mood disorders.

    Conclusions: Our findings highlight an underlying association between autoimmune hypothyroidism and mood disorders, which is not mediated through THs and in which autoimmunity plays a prominent role. While these findings could shed new light on the potential ineffectiveness of treating (minor) variations in thyroid function in psychiatric disorders, further research is needed to identify the exact underlying molecular mechanisms.
  • Sollis, E., Den Hoed, J., Quevedo, M., Estruch, S. B., Vino, A., Dekkers, D. H. W., Demmers, J. A. A., Poot, R., Derizioti, P., & Fisher, S. E. (2023). Characterization of the TBR1 interactome: Variants associated with neurodevelopmental disorders disrupt novel protein interactions. Human Molecular Genetics, 32(9): ddac311, pp. 1497-1510. doi:10.1093/hmg/ddac311.

    Abstract

    TBR1 is a neuron-specific transcription factor involved in brain development and implicated in a neurodevelopmental disorder (NDD) combining features of autism spectrum disorder (ASD), intellectual disability (ID) and speech delay. TBR1 has been previously shown to interact with a small number of transcription factors and co-factors also involved in NDDs (including CASK, FOXP1/2/4 and BCL11A), suggesting that the wider TBR1 interactome may have a significant bearing on normal and abnormal brain development. Here we have identified approximately 250 putative TBR1-interaction partners by affinity purification coupled to mass spectrometry. As well as known TBR1-interactors such as CASK, the identified partners include transcription factors and chromatin modifiers, along with ASD- and ID-related proteins. Five interaction candidates were independently validated using bioluminescence resonance energy transfer assays. We went on to test the interaction of these candidates with TBR1 protein variants implicated in cases of NDD. The assays uncovered disturbed interactions for NDD-associated variants and identified two distinct protein-binding domains of TBR1 that have essential roles in protein–protein interaction.
  • Stärk, K., Kidd, E., & Frost, R. L. A. (2023). Close encounters of the word kind: Attested distributional information boosts statistical learning. Language Learning, 73(2), 341-373. doi:10.1111/lang.12523.

    Abstract

    Statistical learning, the ability to extract regularities from input (e.g., in language), is likely supported by learners’ prior expectations about how component units co-occur. In this study, we investigated how adults’ prior experience with sublexical regularities in their native language influences performance on an empirical language learning task. Forty German-speaking adults completed a speech repetition task in which they repeated eight-syllable sequences from two experimental languages: one containing disyllabic words comprised of frequently occurring German syllable transitions (naturalistic words) and the other containing words made from unattested syllable transitions (non-naturalistic words). The participants demonstrated learning from both naturalistic and non-naturalistic stimuli. However, learning was superior for the naturalistic sequences, indicating that the participants had used their existing distributional knowledge of German to extract the naturalistic words faster and more accurately than the non-naturalistic words. This finding supports theories of statistical learning as a form of chunking, whereby frequently co-occurring units become entrenched in long-term memory.
  • Starreveld, P. A., La Heij, W., & Verdonschot, R. G. (2013). Time course analysis of the effects of distractor frequency and categorical relatedness in picture naming: An evaluation of the response exclusion account. Language and Cognitive Processes, 28(5), 633-654. doi:10.1080/01690965.2011.608026.

    Abstract

    The response exclusion account (REA), advanced by Mahon and colleagues, localises the distractor frequency effect and the semantic interference effect in picture naming at the level of the response output buffer. We derive four predictions from the REA: (1) the size of the distractor frequency effect should be identical to the frequency effect obtained when distractor words are read aloud, (2) the distractor frequency effect should not change in size when stimulus-onset asynchrony (SOA) is manipulated, (3) the interference effect induced by a distractor word (as measured from a nonword control distractor) should increase in size with increasing SOA, and (4) the word frequency effect and the semantic interference effect should be additive. The results of the picture-naming task in Experiment 1 and the word-reading task in Experiment 2 refute all four predictions. We discuss a tentative account of the findings obtained within a traditional selection-by-competition model in which both context effects are localised at the level of lexical selection.
  • Stephens, S., Hartz, S., Hoft, N., Saccone, N., Corley, R., Hewitt, J., Hopfer, C., Breslau, N., Coon, H., Chen, X., Ducci, F., Dueker, N., Franceschini, N., Frank, J., Han, Y., Hansel, N., Jiang, C., Korhonen, T., Lind, P., Liu, J., Michel, M., Lyytikäinen, L.-P., Shaffer, J., Short, S., Sun, J., Teumer, A., Thompson, J., Vogelzangs, N., Vink, J., Wenzlaff, A., Wheeler, W., Yang, B.-Z., Aggen, S., Balmforth, A., Baumesiter, S., Beaty, T., Benjamin, D., Bergen, A., Broms, U., Cesarini, D., Chatterjee, N., Chen, J., Cheng, Y.-C., Cichon, S., Couper, D., Cucca, F., Dick, D., Foround, T., Furberg, H., Giegling, I., Gillespie, N., Gu, F., Hall, A., Hällfors, J., Han, S., Hartmann, A., Heikkilä, K., Hickie, I., Hottenga, J., Jousilahti, P., Kaakinen, M., Kähönen, M., Koellinger, P., Kittner, S., Konte, B., Landi, M.-T., Laatikainen, T., Leppert, M., Levy, S., Mathias, R., McNeil, D., Medlund, S., Montgomery, G., Murray, T., Nauck, M., North, K., Paré, P., Pergadia, M., Ruczinski, I., Salomaa, V., Viikari, J., Willemsen, G., Barnes, K., Boerwinkle, E., Boomsma, D., Caporaso, N., Edenberg, H., Francks, C., Gelernter, J., Grabe, H., Hops, H., Jarvelin, M.-R., Johannesson, M., Kendler, K., Lehtimäki, T., Magnusson, P., Marazita, M., Marchini, J., Mitchell, B., Nöthen, M., Penninx, B., Raitakari, O., Rietschel, M., Rujescu, D., Samani, N., Schwartz, A., Shete, S., Spitz, M., Swan, G., Völzke, H., Veijola, J., Wei, Q., Amos, C., Canon, D., Grucza, R., Hatsukami, D., Heath, A., Johnson, E., Kaprio, J., Madden, P., Martin, N., Stevens, V., Weiss, R., Kraft, P., Bierut, L., & Ehringer, M. (2013). Distinct loci in the CHRNA5/CHRNA3/CHRNB4 gene cluster are associated with onset of regular smoking. Genetic Epidemiology, 37, 846-859. doi:10.1002/gepi.21760.

    Abstract

    Neuronal nicotinic acetylcholine receptor (nAChR) genes (CHRNA5/CHRNA3/CHRNB4) have been reproducibly associated with nicotine dependence, smoking behaviors, and lung cancer risk. Among the few reports that have focused on early smoking behaviors, association results have been mixed. This meta-analysis examines early smoking phenotypes and SNPs in the gene cluster to determine: (1) whether the most robust association signal in this region (rs16969968) for other smoking behaviors is also associated with early behaviors, and/or (2) if additional statistically independent signals are important in early smoking. We focused on two phenotypes: age of tobacco initiation (AOI) and age of first regular tobacco use (AOS). This study included 56,034 subjects (41 groups) spanning nine countries and evaluated five SNPs including rs1948, rs16969968, rs578776, rs588765, and rs684513. Each dataset was analyzed using a centrally generated script. Meta-analyses were conducted from summary statistics. AOS yielded significant associations with SNPs rs578776 (beta = 0.02, P = 0.004), rs1948 (beta = 0.023, P = 0.018), and rs684513 (beta = 0.032, P = 0.017), indicating protective effects. There were no significant associations for the AOI phenotype. Importantly, rs16969968, the most replicated signal in this region for nicotine dependence, cigarettes per day, and cotinine levels, was not associated with AOI (P = 0.59) or AOS (P = 0.92). These results provide important insight into the complexity of smoking behavior phenotypes, and suggest that association signals in the CHRNA5/A3/B4 gene cluster affecting early smoking behaviors may be different from those affecting the mature nicotine dependence phenotype.
  • Stern, G. (2023). On embodied use of recognitional demonstratives. In W. Pouw, J. Trujillo, H. R. Bosker, L. Drijvers, M. Hoetjes, J. Holler, S. Kadava, L. Van Maastricht, E. Mamus, & A. Ozyurek (Eds.), Gesture and Speech in Interaction (GeSpIn) Conference. doi:10.17617/2.3527204.

    Abstract

    This study focuses on embodied uses of recognitional demonstratives. While multimodal conversation analytic studies have shown how gesture and speech interact in the elaboration of exophoric references, little attention has been given to the multimodal configuration of other types of referential actions. Based on a video-recorded corpus of professional meetings held in French, this qualitative study shows that a subtype of deictic references, namely recognitional references, is frequently associated with iconic gestures, thus challenging the traditional distinction between exophoric and endophoric uses of deixis.
  • Stewart, L., Verdonschot, R. G., Nasralla, P., & Lanipekun, J. (2013). Action–perception coupling in pianists: Learned mappings or spatial musical association of response codes (SMARC) effect? Quarterly Journal of Experimental Psychology, 66(1), 37-50. doi:10.1080/17470218.2012.687385.

    Abstract

    The principle of common coding suggests that a joint representation is formed when actions are repeatedly paired with a specific perceptual event. Musicians are occupationally specialized with regard to the coupling between actions and their auditory effects. In the present study, we employed a novel paradigm to demonstrate automatic action–effect associations in pianists. Pianists and nonmusicians pressed keys according to aurally presented number sequences. Numbers were presented at pitches that were neutral, congruent, or incongruent with respect to pitches that would normally be produced by such actions. Response time differences were seen between congruent and incongruent sequences in pianists alone. A second experiment was conducted to determine whether these effects could be attributed to the existence of previously documented spatial/pitch compatibility effects. In a “stretched” version of the task, the pitch distance over which the numbers were presented was enlarged to a range that could not be produced by the hand span used in Experiment 1. The finding of a larger response time difference between congruent and incongruent trials in the original, standard, version compared with the stretched version, in pianists, but not in nonmusicians, indicates that the effects obtained are, at least partially, attributable to learned action effects.
  • Stivers, T., Rossi, G., & Chalfoun, A. (2023). Ambiguities in action ascription. Social Forces, 101(3), 1552-1579. doi:10.1093/sf/soac021.

    Abstract

    In everyday interactions with one another, speakers not only say things but also do things like offer, complain, reject, and compliment. Through observation, it is possible to see that much of the time people unproblematically understand what others are doing. Research on conversation has further documented how speakers’ word choice, prosody, grammar, and gesture all help others to recognize what actions they are performing. In this study, we rely on spontaneous naturally occurring conversational data where people have trouble making their actions understood to examine what leads to ambiguous actions, bringing together prior research and identifying recurrent types of ambiguity that hinge on different dimensions of social action. We then discuss the range of costs and benefits for social actors when actions are clear versus ambiguous. Finally, we offer a conceptual model of how, at a microlevel, action ascription is done. Actions in interaction are building blocks for social relations; at each turn, an action can strengthen or strain the bond between two individuals. Thus, a unified theory of action ascription at a microlevel is an essential component for broader theories of social action and of how social actions produce, maintain, and revise the social world.
  • Stivers, T., & Sidnell, J. (Eds.). (2013). The handbook of conversation analysis. Malden, MA: Wiley-Blackwell.

    Abstract

    Presenting a comprehensive, state-of-the-art overview of theoretical and descriptive research in the field, The Handbook of Conversation Analysis brings together contributions by leading international experts to provide an invaluable information resource and reference for scholars of social interaction across the areas of conversation analysis, discourse analysis, linguistic anthropology, interpersonal communication, discursive psychology and sociolinguistics. Ideal as an introduction to the field for upper level undergraduates and as an in-depth review of the latest developments for graduate level students and established scholars. Five sections outline the history and theory, methods, fundamental concepts, and core contexts in the study of conversation, as well as topics central to conversation analysis. Written by international conversation analysis experts, the book covers a wide range of topics and disciplines, from reviewing underlying structures of conversation, to describing conversation analysis' relationship to anthropology, communication, linguistics, psychology, and sociology.
  • Stolk, A., Verhagen, L., Schoffelen, J.-M., Oostenveld, R., Blokpoel, M., Hagoort, P., van Rooij, I., & Toni, I. (2013). Neural mechanisms of communicative innovation. Proceedings of the National Academy of Sciences of the United States of America, 110(36), 14574-14579. doi:10.1073/pnas.1303170110.

    Abstract

    Human referential communication is often thought of as coding-decoding a set of symbols, neglecting that establishing shared meanings requires a computational mechanism powerful enough to mutually negotiate them. Sharing the meaning of a novel symbol might rely on similar conceptual inferences across communicators or on statistical similarities in their sensorimotor behaviors. Using magnetoencephalography, we assess spectral, temporal, and spatial characteristics of neural activity evoked when people generate and understand novel shared symbols during live communicative interactions. Solving those communicative problems induced comparable changes in the spectral profile of neural activity of both communicators and addressees. This shared neuronal up-regulation was spatially localized to the right temporal lobe and the ventromedial prefrontal cortex and emerged already before the occurrence of a specific communicative problem. Communicative innovation relies on neuronal computations that are shared across generating and understanding novel shared symbols, operating over temporal scales independent from transient sensorimotor behavior.
  • Stolk, A., Todorovic, A., Schoffelen, J.-M., & Oostenveld, R. (2013). Online and offline tools for head movement compensation in MEG. NeuroImage, 68, 39-48. doi:10.1016/j.neuroimage.2012.11.047.

    Abstract

    Magnetoencephalography (MEG) is measured above the head, which makes it sensitive to variations of the head position with respect to the sensors. Head movements blur the topography of the neuronal sources of the MEG signal, increase localization errors, and reduce statistical sensitivity. Here we describe two novel and readily applicable methods that compensate for the detrimental effects of head motion on the statistical sensitivity of MEG experiments. First, we introduce an online procedure that continuously monitors head position. Second, we describe an offline analysis method that takes into account the head position time-series. We quantify the performance of these methods in the context of three different experimental settings, involving somatosensory, visual and auditory stimuli, assessing both individual and group-level statistics. The online head localization procedure allowed for optimal repositioning of the subjects over multiple sessions, resulting in a 28% reduction of the variance in dipole position and an improvement of up to 15% in statistical sensitivity. Offline incorporation of the head position time-series into the general linear model resulted in improvements of group-level statistical sensitivity between 15% and 29%. These tools can substantially reduce the influence of head movement within and between sessions, increasing the sensitivity of many cognitive neuroscience experiments.
  • Sumer, B., Zwitserlood, I., Perniss, P. M., & Ozyurek, A. (2013). Acquisition of locative expressions in children learning Turkish Sign Language (TİD) and Turkish. In E. Arik (Ed.), Current directions in Turkish Sign Language research (pp. 243-272). Newcastle upon Tyne: Cambridge Scholars Publishing.

    Abstract

    In sign languages, where space is often used to talk about space, expressions of spatial relations (e.g., ON, IN, UNDER, BEHIND) may rely on analogue mappings of real space onto signing space. In contrast, spoken languages express space in mostly categorical ways (e.g. adpositions). This raises interesting questions about the role of language modality in the acquisition of expressions of spatial relations. However, whether and to what extent modality influences the acquisition of spatial language is controversial – mostly due to the lack of direct comparisons of Deaf children to Deaf adults and to age-matched hearing children in similar tasks. Furthermore, the previous studies have taken English as the only model for spoken language development of spatial relations.
    Therefore, we present a balanced study in which spatial expressions by deaf and hearing children in two different age-matched groups (preschool children and school-age children) are systematically compared, as well as compared to the spatial expressions of adults. All participants performed the same tasks, describing angular (LEFT, RIGHT, FRONT, BEHIND) and non-angular spatial configurations (IN, ON, UNDER) of different objects (e.g. apple in box; car behind box).
    The analysis of the descriptions with non-angular spatial relations does not show an effect of modality on the development of locative expressions in TİD and Turkish. However, preliminary results of the analysis of expressions of angular spatial relations suggest that signers provide angular information in their spatial descriptions more frequently than Turkish speakers in all three age groups, thus showing a potentially different developmental pattern in this domain. Implications of the findings with regard to the development of relations in spatial language and cognition will be discussed.
  • Sumner, M., Kurumada, C., Gafter, R., & Casillas, M. (2013). Phonetic variation and the recognition of words with pronunciation variants. In M. Knauff, M. Pauen, N. Sebanz, & I. Wachsmuth (Eds.), Proceedings of the 35th Annual Meeting of the Cognitive Science Society (CogSci 2013) (pp. 3486-3492). Austin, TX: Cognitive Science Society.
  • Tamaoka, K., Sakai, H., Miyaoka, Y., Ono, H., Fukuda, M., Wu, Y., & Verdonschot, R. G. (2023). Sentential inference bridging between lexical/grammatical knowledge and text comprehension among native Chinese speakers learning Japanese. PLoS One, 18(4): e0284331. doi:10.1371/journal.pone.0284331.

    Abstract

    The current study explored the role of sentential inference in connecting lexical/grammatical knowledge and overall text comprehension in foreign language learning. Using structural equation modeling (SEM), causal relationships were examined between four latent variables: lexical knowledge, grammatical knowledge, sentential inference, and text comprehension. The study analyzed 281 Chinese university students learning Japanese as a second language and compared two causal models: (1) the partially-mediated model, which suggests that lexical knowledge, grammatical knowledge, and sentential inference concurrently influence text comprehension, and (2) the wholly-mediated model, which posits that both lexical and grammatical knowledge impact sentential inference, which then further affects text comprehension. The SEM comparison analysis supported the wholly-mediated model, showing sequential causal relationships from lexical knowledge to sentential inference and then to text comprehension, without significant contribution from grammatical knowledge. The results indicate that sentential inference serves as a crucial bridge between lexical knowledge and text comprehension.
  • Tamaoka, K., Zhang, J., Koizumi, M., & Verdonschot, R. G. (2023). Phonological encoding in Tongan: An experimental investigation. Quarterly Journal of Experimental Psychology, 76(10), 2197-2430. doi:10.1177/17470218221138770.

    Abstract

    This study is the first to report chronometric evidence on Tongan language production. It has been speculated that the mora plays an important role during Tongan phonological encoding. A mora follows the (C)V form, so /a/ and /ka/ (but not /k/) denote a mora in Tongan. Using a picture-word naming paradigm, Tongan native speakers named pictures containing superimposed non-word distractors. This task has been used before in Japanese, Korean, and Vietnamese to investigate the initially selected unit during phonological encoding (IPU). Compared to control distractors, both onset and mora overlapping distractors resulted in faster naming latencies. Several alternative explanations for the pattern of results - proficiency in English, knowledge of Latin script, and downstream effects - are discussed. However, we conclude that Tongan phonological encoding likely natively uses the phoneme, and not the mora, as the IPU.

  • Tan, Y., Martin, R. C., & Van Dyke, J. (2013). Verbal WM capacities in sentence comprehension: Evidence from aphasia. Procedia - Social and Behavioral Sciences, 94, 108-109. doi:10.1016/j.sbspro.2013.09.052.
  • Tatsumi, T., & Sala, G. (2023). Learning conversational dependency: Children’s response using un in Japanese. Journal of Child Language, 50(5), 1226-1244. doi:10.1017/S0305000922000344.

    Abstract

    This study investigates how Japanese-speaking children learn interactional dependencies in conversations that determine the use of un, a token typically used as a positive response for yes-no questions, backchannel, and acknowledgement. We hypothesise that children learn to produce un appropriately by recognising different types of cues occurring in the immediately preceding turns. We built a set of generalised linear models on the longitudinal conversation data from seven children aged 1 to 5 years and their caregivers. Our models revealed that children not only increased their un production, but also learned to attend to relevant cues in the preceding turns to understand when to respond by producing un. Children increasingly produced un when their interlocutors asked a yes-no question or signalled the continuation of their own speech. These results illustrate how children learn the probabilistic dependency between adjacent turns, and become able to participate in conversational interactions.
  • Ten Oever, S., Sack, A. T., Wheat, K. L., Bien, N., & Van Atteveldt, N. (2013). Audio-visual onset differences are used to determine syllable identity for ambiguous audio-visual stimulus pairs. Frontiers in Psychology, 4: 331. doi:10.3389/fpsyg.2013.00331.

    Abstract

    Content and temporal cues have been shown to interact during audio-visual (AV) speech identification. Typically, the most reliable unimodal cue is used more strongly to identify specific speech features; however, visual cues are only used if the AV stimuli are presented within a certain temporal window of integration (TWI). This suggests that temporal cues denote whether unimodal stimuli belong together, that is, whether they should be integrated. It is not known whether temporal cues also provide information about the identity of a syllable. Since spoken syllables have naturally varying AV onset asynchronies, we hypothesize that for suboptimal AV cues presented within the TWI, information about the natural AV onset differences can aid in speech identification. To test this, we presented low-intensity auditory syllables concurrently with visual speech signals, and varied the stimulus onset asynchronies (SOA) of the AV pair, while participants were instructed to identify the auditory syllables. We revealed that specific speech features (e.g., voicing) were identified by relying primarily on one modality (e.g., auditory). Additionally, we showed a wide window in which visual information influenced auditory perception, that seemed even wider for congruent stimulus pairs. Finally, we found a specific response pattern across the SOA range for syllables that were not reliably identified by the unimodal cues, which we explained as the result of the use of natural onset differences between AV speech signals. This indicates that temporal cues not only provide information about the temporal integration of AV stimuli, but additionally convey information about the identity of AV pairs. These results provide a detailed behavioral basis for further neuro-imaging and stimulation studies to unravel the neurofunctional mechanisms of the audio-visual-temporal interplay within speech perception.
  • Ten Bosch, L., Boves, L., & Ernestus, M. (2013). Towards an end-to-end computational model of speech comprehension: simulating a lexical decision task. In Proceedings of INTERSPEECH 2013: 14th Annual Conference of the International Speech Communication Association (pp. 2822-2826).

    Abstract

    This paper describes a computational model of speech comprehension that takes the acoustic signal as input and predicts reaction times as observed in an auditory lexical decision task. By doing so, we explore a new generation of end-to-end computational models that are able to simulate the behaviour of human subjects participating in a psycholinguistic experiment. So far, nearly all computational models of speech comprehension do not start from the speech signal itself, but from abstract representations of the speech signal, while the few existing models that do start from the acoustic signal cannot directly model reaction times as obtained in comprehension experiments. The main functional components in our model are the perception stage, which is compatible with the psycholinguistic model Shortlist B and is implemented with techniques from automatic speech recognition, and the decision stage, which is based on the linear ballistic accumulation decision model. We successfully tested our model against data from 20 participants performing a large-scale auditory lexical decision experiment. Analyses show that the model is a good predictor for the average judgment and reaction time for each word.
  • Tezcan, F., Weissbart, H., & Martin, A. E. (2023). A tradeoff between acoustic and linguistic feature encoding in spoken language comprehension. eLife, 12: e82386. doi:10.7554/eLife.82386.

    Abstract

    When we comprehend language from speech, the phase of the neural response aligns with particular features of the speech input, resulting in a phenomenon referred to as neural tracking. In recent years, a large body of work has demonstrated the tracking of the acoustic envelope and abstract linguistic units at the phoneme and word levels, and beyond. However, the degree to which speech tracking is driven by acoustic edges of the signal, or by internally-generated linguistic units, or by the interplay of both, remains contentious. In this study, we used naturalistic story-listening to investigate (1) whether phoneme-level features are tracked over and above acoustic edges, (2) whether word entropy, which can reflect sentence- and discourse-level constraints, impacted the encoding of acoustic and phoneme-level features, and (3) whether the tracking of acoustic edges was enhanced or suppressed during comprehension of a first language (Dutch) compared to a statistically familiar but uncomprehended language (French). We first show that encoding models with phoneme-level linguistic features, in addition to acoustic features, uncovered an increased neural tracking response; this signal was further amplified in a comprehended language, putatively reflecting the transformation of acoustic features into internally generated phoneme-level representations. Phonemes were tracked more strongly in a comprehended language, suggesting that language comprehension functions as a neural filter over acoustic edges of the speech signal as it transforms sensory signals into abstract linguistic units. We then show that word entropy enhances neural tracking of both acoustic and phonemic features when sentence- and discourse-context are less constraining. When language was not comprehended, acoustic features, but not phonemic ones, were more strongly modulated, but in contrast, when a native language is comprehended, phoneme features are more strongly modulated. Taken together, our findings highlight the flexible modulation of acoustic, and phonemic features by sentence and discourse-level constraint in language comprehension, and document the neural transformation from speech perception to language comprehension, consistent with an account of language processing as a neural filter from sensory to abstract representations.
  • Thompson-Schill, S., Hagoort, P., Dominey, P. F., Honing, H., Koelsch, S., Ladd, D. R., Lerdahl, F., Levinson, S. C., & Steedman, M. (2013). Multiple levels of structure in language and music. In M. A. Arbib (Ed.), Language, music, and the brain: A mysterious relationship (pp. 289-303). Cambridge, MA: MIT Press.

    Abstract

    A forum devoted to the relationship between music and language begins with an implicit assumption: There is at least one common principle that is central to all human musical systems and all languages, but that is not characteristic of (most) other domains. Why else should these two categories be paired together for analysis? We propose that one candidate for a common principle is their structure. In this chapter, we explore the nature of that structure—and its consequences for psychological and neurological processing mechanisms—within and across these two domains.
  • Timmer, K., Ganushchak, L. Y., Mitlina, Y., & Schiller, N. O. (2013). Choosing first or second language phonology in 125 ms [Abstract]. Journal of Cognitive Neuroscience, 25 Suppl., 164.

    Abstract

    We are often in a bilingual situation (e.g., overhearing a conversation in the train). We investigated whether first (L1) and second language (L2) phonologies are automatically activated. A masked priming paradigm was used, with Russian words as targets and either Russian or English words as primes. Event-related potentials (ERPs) were recorded while Russian (L1) – English (L2) bilinguals read aloud L1 target words (e.g. РЕЙС /reis/ ‘flight’) primed with either L1 (e.g. РАНА /rana/ ‘wound’) or L2 words (e.g. PACK). Target words were read faster when they were preceded by phonologically related L1 primes but not by orthographically related L2 primes. ERPs showed orthographic priming in the 125-200 ms time window. Thus, both L1 and L2 phonologies are simultaneously activated during L1 reading. The results provide support for non-selective models of bilingual reading, which assume automatic activation of the non-target language phonology even when it is not required by the task.
  • Tkalcec, A., Bierlein, M., Seeger‐Schneider, G., Walitza, S., Jenny, B., Menks, W. M., Fehlbaum, L. V., Borbas, R., Cole, D. M., Raschle, N., Herbrecht, E., Stadler, C., & Cubillo, A. (2023). Empathy deficits, callous‐unemotional traits and structural underpinnings in autism spectrum disorder and conduct disorder youth. Autism Research, 16(10), 1946-1962. doi:10.1002/aur.2993.

    Abstract

    Distinct empathy deficits are often described in patients with conduct disorder (CD) and autism spectrum disorder (ASD) yet their neural underpinnings and the influence of comorbid Callous-Unemotional (CU) traits are unclear. This study compares the cognitive (CE) and affective empathy (AE) abilities of youth with CD and ASD, their potential neuroanatomical correlates, and the influence of CU traits on empathy. Adolescents and parents/caregivers completed empathy questionnaires (N = 148 adolescents, mean age = 15.16 years) and T1 weighted images were obtained from a subsample (N = 130). Group differences in empathy and the influence of CU traits were investigated using Bayesian analyses and Voxel-Based Morphometry with Threshold-Free Cluster Enhancement focusing on regions involved in AE (insula, amygdala, inferior frontal gyrus and cingulate cortex) and CE processes (ventromedial prefrontal cortex, temporoparietal junction, superior temporal gyrus, and precuneus). The ASD group showed lower parent-reported AE and CE scores and lower self-reported CE scores while the CD group showed lower parent-reported CE scores than controls. When accounting for the influence of CU traits no AE deficits in ASD and CE deficits in CD were found, but CE deficits in ASD remained. Across all participants, CU traits were negatively associated with gray matter volumes in anterior cingulate which extends into the mid cingulate, ventromedial prefrontal cortex, and precuneus. Thus, although co-occurring CU traits have been linked to global empathy deficits in reports and underlying brain structures, its influence on empathy aspects might be disorder-specific. Investigating the subdimensions of empathy may therefore help to identify disorder-specific empathy deficits.
  • Tomasek, M., Ravignani, A., Boucherie, P. H., Van Meyel, S., & Dufour, V. (2023). Spontaneous vocal coordination of vocalizations to water noise in rooks (Corvus frugilegus): An exploratory study. Ecology and Evolution, 13(2): e9791. doi:10.1002/ece3.9791.

    Abstract

    The ability to control one's vocal production is a major advantage in acoustic communication. Yet, not all species have the same level of control over their vocal output. Several bird species can interrupt their song upon hearing an external stimulus, but there is no evidence of how flexible this behavior is. Most research on corvids focuses on their cognitive abilities, but few studies explore their vocal aptitudes. Recent research shows that crows can be experimentally trained to vocalize in response to a brief visual stimulus. Our study investigated vocal control abilities with a more ecologically embedded approach in rooks. We show that two rooks could spontaneously coordinate their vocalizations to a long-lasting stimulus (the sound of their small bathing pool being filled with a water hose), one of them adjusting roughly (in the second range) its vocalizations as the stimuli began and stopped. This exploratory study adds to the literature showing that corvids, a group of species capable of cognitive prowess, are indeed able to display good vocal control abilities.
  • Tornero, D., Wattananit, S., Madsen, M. G., Koch, P., Wood, J., Tatarishvili, J., Mine, Y., Ge, R., Monni, E., Devaraju, K., Hevner, R. F., Bruestle, O., Lindvall, O., & Kokaia, Z. (2013). Human induced pluripotent stem cell-derived cortical neurons integrate in stroke-injured cortex and improve functional recovery. Brain, 136(12), 3561-3577. doi:10.1093/brain/awt278.

    Abstract

    Stem cell-based approaches to restore function after stroke through replacement of dead neurons require the generation of specific neuronal subtypes. Loss of neurons in the cerebral cortex is a major cause of stroke-induced neurological deficits in adult humans. Reprogramming of adult human somatic cells to induced pluripotent stem cells is a novel approach to produce patient-specific cells for autologous transplantation. Whether such cells can be converted to functional cortical neurons that survive and give rise to behavioural recovery after transplantation in the stroke-injured cerebral cortex is not known. We have generated progenitors in vitro, expressing specific cortical markers and giving rise to functional neurons, from long-term self-renewing neuroepithelial-like stem cells, produced from adult human fibroblast-derived induced pluripotent stem cells. At 2 months after transplantation into the stroke-damaged rat cortex, the cortically fated cells showed less proliferation and more efficient conversion to mature neurons with morphological and immunohistochemical characteristics of a cortical phenotype and higher axonal projection density as compared with non-fated cells. Pyramidal morphology and localization of the cells expressing the cortex-specific marker TBR1 in a certain layered pattern provided further evidence supporting the cortical phenotype of the fated, grafted cells, and electrophysiological recordings demonstrated their functionality. Both fated and non-fated cell-transplanted groups showed bilateral recovery of the impaired function in the stepping test compared with vehicle-injected animals. The behavioural improvement at this early time point was most likely not due to neuronal replacement and reconstruction of circuitry. At 5 months after stroke in immunocompromised rats, there was no tumour formation and the grafted cells exhibited electrophysiological properties of mature neurons with evidence of integration in host circuitry. Our findings show, for the first time, that human skin-derived induced pluripotent stem cells can be differentiated to cortical neuronal progenitors, which survive, differentiate to functional neurons and improve neurological outcome after intracortical implantation in a rat stroke model.
  • Trujillo, J. P., & Holler, J. (2023). Interactionally embedded gestalt principles of multimodal human communication. Perspectives on Psychological Science, 18(5), 1136-1159. doi:10.1177/17456916221141422.

    Abstract

    Natural human interaction requires us to produce and process many different signals, including speech, hand and head gestures, and facial expressions. These communicative signals, which occur in a variety of temporal relations with each other (e.g., parallel or temporally misaligned), must be rapidly processed as a coherent message by the receiver. In this contribution, we introduce the notion of interactionally embedded, affordance-driven gestalt perception as a framework that can explain how this rapid processing of multimodal signals is achieved as efficiently as it is. We discuss empirical evidence showing how basic principles of gestalt perception can explain some aspects of unimodal phenomena such as verbal language processing and visual scene perception but require additional features to explain multimodal human communication. We propose a framework in which high-level gestalt predictions are continuously updated by incoming sensory input, such as unfolding speech and visual signals. We outline the constituent processes that shape high-level gestalt perception and their role in perceiving relevance and prägnanz. Finally, we provide testable predictions that arise from this multimodal interactionally embedded gestalt-perception framework. This review and framework therefore provide a theoretically motivated account of how we may understand the highly complex, multimodal behaviors inherent in natural social interaction.
  • Trujillo, J. P., Dideriksen, C., Tylén, K., Christiansen, M. H., & Fusaroli, R. (2023). The dynamic interplay of kinetic and linguistic coordination in Danish and Norwegian conversation. Cognitive Science, 47(6): e13298. doi:10.1111/cogs.13298.

    Abstract

    In conversation, individuals work together to achieve communicative goals, complementing and aligning language and body with each other. An important emerging question is whether interlocutors entrain with one another equally across linguistic levels (e.g., lexical, syntactic, and semantic) and modalities (i.e., speech and gesture), or whether there are complementary patterns of behaviors, with some levels or modalities diverging and others converging in coordinated fashions. This study assesses how kinematic and linguistic entrainment interact with one another across levels of measurement, and according to communicative context. We analyzed data from two matched corpora of dyadic interaction between—respectively—Danish and Norwegian native speakers engaged in affiliative conversations and task-oriented conversations. We assessed linguistic entrainment at the lexical, syntactic, and semantic level, and kinetic alignment of the head and hands using video-based motion tracking and dynamic time warping. We tested whether—across the two languages—linguistic alignment correlates with kinetic alignment, and whether these kinetic-linguistic associations are modulated either by the type of conversation or by the language spoken. We found that kinetic entrainment was positively associated with low-level linguistic (i.e., lexical) entrainment, while negatively associated with high-level linguistic (i.e., semantic) entrainment, in a cross-linguistically robust way. Our findings suggest that conversation makes use of a dynamic coordination of similarity and complementarity both between individuals as well as between different communicative modalities, and provides evidence for a multimodal, interpersonal synergy account of interaction.
  • Trupp, M. D., Bignardi, G., Specker, E., Vessel, E. A., & Pelowski, M. (2023). Who benefits from online art viewing, and how: The role of pleasure, meaningfulness, and trait aesthetic responsiveness in computer-based art interventions for well-being. Computers in Human Behavior, 145: 107764. doi:10.1016/j.chb.2023.107764.

    Abstract

    When experienced in-person, engagement with art has been associated with positive outcomes in well-being and mental health. However, especially in the last decade, art viewing, cultural engagement, and even ‘trips’ to museums have begun to take place online, via computers, smartphones, tablets, or in virtual reality. Similarly to what has been reported for in-person visits, online art engagements—easily accessible from personal devices—have also been associated with well-being impacts. However, a broader understanding of for whom and how online-delivered art might have well-being impacts is still lacking. In the present study, we used a Monet interactive art exhibition from Google Arts and Culture to deepen our understanding of the role of pleasure, meaning, and individual differences in the responsiveness to art. Beyond replicating the previous group-level effects, we confirmed our pre-registered hypothesis that trait-level inter-individual differences in aesthetic responsiveness predict some of the benefits that online art viewing has on well-being and further that such inter-individual differences at the trait level were mediated by subjective experiences of pleasure and especially meaningfulness felt during the online-art intervention. The role that participants' experiences play as a possible mechanism during art interventions is discussed in light of recent theoretical models.

  • Tsuji, S., & Cristia, A. (2013). Fifty years of infant vowel discrimination research: What have we learned? Journal of the Phonetic Society of Japan, 17(3), 1-11.
  • Turco, G., Dimroth, C., & Braun, B. (2013). Intonational means to mark verum focus in German and French. Language and Speech, 56(4), 461-491. doi:10.1177/0023830912460506.

    Abstract

    German and French differ in a number of aspects. Regarding the prosody-pragmatics interface, German is said to have a direct focus-to-accent mapping, which is largely absent in French – owing to strong structural constraints. We used a semi-spontaneous dialogue setting to investigate the intonational marking of Verum Focus, a focus on the polarity of an utterance in the two languages (e.g. the child IS tearing the banknote as an opposite claim to the child is not tearing the banknote). When Verum Focus applies to auxiliaries, pragmatic aspects (i.e. highlighting the contrast) directly compete with structural constraints (e.g. avoiding an accent on phonologically weak elements such as monosyllabic function words). Intonational analyses showed that auxiliaries were predominantly accented in German, as expected. Interestingly, we found a high number of (as yet undocumented) focal accents on phrase-initial auxiliaries in French Verum Focus contexts. When French accent patterns were equally distributed across information structural contexts, relative prominence (in terms of peak height) between initial and final accents was shifted towards initial accents in Verum Focus compared to non-Verum Focus contexts. Our data hence suggest that French also may mark Verum Focus by focal accents but that this tendency is partly overridden by strong structural constraints.
  • Uhrig, P., Payne, E., Pavlova, I., Burenko, I., Dykes, N., Baltazani, M., Burrows, E., Hale, S., Torr, P., & Wilson, A. (2023). Studying time conceptualisation via speech, prosody, and hand gesture: Interweaving manual and computational methods of analysis. In W. Pouw, J. Trujillo, H. R. Bosker, L. Drijvers, M. Hoetjes, J. Holler, S. Kadava, L. Van Maastricht, E. Mamus, & A. Ozyurek (Eds.), Gesture and Speech in Interaction (GeSpIn) Conference. doi:10.17617/2.3527220.

    Abstract

    This paper presents a new interdisciplinary methodology for the analysis of future conceptualisations in big messy media data. More specifically, it focuses on the depictions of post-Covid futures by RT during the pandemic, i.e. on data which are of interest not just from the perspective of academic research but also of policy engagement. The methodology has been developed to support the scaling up of fine-grained data-driven analysis of discourse utterances larger than individual lexical units which are centred around ‘will’ + the infinitive. It relies on the true integration of manual analytical and computational methods and tools in researching three modalities – textual, prosodic, and gestural. The paper describes the process of building a computational infrastructure for the collection and processing of video data, which aims to empower the manual analysis. It also shows how manual analysis can motivate the development of computational tools. The paper presents individual computational tools to demonstrate how the combination of human and machine approaches to analysis can reveal new manifestations of cohesion between gesture and prosody. To illustrate the latter, the paper shows how the boundaries of prosodic units can work to help determine the boundaries of gestural units for future conceptualisations.
  • Uluşahin, O., Bosker, H. R., McQueen, J. M., & Meyer, A. S. (2023). No evidence for convergence to sub-phonemic F2 shifts in shadowing. In R. Skarnitzl, & J. Volín (Eds.), Proceedings of the 20th International Congress of the Phonetic Sciences (ICPhS 2023) (pp. 96-100). Prague: Guarant International.

    Abstract

    Over the course of a conversation, interlocutors sound more and more like each other in a process called convergence. However, the automaticity and grain size of convergence are not well established. This study therefore examined whether female native Dutch speakers converge to large yet sub-phonemic shifts in the F2 of the vowel /e/. Participants first performed a short reading task to establish baseline F2s for the vowel /e/, then shadowed 120 target words (alongside 360 fillers) which contained one instance of a manipulated vowel /e/ where the F2 had been shifted down to that of the vowel /ø/. Consistent exposure to large (sub-phonemic) downward shifts in F2 did not result in convergence. The results raise issues for theories which view convergence as a product of automatic integration between perception and production.
  • Ünal, E., & Papafragou, A. (2013). Linguistic and conceptual representations of inference as a knowledge source. In S. Baiz, N. Goldman, & R. Hawkes (Eds.), Proceedings of the 37th Annual Boston University Conference on Language Development (BUCLD 37) (pp. 433-443). Boston: Cascadilla Press.
  • Uzbas, F., & O’Neill, A. (2023). Spatial Centrosome Proteomic Profiling of Human iPSC-derived Neural Cells. BIO-PROTOCOL, 13(17): e4812. doi:10.21769/BioProtoc.4812.

    Abstract

    The centrosome governs many pan-cellular processes including cell division, migration, and cilium formation. However, very little is known about its cell type-specific protein composition and the sub-organellar domains where these protein interactions take place. Here, we outline a protocol for the spatial interrogation of the centrosome proteome in human cells, such as those differentiated from induced pluripotent stem cells (iPSCs), through co-immunoprecipitation of protein complexes around selected baits that are known to reside at different structural parts of the centrosome, followed by mass spectrometry. The protocol describes expansion and differentiation of human iPSCs to dorsal forebrain neural progenitors and cortical projection neurons, harvesting and lysis of cells for protein isolation, co-immunoprecipitation with antibodies against selected bait proteins, preparation for mass spectrometry, processing the mass spectrometry output files using MaxQuant software, and statistical analysis using Perseus software to identify the enriched proteins by each bait. Given the large number of cells needed for the isolation of centrosome proteins, this protocol can be scaled up or down by modifying the number of bait proteins and can also be carried out in batches. It can potentially be adapted for other cell types, organelles, and species as well.
  • van der Burght, C. L., Numssen, O., Schlaak, B., Goucha, T., & Hartwigsen, G. (2023). Differential contributions of inferior frontal gyrus subregions to sentence processing guided by intonation. Human Brain Mapping, 44(2), 585-598. doi:10.1002/hbm.26086.

    Abstract

    Auditory sentence comprehension involves processing content (semantics), grammar (syntax), and intonation (prosody). The left inferior frontal gyrus (IFG) is involved in sentence comprehension guided by these different cues, with neuroimaging studies preferentially locating syntactic and semantic processing in separate IFG subregions. However, this regional specialisation and its functional relevance has yet to be confirmed. This study probed the role of the posterior IFG (pIFG) for syntactic processing and the anterior IFG (aIFG) for semantic processing with repetitive transcranial magnetic stimulation (rTMS) in a task that required the interpretation of the sentence’s prosodic realisation. Healthy participants performed a sentence completion task with syntactic and semantic decisions, while receiving 10 Hz rTMS over either left aIFG, pIFG, or vertex (control). Initial behavioural analyses showed an inhibitory effect on accuracy without task-specificity. However, electrical field simulations revealed differential effects for both subregions. In the aIFG, stronger stimulation led to slower semantic processing, with no effect of pIFG stimulation. In contrast, we found a facilitatory effect on syntactic processing in both aIFG and pIFG, where higher stimulation strength was related to faster responses. Our results provide first evidence for the functional relevance of left aIFG in semantic processing guided by intonation. The stimulation effect on syntactic responses emphasises the importance of the IFG for syntax processing, without supporting the hypothesis of a pIFG-specific involvement. Together, the results support the notion of functionally specialised IFG subregions for diverse but fundamental cues for language processing.

  • Van Hoey, T., Thompson, A. L., Do, Y., & Dingemanse, M. (2023). Iconicity in ideophones: Guessing, memorizing, and reassessing. Cognitive Science, 47(4): e13268. doi:10.1111/cogs.13268.

    Abstract

    Iconicity, or the resemblance between form and meaning, is often ascribed a special status and contrasted with default assumptions of arbitrariness in spoken language. But does iconicity in spoken language have a special status when it comes to learnability? A simple way to gauge learnability is to see how well something is retrieved from memory. We can further contrast this with guessability, to see (1) whether the ease of guessing the meanings of ideophones outperforms the rate at which they are remembered; and (2) how willing participants are to reassess what they were taught in a prior task—a novel contribution of this study. We replicate prior guessing and memory tasks using ideophones and adjectives from Japanese, Korean, and Igbo. Our results show that although native Cantonese speakers guessed ideophone meanings above chance level, they memorized both ideophones and adjectives with comparable accuracy. However, response time data show that participants took significantly longer to respond correctly to adjective–meaning pairs—indicating a discrepancy in cognitive effort that favored the recognition of ideophones. In a follow-up reassessment task, participants who were taught foil translations were more likely to choose the true translations for ideophones rather than adjectives. By comparing the findings from our guessing and memory tasks, we conclude that iconicity is more accessible if a task requires participants to actively seek out sound-meaning associations.
  • Van Wonderen, E., & Nieuwland, M. S. (2023). Lexical prediction does not rationally adapt to prediction error: ERP evidence from pre-nominal articles. Journal of Memory and Language, 132: 104435. doi:10.1016/j.jml.2023.104435.

    Abstract

    People sometimes predict upcoming words during language comprehension, but debate remains on when and to what extent such predictions indeed occur. The rational adaptation hypothesis holds that predictions develop with expected utility: people predict more strongly when predictions are frequently confirmed (low prediction error) rather than disconfirmed. However, supporting evidence is mixed thus far and has only involved measuring responses to supposedly predicted nouns, not to preceding articles that may also be predicted. The current, large-sample (N = 200) ERP study on written discourse comprehension in Dutch therefore employs the well-known ‘pre-nominal prediction effect’: enhanced N400-like ERPs for articles that are unexpected given a likely upcoming noun’s gender (i.e., the neuter gender article ‘het’ when people expect the common gender noun phrase ‘de krant’, the newspaper) compared to expected articles. We investigated whether the pre-nominal prediction effect is larger when most of the presented stories contain predictable article-noun combinations (75% predictable, 25% unpredictable) compared to when most stories contain unpredictable combinations (25% predictable, 75% unpredictable). Our results show the pre-nominal prediction effect in both contexts, with little evidence to suggest that this effect depended on the percentage of predictable combinations. Moreover, the little evidence suggesting such a dependence was primarily observed for unexpected, neuter-gender articles (‘het’), which is inconsistent with the rational adaptation hypothesis. In line with recent demonstrations (Nieuwland, 2021a,b), our results suggest that linguistic prediction is less ‘rational’ or Bayes optimal than is often suggested.
  • Van Leeuwen, E. J. C., & Haun, D. B. M. (2013). Conformity in nonhuman primates: Fad or fact? Evolution and Human Behavior, 34, 1-7. doi:10.1016/j.evolhumbehav.2012.07.005.

    Abstract

    Majority influences have long been a subject of great interest for social psychologists and, more recently, for researchers investigating social influences in nonhuman primates. Although this empirical endeavor has culminated in the conclusion that some ape and monkey species show “conformist” tendencies, the current approach seems to suffer from two fundamental limitations: (a) majority influences have not been operationalized in accord with any of the existing definitions, thereby compromising the validity of cross-species comparisons, and (b) the results have not been systematically scrutinized in light of alternative explanations. In this review, we aim to address these limitations theoretically. First, we will demonstrate how the experimental designs used in nonhuman primate studies cannot test for conformity unambiguously and address alternative explanations and potential confounds for the presented results in the form of primacy effects, frequency exposure, and perception ambiguity. Second, we will show how majority influences have been defined differently across disciplines and, therefore, propose a set of definitions in order to streamline majority influence research, where conformist transmission and conformity will be put forth as operationalizations of the overarching denominator majority influences. Finally, we conclude with suggestions to foster the study of majority influences by clarifying the empirical scope of each proposed definition, exploring compatible research designs and highlighting how majority influences are inherently contingent on situational trade-offs.
  • Van Leeuwen, E. J. C., Cronin, K. A., Schütte, S., Call, J., & Haun, D. B. M. (2013). Chimpanzees flexibly adjust their behaviour in order to maximize payoffs, not to conform to majorities. PLoS One, 8(11): e80945. doi:10.1371/journal.pone.0080945.

    Abstract

    Chimpanzees have been shown to be adept learners, both individually and socially. Yet, sometimes their conservative nature seems to hamper the flexible adoption of superior alternatives, even to the extent that they persist in using entirely ineffective strategies. In this study, we investigated chimpanzees’ behavioural flexibility in two different conditions under which social animals have been predicted to abandon personal preferences and adopt alternative strategies: i) under influence of majority demonstrations (i.e. conformity), and ii) in the presence of superior reward contingencies (i.e. maximizing payoffs). Unlike previous nonhuman primate studies, this study disentangled the concept of conformity from the tendency to maintain one’s first-learned strategy. Studying captive (n=16) and semi-wild (n=12) chimpanzees in two complementary exchange paradigms, we found that chimpanzees did not abandon their behaviour in order to match the majority, but instead remained faithful to their first-learned strategy (Study 1a and 1b). However, the chimpanzees’ fidelity to their first-learned strategy was overridden by an experimental upgrade of the profitability of the alternative strategy (Study 2). We interpret our observations in terms of chimpanzees’ relative weighing of behavioural options as a function of situation-specific trade-offs. More specifically, contrary to previous findings, chimpanzees in our study abandoned their familiar behaviour to maximize payoffs, but not to conform to a majority.
  • Van Beek, G., Flecken, M., & Starren, M. (2013). Aspectual perspective taking in event construal in L1 and L2 Dutch. International review of applied linguistics, 51(2), 199-227. doi:10.1515/iral-2013-0009.

    Abstract

    The present study focuses on the role of grammatical aspect in event construal and its function in encoding the specificity of an event. We investigate whether advanced L2 learners (L1 German) acquire target-like patterns of use of progressive aspect in Dutch, a language in which use of aspect depends on specific situation types. We analyze use of progressive markers and patterns in information selection, relating to specific features of agents or actions in dynamic event scenes. L2 event descriptions are compared with L1 Dutch and L1 German data. The L2 users display the complex situation-dependent patterns of use of aspect in Dutch, but they do not select the aspectual viewpoint (aan het construction) to the same extent as native speakers. Moreover, the encoding of specificity of the events (mentioning of specific agent and action features) reflects L1 transfer, as well as target-like performance in specific domains.
  • Van Putten, S. (2013). [Review of the book The expression of information structure. A documentation of its diversity across Africa, ed. by Ines Fiedler and Anne Schwarz]. Journal of African Languages and Linguistics, 34, 183-186. doi:10.1515/jall-2013-0005.

    Abstract

    This volume contains 13 papers dealing with various aspects of information structure in a wide variety of African languages. They form the proceedings of a workshop organized by the Collaborative Research Center on Information Structure (University of Potsdam and Humboldt University, Berlin). In the introduction, the editors define the main contribution of this volume in terms of “the spectrum of information-structural notions and phenomena discussed, the investigation of information structure in several relatively unfamiliar languages and the genealogical width of the African languages studied.” (vii–viii, emphasis added). In this sense it complements the previous volume on information structure in African languages published by the Collaborative Research Center and the University of Amsterdam (Aboh, Hartmann & Zimmermann, 2007), which was more theory-oriented.
  • Van der Zande, P. (2013). Hearing and seeing speech: Perceptual adjustments in auditory-visual speech processing. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Van der Valk, R. J. P., Duijts, L., Timpson, N. J., Salam, M. T., Standl, M., Curtin, J. A., Genuneit, J., Kerhof, M., Kreiner-Møller, E., Cáceres, A., Gref, A., Liang, L. L., Taal, H. R., Bouzigon, E., Demenais, F., Nadif, R., Ober, C., Thompson, E. E., Estrada, K., Hofman, A., Uitterlinden, A. G., van Duijn, C., Rivadeneira, F., Li, X., Eckel, S. P., Berhane, K., Gauderman, W. J., Granell, R., Evans, D. M., St Pourcain, B., McArdle, W., Kemp, J. P., Smith, G. D., Tiesler, C. M. T., Flexeder, C., Simpson, A., Murray, C. S., Fuchs, O., Postma, D. S., Bønnelykke, K., Torrent, M., Andersson, M., Sleiman, P., Hakonarson, H., Cookson, W. O., Moffatt, M. F., Paternoster, L., Melén, E., Sunyer, J., Bisgaard, H., Koppelman, G. H., Ege, M., Custovic, A., Heinrich, J., Gilliland, F. D., Henderson, A. J., Jaddoe, V. W. V., de Jongste, J. C., & EArly Genetics and Lifecourse Epidemiology (EAGLE) Consortium (2013). Fraction of exhaled nitric oxide values in childhood are associated with 17q11.2-q12 and 17q12-q21 variants. Journal of Allergy and Clinical Immunology, 134(1), 46-55. doi:10.1016/j.jaci.2013.08.053.

    Abstract

    BACKGROUND: The fraction of exhaled nitric oxide (Feno) value is a biomarker of eosinophilic airway inflammation and is associated with childhood asthma. Identification of common genetic variants associated with childhood Feno values might help to define biological mechanisms related to specific asthma phenotypes.
    OBJECTIVE: We sought to identify the genetic variants associated with childhood Feno values and their relation with asthma.
    METHODS: Feno values were measured in children age 5 to 15 years. In 14 genome-wide association studies (N = 8,858), we examined the associations of approximately 2.5 million single nucleotide polymorphisms (SNPs) with Feno values. Subsequently, we assessed whether significant SNPs were expression quantitative trait loci in genome-wide expression data sets of lymphoblastoid cell lines (n = 1,830) and were related to asthma in a previously published genome-wide association data set (cases, n = 10,365; control subjects: n = 16,110).
    RESULTS: We identified 3 SNPs associated with Feno values: rs3751972 in LYR motif containing 9 (LYRM9; P = 1.97 × 10(-10)) and rs944722 in inducible nitric oxide synthase 2 (NOS2; P = 1.28 × 10(-9)), both of which are located at 17q11.2-q12, and rs8069176 near gasdermin B (GSDMB; P = 1.88 × 10(-8)) at 17q12-q21. We found a cis expression quantitative trait locus for the transcript soluble galactoside-binding lectin 9 (LGALS9) that is in linkage disequilibrium with rs944722. rs8069176 was associated with GSDMB and ORM1-like 3 (ORMDL3) expression. rs8069176 at 17q12-q21, but not rs3751972 and rs944722 at 17q11.2-q12, was associated with physician-diagnosed asthma.
    CONCLUSION: This study identified 3 variants associated with Feno values, explaining 0.95% of the variance. Identification of functional SNPs and haplotypes in these regions might provide novel insight into the regulation of Feno values. This study highlights that both shared and distinct genetic factors affect Feno values and childhood asthma.
  • Van Stekelenburg, J., Anikina, N. C., Pouw, W., Petrovic, I., & Nederlof, N. (2013). From correlation to causation: The cruciality of a collectivity in the context of collective action. Journal of Social and Political Psychology, 1(1), 161-187. doi:10.5964/jspp.v1i1.38.

    Abstract

    This paper discusses a longitudinal field study on collective action which aims to move beyond student samples and enhance mundane realism. First, we provide a historical overview of the literature on the what (i.e., antecedents of collective action) and the how (i.e., the methods employed) of the social psychology of protest. This historical overview is substantiated with meta-analytical evidence on how these antecedents and methods changed over time. After the historical overview, we provide an empirical illustration of a longitudinal field study in a natural setting―a newly-built Dutch neighbourhood. We assessed changes in informal embeddedness, efficacy, identification, emotions, and grievances over time. Between t0 and t1 the residents protested against the plan to allow a mosque to carry out its services in a community building in the neighbourhood. We examined the antecedents of protest before [t0] and after [t1] the protests, and whether residents participated or not. We show how a larger social network functions as a catalyst in steering protest participation. Our longitudinal field study replicates basic findings from experimental and survey research. However, it also shows that one antecedent in particular, which is hard to manipulate in the lab (i.e., the size of someone’s social network), proved to be of great importance. We suggest that in overcoming our most pertinent challenge―causality―we should not only remain in our laboratories but also go out and examine real-life situations with people situated in real-life social networks.
  • Van Berkum, J. J. A., De Goede, D., Van Alphen, P. M., Mulder, E. R., & Kerstholt, J. H. (2013). How robust is the language architecture? The case of mood. Frontiers in Psychology, 4: 505. doi:10.3389/fpsyg.2013.00505.

    Abstract

    In neurocognitive research on language, the processing principles of the system at hand are usually assumed to be relatively invariant. However, research on attention, memory, decision-making, and social judgment has shown that mood can substantially modulate how the brain processes information. For example, in a bad mood, people typically have a narrower focus of attention and rely less on heuristics. In the face of such pervasive mood effects elsewhere in the brain, it seems unlikely that language processing would remain untouched. In an EEG experiment, we manipulated the mood of participants just before they read texts that confirmed or disconfirmed verb-based expectations about who would be talked about next (e.g., that “David praised Linda because … ” would continue about Linda, not David), or that respected or violated a syntactic agreement rule (e.g., “The boys turns”). ERPs showed that mood had little effect on syntactic parsing, but did substantially affect referential anticipation: whereas readers anticipated information about a specific person when they were in a good mood, a bad mood completely abolished such anticipation. A behavioral follow-up experiment suggested that a bad mood did not interfere with verb-based expectations per se, but prevented readers from using that information rapidly enough to predict upcoming reference on the fly, as the sentence unfolds. In all, our results reveal that background mood, a rather unobtrusive affective state, selectively changes a crucial aspect of real-time language processing. This observation fits well with other observed interactions between language processing and affect (emotions, preferences, attitudes, mood), and more generally testifies to the importance of studying “cold” cognitive functions in relation to “hot” aspects of the brain.
  • Van Valin Jr., R. D. (2013). Head-marking languages and linguistic theory. In B. Bickel, L. A. Grenoble, D. A. Peterson, & A. Timberlake (Eds.), Language typology and historical contingency: In honor of Johanna Nichols (pp. 91-124). Amsterdam: Benjamins.

    Abstract

    In her path-breaking 1986 paper, Johanna Nichols proposed a typological contrast between head-marking and dependent-marking languages. Nichols argues that even though the syntactic relations between the head and its dependents are the same in both types of language, the syntactic “bond” between them is not the same; in dependent-marking languages it is one of government, whereas in head-marking languages it is one of apposition. This distinction raises an important question for linguistic theory: How can this contrast – government versus apposition – which can show up in all of the major phrasal types in a language, be captured? The purpose of this paper is to explore the various approaches that have been taken in an attempt to capture the difference between head-marked and dependent-marked syntax in different linguistic theories. The basic problem that head-marking languages pose for syntactic theory will be presented, and then generative approaches will be discussed. The analysis of head-marked structure in Role and Reference Grammar will be presented.
  • Van Valin Jr., R. D. (2013). Lexical representation, co-composition, and linking syntax and semantics. In J. Pustejovsky, P. Bouillon, H. Isahara, K. Kanzaki, & C. Lee (Eds.), Advances in generative lexicon theory (pp. 67-107). Dordrecht: Springer.
  • Van der Zande, P., Jesse, A., & Cutler, A. (2013). Lexically guided retuning of visual phonetic categories. Journal of the Acoustical Society of America, 134, 562-571. doi:10.1121/1.4807814.

    Abstract

    Listeners retune the boundaries between phonetic categories to adjust to individual speakers' productions. Lexical information, for example, indicates what an unusual sound is supposed to be, and boundary retuning then enables the speaker's sound to be included in the appropriate auditory phonetic category. In this study, it was investigated whether lexical knowledge that is known to guide the retuning of auditory phonetic categories, can also retune visual phonetic categories. In Experiment 1, exposure to a visual idiosyncrasy in ambiguous audiovisually presented target words in a lexical decision task indeed resulted in retuning of the visual category boundary based on the disambiguating lexical context. In Experiment 2 it was tested whether lexical information retunes visual categories directly, or indirectly through the generalization from retuned auditory phonetic categories. Here, participants were exposed to auditory-only versions of the same ambiguous target words as in Experiment 1. Auditory phonetic categories were retuned by lexical knowledge, but no shifts were observed for the visual phonetic categories. Lexical knowledge can therefore guide retuning of visual phonetic categories, but lexically guided retuning of auditory phonetic categories is not generalized to visual categories. Rather, listeners adjust auditory and visual phonetic categories to talker idiosyncrasies separately.
  • Van Leeuwen, T. M., Hagoort, P., & Händel, B. F. (2013). Real color captures attention and overrides spatial cues in grapheme-color synesthetes but not in controls. Neuropsychologia, 51(10), 1802-1813. doi:10.1016/j.neuropsychologia.2013.06.024.

    Abstract

    Grapheme-color synesthetes perceive color when reading letters or digits. We investigated oscillatory brain signals of synesthetes vs. controls using magnetoencephalography. Brain oscillations specifically in the alpha band (∼10 Hz) have two interesting features: alpha has been linked to inhibitory processes and can act as a marker for attention. The possible role of reduced inhibition as an underlying cause of synesthesia, as well as the precise role of attention in synesthesia is widely discussed. To assess alpha power effects due to synesthesia, synesthetes as well as matched controls viewed synesthesia-inducing graphemes, colored control graphemes, and non-colored control graphemes while brain activity was recorded. Subjects had to report a color change at the end of each trial which allowed us to assess the strength of synesthesia in each synesthete. Since color (synesthetic or real) might allocate attention we also included an attentional cue in our paradigm which could direct covert attention. In controls the attentional cue always caused a lateralization of alpha power with a contralateral decrease and ipsilateral alpha increase over occipital sensors. In synesthetes, however, the influence of the cue was overruled by color: independent of the attentional cue, alpha power decreased contralateral to the color (synesthetic or real). This indicates that in synesthetes color guides attention. This was confirmed by reaction time effects due to color, i.e. faster RTs for the color side independent of the cue. Finally, the stronger the observed color dependent alpha lateralization, the stronger was the manifestation of synesthesia as measured by congruency effects of synesthetic colors on RTs. Behavioral and imaging results indicate that color induces a location-specific, automatic shift of attention towards color in synesthetes but not in controls. We hypothesize that this mechanism can facilitate coupling of grapheme and color during the development of synesthesia.
  • Van Putten, S. (2013). The meaning of the Avatime additive particle tsye. In M. Balbach, L. Benz, S. Genzel, M. Grubic, A. Renans, S. Schalowski, M. Stegenwallner, & A. Zeldes (Eds.), Information structure: Empirical perspectives on theory (pp. 55-74). Potsdam: Universitätsverlag Potsdam. Retrieved from http://nbn-resolving.de/urn/resolver.pl?urn=urn:nbn:de:kobv:517-opus-64804.
  • Van der Werf, O. J., Schuhmann, T., De Graaf, T., Ten Oever, S., & Sack, A. T. (2023). Investigating the role of task relevance during rhythmic sampling of spatial locations. Scientific Reports, 13: 12707. doi:10.1038/s41598-023-38968-z.

    Abstract

    Recently it has been discovered that visuospatial attention operates rhythmically, rather than being stably employed over time. A low-frequency 7–8 Hz rhythmic mechanism coordinates periodic windows to sample relevant locations and to shift towards other, less relevant locations in a visual scene. Rhythmic sampling theories would predict that when two locations are relevant 8 Hz sampling mechanisms split into two, effectively resulting in a 4 Hz sampling frequency at each location. Therefore, it is expected that rhythmic sampling is influenced by the relative importance of locations for the task at hand. To test this, we employed an orienting task with an arrow cue, where participants were asked to respond to a target presented in one visual field. The cue-to-target interval was systematically varied, allowing us to assess whether performance follows a rhythmic pattern across cue-to-target delays. We manipulated a location’s task relevance by altering the validity of the cue, thereby predicting the correct location in 60%, 80% or 100% of trials. Results revealed significant 4 Hz performance fluctuations at cued right visual field targets with low cue validity (60%), suggesting regular sampling of both locations. With high cue validity (80%), we observed a peak at 8 Hz towards non-cued targets, although not significant. These results were in line with our hypothesis suggesting a goal-directed balancing of attentional sampling (cued location) and shifting (non-cued location) depending on the relevance of locations in a visual scene. However, considering the hemifield specificity of the effect together with the absence of expected effects for cued trials in the high-validity conditions, we further discuss the interpretation of the data.

    Additional information

    supplementary information
  • van der Burght, C. L., Friederici, A. D., Maran, M., Papitto, G., Pyatigorskaya, E., Schroen, J., Trettenbrein, P., & Zaccarella, E. (2023). Cleaning up the brickyard: How theory and methodology shape experiments in cognitive neuroscience of language. Journal of Cognitive Neuroscience, 35(12), 2067-2088. doi:10.1162/jocn_a_02058.

    Abstract

    The capacity for language is a defining property of our species, yet despite decades of research, evidence on its neural basis remains mixed and a generalized consensus is difficult to achieve. We suggest that this is partly caused by researchers defining “language” in different ways, with a focus on a wide range of phenomena, properties, and levels of investigation. Accordingly, there is very little agreement amongst cognitive neuroscientists of language on the operationalization of fundamental concepts to be investigated in neuroscientific experiments. Here, we review chains of derivation in the cognitive neuroscience of language, focusing on how the hypothesis under consideration is defined by a combination of theoretical and methodological assumptions. We first attempt to disentangle the complex relationship between linguistics, psychology, and neuroscience in the field. Next, we focus on how the conclusions that can be drawn from any experiment are inherently constrained by auxiliary assumptions, both theoretical and methodological, on which the validity of those conclusions rests. These issues are discussed in the context of classical experimental manipulations as well as study designs that employ novel approaches such as naturalistic stimuli and computational modelling. We conclude by proposing that a highly interdisciplinary field such as the cognitive neuroscience of language requires researchers to form explicit statements concerning the theoretical definitions, methodological choices, and other constraining factors involved in their work.
  • Vaughn, C., & Brouwer, S. (2013). Perceptual integration of indexical information in bilingual speech. Proceedings of Meetings on Acoustics, 19: 060208. doi:10.1121/1.4800264.

    Abstract

    The present research examines how different types of indexical information, namely talker information and the language being spoken, are perceptually integrated in bilingual speech. Using a speeded classification paradigm (Garner, 1974), variability in characteristics of the talker (gender in Experiment 1 and specific talker in Experiment 2) and in the language being spoken (Mandarin vs. English) was manipulated. Listeners from two different language backgrounds, English monolinguals and Mandarin-English bilinguals, were asked to classify short, meaningful sentences obtained from different Mandarin-English bilingual talkers on these indexical dimensions. Results for the gender-language classification (Exp. 1) showed a significant, symmetrical interference effect for both listener groups, indicating that gender information and language are processed in an integral manner. For talker-language classification (Exp. 2), language interfered more with talker than vice versa for the English monolinguals, but symmetrical interference was found for the Mandarin-English bilinguals. These results suggest both that talker-specificity is not fully segregated from language-specificity, and that bilinguals exhibit more balanced classification along various indexical dimensions of speech. Follow-up studies are currently investigating this talker-language dependency in bilingual listeners who do not speak Mandarin, in order to disentangle the roles of bilingualism and language familiarity.
  • Verdonschot, R. G., La Heij, W., Tamaoka, K., Kiyama, S., You, W.-P., & Schiller, N. O. (2013). The multiple pronunciations of Japanese kanji: A masked priming investigation. Quarterly Journal of Experimental Psychology, 66(10), 2023-2038. doi:10.1080/17470218.2013.773050.

    Abstract

    English words with an inconsistent grapheme-to-phoneme conversion or with more than one pronunciation (homographic heterophones; e.g., lead-/l epsilon d/, /lid/) are read aloud more slowly than matched controls, presumably due to competition processes. In Japanese kanji, the majority of the characters have multiple readings for the same orthographic unit: the native Japanese reading (KUN) and the derived Chinese reading (ON). This leads to the question of whether reading these characters also shows processing costs. Studies examining this issue have provided mixed evidence. The current study addressed the question of whether processing of these kanji characters leads to the simultaneous activation of their KUN and ON readings. This was measured in a direct way in a masked priming paradigm. In addition, we assessed whether the relative frequencies of the KUN and ON pronunciations (dominance ratio, measured in compound words) affect the amount of priming. The results of two experiments showed that: (a) a single kanji, presented as a masked prime, facilitates the reading of (katakana transcriptions of) its KUN and ON pronunciations; however, (b) this was most consistently found when the dominance ratio was around 50% (no strong dominance towards either pronunciation) and when the dominance was towards the ON reading (high-ON group). When the dominance was towards the KUN reading (high-KUN group), no significant priming for the ON reading was observed. Implications for models of kanji processing are discussed.
  • Verdonschot, R. G., Nakayama, M., Zhang, Q., Tamaoka, K., & Schiller, N. O. (2013). The proximate phonological unit of Chinese-English bilinguals: Proficiency matters. PLoS One, 8(4): e61454. doi:10.1371/journal.pone.0061454.

    Abstract

    An essential step to create phonology according to the language production model by Levelt, Roelofs and Meyer is to assemble phonemes into a metrical frame. However, recently, it has been proposed that different languages may rely on different grain sizes of phonological units to construct phonology. For instance, it has been proposed that, instead of phonemes, Mandarin Chinese uses syllables and Japanese uses moras to fill the metrical frame. In this study, we used a masked priming-naming task to investigate how bilinguals assemble their phonology for each language when the two languages differ in grain size. Highly proficient Mandarin Chinese-English bilinguals showed a significant masked onset priming effect in English (L2), and a significant masked syllabic priming effect in Mandarin Chinese (L1). These results suggest that their proximate unit is phonemic in L2 (English), and that bilinguals may use different phonological units depending on the language that is being processed. Additionally, under some conditions, a significant sub-syllabic priming effect was observed even in Mandarin Chinese, which indicates that L2 phonology exerts influences on L1 target processing as a consequence of having a good command of English.

    Additional information

    English stimuli Chinese stimuli
  • Verga, L., D’Este, G., Cassani, S., Leitner, C., Kotz, S. A., Ferini-Strambi, L., & Galbiati, A. (2023). Sleeping with time in mind? A literature review and a proposal for a screening questionnaire on self-awakening. PLoS One, 18(3): e0283221. doi:10.1371/journal.pone.0283221.

    Abstract

    Some people report being able to spontaneously “time” the end of their sleep. This ability to self-awaken challenges the idea of sleep as a passive cognitive state. Yet, current evidence on this phenomenon is limited, partly because of the varied definitions of self-awakening and experimental approaches used to study it. Here, we provide a review of the literature on self-awakening. Our aim is to i) contextualise the phenomenon, ii) propose an operating definition, and iii) summarise the scientific approaches used so far. The literature review identified 17 studies on self-awakening. Most of them adopted an objective sleep evaluation (76%), targeted nocturnal sleep (76%), and used a single criterion to define the success of awakening (82%); for most studies, this corresponded to awakening occurring in a time window of 30 minutes around the expected awakening time. Out of 715 total participants, 125 (17%) reported to be self-awakeners, with an average age of 23.24 years and a slight predominance of males compared to females. These results reveal self-awakening as a relatively rare phenomenon. To facilitate the study of self-awakening, and based on the results of the literature review, we propose a quick paper-and-pencil screening questionnaire for self-awakeners and provide an initial validation for it. Taken together, the combined results of the literature review and the proposed questionnaire help in characterising a theoretical framework for self-awakenings, while providing a useful tool and empirical suggestions for future experimental studies, which should ideally employ objective measurements.
  • Verga, L., & Kotz, S. A. (2013). How relevant is social interaction in second language learning? Frontiers in Human Neuroscience, 7: 550. doi:10.3389/fnhum.2013.00550.

    Abstract

    Verbal language is the most widespread mode of human communication, and an intrinsically social activity. This claim is strengthened by evidence emerging from different fields, which clearly indicates that social interaction influences human communication, and more specifically, language learning. Indeed, research conducted with infants and children shows that interaction with a caregiver is necessary to acquire language. Further evidence on the influence of sociality on language comes from social and linguistic pathologies, in which deficits in social and linguistic abilities are tightly intertwined, as is the case for Autism, for example. However, studies on adult second language (L2) learning have been mostly focused on individualistic approaches, partly because of methodological constraints, especially of imaging methods. The question as to whether social interaction should be considered as a critical factor impacting upon adult language learning still remains underspecified. Here, we review evidence in support of the view that sociality plays a significant role in communication and language learning, in an attempt to emphasize factors that could facilitate this process in adult language learning. We suggest that sociality should be considered as a potentially influential factor in adult language learning and that future studies in this domain should explicitly target this factor.
  • Verga, L., Kotz, S. A., & Ravignani, A. (2023). The evolution of social timing. Physics of Life Reviews, 46, 131-151. doi:10.1016/j.plrev.2023.06.006.

    Abstract

    Sociality and timing are tightly interrelated in human interaction as seen in turn-taking or synchronised dance movements. Sociality and timing also show in communicative acts of other species that might be pleasurable, but also necessary for survival. Sociality and timing often co-occur, but their shared phylogenetic trajectory is unknown: How, when, and why did they become so tightly linked? Answering these questions is complicated by several constraints; these include the use of divergent operational definitions across fields and species, the focus on diverse mechanistic explanations (e.g., physiological, neural, or cognitive), and the frequent adoption of anthropocentric theories and methodologies in comparative research. These limitations hinder the development of an integrative framework on the evolutionary trajectory of social timing and make comparative studies not as fruitful as they could be. Here, we outline a theoretical and empirical framework to test contrasting hypotheses on the evolution of social timing with species-appropriate paradigms and consistent definitions. To facilitate future research, we introduce an initial set of representative species and empirical hypotheses. The proposed framework aims at building and contrasting evolutionary trees of social timing toward and beyond the crucial branch represented by our own lineage. Given the integration of cross-species and quantitative approaches, this research line might lead to an integrated empirical-theoretical paradigm and, as a long-term goal, explain why humans are such socially coordinated animals.
  • Verga, L., Schwartze, M., & Kotz, S. A. (2023). Neurophysiology of language pathologies. In M. Grimaldi, E. Brattico, & Y. Shtyrov (Eds.), Language Electrified: Neuromethods (pp. 753-776). New York, NY: Springer US. doi:10.1007/978-1-0716-3263-5_24.

    Abstract

    Language- and speech-related disorders are among the most frequent consequences of developmental and acquired pathologies. While classical approaches to the study of these disorders typically employed the lesion method to unveil one-to-one correspondence between locations, the extent of the brain damage, and corresponding symptoms, recent advances advocate the use of online methods of investigation. For example, the use of electrophysiology or magnetoencephalography—especially when combined with anatomical measures—allows for in vivo tracking of real-time language and speech events, and thus represents a particularly promising venue for future research targeting rehabilitative interventions. In this chapter, we provide a comprehensive overview of language and speech pathologies arising from cortical and/or subcortical damage, and their corresponding neurophysiological and pathological symptoms. Building upon the reviewed evidence and literature, we aim at providing a description of how the neurophysiology of the language network changes as a result of brain damage. We will conclude by summarizing the evidence presented in this chapter, while suggesting directions for future research.
  • Verhoeven, V. J. M., Hysi, P. G., Wojciechowski, R., Fan, Q., Guggenheim, J. A., Höhn, R., MacGregor, S., Hewitt, A. W., Nag, A., Cheng, C.-Y., Yonova-Doing, E., Zhou, X., Ikram, M. K., Buitendijk, G. H. S., McMahon, G., Kemp, J. P., St Pourcain, B., Simpson, C. L., Mäkelä, K.-M., Lehtimäki, T., Kähönen, M., Paterson, A. D., Hosseini, S. M., Wong, H. S., Xu, L., Jonas, J. B., Pärssinen, O., Wedenoja, J., Yip, S. P., Ho, D. W. H., Pang, C. P., Chen, L. J., Burdon, K. P., Craig, J. E., Klein, B. E. K., Klein, R., Haller, T., Metspalu, A., Khor, C.-C., Tai, E.-S., Aung, T., Vithana, E., Tay, W.-T., Barathi, V. A., Chen, P., Li, R., Liao, J., Zheng, Y., Ong, R. T., Döring, A., Evans, D. M., Timpson, N. J., Verkerk, A. J. M. H., Meitinger, T., Raitakari, O., Hawthorne, F., Spector, T. D., Karssen, L. C., Pirastu, M., Murgia, F., Ang, W., Mishra, A., Montgomery, G. W., Pennell, C. E., Cumberland, P. M., Cotlarciuc, I., Mitchell, P., Wang, J. J., Schache, M., Janmahasatian, S., Janmahasathian, S., Igo, R. P., Lass, J. H., Chew, E., Iyengar, S. K., Gorgels, T. G. M. F., Rudan, I., Hayward, C., Wright, A. F., Polasek, O., Vatavuk, Z., Wilson, J. F., Fleck, B., Zeller, T., Mirshahi, A., Müller, C., Uitterlinden, A. G., Rivadeneira, F., Vingerling, J. R., Hofman, A., Oostra, B. A., Amin, N., Bergen, A. A. B., Teo, Y.-Y., Rahi, J. S., Vitart, V., Williams, C., Baird, P. N., Wong, T.-Y., Oexle, K., Pfeiffer, N., Mackey, D. A., Young, T. L., van Duijn, C. M., Saw, S.-M., Bailey-Wilson, J. E., Stambolian, D., Klaver, C. C., Hammond, C. J., Consortium for Refractive Error and Myopia (CREAM), The Diabetes Control and Complications Trial/Epidemiology of Diabetes Interventions and Complications (DCCT/EDIC) Research Group, Wellcome Trust Case Control Consortium 2 (WTCCC2), & The Fuchs' Genetics Multi-Center Study Group (2013). Genome-wide meta-analyses of multiancestry cohorts identify multiple new susceptibility loci for refractive error and myopia. Nature Genetics, 45(3), 314-318. doi:10.1038/ng.2554.

    Abstract

    Refractive error is the most common eye disorder worldwide and is a prominent cause of blindness. Myopia affects over 30% of Western populations and up to 80% of Asians. The CREAM consortium conducted genome-wide meta-analyses, including 37,382 individuals from 27 studies of European ancestry and 8,376 from 5 Asian cohorts. We identified 16 new loci for refractive error in individuals of European ancestry, of which 8 were shared with Asians. Combined analysis identified 8 additional associated loci. The new loci include candidate genes with functions in neurotransmission (GRIA4), ion transport (KCNQ5), retinoic acid metabolism (RDH5), extracellular matrix remodeling (LAMA2 and BMP2) and eye development (SIX6 and PRSS56). We also confirmed previously reported associations with GJD2 and RASGRF1. Risk score analysis using associated SNPs showed a tenfold increased risk of myopia for individuals carrying the highest genetic load. Our results, based on a large meta-analysis across independent multiancestry studies, considerably advance understanding of the mechanisms involved in refractive error and myopia.
  • Verkerk, A., & Frostad, B. H. (2013). The encoding of manner predications and resultatives in Oceanic: A typological and historical overview. Oceanic Linguistics, 52, 1-35. doi:10.1353/ol.2013.0010.

    Abstract

    This paper is concerned with the encoding of resultatives and manner predications in Oceanic languages. Our point of departure is a typological overview of the encoding strategies and their geographical distribution, and we investigate their historical traits by the use of phylogenetic comparative methods. A full theory of the historical pathways is not always accessible for all the attested encoding strategies, given the data available for this study. However, tentative theories about the development and origin of the attested strategies are given. One of the most frequent strategy types used to encode both manner predications and resultatives has been given special emphasis. This is a construction in which a reflex form of the Proto-Oceanic causative *pa-/*paka- modifies the second verb in serial verb constructions.

    Additional information

    52.1.verkerk_supp01.pdf
  • Verkerk, A. (2013). Scramble, scurry and dash: The correlation between motion event encoding and manner verb lexicon size in Indo-European. Language Dynamics and Change, 3, 169-217. doi:10.1163/22105832-13030202.

    Abstract

    In recent decades, much has been discovered about the different ways in which people can talk about motion (Talmy, 1985, 1991; Slobin, 1996, 1997, 2004). Slobin (1997) has suggested that satellite-framed languages typically have a larger and more diverse lexicon of manner of motion verbs (such as run, fly, and scramble) when compared to verb-framed languages. Slobin (2004) has claimed that larger manner of motion verb lexicons originate over time because codability factors increase the accessibility of manner in satellite-framed languages. In this paper I investigate the dependency between the use of the satellite-framed encoding construction and the size of the manner verb lexicon. The data used come from 20 Indo-European languages. The methodology applied is a range of phylogenetic comparative methods adopted from biology, which allow for an investigation of this dependency while taking into account the shared history between these 20 languages. The results provide evidence that Slobin’s hypothesis was correct, and indeed there seems to be a relationship between the use of the satellite-framed construction and the size of the manner verb lexicon.
  • Vernes, S. C., & Fisher, S. E. (2013). Genetic pathways implicated in speech and language. In S. Helekar (Ed.), Animal models of speech and language disorders (pp. 13-40). New York: Springer. doi:10.1007/978-1-4614-8400-4_2.

    Abstract

    Disorders of speech and language are highly heritable, providing strong support for a genetic basis. However, the underlying genetic architecture is complex, involving multiple risk factors. This chapter begins by discussing genetic loci associated with common multifactorial language-related impairments and goes on to detail the only gene (known as FOXP2) to be directly implicated in a rare monogenic speech and language disorder. Although FOXP2 was initially uncovered in humans, model systems have been invaluable in progressing our understanding of the function of this gene and its associated pathways in language-related areas of the brain. Research in species from mouse to songbird has revealed effects of this gene on relevant behaviours including acquisition of motor skills and learned vocalisations and demonstrated a role for Foxp2 in neuronal connectivity and signalling, particularly in the striatum. Animal models have also facilitated the identification of wider neurogenetic networks thought to be involved in language development and disorder and allowed the investigation of new candidate genes for disorders involving language, such as CNTNAP2 and FOXP1. Ongoing work in animal models promises to yield new insights into the genetic and neural mechanisms underlying human speech and language.
  • Vessel, E. A., Pasqualette, L., Uran, C., Koldehoff, S., Bignardi, G., & Vinck, M. (2023). Self-relevance predicts the aesthetic appeal of real and synthetic artworks generated via neural style transfer. Psychological Science, 34(9), 1007-1023. doi:10.1177/09567976231188107.

    Abstract

    What determines the aesthetic appeal of artworks? Recent work suggests that aesthetic appeal can, to some extent, be predicted from a visual artwork’s image features. Yet a large fraction of variance in aesthetic ratings remains unexplained and may relate to individual preferences. We hypothesized that an artwork’s aesthetic appeal depends strongly on self-relevance. In a first study (N = 33 adults, online replication N = 208), rated aesthetic appeal for real artworks was positively predicted by rated self-relevance. In a second experiment (N = 45 online), we created synthetic, self-relevant artworks using deep neural networks that transferred the style of existing artworks to photographs. Style transfer was applied to self-relevant photographs selected to reflect participant-specific attributes such as autobiographical memories. Self-relevant, synthetic artworks were rated as more aesthetically appealing than matched control images, at a level similar to human-made artworks. Thus, self-relevance is a key determinant of aesthetic appeal, independent of artistic skill and image features.

    Additional information

    supplementary materials
  • Vingerhoets, G., Verhelst, H., Gerrits, R., Badcock, N., Bishop, D. V. M., Carey, D., Flindall, J., Grimshaw, G., Harris, L. J., Hausmann, M., Hirnstein, M., Jäncke, L., Joliot, M., Specht, K., Westerhausen, R., & LICI consortium (2023). Laterality indices consensus initiative (LICI): A Delphi expert survey report on recommendations to record, assess, and report asymmetry in human behavioural and brain research. Laterality, 28(2-3), 122-191. doi:10.1080/1357650X.2023.2199963.

    Abstract

    Laterality indices (LIs) quantify the left-right asymmetry of brain and behavioural variables and provide a measure that is statistically convenient and seemingly easy to interpret. Substantial variability in how structural and functional asymmetries are recorded, calculated, and reported, however, suggests little agreement on the conditions required for their valid assessment. The present study aimed to establish consensus on general aspects of laterality research, and more specifically within particular methods or techniques (i.e., dichotic listening, visual half-field technique, performance asymmetries, preference bias reports, electrophysiological recording, functional MRI, structural MRI, and functional transcranial Doppler sonography). Experts in laterality research were invited to participate in an online Delphi survey to evaluate consensus and stimulate discussion. In Round 0, 106 experts generated 453 statements on what they considered good practice in their field of expertise. Statements were organised into a 295-statement survey that the experts then were asked, in Round 1, to independently assess for importance and support, which further reduced the survey to 241 statements that were presented again to the experts in Round 2. Based on the Round 2 input, we present a set of critically reviewed key recommendations to record, assess, and report laterality research for various methods.

  • Vogel, C., Koutsombogera, M., Murat, A. C., Khosrobeigi, Z., & Ma, X. (2023). Gestural linguistic context vectors encode gesture meaning. In W. Pouw, J. Trujillo, H. R. Bosker, L. Drijvers, M. Hoetjes, J. Holler, S. Kadava, L. Van Maastricht, E. Mamus, & A. Ozyurek (Eds.), Gesture and Speech in Interaction (GeSpIn) Conference. doi:10.17617/2.3527176.

    Abstract

    Linguistic context vectors are adapted for measuring the linguistic contexts that accompany gestures and comparable co-linguistic behaviours. Focusing on gestural semiotic types, it is demonstrated that gestural linguistic context vectors carry information associated with gesture. It is suggested that these may be used to approximate gesture meaning in a similar manner to the approximation of word meaning by context vectors.
  • von Stutterheim, C., Flecken, M., & Carroll, M. (2013). Introduction: Conceptualizing in a second language. International Review of Applied Linguistics in Language Teaching, 51(2), 77-85. doi:10.1515/iral-2013-0004.
  • von Stutterheim, C., & Flecken, M. (Eds.). (2013). Principles of information organization in L2 discourse [Special Issue]. International Review of Applied linguistics in Language Teaching (IRAL), 51(2).
  • De Vos, C. (2013). Sign-Spatiality in Kata Kolok: How a village sign language of Bali inscribes its signing space [Dissertation abstract]. Sign Language & Linguistics, 16(2), 277-284. doi:10.1075/sll.16.2.08vos.
  • De Vriend, F., Broeder, D., Depoorter, G., van Eerten, L., & Van Uytvanck, D. (2013). Creating & Testing CLARIN Metadata Components. Language Resources and Evaluation, 47(4), 1315-1326. doi:10.1007/s10579-013-9231-6.

    Abstract

    The CLARIN Metadata Infrastructure (CMDI) that is being developed in Common Language Resources and Technology Infrastructure (CLARIN) is a computer-supported framework that combines a flexible component approach with the explicit declaration of semantics. The goal of the Dutch CLARIN project “Creating & Testing CLARIN Metadata Components” was to create metadata components and profiles for a wide variety of existing resources housed at two data centres according to the CMDI specifications. In doing so the principles of the framework were tested. The results of the project are of benefit to other CLARIN-projects that are expected to adhere to the CMDI framework and its accompanying tools.
  • Wagensveld, B., Segers, E., Van Alphen, P. M., & Verhoeven, L. (2013). The role of lexical representations and phonological overlap in rhyme judgments of beginning, intermediate and advanced readers. Learning and Individual Differences, 23, 64-71. doi:10.1016/j.lindif.2012.09.007.

    Abstract

    Studies have shown that prereaders find globally similar non-rhyming pairs (i.e., bell–ball) difficult to judge. Although this effect has been explained as a result of ill-defined lexical representations, others have suggested that it is part of an innate tendency to respond to phonological overlap. In the present study we examined this effect over time. Beginning, intermediate and advanced readers were presented with a rhyme judgment task containing rhyming, phonologically similar, and unrelated non-rhyming pairs. To examine the role of lexical representations, participants were presented with both words and pseudowords. Outcomes showed that pseudoword processing was difficult for children but not for adults. The global similarity effect was present in both children and adults. The findings imply that holistic representations cannot explain the incapacity to ignore similarity relations during rhyming. Instead, the data provide more evidence for the idea that global similarity processing is part of a more fundamental innate phonological processing capacity.
  • Wagensveld, B., Van Alphen, P. M., Segers, E., Hagoort, P., & Verhoeven, L. (2013). The neural correlates of rhyme awareness in preliterate and literate children. Clinical Neurophysiology, 124, 1336-1345. doi:10.1016/j.clinph.2013.01.022.

    Abstract

    Objective: Most rhyme awareness assessments do not encompass measures of the global similarity effect (i.e., children who are able to perform simple rhyme judgments get confused when presented with globally similar non-rhyming pairs). The present study examines the neural nature of this effect by studying the N450 rhyme effect.
    Methods: Behavioral and electrophysiological responses of Dutch pre-literate kindergartners and literate second graders were recorded while they made rhyme judgments of word pairs in three conditions: phonologically rhyming (e.g., wijn-pijn), overlapping non-rhyming (e.g., pen-pijn), and unrelated non-rhyming pairs (e.g., boom-pijn).
    Results: Behaviorally, both groups had difficulty judging overlapping but not rhyming and unrelated pairs. The neural data of second graders showed that overlapping pairs were processed in a similar fashion as unrelated pairs; both showed a more negative deflection of the N450 component than rhyming items. Kindergartners did not show a typical N450 rhyme effect. However, some other interesting ERP differences were observed, indicating that preliterates are sensitive to rhyme at a certain level.
    Significance: Rhyme judgments of globally similar items rely on the same process as rhyme judgments of rhyming and unrelated items. Therefore, incorporating a globally similar condition in rhyme assessments may lead to a more in-depth measure of early phonological awareness skills.
    Highlights: Behavioral and electrophysiological responses were recorded while (pre)literate children made rhyme judgments of rhyming, overlapping, and unrelated words. Behaviorally, both groups had difficulty judging overlapping pairs as non-rhyming, while overlapping and unrelated neural patterns were similar in literates. Preliterates showed a different pattern, indicating a developing phonological system.
  • Wagner, A. (2013). Cross-language similarities and differences in the uptake of place information. Journal of the Acoustical Society of America, 133, 4256-4267. doi:10.1121/1.4802904.

    Abstract

    Cross-language differences in the use of coarticulatory cues for the identification of fricatives have been demonstrated in a phoneme detection task: Listeners with perceptually similar fricative pairs in their native phoneme inventories (English, Polish, Spanish) relied more on cues from vowels than listeners with perceptually more distinct fricative contrasts (Dutch and German). The present gating study further investigated these cross-language differences and addressed three questions. (1) Are there cross-language differences in informativeness of parts of the speech signal regarding place of articulation for fricative identification? (2) Are such cross-language differences fricative-specific, or do they extend to the perception of place of articulation for plosives? (3) Is such language-specific uptake of information based on cues preceding or following the consonantal constriction? Dutch, Italian, Polish, and Spanish listeners identified fricatives and plosives in gated CV and VC syllables. The results showed cross-language differences in the informativeness of coarticulatory cues for fricative identification: Spanish and Polish listeners extracted place of articulation information from shorter portions of VC syllables. No language-specific differences were found for plosives, suggesting that greater reliance on coarticulatory cues did not generalize to other phoneme types. The language-specific differences for fricatives were based on coarticulatory cues into the consonant.
  • Walters, J., Rujescu, D., Franke, B., Giegling, I., Vasquez, A., Hargreaves, A., Russo, G., Morris, D., Hoogman, M., Da Costa, A., Moskvina, V., Fernandez, G., Gill, M., Corvin, A., O'Donovan, M., Donohoe, G., & Owen, M. (2013). The role of the major histocompatibility complex region in cognition and brain structure: A schizophrenia GWAS follow-up. American Journal of Psychiatry, 170, 877-885. doi:10.1176/appi.ajp.2013.12020226.

    Abstract

    Objective: The authors investigated the effects of recently identified genome-wide significant schizophrenia genetic risk variants on cognition and brain structure. Method: A panel of six single-nucleotide polymorphisms (SNPs) was selected to represent genome-wide significant loci from three recent genome-wide association studies (GWAS) for schizophrenia and was tested for association with cognitive measures in 346 patients with schizophrenia and 2,342 healthy comparison subjects. Nominally significant results were evaluated for replication in an independent case-control sample. For SNPs showing evidence of association with cognition, associations with brain structural volumes were investigated in a large independent healthy comparison sample. Results: Five of the six SNPs showed no significant association with any cognitive measure. One marker in the major histocompatibility complex (MHC) region, rs6904071, showed independent, replicated evidence of association with delayed episodic memory and was significant when both samples were combined. In the combined sample of up to 3,100 individuals, this SNP was associated with widespread effects across cognitive domains, although these additional associations were no longer significant after adjusting for delayed episodic memory. In the large independent structural imaging sample, the same SNP was also associated with decreased hippocampal volume. Conclusions: The authors identified a SNP in the MHC region that was associated with cognitive performance in patients with schizophrenia and healthy comparison subjects. This SNP, rs6904071, showed a replicated association with episodic memory and hippocampal volume. These findings implicate the MHC region in hippocampal structure and functioning, consistent with the role of MHC proteins in synaptic development and function. Follow-up of these results has the potential to provide insights into the pathophysiology of schizophrenia and cognition.

    Additional information

    Hoogman_2013_JourAmePsy.supp.pdf
  • Wang, L., Bastiaansen, M. C. M., Yang, Y., & Hagoort, P. (2013). ERP evidence on the interaction between information structure and emotional salience of words. Cognitive, Affective and Behavioral Neuroscience, 13, 297-310. doi:10.3758/s13415-012-0146-2.

    Abstract

    Both emotional words and words focused by information structure can capture attention. This study examined the interplay between emotional salience and information structure in modulating attentional resources in the service of integrating emotional words into sentence context. Event-related potentials (ERPs) to affectively negative, neutral, and positive words, which were either focused or nonfocused in question–answer pairs, were evaluated during sentence comprehension. The results revealed an early negative effect (90–200 ms), a P2 effect, as well as an effect in the N400 time window, for both emotional salience and information structure. Moreover, an interaction between emotional salience and information structure occurred within the N400 time window over right posterior electrodes, showing that information structure influences the semantic integration only for neutral words, but not for emotional words. This might reflect the fact that the linguistic salience of emotional words can override the effect of information structure on the integration of words into context. The interaction provides evidence for attention–emotion interactions at a later stage of processing. In addition, the absence of interaction in the early time window suggests that the processing of emotional information is highly automatic and independent of context. The results suggest independent attention capture systems of emotional salience and information structure at the early stage but an interaction between them at a later stage, during the semantic integration of words.
  • Wang, L., Zhu, Z., Bastiaansen, M. C. M., Hagoort, P., & Yang, Y. (2013). Recognizing the emotional valence of names: An ERP study. Brain and Language, 125, 118-127. doi:10.1016/j.bandl.2013.01.006.

    Abstract

    Unlike common nouns, person names refer to unique entities and generally have a referring function. We used event-related potentials to investigate the time course of identifying the emotional meaning of nouns and names. The emotional valence of names and nouns were manipulated separately. The results show early N1 effects in response to emotional valence only for nouns. This might reflect automatic attention directed towards emotional stimuli. The absence of such an effect for names supports the notion that the emotional meaning carried by names is accessed after word recognition and person identification. In addition, both names with negative valence and emotional nouns elicited late positive effects, which have been associated with evaluation of emotional significance. This positive effect started earlier for nouns than for names, but with similar durations. Our results suggest that distinct neural systems are involved in the retrieval of names’ and nouns’ emotional meaning.
  • Wang, L., & Chu, M. (2013). The role of beat gesture and pitch accent in semantic processing: An ERP study. Neuropsychologia, 51(13), 2847-2855. doi:10.1016/j.neuropsychologia.2013.09.027.

    Abstract

    The present study investigated whether and how beat gesture (small baton-like hand movements used to emphasize information in speech) influences semantic processing as well as its interaction with pitch accent during speech comprehension. Event-related potentials were recorded as participants watched videos of a person gesturing and speaking simultaneously. The critical words in the spoken sentences were accompanied by a beat gesture, a control hand movement, or no hand movement, and were expressed either with or without pitch accent. We found that both beat gesture and control hand movement induced smaller negativities in the N400 time window than when no hand movement was presented. The reduced N400s indicate that both beat gesture and control movement facilitated the semantic integration of the critical word into the sentence context. In addition, the words accompanied by beat gesture elicited smaller negativities in the N400 time window than those accompanied by control hand movement over right posterior electrodes, suggesting that beat gesture has a unique role for enhancing semantic processing during speech comprehension. Finally, no interaction was observed between beat gesture and pitch accent, indicating that they affect semantic processing independently.
  • Wang, M., Shao, Z., Verdonschot, R. G., Chen, Y., & Schiller, N. O. (2023). Orthography influences spoken word production in blocked cyclic naming. Psychonomic Bulletin & Review, 30, 383-392. doi:10.3758/s13423-022-02123-y.

    Abstract

    Does the way a word is written influence its spoken production? Previous studies suggest that orthography is involved only when the orthographic representation is highly relevant during speaking (e.g., in reading-aloud tasks). To address this issue, we carried out two experiments using the blocked cyclic picture-naming paradigm. In both experiments, participants were asked to name pictures repeatedly in orthographically homogeneous or heterogeneous blocks. In the naming task, the written form was not shown; however, in homogeneous blocks the radical of the first character was shared across the four picture names. A facilitative orthographic effect was found when picture names shared part of their written forms, compared with the heterogeneous condition. This facilitative effect was independent of the position of orthographic overlap (i.e., the left, the lower, or the outer part of the character). These findings strongly suggest that orthography can influence speaking even when it is not highly relevant (i.e., during picture naming) and that the orthographic effect is less likely to be attributed to strategic preparation.
  • Warmelink, L., Vrij, A., Mann, S., Leal, S., & Poletiek, F. H. (2013). The effects of unexpected questions on detecting familiar and unfamiliar lies. Psychiatry, Psychology and law, 20(1), 29-35. doi:10.1080/13218719.2011.619058.

    Abstract

    Previous research suggests that lie detection can be improved by asking the interviewee unexpected questions. The present experiment investigates the effect of two types of unexpected questions: background questions and detail questions, on detecting lies about topics with which the interviewee is (a) familiar or (b) unfamiliar. In this experiment, 66 participants read interviews in which interviewees answered background or detail questions, either truthfully or deceptively. Those who answered deceptively could be lying about a topic they were familiar with or about a topic they were unfamiliar with. The participants were asked to judge whether the interviewees were lying. The results revealed that background questions distinguished truths from both types of lies, while the detail questions distinguished truths from unfamiliar lies, but not from familiar lies. The implications of these findings are discussed.
  • Whelan, L., Dockery, A., Stephenson, K. A. J., Zhu, J., Kopčić, E., Post, I. J. M., Khan, M., Corradi, Z., Wynne, N., O’ Byrne, J. J., Duignan, E., Silvestri, G., Roosing, S., Cremers, F. P. M., Keegan, D. J., Kenna, P. F., & Farrar, G. J. (2023). Detailed analysis of an enriched deep intronic ABCA4 variant in Irish Stargardt disease patients. Scientific Reports, 13: 9380. doi:10.1038/s41598-023-35889-9.

    Abstract

    Over 15% of probands in a large cohort of more than 1500 inherited retinal degeneration patients present with a clinical diagnosis of Stargardt disease (STGD1), a recessive form of macular dystrophy caused by biallelic variants in the ABCA4 gene. Participants were clinically examined and underwent either target capture sequencing of the exons and some pathogenic intronic regions of ABCA4, sequencing of the entire ABCA4 gene, or whole genome sequencing. ABCA4 c.4539 + 2028C > T, p.[= ,Arg1514Leufs*36] is a pathogenic deep intronic variant that results in a retina-specific 345-nucleotide pseudoexon inclusion. Through analysis of the Irish STGD1 cohort, 25 individuals across 18 pedigrees harbour ABCA4 c.4539 + 2028C > T and another pathogenic variant. This includes, to the best of our knowledge, the only two homozygous patients identified to date. This provides important evidence of variant pathogenicity for this deep intronic variant, highlighting the value of homozygotes for variant interpretation. Fifteen other heterozygous occurrences of this variant have been reported in patients globally, indicating significant enrichment in the Irish population. We provide detailed genetic and clinical characterization of these patients, illustrating that ABCA4 c.4539 + 2028C > T is a variant of mild to intermediate severity. These results have important implications for unresolved STGD1 patients globally, with approximately 10% of the population in some western countries claiming Irish heritage. This study exemplifies that detection and characterization of founder variants is a diagnostic imperative.

    Additional information

    supplemental material
  • Whitmarsh, S., Udden, J., Barendregt, H., & Petersson, K. M. (2013). Mindfulness reduces habitual responding based on implicit knowledge: Evidence from artificial grammar learning. Consciousness and Cognition, (3), 833-845. doi:10.1016/j.concog.2013.05.007.

    Abstract

    Participants were unknowingly exposed to complex regularities in a working memory task. The existence of implicit knowledge was subsequently inferred from a preference for stimuli with similar grammatical regularities. Several affective traits have been shown to influence AGL performance positively, many of which are related to a tendency for automatic responding. We therefore tested whether the mindfulness trait predicted a reduction of grammatically congruent preferences, and used emotional primes to explore the influence of affect. Mindfulness was shown to correlate negatively with grammatically congruent responses. Negative primes were shown to result in faster and more negative evaluations. We conclude that grammatically congruent preference ratings rely on habitual responses, and that our findings provide empirical evidence for the non-reactive disposition of the mindfulness trait.
  • Willems, R. M. (2013). Can literary studies contribute to cognitive neuroscience? Journal of literary semantics, 42(2), 217-222. doi:10.1515/jls-2013-0011.
  • Windhouwer, M., Petro, J., Newskaya, I., Drude, S., Aristar-Dry, H., & Gippert, J. (2013). Creating a serialization of LMF: The experience of the RELISH project. In G. Francopoulo (Ed.), LMF - Lexical Markup Framework (pp. 215-226). London: Wiley.
  • Windhouwer, M., & Wright, S. E. (2013). LMF and the Data Category Registry: Principles and application. In G. Francopoulo (Ed.), LMF: Lexical Markup Framework (pp. 41-50). London: Wiley.
  • Witteman, M. J. (2013). Lexical processing of foreign-accented speech: Rapid and flexible adaptation. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Witteman, M. J., Weber, A., & McQueen, J. M. (2013). Foreign accent strength and listener familiarity with an accent co-determine speed of perceptual adaptation. Attention, Perception & Psychophysics, 75, 537-556. doi:10.3758/s13414-012-0404-y.

    Abstract

    We investigated how the strength of a foreign accent and varying types of experience with foreign-accented speech influence the recognition of accented words. In Experiment 1, native Dutch listeners with limited or extensive prior experience with German-accented Dutch completed a cross-modal priming experiment with strongly, medium, and weakly accented words. Participants with limited experience were primed by the medium and weakly accented words, but not by the strongly accented words. Participants with extensive experience were primed by all accent types. In Experiments 2 and 3, Dutch listeners with limited experience listened to a short story before doing the cross-modal priming task. In Experiment 2, the story was spoken by the priming task speaker and either contained strongly accented words or did not. Strongly accented exposure led to immediate priming by novel strongly accented words, while exposure to the speaker without strongly accented tokens led to priming only in the experiment’s second half. In Experiment 3, listeners listened to the story with strongly accented words spoken by a different German-accented speaker. Listeners were primed by the strongly accented words, but again only in the experiment’s second half. Together, these results show that adaptation to foreign-accented speech is rapid but depends on accent strength and on listener familiarity with those strongly accented words.
  • Witteman, J., Karaseva, E., Schiller, N. O., & McQueen, J. M. (2023). What does successful L2 vowel acquisition depend on? A conceptual replication. In R. Skarnitzl, & J. Volín (Eds.), Proceedings of the 20th International Congress of the Phonetic Sciences (ICPhS 2023) (pp. 928-931). Prague: Guarant International.

    Abstract

    It has been suggested that individual variation in vowel compactness of the native language (L1) and the distance between L1 vowels and vowels in the second language (L2) predict successful L2 vowel acquisition. Moreover, general articulatory skills have been proposed to account for variation in vowel compactness. In the present work, we conceptually replicate a previous study to test these hypotheses with a large sample size, a new language pair and a new vowel pair. We find evidence that individual variation in L1 vowel compactness has opposing effects for two different vowels. We do not find evidence that individual variation in L1 compactness is explained by general articulatory skills. We conclude that the results found previously might be specific to sub-groups of L2 learners and/or specific sub-sets of vowel pairs.
  • Wittenburg, P., & Ringersma, J. (2013). Metadata description for lexicons. In R. H. Gouws, U. Heid, W. Schweickard, & H. E. Wiegand (Eds.), Dictionaries: An international encyclopedia of lexicography: Supplementary volume: Recent developments with focus on electronic and computational lexicography (pp. 1329-1335). Berlin: Mouton de Gruyter.
  • Wright, S. E., Windhouwer, M., Schuurman, I., & Kemps-Snijders, M. (2013). Community efforts around the ISOcat Data Category Registry. In I. Gurevych, & J. Kim (Eds.), The People's Web meets NLP: Collaboratively constructed language resources (pp. 349-374). New York: Springer.

    Abstract

    The ISOcat Data Category Registry provides a community computing environment for creating, storing, retrieving, harmonizing and standardizing data category specifications (DCs), which document linguistic terms used across various fields. This chapter recounts the history of DC documentation in TC 37, beginning from paper-based lists created for lexicographers and terminologists and progressing to the development of a web-based resource for a much broader range of users. While describing the considerable strides that have been made to collect a very large, comprehensive collection of DCs, it also outlines difficulties that have arisen in developing a fully operative web-based computing environment for achieving consensus on data category names, definitions, and selections, and describes efforts to overcome some of the present shortcomings and to establish positive working procedures designed to engage a wide range of people involved in the creation of language resources.
