Hustá, C., Meyer, A. S., & Drijvers, L. (2024). Effects of relatedness between speech planning and comprehension content on attentional distribution - Rapid Invisible Frequency Tagging (RIFT) study. Talk presented at Psycholinguistics in Flanders (PiF 2024). Brussels, Belgium. 2024-05-27 - 2024-05-28.
-
Hustá, C., Meyer, A. S., & Drijvers, L. (2024). Using rapid invisible frequency tagging (RIFT) to probe the attentional distribution between speech planning and comprehension. Poster presented at the IMPRS Conference 2024, Nijmegen, The Netherlands.
-
Hustá, C., Meyer, A. S., & Drijvers, L. (2024). Using rapid invisible frequency tagging (RIFT) to probe the attentional distribution between speech planning and comprehension. Poster presented at the Highlights in the Language Sciences Conference 2024, Nijmegen, The Netherlands.
-
Hustá, C., Drijvers, L., & Meyer, A. S. (2024). Effects of relatedness between speech planning and comprehension content on attentional distribution - Rapid Invisible Frequency Tagging (RIFT) study. Poster presented at the 29th Architectures and Mechanisms for Language Processing Conference (AMLaP 2024), Edinburgh, Scotland.
-
Mazzini, S., Holler, J., Hagoort, P., & Drijvers, L. (2024). Inter-brain synchrony during (un)successful face-to-face communication. Poster presented at the Highlights in the Language Sciences Conference 2024, Nijmegen, The Netherlands.
-
Ter Bekke, M., Drijvers, L., & Holler, J. (2024). Co-speech hand gestures are used to predict upcoming meaning. Poster presented at the Highlights in the Language Sciences Conference 2024, Nijmegen, The Netherlands.
-
Ter Bekke, M., Drijvers, L., & Holler, J. (2024). Gestures speed up responses by improving predictions of upcoming meaning. Poster presented at the Highlights in the Language Sciences Conference 2024, Nijmegen, The Netherlands.
-
Mazzini, S., Holler, J., Hagoort, P., & Drijvers, L. (2023). Investigating inter-brain synchrony during (un-)successful face-to-face communication. Poster presented at the 9th bi-annual Joint Action Meeting (JAM), Budapest, Hungary.
-
Mazzini, S., Holler, J., Hagoort, P., & Drijvers, L. (2023). Inter-brain synchrony during (un)successful face-to-face communication. Poster presented at the 15th Annual Meeting of the Society for the Neurobiology of Language (SNL 2023), Marseille, France.
-
Mazzini, S., Holler, J., Hagoort, P., & Drijvers, L. (2023). Studying the association between co-speech gestures, mutual understanding and inter-brain synchrony in face-to-face conversations. Poster presented at the 15th Annual Meeting of the Society for the Neurobiology of Language (SNL 2023), Marseille, France.
-
Mazzini, S., Seijdel, N., & Drijvers, L. (2023). Gestural enhancement of degraded speech comprehension in Autism Spectrum Disorder. Talk presented at the 8th Gesture and Speech in Interaction (GESPIN 2023). Nijmegen, The Netherlands. 2023-09-13 - 2023-09-15.
-
Mazzini, S., Holler, J., Hagoort, P., & Drijvers, L. (2023). Inter-brain synchrony during (un)successful face-to-face communication. Poster presented at the 19th NVP Winter Conference on Brain and Cognition, Egmond aan Zee, The Netherlands.
Abstract
Human communication requires interlocutors to mutually understand each other. Previous research has suggested inter-brain synchrony as an important feature of social interaction, since it has been observed during joint attention, speech interactions and cooperative tasks. Nonetheless, it is still unknown whether inter-brain synchrony is actually related to successful face-to-face communication. Here, we use dual-EEG to study whether inter-brain synchrony is modulated during episodes of successful and unsuccessful communication in clear and noisy communication settings. Dyads performed a tangram-based referential communication task with and without background noise, while both their EEG and their audiovisual behavior were recorded. Other-initiated repairs were annotated in the audiovisual data and used as indices of unsuccessful and successful communication. More specifically, we compared inter-brain synchrony during episodes of miscommunication (repair initiations) and episodes of mutual understanding (repair solutions and acceptance phases) in the clear and the noise condition. We expect that when communication is successful, inter-brain synchrony will be stronger than when communication is unsuccessful, and that these patterns will be most pronounced in the noise condition. Results are currently being analyzed and will be presented and discussed with respect to the inter-brain neural signatures underlying the process of mutual understanding in face-to-face conversation.
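The abstract leaves the synchrony metric unspecified. As a rough illustration only, inter-brain synchrony between paired electrodes is often quantified with a phase-locking value (PLV) on band-passed signals; the Python sketch below uses invented stand-in data, and the band limits, sampling rate and channel pairing are assumptions, not the study's pipeline.

```python
# Illustrative sketch: phase-locking value (PLV) between one EEG channel
# from each member of a dyad. All data and parameters are stand-ins.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def bandpass(x, lo, hi, fs, order=4):
    """Zero-phase Butterworth band-pass filter."""
    b, a = butter(order, [lo, hi], btype="band", fs=fs)
    return filtfilt(b, a, x)

def plv(x_a, x_b, lo, hi, fs):
    """PLV between two signals in a band: 1 = perfect phase locking, 0 = none."""
    ph_a = np.angle(hilbert(bandpass(x_a, lo, hi, fs)))
    ph_b = np.angle(hilbert(bandpass(x_b, lo, hi, fs)))
    return np.abs(np.mean(np.exp(1j * (ph_a - ph_b))))

# Hypothetical comparison: synchrony during a repair initiation
# (miscommunication) vs. a repair solution (mutual understanding).
fs = 500                                           # Hz, assumed sampling rate
rng = np.random.default_rng(0)
a_init, b_init = rng.standard_normal((2, 2 * fs))  # 2 s segments per speaker
a_solu, b_solu = rng.standard_normal((2, 2 * fs))
print("repair initiation PLV:", plv(a_init, b_init, 8, 12, fs))  # alpha band
print("repair solution  PLV:", plv(a_solu, b_solu, 8, 12, fs))
```
-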
Rubianes, M., Drijvers, L., Jiménez-Ortega, L., Muñoz, F., Almeida-Rivera, T., Sánchez-García, J., Fondevila, S., Casado, P., & Martín-Loeches, M. (2023). Can emotional facial expressions influence spoken language processing? Poster presented at the 15th Annual Meeting of the Society for the Neurobiology of Language (SNL 2023), Marseille, France.
-
Seijdel, N., Schoffelen, J.-M., Hagoort, P., & Drijvers, L. (2023). Attention drives visual processing and audiovisual integration during multimodal communication. Poster presented at the 15th Annual Meeting of the Society for the Neurobiology of Language (SNL 2023), Marseille, France.
-
Ter Bekke, M., Holler, J., & Drijvers, L. (2023). Do listeners use speakers’ iconic hand gestures to predict upcoming words? Talk presented at the 9th bi-annual Joint Action Meeting (JAM). Budapest, Hungary. 2023-07-10 - 2023-07-12.
-
Ter Bekke, M., Drijvers, L., & Holler, J. (2023). Do listeners use speakers’ iconic gestures to predict upcoming words? Poster presented at the 8th Gesture and Speech in Interaction (GESPIN 2023), Nijmegen, The Netherlands.
-
Ter Bekke, M., Drijvers, L., & Holler, J. (2023). Gestures speed up responses to questions. Poster presented at the 8th Gesture and Speech in Interaction (GESPIN 2023), Nijmegen, The Netherlands.
-
Ter Bekke, M., Drijvers, L., & Holler, J. (2023). Do listeners use speakers’ iconic hand gestures to predict upcoming words? Poster presented at the 15th Annual Meeting of the Society for the Neurobiology of Language (SNL 2023), Marseille, France.
-
Drijvers, L., & Holler, J. (2022). Spatial orientation influences cognitive processing in conversation. Talk presented at the 18th NVP Winter Conference on Brain and Cognition. Egmond aan Zee, The Netherlands. 2022-04-28 - 2022-04-30.
-
Drijvers, L. (2022). How does the brain integrate speech and gestures? Talk presented at the IMPRS Conference 2022. Nijmegen, The Netherlands. 2022-06-01 - 2022-06-03.
-
Drijvers, L. (2022). Multimodal language in the brain. Talk presented at a Psycholinguistics Colloquium, Humboldt University. online. 2022-01-24.
-
Drijvers, L. (2022). Towards a multimodal view on the neurobiology of language. Talk presented at Neurobiology of Language: Key Issues and Ways Forward II. online. 2022-03-17 - 2022-03-18.
-
Drijvers, L., & Holler, J. (2022). Face-to-face spatial orientation fine-tunes the brain for neurocognitive processing in conversation. Poster presented at the 14th Annual Meeting of the Society for the Neurobiology of Language (SNL 2022), Philadelphia, PA, USA.
-
Mazzini, S., Holler, J., Hagoort, P., & Drijvers, L. (2022). Intra- and inter-brain synchrony during (un)successful face-to-face communication. Poster presented at the 18th NVP Winter Conference on Brain and Cognition, Egmond aan Zee, The Netherlands.
-
Mazzini, S., Holler, J., Hagoort, P., & Drijvers, L. (2022). Intra- and inter-brain synchrony during (un)successful face-to-face communication. Poster presented at Neurobiology of Language: Key Issues and Ways Forward II, online.
-
Mazzini, S., Holler, J., Hagoort, P., & Drijvers, L. (2022). Intra- and inter-brain synchrony during (un)successful face-to-face communication. Poster presented at the 14th Annual Meeting of the Society for the Neurobiology of Language (SNL 2022), Philadelphia, PA, USA.
-
Seijdel, N., Schoffelen, J.-M., Hagoort, P., & Drijvers, L. (2022). Using RIFT to study the role of lower frequency oscillations in sensory processing and audiovisual integration. Poster presented at Neurobiology of Language: Key Issues and Ways Forward II, Nijmegen, The Netherlands.
-
Seijdel, N., Schoffelen, J.-M., Hagoort, P., & Drijvers, L. (2022). Using RIFT to study the role of lower frequency oscillations in sensory processing and audiovisual integration. Poster presented at the 14th Annual Meeting of the Society for the Neurobiology of Language (SNL 2022), Philadelphia, PA, USA.
-
Seijdel, N., Schoffelen, J.-M., Hagoort, P., & Drijvers, L. (2022). Using RIFT to study the role of lower frequency oscillations in sensory processing and audiovisual integration. Poster presented at the IMPRS Conference 2022, Nijmegen, The Netherlands.
-
Ter Bekke, M., Drijvers, L., & Holler, J. (2022). Hand gestures speed up responses to questions. Poster presented at the 18th NVP Winter Conference on Brain and Cognition, Egmond aan Zee, The Netherlands.
-
Drijvers, L. (2021). The multimodal facilitation effect. Talk presented at ESLP 2021 (Embodied & Situated Language Processing). online. 2021-09-20 - 2021-09-29.
-
Drijvers, L. (2020). Rapid Invisible Frequency Tagging for language research. Talk presented at Neuroxillations, University of Oxford. online. 2020-06-22.
-
Drijvers, L. (2020). Studying multimodal language processing in interactive settings with Rapid Invisible Frequency Tagging. Talk presented at Speech Science Forum, University College London. online. 2020-12-17.
-
Ter Bekke, M., Drijvers, L., & Holler, J. (2020). The predictive potential of hand gestures during conversation: An investigation of the timing of gestures in relation to speech. Talk presented at the 7th Gesture and Speech in Interaction (GESPIN 2020). online. 2020-09-07 - 2020-09-09.
-
Drijvers, L. (2019). Handbewegingen en het brein: Hoe je hersenen ervoor zorgen dat je iemand kunt horen en zien praten [Hand movements and the brain: How your brain makes it possible to hear and see someone talk]. Talk presented at Werkoverleg Amsterdamse Psycholinguisten, University of Amsterdam. Amsterdam, The Netherlands. 2019-04-18.
-
Drijvers, L. (2019). Speech-gesture integration in clear and adverse listening conditions. Talk presented at the Adverse Listening Condition Workshop, VU Medical Center. Amsterdam, The Netherlands. 2019-10-03.
-
Drijvers, L., Spaak, E., Herring, J., Ozyurek, A., & Jensen, O. (2019). Selective routing and integration of speech and gestural information studied by rapid invisible frequency tagging. Poster presented at Crossing the Boundaries: Language in Interaction Symposium, Nijmegen, The Netherlands.
-
Drijvers, L. (2019). The neural mechanisms of how iconic gestures boost degraded speech comprehension in native and non-native listeners. Talk presented at the Gesture-Sign Workshop Prague 2019: Converging the Perspectives on Theories, Methods, and Applications. Prague, Czech Republic. 2019-05-16 - 2019-05-17.
-
Blokpoel, M., Dingemanse, M., Kachergis, G., Bögels, S., Drijvers, L., Eijk, L., Ernestus, M., De Haas, N., Holler, J., Levinson, S. C., Lui, R., Milivojevic, B., Neville, D., Ozyurek, A., Rasenberg, M., Schriefers, H., Trujillo, J. P., Winner, T., Toni, I., & Van Rooij, I. (2018). Ambiguity helps higher-order pragmatic reasoners communicate. Talk presented at the 14th biannual conference of the German Society for Cognitive Science, GK (KOGWIS 2018). Darmstadt, Germany. 2018-09-03 - 2018-09-06.
-
Drijvers, L. (2018). How do native and non-native listeners integrate speech and gestures? Talk presented at the Naturalis Museum / University of Leiden. Leiden, The Netherlands. 2018-01-26.
-
Drijvers, L. (2018). Handbewegingen en het brein [Hand movements and the brain]. Talk presented at the NEMO Science Night, NEMO Museum. Amsterdam, The Netherlands. 2018-11-18.
-
Drijvers, L. (2018). Neural dynamics underlying speech-gesture integration in native and non-native listeners. Talk presented at the SpAM - Speech in the Age of Multimodal Humanities Conference. Pisa, Italy. 2018-10-11.
-
Drijvers, L. (2018). On the neural integration of gestures and speech in adverse listening conditions. Talk presented at the Donders Centre for Cognition, Language Division. Nijmegen, The Netherlands. 2018-01-08.
-
Drijvers, L. (2018). Oscillatory dynamics underlying speech-gesture integration. Talk presented at the Max Planck Institute for Human Cognitive and Brain Sciences. Leipzig, Germany. 2018-03-02.
-
Drijvers, L. (2018). Rapid-frequency tagging in speech-gesture integration. Talk presented at the Centre for Human Brain Health, University of Birmingham. Birmingham, UK. 2018-04-17.
-
Drijvers, L. (2018). Speech-gesture integration studied by rapid-frequency tagging. Talk presented at the Attention & Oscillations Workshop, Centre for Human Brain Health. Birmingham, UK. 2018-11-16.
-
Drijvers, L., Spaak, E., Herring, J., Ozyurek, A., & Jensen, O. (2018). Selective routing and integration of speech and gestural information studied by rapid invisible frequency tagging. Poster presented at the Attention to Sound Meeting, Chicheley, UK.
-
Drijvers, L. (2017). Communicating science to the masses. Talk presented at IMPRS Introduction days. Nijmegen, The Netherlands. 2017-09-25.
-
Drijvers, L., Ozyurek, A., & Jensen, O. (2017). Alpha and beta oscillations in the language network, motor and visual cortex index semantic congruency between speech and gestures in clear and degraded speech. Poster presented at the 47th Annual Meeting of the Society for Neuroscience (SfN), Washington, DC, USA.
-
Drijvers, L., Ozyurek, A., & Jensen, O. (2017). Alpha and beta oscillations in the language network, motor and visual cortex index the semantic integration of speech and gestures in clear and degraded speech. Poster presented at the Ninth Annual Meeting of the Society for the Neurobiology of Language (SNL 2017), Baltimore, MD, USA.
-
Drijvers, L. (2017). How do we hear and see speech in a noisy bar? Talk presented at Neerlandistiek in het Nieuws, Faculty of Arts, Radboud University. Nijmegen, The Netherlands. 2017-01-26.
-
Drijvers, L. (2017). How does our brain hear and see language? Talk presented at the School of Psychology, University of Birmingham. Birmingham, UK. 2017.
-
Drijvers, L. (2017). How does our brain hear and see language? Talk presented at Radboud Summerschool 'From Molecule to Brain'. Nijmegen, The Netherlands. 2017-08-15.
-
Drijvers, L. (2017). How do gestures contribute to understanding language? Talk presented at OBA Amsterdam. Amsterdam, The Netherlands. 2017-02-28.
-
Drijvers, L., Ozyurek, A., & Jensen, O. (2017). Low- and high-frequency oscillations predict the semantic integration of speech and gestures in clear and degraded speech. Poster presented at the Neural Oscillations in Speech and Language Processing symposium, Berlin, Germany.
-
Drijvers, L. (2017). The neural mechanisms of how iconic gestures boost degraded speech comprehension. Talk presented at the workshop Types of iconicity in language use, development, and processing. Nijmegen, The Netherlands. 2017-07-06 - 2017-07-07.
-
Drijvers, L., Ozyurek, A., & Jensen, O. (2016). Gestural enhancement of degraded speech comprehension engages the language network, motor and visual cortex as reflected by a decrease in the alpha and beta band. Talk presented at the Sensorimotor Speech Processing Symposium. London, UK. 2016-08-16.
-
Drijvers, L., Ozyurek, A., & Jensen, O. (2016). Gestural enhancement of degraded speech comprehension engages the language network, motor and visual cortex as reflected by a decrease in the alpha and beta band. Poster presented at the 20th International Conference on Biomagnetism (BioMag 2016), Seoul, South Korea.
-
Drijvers, L., Ozyurek, A., & Jensen, O. (2016). Gestural enhancement of degraded speech comprehension engages the language network, motor and visual cortex as reflected by a decrease in the alpha and beta band. Poster presented at the Eighth Annual Meeting of the Society for the Neurobiology of Language (SNL 2016), London, UK.
Abstract
Face-to-face communication involves the integration of speech and visual information, such as iconic co-speech gestures. Especially iconic gestures, which illustrate object attributes, actions and space, can enhance speech comprehension in adverse listening conditions (e.g. Holle et al., 2010). Using magnetoencephalography (MEG), we aimed to identify the networks and the neuronal dynamics associated with the gestural enhancement of (degraded) speech comprehension. Our central hypothesis was that gestures enhance degraded speech comprehension, and that decreases in alpha and beta power reflect engagement, whereas increases in gamma power reflect active processing in task-relevant networks (Jensen & Mazaheri, 2010; Jokisch & Jensen, 2007). Participants (n = 30) were presented with videos of an actress uttering Dutch action verbs. Speech was presented as clear or degraded by applying noise-vocoding (6-band), and was accompanied by videos of the actress performing an iconic gesture depicting the action (clear speech + gesture: C-SG; degraded speech + gesture: D-SG) or no gesture (clear speech only: C-S; degraded speech only: D-S). We quantified changes in time-frequency representations of oscillatory power as the video unfolded. The sources of the task-specific modulations were identified using a beamformer approach. Gestural enhancement, calculated by comparing (D-SG vs. D-S) to (C-SG vs. C-S), revealed significant interactions between the occurrence of a gesture and speech degradation, particularly in the alpha, beta and gamma bands. Gestural enhancement was reflected by a beta decrease in motor areas, indicative of engagement of the motor system during gesture observation, especially when speech was degraded. A beta band decrease was also observed in the language network, including left inferior frontal gyrus, a region involved in semantic unification operations, and left superior temporal regions. This suggests a higher semantic unification load when a gesture is presented together with degraded versus clear speech. We also observed a gestural enhancement effect in the alpha band in visual areas, suggesting that visual areas are more engaged when a gesture is present, most likely reflecting the allocation of visual attention, especially when speech is degraded, which is in line with the functional inhibition hypothesis (see Jensen & Mazaheri, 2010). Finally, we observed gamma band effects in left-temporal areas, suggesting facilitated binding of speech and gesture into a unified representation, especially when speech is degraded. In conclusion, our results support earlier claims on the recruitment of a left-lateralized network including motor areas, STS/MTG and LIFG in speech-gesture integration and the gestural enhancement of speech (see Ozyurek, 2014). Our findings provide novel insight into the neuronal dynamics associated with speech-gesture integration: decreases in alpha and beta power reflect the engagement of the visual and language/motor networks, respectively, whereas a gamma band increase reflects integration in left prefrontal cortex. In future work we will characterize the interaction between these networks by means of functional connectivity analysis.
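As a reading aid, the gestural enhancement contrast in this design is the interaction (D-SG minus D-S) compared to (C-SG minus C-S), computed on oscillatory power. The sketch below reproduces only that contrast logic on simulated data, with an off-the-shelf spectrogram standing in for the study's actual time-frequency and beamformer pipeline; all shapes and parameters are assumptions.

```python
# Illustrative sketch: band-limited power per condition and the
# gestural-enhancement interaction (D-SG - D-S) - (C-SG - C-S).
# Simulated data; not the study's wavelet/beamformer pipeline.
import numpy as np
from scipy.signal import spectrogram

def band_power(trials, fs, lo, hi):
    """Trial-averaged power time course in [lo, hi] Hz; trials: (n_trials, n_samples)."""
    f, t, sxx = spectrogram(trials, fs=fs, nperseg=fs // 2, axis=-1)
    band = (f >= lo) & (f <= hi)
    return sxx[:, band, :].mean(axis=(0, 1))   # average over trials and band bins

fs = 300                                        # Hz, assumed MEG sampling rate
rng = np.random.default_rng(1)
conds = {c: rng.standard_normal((40, 3 * fs))   # 40 trials x 3 s per condition
         for c in ("C-S", "C-SG", "D-S", "D-SG")}
beta = {c: band_power(x, fs, 13, 30) for c, x in conds.items()}  # beta band

# Interaction: is the gesture effect on beta power larger in degraded speech?
enhancement = (beta["D-SG"] - beta["D-S"]) - (beta["C-SG"] - beta["C-S"])
print(enhancement.shape, enhancement.mean())
```
-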
Drijvers, L., Ozyurek, A., & Jensen, O. (2016). Gestural enhancement of degraded speech comprehension engages the language network, motor and visual cortex as reflected by a decrease in the alpha and beta band. Poster presented at the Language in Interaction Summerschool on Human Language: From Genes and Brains to Behavior, Berg en Dal, The Netherlands.
-
Drijvers, L., Ozyurek, A., & Jensen, O. (2016). Gestural enhancement of degraded speech comprehension engages the language network, motor cortex and visual cortex. Talk presented at the 2nd Workshop on Psycholinguistic Approaches to Speech Recognition in Adverse Conditions (PASRAC). Nijmegen, The Netherlands. 2016-10-31 - 2016-11-01.
-
Drijvers, L. (2017). Left-temporal alpha and beta suppression predicts L2 listeners' benefit of gestures during clear and degraded speech comprehension. Talk presented at the Donders Discussions 2017. Nijmegen, The Netherlands. 2017-10-26 - 2017-10-27.
-
Drijvers, L., & Ozyurek, A. (2016). Native language status of the listener modulates the neural integration of speech and gesture in clear and adverse listening conditions. Poster presented at the Eighth Annual Meeting of the Society for the Neurobiology of Language (SNL 2016), London, UK.
Abstract
Face-to-face communication consists of integrating speech and visual input, such as co-speech gestures. Iconic gestures (e.g. a drinking gesture) can enhance speech comprehension, especially when speech is difficult to comprehend, such as in noise (e.g. Holle et al., 2010) or in non-native speech comprehension (e.g. Sueyoshi & Hardison, 2005). Previous behavioral and neuroimaging studies have argued that the integration of speech and gestures is stronger when speech intelligibility decreases (e.g. Holle et al., 2010), but that in clear speech, non-native listeners benefit more from gestures than native listeners (Dahl & Ludvigson, 2014; Sueyoshi & Hardison, 2005). So far, the neurocognitive mechanisms by which non-native speakers integrate speech and gestures in adverse listening conditions remain unknown. We investigated whether highly proficient non-native speakers of Dutch make use of iconic co-speech gestures as much as native speakers do during clear and degraded speech comprehension. In an EEG study, native (n = 23) and non-native (German, n = 23) speakers of Dutch watched videos of an actress uttering Dutch action verbs. Speech was presented either clear or degraded by applying noise-vocoding (6-band), and was accompanied by a matching or mismatching iconic gesture. This allowed us to calculate the effects of both speech degradation and the semantic congruency of the gesture on the N400 component. The N400 was taken as an index of semantic integration effort (Kutas & Federmeier, 2011). In native listeners, N400 amplitude was sensitive both to mismatches between speech and gesture and to degradation; the most pronounced N400 was found in response to degraded speech and a mismatching gesture (DMM), followed by degraded speech and a matching gesture (DM), clear speech and a mismatching gesture (CMM), and clear speech and a matching gesture (CM) (DMM > DM > CMM > CM, all p < .05). In non-native speakers, we found a difference between CMM and CM but not between DMM and DM. However, degraded conditions differed from clear conditions (DMM = DM > CMM > CM, all significant comparisons p < .05). Directly comparing native to non-native speakers, the N400 effect (i.e. the difference between CMM and CM / DMM and DM) was greater for non-native speakers in clear speech, but greater for native speakers in degraded speech. These results provide further evidence for the claim that in clear speech, non-native speakers benefit more from gestural information than native speakers, as indexed by a larger N400 effect for the mismatch manipulation. Both native and non-native speakers show integration effort during degraded speech comprehension. However, native speakers require less effort to recognize auditory cues in degraded speech than non-native speakers, resulting in a larger N400 for degraded speech with a mismatching gesture for natives than for non-natives. Conversely, non-native speakers require more effort to resolve auditory cues when speech is degraded and therefore cannot benefit as much from those cues to map the semantic information of the gesture onto speech as native speakers can. In sum, non-native speakers can benefit more from gestural information in speech comprehension than native listeners, but not when speech is degraded. Our findings suggest that the native language of the listener modulates multimodal semantic integration in adverse listening conditions.
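For concreteness, the dependent measure here reduces to mean ERP amplitude in an N400 window per condition (CM, CMM, DM, DMM). Below is a minimal Python sketch on simulated epochs; the 300-500 ms window, sampling rate and data shapes are conventional assumptions rather than the study's exact settings.

```python
# Illustrative sketch: N400 as mean epoch amplitude in a 300-500 ms window,
# per condition. Simulated single-channel epochs; parameters are assumptions.
import numpy as np

def n400_amplitude(epochs, times, win=(0.3, 0.5)):
    """Per-trial mean amplitude in the window; epochs: (n_trials, n_samples)."""
    mask = (times >= win[0]) & (times <= win[1])
    return epochs[:, mask].mean(axis=1)

fs = 250                                     # Hz, assumed sampling rate
times = np.arange(-0.2, 1.0, 1 / fs)         # epoch: -200 ms to 1 s
rng = np.random.default_rng(2)
conds = {c: rng.standard_normal((60, times.size)) for c in ("CM", "CMM", "DM", "DMM")}
amp = {c: n400_amplitude(e, times).mean() for c, e in conds.items()}

# N400 effect = mismatch minus match, separately for clear and degraded speech;
# the group comparison (native vs. non-native) would contrast these effects.
print("clear effect:   ", amp["CMM"] - amp["CM"])
print("degraded effect:", amp["DMM"] - amp["DM"])
```
-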
Drijvers, L., & Ozyurek, A. (2016). Native language status of the listener modulates the neural integration of speech and gesture in clear and adverse listening conditions. Poster presented at the 2nd Workshop on Psycholinguistic Approaches to Speech Recognition in Adverse Conditions (PASRAC), Nijmegen, The Netherlands.
-
Drijvers, L., Ozyurek, A., & Jensen, O. (2016). Oscillatory and temporal dynamics show engagement of the language network, motor system and visual cortex during gestural enhancement of degraded speech. Talk presented at the Donders Discussions 2016. Nijmegen, The Netherlands. 2016-11-23 - 2016-11-24.
-
Drijvers, L., & Ozyurek, A. (2016). What do iconic gestures and visible speech contribute to degraded speech comprehension? Poster presented at the Nijmegen Lectures 2016, Nijmegen, The Netherlands.
-
Drijvers, L., & Ozyurek, A. (2016). Visible speech enhanced: What do gestures and lip movements contribute to degraded speech comprehension? Poster presented at the 8th Speech in Noise Workshop (SpiN 2016), Groningen, The Netherlands.
-
Drijvers, L., & Ozyurek, A. (2016). Visible speech enhanced: What do iconic gestures and lip movements contribute to degraded speech comprehension? Talk presented at the 7th Conference of the International Society for Gesture Studies (ISGS7). Paris, France. 2016-07-18 - 2016-07-22.
Abstract
Natural, face-to-face communication consists of an audiovisual binding that integrates speech and visual information, such as iconic co-speech gestures and lip movements. Especially in adverse listening conditions, such as in noise, this visual information can enhance speech comprehension. However, the contributions of lip movements and iconic gestures to understanding speech in noise have mostly been studied separately. Here, we investigated the contribution of iconic gestures and lip movements to degraded speech comprehension in a joint context. In a free-recall task, participants watched short videos of an actress uttering an action verb. The verb could be presented in clear speech, severely degraded speech (2-band noise-vocoding) or moderately degraded speech (6-band noise-vocoding), and participants could view the actress with her lips blocked, with her lips visible, or with her lips visible while making an iconic co-speech gesture. Additionally, we presented these clips without audio, with just the lip movements present or with both lip movements and gestures present, to investigate how much information listeners could get from visual input alone. Our results reveal that when listeners perceive degraded speech in a visual context, they benefit more from gestural information than from lip movements alone. This benefit is larger at moderate noise levels, where auditory cues are still moderately reliable, than at severe noise levels, where auditory cues are no longer reliable. As a result, listeners are only able to benefit from this additive effect of ‘double’ multimodal enhancement by iconic gestures and lip movements when there are enough auditory cues present to map lip movements to the phonological information in the speech signal.
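The 2-band and 6-band manipulations refer to noise-vocoding, which discards spectral fine structure but keeps each band's amplitude envelope; fewer bands means more severe degradation. Below is a minimal Python sketch of such a vocoder; the band edges, filter order and envelope extraction are generic choices, not the authors' exact settings.

```python
# Illustrative sketch of an n-band noise vocoder (2-band = severe,
# 6-band = moderate degradation). Generic parameter choices throughout.
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def noise_vocode(speech, fs, n_bands=6, f_lo=70.0, f_hi=5000.0):
    edges = np.geomspace(f_lo, f_hi, n_bands + 1)     # log-spaced band edges
    noise = np.random.default_rng(0).standard_normal(speech.size)
    out = np.zeros_like(speech, dtype=float)
    for lo, hi in zip(edges[:-1], edges[1:]):
        sos = butter(3, [lo, hi], btype="band", fs=fs, output="sos")
        band = sosfiltfilt(sos, speech)               # speech in this band
        env = np.abs(hilbert(band))                   # its amplitude envelope
        out += env * sosfiltfilt(sos, noise)          # envelope-modulated noise
    return out / (np.max(np.abs(out)) + 1e-12)        # normalise to avoid clipping

fs = 16000
t = np.arange(fs) / fs
speech = np.sin(2 * np.pi * 220 * t)   # stand-in for a recorded action verb
moderate = noise_vocode(speech, fs, n_bands=6)
severe = noise_vocode(speech, fs, n_bands=2)
```
-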
Lockwood, G., Drijvers, L., Hagoort, P., & Dingemanse, M. (2016). In search of the kiki-bouba effect. Poster presented at the Eighth Annual Meeting of the Society for the Neurobiology of Language (SNL 2016), London, UK.
Abstract
The kiki-bouba effect, where people map round shapes onto “round” sounds (such as [b] and [o]) and spiky shapes onto “spiky” sounds (such as [i] and [k]), is the most famous example of sound symbolism. Many behavioural variations have been reported since Köhler’s (1929) original experiments. These studies examine orthography (Cuskley, Simner, & Kirby, 2015), literacy (Bremner et al., 2013), and developmental disorders (Drijvers, Zaadnoordijk, & Dingemanse, 2015; Occelli, Esposito, Venuti, Arduino, & Zampini, 2013). Some studies have suggested that the cross-modal associations between linguistic sound and physical form in the kiki-bouba effect are quasi-synaesthetic (Maurer, Pathman, & Mondloch, 2006; Ramachandran & Hubbard, 2001). However, there is a surprising lack of neuroimaging data in the literature that explains how these cross-modal associations occur (with the exceptions of Kovic et al. (2010) and Asano et al. (2015)). We presented 24 participants with randomly generated spiky or round figures and 16 synthesised, reduplicated CVCV nonwords (vowels: [i] and [o]; consonants: [f], [v], [t], [d], [s], [z], [k], and [g]) based on Cuskley et al. (2015). This resulted in 16 nonwords across four conditions: full match, vowel match, consonant match, and full mismatch. Participants were asked to rate on a scale of 1 to 7 how well the nonword fit the shape it was presented with. EEG was recorded throughout, with epochs time-locked to the auditory onset of the nonword. There was a significant behavioural effect of condition (p < 0.0001): Bonferroni-corrected t-tests showed that participants rated full match nonwords more highly than full mismatch nonwords. However, this behavioural effect was not reflected in the ERP waveforms. One possible reason for the absence of an ERP effect is that the effect may jitter over a broad latency range. Oscillatory effects are currently being analysed, since these are less dependent on precise time-locking to the triggering events.
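The reported behavioural result corresponds to pairwise comparisons of mean fit ratings across the four congruency conditions with Bonferroni correction. A minimal Python sketch on simulated ratings follows; the abstract does not spell out the exact test, so this is one plausible reading, not the authors' procedure.

```python
# Illustrative sketch: Bonferroni-corrected paired t-tests on per-participant
# mean fit ratings (1-7 scale). Simulated data; one plausible reading of the
# analysis, not the authors' exact procedure.
import itertools
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n = 24  # participants
ratings = {
    "full match":      rng.normal(5.5, 0.8, n),
    "vowel match":     rng.normal(4.5, 0.8, n),
    "consonant match": rng.normal(4.3, 0.8, n),
    "full mismatch":   rng.normal(3.2, 0.8, n),
}

pairs = list(itertools.combinations(ratings, 2))
alpha = 0.05 / len(pairs)                 # Bonferroni over all pairwise tests
for a, b in pairs:
    t, p = stats.ttest_rel(ratings[a], ratings[b])
    print(f"{a} vs {b}: t = {t:.2f}, p = {p:.4g}, sig = {p < alpha}")
```
-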
Lockwood, G., van Leeuwen, T. M., Drijvers, L., & Dingemanse, M. (2016). Synaesthesia and sound-symbolism — insights from the Groot Nationaal Onderzoek project. Poster presented at the Synesthesia and Cross-Modal Perception conference, Dublin, Ireland.
-
Schubotz, L., Drijvers, L., Holler, J., & Ozyurek, A. (2016). The cocktail party effect revisited in older and younger adults: When do iconic co-speech gestures help? Poster presented at the 8th Speech in Noise Workshop (SpiN 2016), Groningen, The Netherlands.
-
Van Leeuwen, T. M., Dingemanse, M., Lockwood, G., & Drijvers, L. (2016). Color associations in nonsynaesthetes and synaesthetes: A large-scale study in Dutch. Talk presented at the Synesthesia and Cross-Modal Perception conference. Dublin, Ireland. 2016-04-22.
-
Drijvers, L., & Ozyurek, A. (2015). Visible speech enhanced: What do gestures and lips contribute to speech comprehension in noise? Talk presented at the Nijmegen-Tilburg Multi-modality workshop. Tilburg, The Netherlands. 2015-10-22.
-
Schubotz, L., Drijvers, L., Holler, J., & Ozyurek, A. (2015). The cocktail party effect revisited in older and younger adults: When do iconic co-speech gestures help? Poster presented at Donders Sessions 2015, Nijmegen, The Netherlands.
-
Schubotz, L., Drijvers, L., Holler, J., & Ozyurek, A. (2015). The cocktail party effect revisited in older and younger adults: When do iconic co-speech gestures help? Talk presented at Donders Discussions 2015. Nijmegen, The Netherlands. 2015-11-05.
-
van Leeuwen, T. M., Dingemanse, M., Lockwood, G., & Drijvers, L. (2015). Groot Nationaal Onderzoek (Large National Survey): "How well do your senses work together?" Poster presented at the Donders Sessions, Nijmegen, The Netherlands.