Stephen C. Levinson

Presentations

  • Bögels, S., Casillas, M., & Levinson, S. C. (2016). To plan or to listen? The trade-off between comprehension and production in conversation. Poster presented at the Eighth Annual Meeting of the Society for the Neurobiology of Language (SNL 2016), London, UK.

    Abstract

    Transitions between speakers in conversation are usually smooth, lasting around 200 milliseconds. Such rapid response latencies suggest that, at least sometimes, responders must begin planning their response before the ongoing turn is finished. Indeed, evidence from EEG suggests that listeners start planning their responses to questions as soon as they can, often midway through the incoming turn [1]. But given substantial overlap in the neural hardware for language production and comprehension, early response planning might incur a cost on participants’ concurrent comprehension of the ongoing turn. Do early responses come at the expense of less careful listening? We performed an EEG study in which participants played an interactive game with a confederate partner. Participants saw two pictures on their screen (e.g., a banana and a pineapple), then heard a (prerecorded) question from their partner, and then responded verbally by naming the correct picture. Participants were made to believe that their partner spoke to them live. Examples of the conditions in the experiment: 1. Early planning: 'Which object is curved and is considered to be fruit/healthy?'; 2. Late planning: 'Which object is considered to be fruit/healthy and is curved?' (response: 'the banana'). The questions were designed such that participants could start planning their response early (Example 1) or late (Example 2) in the turn. Crucially, in another part of the turn, we included either an expected word (e.g., 'fruit') or an unexpected one (e.g., 'healthy') to elicit a differential N400 effect. Our aims were two-fold: replicating the prior planning effect [1] and testing the effect of planning on comprehension. 
First, our results largely replicated the earlier study [1], showing a large positivity in the ERPs and an alpha/beta reduction in the time-frequency domain, both immediately following the onset of the critical information when participants could have first started planning their verbal response (i.e., 'curved'). As before [1], we interpret these effects as indicating the start of response planning. Second, and more importantly, we hypothesized that the N400 effect (the ERP difference between 'fruit' and 'healthy') would be attenuated when participants were already planning a response (i.e., in early vs. late planning). In contrast, we found an N400 effect of similar size in both the early and late planning conditions, although a small late positivity was only found in the late planning condition. Interestingly, we found a positive correlation between participants' overall response time and the size of the N400 effect after planning had started (i.e., in early planning), illustrating a trade-off between comprehension and production during turn taking. That is, quick responders showed a smaller N400 effect. We argue that their focus on production planning reduced their attention to the incoming audio signal and probably also their predictive processing, leading to a smaller N400 effect. Slow responders focused instead on the audio signal, preserving their N400 effect but delaying their response. Reference [1]: Bögels, S., Magyari, L., & Levinson, S. C. (2015). Neural signatures of response planning occur midway through an incoming question in conversation. Scientific Reports, 5: 12881. Topic Area: Meaning: Discourse and Pragmatics
  • Byun, K.-S., Roberts, S. G., De Vos, C., Levinson, S. C., & Zeshan, U. (2016). Content-biased and coordination-biased selection in the evolution of expressive forms in cross-signing. Talk presented at the International Society for Gesture Studies. Paris, France. 2016-07-18 - 2016-07-22.

    Abstract

    This paper studies communication among deaf sign language users with highly divergent linguistic backgrounds who have no signed or written language in common. It constitutes the earliest, least conventionalised stages of improvised communication, called “cross-signing” (Zeshan 2015), as opposed to the semi-conventionalised contact language International Sign (e.g. Supalla & Webb 1995). The specific focus here is on the evolution of the shared repertoire amongst signers over several weeks as they co-construct meaning across linguistic and cultural boundaries. We look at two possible factors influencing the selection of expressive forms (cf. Tamariz et al. 2014): content-bias (where the more iconically-motivated and/or easily-articulated form is selected) and coordination-bias (where participants attempt to match each other’s usage). The data set consists of a 320-minute corpus of first encounters between dyads of signers of Nepali Sign Language, Indian Sign Language, Jordanian Sign Language and Indonesian Sign Language. Recordings took place at the first meeting, after one week, and after three weeks. The participants vary naturally with regard to their linguistic and international experience as well as their age of sign language acquisition. In addition to spontaneous conversations, we collected structured dialogues using a Director-Matcher task. In this language elicitation game, the Director has the coloured images and the Matcher has identical but black-and-white images alongside a set of colour chips from which they need to select based on the Director’s descriptions. We coded and examined the various colour expressions exploited by the participants. The semantic field of colour was chosen for this investigation into the evolution of shared communication for two reasons: the visual domain of colour retains sufficient levels of abstraction while affording signers iconic potential.
    Participants initially used a range of strategies, including pointing, articulating signs for common objects with that colour (e.g. referring to a common iconic sign for ‘tree’ and pointing to the base to mean ‘brown’), and their own native variants. However, three weeks later these individuals all start using the same forms, e.g. the Indian signer’s variant for ‘green’ and the Nepali signer’s improvised ‘tree-trunk’ variant for ‘brown’. The iconic motivation of the latter and the ease of articulation of the former suggest that the content-bias is in play. The coordination-bias also seems influential in the group’s eventual selection of one variant (cf. Tamariz et al. 2014). We explore these and further factors that may affect the two biases in the selection of forms within our data. We also consider participants’ meta-linguistic skills (Zeshan 2013) and fluency in multiple sign languages (Byun et al. in preparation).
  • Byun, K.-S., De Vos, C., Levinson, S. C., & Zeshan, U. (2016). Repair strategies and recursion as evidence of individual differences in metalinguistic skill in Cross-signing. Poster presented at the 12th International Conference on Theoretical Issues in Sign Language Research (TISLR12), Melbourne, Australia.
  • Byun, K.-S., Levinson, S. C., Zeshan, U., & De Vos, C. (2016). Success rates of conversational repair strategies by cross-signers. Poster presented at the 12th International Conference on Theoretical Issues in Sign Language Research (TISLR12), Melbourne, Australia.
  • Casillas, M., Brown, P., & Levinson, S. C. (2016). Communicative development in a Mayan village. Talk presented at Tilburg University. Tilburg, The Netherlands.
  • Hömke, P., Holler, J., & Levinson, S. C. (2016). Blinking as addressee feedback in face-to-face conversation. Talk presented at the 7th Conference of the International Society for Gesture Studies (ISGS7). Paris, France. 2016-07-22 - 2016-07-24.
  • Levinson, S. C. (2016). Empathy and the early stages of language evolution. Talk presented at the Leverhulme Centre for Human Evolutionary Studies. Cambridge, UK. 2016-03-08.
  • Levinson, S. C., & Brown, P. (2016). Comparative feedback: Cultural shaping of response systems in interaction. Talk presented at the 7th Conference of the International Society for Gesture Studies (ISGS7). Paris, France. 2016-07-18 - 2016-07-22.

    Abstract

    There is some evidence that systems of minimal response (‘feedback’, ‘back-channel’, ‘reactive tokens’) may vary systematically across speakers of different languages and cultural backgrounds (e.g., Maynard, 1986; Clancy et al., 1996). The questions we address here are these: what is the nature of such differences? And how do these differences affect the interactional system as a whole? We explore these questions by looking in detail at conversational data from two languages and cultures: Yélî Dnye, spoken on Rossel Island (Papua New Guinea), and Tzeltal Mayan, spoken in southern Mexico. The Rossel system is gaze-based: interlocutors tend to maintain a high level of mutual gaze, and a large proportion of feedback signals – many nonverbal – occur during the production of the turn that is being reacted to. Tzeltal speakers, in contrast, practice gaze avoidance and produce very few visual feedback signals, instead relying on frequent verbal response signals at the end of each TCU and an elaborate convention of repeating (parts of) the prior turn to display understanding and agreement. We outline the repertoire of response tokens for each language, illustrate their differential usage, and suggest some consequences of these properties of turn-taking systems for interactional style and for online processing.
  • Levinson, S. C. (2016). The Interaction Engine hypothesis. Talk presented at the Language in Interaction Summerschool on Human Language: From Genes and Brains to Behavior. Berg en Dal, The Netherlands. 2016-07-03 - 2016-07-14.

    Abstract

    Along with complexity, the extent of the variability of human language across social groups is unprecedented in the animal kingdom, and we need to understand how this is possible. An underdetermined innate basis is plausible, but there is no consensus about what it is (other than vocal learning and the vocal apparatus), or how it would make it possible to learn varied languages. An alternative, and potentially complementary, explanation suggests that there is a set of communicative instincts and motivations that together make it possible for the infant to bootstrap into the local language, whatever it may be. Some evidence for this is as follows. First, the organization of informal interactive human communication – the core niche for language use – looks much less variable than languages. Thus all language users in this niche take rapid turns at talking even though the speed of this is highly demanding. Similarly, all users avail themselves of the same mechanisms for repairing miscommunication, and use the same restricted system for building coherent dialogues. Second, long before infants have any linguistic knowledge, they take part in ‘proto-conversations’ that exhibit these same universal organizations. Third, where normal spoken language is not accessible to individuals (as when they are profoundly deaf), they still share the same communicative infrastructure. Finally, there are some signs of phylogenetic parallels in other primates. If this is correct, Darwin’s characterization of language as “an instinct to acquire an art” may have its root in communicative instincts as much as specific instincts about language structure.
  • Toni, I., & Levinson, S. C. (2016). Communication with and before language [Session Chair]. Talk presented at the Language in Interaction Summerschool on Human Language: From Genes and Brains to Behavior. Berg en Dal, The Netherlands. 2016-07-03 - 2016-07-14.
  • De Vos, C., Casillas, M., Crasborn, O., & Levinson, S. C. (2016). Linguistic cues enabling rapid conversational turn-taking in sign. Talk presented at the Language seminar, Institut Jean-Nicod. Paris, France. 2016-10-26.
  • De Vos, C., Casillas, M., Crasborn, O., & Levinson, S. C. (2016). Linguistic cues enabling rapid conversational turn-taking in sign. Talk presented at the Research Training Group 2070 "Understanding Social Relationships". Göttingen, Germany. 2016-11-07.
  • De Vos, C., Casillas, M., Crasborn, O., & Levinson, S. C. (2016). Linguistic cues enabling rapid conversational turn-taking in Sign Language of the Netherlands. Talk presented at the Grammar & Cognition colloquium. Nijmegen, The Netherlands. 2016-12-08.
  • De Vos, C., Casillas, M., Crasborn, O., & Levinson, S. C. (2016). Stroke-to-stroke turn-boundary prediction in Sign Language of the Netherlands. Poster presented at the 12th International Conference on Theoretical Issues in Sign Language Research (TISLR12), Melbourne, Australia.
  • De Vos, C., Casillas, M., Crasborn, O., & Levinson, S. C. (2016). The role of facial expressions in the anticipation of turn-ends. Talk presented at the International Gesture Conference (ISGS 2016). Paris, France. 2016-07-18 - 2016-07-22.
  • Janzen, G., Haun, D. B. M., & Levinson, S. C. (2010). Neural correlates of relative and intrinsic frames of reference. Poster presented at HBM 2010 - The 16th Annual Meeting of the Organization for Human Brain Mapping, Barcelona, Spain.

    Abstract

    Introduction:
    Common cognitive processes underlie both spatial memory and talking about spatial layouts (Haun et al. 2005). For example, to locate an object in space it is obligatory to choose a coordinate system, called a frame of reference, in cognition as well as in its verbal expression. Coding space within different frames of reference requires different cognitive processes (e.g. Neggers et al. 2005). In relative frames of reference, the origin of the coordinate system is the viewpoint of a person. In intrinsic frames of reference, an object is located in relation to another object (Levinson 2003). fMRI data have suggested that different frames of reference show different patterns of neural activation (Burgess et al. 2002; Committeri et al. 2004). However, the number of existing frames of reference and their neural correlates remain controversial. In an event-related fMRI study we investigated whether differential neural networks for relative and intrinsic frames of reference can be isolated.
    Methods:
    In the present study an implicit sentence-picture matching task was used to investigate differential neural correlates for relative and intrinsic frames of reference. Twenty-eight healthy human adults (16 women, 12 men) read a sentence describing a spatial scene, followed by a picture, and decided whether the sentence matched the picture or not. Feedback was given supporting either a relative or an intrinsic frame of reference. After half of the trials the feedback switched from one reference frame to the other (Fig. 1). Participants were instructed to respond as accurately and as quickly as possible. They responded with their right hand, pressing a key with the index finger for a correct decision and a second key with the middle finger for an incorrect judgment. Two baseline tasks were included (Fig. 1): a high-level baseline (c5) and a low-level baseline (c6).
    A 3 Tesla MRI system (Siemens TRIO, Erlangen, Germany) was used to acquire functional images of the whole brain. Using a gradient-echo echo planar scanning sequence, 36 axial slices were obtained for each participant (voxel size 3 x 3 x 3 mm, TR = 2310 ms, field of view = 192, TE = 30 ms, flip angle = 75). All functional images were acquired in one run that lasted 50 minutes. Following the acquisition of functional images, a high-resolution anatomical scan (T1-weighted MP-RAGE, 176 slices) was acquired. fMRI data were analyzed using BrainVoyager QX (Brain Innovation, Maastricht, The Netherlands). Random-effects whole-brain group analyses were performed. The statistical threshold at the voxel level was set at p < 0.001, uncorrected for multiple comparisons.
    Results:
    Intrinsic trials as compared to baseline trials revealed increased activity in the parietal lobe and in the parahippocampal gyrus. Relative trials as compared to baseline trials revealed a widespread network of activity: increased activity was observed in occipitotemporal cortices, in the parietal lobe, and in frontal areas.
    We focused on the direct comparison between relative and intrinsic trials. Results showed increased activity in the left parahippocampal gyrus only for intrinsic trials as compared to relative trials. An ANOVA of the averaged beta-weights, with the within-subject factors Reference frame and Condition and the between-subject factor Block order (relative-intrinsic and intrinsic-relative), obtained for all voxels in the parahippocampal gyrus, showed no main effect of Reference frame or Condition. A significant interaction between the factors Reference frame and Condition was observed (p < 0.05). T-contrasts showed a significant effect for intrinsic (c4) as compared to relative trials (c3; p < 0.001).
    Conversely, relative as compared to intrinsic trials showed strongly increased activity in the left medial frontal gyrus. An ANOVA of the beta-weights in this brain area showed no main effects. A significant interaction between the factors Reference frame and Condition was observed (p < 0.05). T-contrasts showed a significant effect for intrinsic (c4) as compared to relative trials (c3, p < 0.01).
    When comparing all intrinsic and relative conditions together to the baseline, we observed increased activity in the right and left frontal eye fields (Fig. 2). An ANOVA of the averaged beta-weights, with the within-subject factor Reference frame and the between-subject factor Block order, obtained for all voxels in the left frontal eye fields showed a main effect of Block order (p < 0.001) and a trend effect of Reference frame (p = 0.08). An ANOVA of the averaged beta-weights for all voxels in the right frontal eye fields showed a main effect of Block order (p < 0.05) only.
    Conclusions:
    Using a sentence-picture matching task, we investigated whether differential neural correlates for intrinsic and relative frames of reference can be isolated. Intrinsic trials compared to relative trials showed increased activity in the parahippocampal gyrus, whereas relative trials compared to intrinsic trials revealed increased neural activity in the frontal and parietal lobes. Both frames of reference together, compared to baseline, showed increased activity in the frontal eye fields, which was stronger for the second block. This could be related to the switching of reference frames (Wallentin et al. 2008). The present results confirm studies which report the parietal lobe to be involved in relative coding (Cohen & Andersen 2002). The neural correlates of intrinsic frames of reference were previously less well investigated. The present results show differential neural networks for both frames of reference that are crucial to spatial language.
    References:
    Burgess, N. (2002), 'The human hippocampus and spatial and episodic memory', Neuron, vol. 36, pp. 625-641.
    Cohen, Y. (2002), 'A common reference frame for movement plans in the posterior parietal cortex', Nature Reviews Neuroscience, vol. 3, pp. 553-562.
    Committeri, G. (2004), 'Reference frames for spatial cognition: Different brain areas are involved in viewer-, object-, and landmark-centered judgments about object location', Journal of Cognitive Neuroscience, vol. 16, pp. 1517-1535.
    Haun, D. (2005), 'Bias in spatial memory: a categorical endorsement', Acta Psychologica, vol. 118, pp. 149-170.
    Levinson, S. (2003), 'Space in language and cognition: Explorations in cognitive diversity', Cambridge: CUP.
    Neggers, S. (2005), 'Quantifying the interactions between allo- and egocentric representation of space', Acta Psychologica, vol. 118, pp. 25-45.
    Wallentin, M. (2008), 'Frontal eye fields involved in shifting frames of reference within working memory for scenes', Neuropsychologia, vol. 46, pp. 399-408.
  • Levinson, S. C. (2010). Action in interaction. Talk presented at the Action Ascription in Social Interaction Workshop. University of California. Los Angeles. 2010-10-07 - 2010-10-11.

    Abstract

    Since the core matrix for language use is interaction, the main job of language is not to express propositions or abstract meanings, but to deliver actions. For in order to respond in interaction we have to ascribe to the prior turn a primary ‘action’ – variously thought of as an ‘illocution’, ‘speech act’, ‘move’, etc. – to which we then respond. The analysis of interaction also relies heavily on attributing actions to turns, so that, e.g., sequences can be characterized in terms of actions and responses. Yet the process of action ascription remains largely understudied. We don’t know much about how it is done or when it is done, nor even what kind of inventory of possible actions might exist, or the degree to which they are culturally variable. The study of action ascription remains perhaps the primary unfulfilled task in the study of language use, and it needs to be tackled from conversation-analytic, psycholinguistic, cross-linguistic and anthropological perspectives. In this talk I try to take stock of what we know and derive a set of goals for, and constraints on, an adequate theory. Such a theory is likely to employ, I will suggest, a top-down plus bottom-up account of action perception, and a multi-level notion of action which may resolve some of the puzzles that have repeatedly arisen.
  • Levinson, S. C. (2010). Hunter-gatherers and semantic categories: A review of the issues. Talk presented at the International workshop: Hunter-gatherers and semantic categories. Neuwied, Germany. 2010-05-31 - 2010-06-04.
  • Levinson, S. C. (2010). Linguistic diversity and the interaction engine. Talk presented at The 2010 Annual Meeting of the Linguistics Association of Great Britain [Henry Sweet Lecture 2010]. Leeds, UK. 2010-09-01 - 2010-09-02.

    Abstract

    Linguistic diversity and the 'interaction engine'. In this lecture I argue that our new insights into linguistic diversity require a rethink about the foundations of language. In the first part of the lecture, I outline why strong theories of language universals now look untenable. Combining typological and phylogenetic data suggests that languages are largely structured by cultural evolution, rather than a specific ‘language instinct’. In the second part, I turn to the implications: what, then, is the nature of the human endowment for language? I argue that there is a substantial infrastructure for language, which is distinct from language itself, and strongly universal – the ‘interaction engine’ of the title. The infrastructure involves speech capacities of course (vocal learning, vocal apparatus), intention-recognition systems (the pragmatics of Gricean meaning-nn), and ethological properties of communicative interaction (turn-taking, structured interaction sequences, multimodal signals, etc.). A working hypothesis is that this base, together with general (non-language-specialized) properties of human cognition, provides enough foundation for infants to bootstrap into their local cultural linguistic tradition.
  • Levinson, S. C. (2010). The evolutionary revolution in the language sciences. Talk presented at the Symposium on Evolutionary Perspectives on the Human Sciences. Turku, Finland. 2010-05-21 - 2010-05-22.

    Abstract

    The language sciences are about to undergo dramatic changes. The cognitive sciences have taken their object of enquiry to be the characterization of The Human Mind, and the language sciences have focused on the characterization of The Language Instinct, or Universal Grammar. This abstraction away from variation and diversity is now significantly inhibiting research – after all, diversity at all levels is a unique feature of our communication system compared to all other species. Further, the universals which have been the goal of linguistic research have evaporated in the face of increasing information about linguistic diversity and language change. The alternative Darwinian paradigm embraces the new facts about cognitive and linguistic diversity, viewing variation as the fuel for evolution, and adopts a diachronic perspective in which we can ask about the relative roles of biological and cultural evolution and their interaction. New findings suggest that language diversity is largely a product of cultural evolution under the constraints of general cognitive capacities rather than being tightly constrained by either Universal Grammar or Greenbergian universals.
  • Levinson, S. C. (2010). Speech acts in action (and interaction). Talk presented at Sentence Types, Sentence Moods and Illocutionary Forces: An International Conference to honor Manfred Bierwisch. Berlin. 2010-11-04 - 2010-11-06.
  • Majid, A., & Levinson, S. C. (2010). The shaping of language of perception across cultures [Keynote lecture]. Talk presented at the Humanities of the Lesser-Known Conference [HLK 2010]. Lund, Sweden. 2010-09-11.

    Abstract

    How are the senses structured by the languages we speak, the cultures we inhabit? To what extent is the encoding of perceptual experiences in languages a matter of how the mind/brain is “wired-up” and to what extent is it a question of local cultural preoccupation? The “Language of Perception” project tests the hypothesis that some perceptual domains may be more “ineffable” – i.e. difficult or impossible to put into words – than others. While cognitive scientists have assumed that proximate senses (olfaction, taste, touch) are more ineffable than distal senses (vision, hearing), anthropologists have illustrated the exquisite variation and elaboration the senses achieve in different cultural milieus. The project is designed to test whether the proximate senses are universally ineffable – suggesting an architectural constraint on cognition – or whether they are just accidentally so in Indo-European languages, so expanding the role of cultural interests and preoccupations. To address this question, a standardized set of stimuli of color patches, geometric shapes, simple sounds, tactile textures, smells and tastes has been used to elicit descriptions from speakers of more than twenty languages—including three sign languages. The languages are typologically, genetically and geographically diverse, representing a wide range of cultures. The communities sampled vary in subsistence modes (hunter-gatherer to industrial), ecological zones (rainforest jungle to desert), dwelling types (rural and urban), and various other parameters. We examine how codable the different sensory modalities are by comparing how consistent speakers are in how they describe the materials in each modality. Our current analyses suggest that taste may, in fact, be the most codable sensorial domain across languages, followed closely by visual phenomena, such as colour and shape. Olfaction appears to be the least codable across cultures.
Nevertheless, we have identified exquisite elaboration in the olfactory domains in some cultural settings, contrary to some contemporary predictions within the cognitive sciences. These results suggest that differential codability may be at least partly the result of cultural preoccupation. This shows that the senses are not just physiological phenomena but are constructed through linguistic, cultural and social practices.
  • Majid, A., & Levinson, S. C. (2010). The language of perception across cultures. Talk presented at the XXth Congress of European Chemoreception Research Organization, Symposium on "Senses in language and culture". Avignon, France. 2010-09-14 - 2010-09-19.

    Abstract

    How are the senses structured by the languages we speak, the cultures we inhabit? To what extent is the encoding of perceptual experiences in languages a matter of how the mind/brain is “wired-up” and to what extent is it a question of local cultural preoccupation? The “Language of Perception” project tests the hypothesis that some perceptual domains may be more “ineffable” – i.e. difficult or impossible to put into words – than others. While cognitive scientists have assumed that proximate senses (olfaction, taste, touch) are more ineffable than distal senses (vision, hearing), anthropologists have illustrated the exquisite variation and elaboration the senses achieve in different cultural milieus. The project is designed to test whether the proximate senses are universally ineffable – suggesting an architectural constraint on cognition – or whether they are just accidentally so in Indo-European languages, so expanding the role of cultural interests and preoccupations. To address this question, a standardized set of stimuli of color patches, geometric shapes, simple sounds, tactile textures, smells and tastes has been used to elicit descriptions from speakers of more than twenty languages—including three sign languages. The languages are typologically, genetically and geographically diverse, representing a wide range of cultures. The communities sampled vary in subsistence modes (hunter-gatherer to industrial), ecological zones (rainforest jungle to desert), dwelling types (rural and urban), and various other parameters. We examine how codable the different sensory modalities are by comparing how consistent speakers are in how they describe the materials in each modality. Our current analyses suggest that taste may, in fact, be the most codable sensorial domain across languages. Moreover, we have identified exquisite elaboration in the olfactory domains in some cultural settings, contrary to some contemporary predictions within the cognitive sciences.
These results suggest that differential codability may be at least partly the result of cultural preoccupation. This shows that the senses are not just physiological phenomena but are constructed through linguistic, cultural and social practices.