Casillas, M., Brown, P., & Levinson, S. C. (2021). Early language experience in a Papuan community. Journal of Child Language, 48(4), 792-814. doi:10.1017/S0305000920000549.
Abstract
The rate at which young children are directly spoken to varies due to many factors, including (a) caregiver ideas about children as conversational partners and (b) the organization of everyday life. Prior work suggests cross-cultural variation in rates of child-directed speech is due to the former factor, but has been fraught with confounds in comparing postindustrial and subsistence farming communities. We investigate the daylong language environments of children (0;0–3;0) on Rossel Island, Papua New Guinea, a small-scale traditional community where prior ethnographic study demonstrated contingency-seeking child interaction styles. In fact, children were infrequently directly addressed and linguistic input rate was primarily affected by situational factors, though children’s vocalization maturity showed no developmental delay. We compare the input characteristics between this community and a Tseltal Mayan one in which near-parallel methods produced comparable results, then briefly discuss the models and mechanisms for learning best supported by our findings.
Evans, N., Levinson, S. C., & Sterelny, K. (2021). Kinship revisited. Biological Theory, 16, 123-126. doi:10.1007/s13752-021-00384-9.
Evans, N., Levinson, S. C., & Sterelny, K. (Eds.). (2021). Thematic issue on evolution of kinship systems [Special Issue]. Biological Theory, 16.
Trujillo, J. P., Levinson, S. C., & Holler, J. (2021). Visual information in computer-mediated interaction matters: Investigating the association between the availability of gesture and turn transition timing in conversation. In M. Kurosu (Ed.), Human-Computer Interaction. Design and User Experience Case Studies. HCII 2021 (pp. 643-657). Cham: Springer. doi:10.1007/978-3-030-78468-3_44.
Abstract
Natural human interaction involves the fast-paced exchange of speaker turns. Crucially, if a next speaker waited to plan their turn until the current speaker had finished, language production models would predict much longer turn transition times than we actually observe. Next speakers must therefore prepare their turn in parallel with listening. Visual signals likely play a role in this process, for example by helping the next speaker to process the ongoing utterance and thus prepare an appropriately timed response.
To understand how visual signals contribute to the timing of turn-taking, and to move beyond the mostly qualitative studies of gesture in conversation, we examined unconstrained, computer-mediated conversations between 20 pairs of participants while systematically manipulating speaker visibility. Using motion tracking and manual gesture annotation, we assessed 1) how visibility affected the timing of turn transitions, and 2) whether use of co-speech gestures and 3) the communicative kinematic features of these gestures were associated with changes in turn transition timing.
We found that 1) decreased visibility was associated with less tightly timed turn transitions, and 2) the presence of gestures was associated with more tightly timed turn transitions across visibility conditions. Finally, 3) structural and salient kinematics contributed to gesture’s facilitatory effect on turn transition times.
Our findings suggest that speaker visibility during conversation, and especially the presence and kinematic form of gestures, contributes to the temporal coordination of conversational turns in computer-mediated settings. Furthermore, our study demonstrates that it is possible to use naturalistic conversation and still obtain controlled results.
Enfield, N. J., Stivers, T., Brown, P., Englert, C., Harjunpää, K., Hayashi, M., Heinemann, T., Hoymann, G., Keisanen, T., Rauniomaa, M., Raymond, C. W., Rossano, F., Yoon, K.-E., Zwitserlood, I., & Levinson, S. C. (2019). Polar answers. Journal of Linguistics, 55(2), 277-304. doi:10.1017/S0022226718000336.
Abstract
How do people answer polar questions? In this fourteen-language study of answers to questions in conversation, we compare the two main strategies: first, interjection-type answers such as uh-huh (or equivalents yes, mm, head nods, etc.), and second, repetition-type answers that repeat some or all of the question. We find that all languages offer both options, but that there is a strong asymmetry in their frequency of use, with a global preference for interjection-type answers. We propose that this preference is motivated by the fact that the two options are not equivalent in meaning. We argue that interjection-type answers are intrinsically suited to be the pragmatically unmarked, and thus more frequent, strategy for confirming polar questions, regardless of the language spoken. Our analysis is based on the semantic-pragmatic profile of the interjection-type and repetition-type answer strategies, in the context of certain asymmetries inherent to the dialogic speech act structure of question–answer sequences, including sequential agency and thematic agency. This allows us to see possible explanations for the outlier distributions found in ǂĀkhoe Haiǁom and Tzeltal.
Holler, J., & Levinson, S. C. (2019). Multimodal language processing in human communication. Trends in Cognitive Sciences, 23(8), 639-652. doi:10.1016/j.tics.2019.05.006.
Abstract
Multiple layers of visual (and vocal) signals, plus their different onsets and offsets, represent a significant semantic and temporal binding problem during face-to-face conversation.
Despite this complex unification process, multimodal messages appear to be processed faster than unimodal messages.
Multimodal gestalt recognition and multilevel prediction are proposed to play a crucial role in facilitating multimodal language processing.
The basis of the processing mechanisms involved in multimodal language comprehension is hypothesized to be domain general, coopted for communication, and refined with domain-specific characteristics.
A new, situated framework for understanding human language processing is called for, one that takes into consideration the multilayered, multimodal nature of language and its production and comprehension in conversational interaction, which requires fast processing.
Levinson, S. C., & Toni, I. (2019). Key issues and future directions: Interactional foundations of language. In P. Hagoort (Ed.), Human language: From genes and brain to behavior (pp. 257-261). Cambridge, MA: MIT Press.
Levinson, S. C. (2019). Interactional foundations of language: The interaction engine hypothesis. In P. Hagoort (Ed.), Human language: From genes and brain to behavior (pp. 189-200). Cambridge, MA: MIT Press.
Levinson, S. C. (2019). Natural forms of purposeful interaction among humans: What makes interaction effective? In K. A. Gluck & J. E. Laird (Eds.), Interactive task learning: Humans, robots, and agents acquiring new tasks through natural interactions (pp. 111-126). Cambridge, MA: MIT Press.
Thomaz, A. L., Lieven, E., Cakmak, M., Chai, J. Y., Garrod, S., Gray, W. D., Levinson, S. C., Paiva, A., & Russwinkel, N. (2019). Interaction for task instruction and learning. In K. A. Gluck & J. E. Laird (Eds.), Interactive task learning: Humans, robots, and agents acquiring new tasks through natural interactions (pp. 91-110). Cambridge, MA: MIT Press.
Levinson, S. C. (1995). 'Logical' connectives in natural language: A first questionnaire. In D. Wilkins (Ed.), Extensions of space and beyond: Manual for field elicitation for the 1995 field season (pp. 61-69). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.3513476.
Abstract
It has been hypothesised that human reasoning has a non-linguistic foundation, but is nevertheless influenced by the formal means available in a language. For example, Western logic is transparently related to European sentential connectives (e.g., and, if … then, or, not), some of which cannot be unambiguously expressed in other languages. The questionnaire explores reasoning tools and practices by investigating translation equivalents of English sentential connectives and collecting examples of “reasoned arguments”.
Levinson, S. C. (1995). Interactional biases in human thinking. In E. N. Goody (Ed.), Social intelligence and interaction (pp. 221-260). Cambridge: Cambridge University Press.
Levinson, S. C. (1995). Three levels of meaning. In F. Palmer (Ed.), Grammar and meaning: Essays in honour of Sir John Lyons (pp. 90-115). Cambridge: Cambridge University Press.
Wilkins, D., Pederson, E., & Levinson, S. C. (1995). Background questions for the "enter"/"exit" research. In D. Wilkins (Ed.), Extensions of space and beyond: Manual for field elicitation for the 1995 field season (pp. 14-16). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.3003935.
Abstract
How do languages encode different kinds of movement, and what features do people pay attention to when describing motion events? This document outlines topics concerning the investigation of “enter” and “exit” events. It helps contextualise the research tasks that examine this domain (see 'Motion Elicitation' and 'Enter/Exit animation') and points to further questions that can be explored.