Antje Meyer

Presentations

  • Bujok, R., Meyer, A. S., & Bosker, H. R. (2021). Lexical stress perception is influenced by seeing a talker’s gesture, but not face. Talk presented at the 19th Annual Auditory Perception, Cognition and Action Meeting (APCAM 2021). Virtual meeting. 2021-11-04.
  • Bujok, R., Meyer, A. S., & Bosker, H. R. (2022). The role of visual articulatory vs. gestural cues in audiovisual lexical stress perception. Talk presented at DGfS-Workshop: Visual Communication. New Theoretical and Empirical Developments (ViCom 2022). Virtual meeting. 2022-02-23 - 2022-02-25.
  • Creemers, A., & Meyer, A. S. (2021). Depth of processing influences referential ambiguity resolution. Poster presented at the 27th Architectures and Mechanisms for Language Processing Conference (AMLaP 2021), (virtual conference).
  • Hintz, F., Wolf, M. C., Rowland, C. F., & Meyer, A. S. (2021). Evidence for shared knowledge and access processes across comprehension and production: Literacy enhances spoken word comprehension and word production. Poster presented at the 27th Architectures and Mechanisms for Language Processing Conference (AMLaP 2021), Paris, France.
  • Hintz, F., Voeten, C. C., Isakoglou, C., McQueen, J. M., & Meyer, A. S. (2021). Individual differences in language ability: Quantifying the relationships between linguistic experience, general cognitive skills and linguistic processing skills. Talk presented at the 34th Annual CUNY Conference on Human Sentence Processing (CUNY 2021). Philadelphia, USA. 2021-03-04 - 2021-03-06.
  • He, J., Meyer, A. S., Creemers, A., & Brehm, L. (2021). Lexical selection in spoken production: A web-based study of the effects of semantic context and name agreement in multi-word production. Poster presented at the 27th Architectures and Mechanisms for Language Processing Conference (AMLaP 2021), (virtual conference).
  • Slaats, S., Weissbart, H., Schoffelen, J.-M., Meyer, A. S., & Martin, A. E. (2021). Sentences modulate the low-frequency neural encoding of words. Poster presented at the 27th Architectures and Mechanisms for Language Processing Conference (AMLaP 2021), (virtual conference).
  • Tourtouri, E. N., & Meyer, A. S. (2021). Ordering adjectives with(out) restrictions. Poster presented at the 27th Architectures and Mechanisms for Language Processing Conference (AMLaP 2021), Paris, France.
  • Tourtouri, E. N., & Meyer, A. S. (2021). Verbs that are produced late are accessed early: Evidence from Dutch present perfect. Poster presented at the 27th Architectures and Mechanisms for Language Processing Conference (AMLaP 2021), Paris, France.
  • Bosker, H. R., Meyer, A. S., & Maslowski, M. (2020). When speech cues are not integrated immediately: Evidence from the global speech rate effect. Poster presented at the 26th Architectures and Mechanisms for Language Processing Conference (AMLaP 2020), Potsdam, Germany.
  • Zormpa, E., Meyer, A. S., & Brehm, L. (2020). Communicative intentions influence memory for conversations. Poster presented at the 26th Architectures and Mechanisms for Language Processing Conference (AMLaP 2020), Potsdam, Germany.
  • Zormpa, E., Meyer, A. S., & Brehm, L. (2020). Answers are remembered better than the questions themselves. Poster presented at the Experimental Psychology Society (EPS) Meeting, Kent, Canterbury.

    Abstract

    When we communicate, we often use language to identify and successfully transmit new information. We can highlight new and important information by focussing it through pitch, syntactic structure, or semantic content. Previous work has shown that focussed information is remembered better than neutral or unfocussed information. However, most of this work has used structures, like clefts and pseudo-clefts, that are rarely found in communication. We used spoken question-answer pairs, a frequent structure in which the answers are focussed relative to the questions, to examine whether answers are remembered better than questions. On each trial, participants (n = 48) saw three pictures on the screen while listening to a recorded question-answer exchange between two people, such as “What should move under the crab? – The sunflower!”. In an online Yes/No recognition memory test on the next day, participants recognised the names of pictures that had appeared in answers 6% more accurately than the names of pictures that had appeared in questions (β = 0.27, Wald z = 4.51, 95% CI = [0.15, 0.39], p < 0.001). Thus, linguistic focus affected memory for the words of an overheard conversation. We discuss the methodological and theoretical implications of these findings for studies of conversation.

    Additional information

    https://osf.io/w72r4/
