-
Bosker, H. R., Meyer, A. S., & Maslowski, M. (2020). When speech cues are not integrated immediately: Evidence from the global speech rate effect. Poster presented at the 26th Architectures and Mechanisms for Language Processing Conference (AMLap 2020), Potsdam, Germany.
-
Zormpa, E., Meyer, A. S., & Brehm, L. (2020). Communicative intentions influence memory for conversations. Poster presented at the 26th Architectures and Mechanisms for Language Processing Conference (AMLap 2020), Potsdam, Germany.
-
Zormpa, E., Meyer, A. S., & Brehm, L. (2020). Answers are remembered better than the questions themselves. Poster presented at the Experimental Psychology Society (EPS) Meeting, Kent, Canterbury.
Abstract
When we communicate, we often use language to identify and successfully transmit new information. We can highlight new and important information by focussing it through pitch, syntactic structure, or semantic content. Previous work has shown that focussed information is remembered better than neutral or unfocussed information. However, most of this work has used structures, like clefts and pseudo-clefts, that are rarely found in communication. We used spoken question-answer pairs, a frequent structure in which the answers are focussed relative to the questions, to examine whether answers are remembered better than questions. On each trial, participants (n=48) saw three pictures on the screen while listening to a recorded question-answer exchange between two people, such as "What should move under the crab? – The sunflower!". In an online Yes/No recognition memory test on the next day, participants recognised the names of pictures that appeared as answers 6% more accurately than the names of pictures that appeared as questions (β = 0.27, Wald z = 4.51, 95% CI = [0.15, 0.39], p < 0.001). Thus, linguistic focus affected memory for the words of an overheard conversation. We discuss the methodological and theoretical implications of the findings for studies of conversation.
Additional information
https://osf.io/w72r4/
-
Cooke, N., Russell, M., & Meyer, A. S. (2004). Evaluation of hidden Markov models robustness in uncovering focus of visual attention from noisy eye-tracker data. Poster presented at Eye Tracking Research and Applications Symposium 2004 (ETRA 2004), San Antonio, Texas.
Abstract
A robust way to uncover the focus of visual attention from (simulated) noisy eye-tracker data using hidden Markov models was discussed. It was found that a hidden semi-Markov model (HSMM) with an explicit state-duration PDF representing task-constrained visual attention was more stable and accurate in representing visual attention duration. The HSMM added a second Gaussian component, with a larger standard deviation, to the observation PDF, so that the likelihood differentiates less between eye-movement positions far away from the object. Analysis shows that the HMM and HSMM outperformed the baseline non-HMM method in terms of accuracy and stability.
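The two-component observation PDF described in the abstract can be illustrated with a minimal sketch (not the authors' implementation; the means, standard deviations, and mixture weight below are hypothetical). A narrow Gaussian is centred on the attended object, and a broad second Gaussian flattens the likelihood for gaze samples that land far from any object, so noisy samples are less likely to force a spurious change of attention state:

```python
import math

def gaussian_pdf(x, mean, sd):
    """1-D Gaussian density."""
    return math.exp(-0.5 * ((x - mean) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))

def observation_pdf(x, obj_pos, sd_narrow=1.0, sd_wide=8.0, w_narrow=0.9):
    """Two-component observation PDF for one attention state: a narrow
    Gaussian centred on the object plus a broad Gaussian with larger
    standard deviation (weights here are illustrative assumptions)."""
    return (w_narrow * gaussian_pdf(x, obj_pos, sd_narrow)
            + (1 - w_narrow) * gaussian_pdf(x, obj_pos, sd_wide))

# A gaze sample near object A (at 0) is far more likely under the
# "attending A" state than under "attending B" (object at 10):
x = 1.0
p_attend_a = observation_pdf(x, 0.0)
p_attend_b = observation_pdf(x, 10.0)
print(p_attend_a > p_attend_b)  # True
```

In a full HSMM decoder these per-state likelihoods would be combined with the explicit state-duration PDF when scoring candidate state sequences; the broad component keeps `p_attend_b` non-negligible even for distant samples, which is what stabilises the inferred attention durations.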