Manipulating word awareness dissociates feed-forward from feedback models of language-perception interactions
Previous studies suggest that linguistic material can modulate visual perception, but it is unclear at which level of
processing these interactions occur. Here we aim to dissociate two competing models of language–perception
interactions: a feed-forward and a feedback model. We capitalized on the fact that the two models make different predictions
about the role of feedback. We presented unmasked (aware) or masked (unaware) words implying motion (e.g. “rise,” “fall”),
directly preceding an upward or downward visual motion stimulus. Crucially, masking leaves intact feed-forward
information processing from low- to high-level regions, whereas it abolishes subsequent feedback. Even when the words
were masked, participants remained faster and more accurate when the direction implied by the motion word was congruent with the
direction of the visual motion stimulus. This suggests that language–perception interactions are driven by the feed-forward
convergence of linguistic and perceptual information at higher-level conceptual and decision stages.