Poletiek, F. H., Monaghan, P., van de Velde, M., & Bocanegra, B. R. (2021). The semantics-syntax interface: Learning grammatical categories and hierarchical syntactic structure through semantics. Journal of Experimental Psychology: Learning, Memory, and Cognition, 47(7), 1141-1155. doi:10.1037/xlm0001044.
Abstract
Language is infinitely productive because syntax defines dependencies between grammatical categories of words and constituents, making these words and constituents interchangeable within syntactic structures. Previous laboratory-based studies of language learning have shown that complex language structures such as hierarchical center embeddings (HCEs) are very hard to learn, but these studies tend to simplify the language learning task, omitting semantics and focusing either on learning dependencies between individual words or on acquiring the category membership of those words. We tested whether categories of words, and dependencies between these categories and between constituents, could be learned simultaneously in an artificial language with HCEs when accompanied by scenes illustrating each sentence's intended meaning. Across four experiments, we showed that participants were able to learn the HCE language, varying words across categories and category dependencies, and constituents across constituent dependencies. They were also able to generalize the learned structure to novel sentences and novel scenes that they had not previously experienced. This simultaneous learning, resulting in a productive complex language system, may be a consequence of grounding complex syntax acquisition in semantics.
Lai, J., & Poletiek, F. H. (2013). How “small” is “starting small” for learning hierarchical centre-embedded structures? Journal of Cognitive Psychology, 25, 423-435. doi:10.1080/20445911.2013.779247.
Abstract
Hierarchical centre-embedded structures pose great difficulty for language learners because of their complexity. A recent artificial grammar learning study (Lai & Poletiek, 2011) demonstrated a starting-small (SS) effect, i.e., staged input and sufficient exposure to zero-level-of-embedding exemplars were the critical conditions for learning AnBn structures. The current study aims to test: (1) a more sophisticated type of SS (a gradually rather than discretely growing input), and (2) the frequency distribution of the input. The results indicate that SS works optimally under additional conditional cues, such as a skewed frequency distribution in which simple stimuli are more numerous than complex ones.
Warmelink, L., Vrij, A., Mann, S., Leal, S., & Poletiek, F. H. (2013). The effects of unexpected questions on detecting familiar and unfamiliar lies. Psychiatry, Psychology and Law, 20(1), 29-35. doi:10.1080/13218719.2011.619058.
Abstract
Previous research suggests that lie detection can be improved by asking the interviewee unexpected questions. The present experiment investigates the effect of two types of unexpected questions (background questions and detail questions) on detecting lies about topics with which the interviewee is (a) familiar or (b) unfamiliar. In this experiment, 66 participants read interviews in which interviewees answered background or detail questions, either truthfully or deceptively. Those who answered deceptively could be lying about a topic they were familiar with or about a topic they were unfamiliar with. The participants were asked to judge whether the interviewees were lying. The results revealed that background questions distinguished truths from both types of lies, whereas detail questions distinguished truths from unfamiliar lies, but not from familiar lies. The implications of these findings are discussed.
Lai, J., & Poletiek, F. H. (2011). The impact of adjacent-dependencies and staged-input on the learnability of center-embedded hierarchical structures. Cognition, 118(2), 265-273. doi:10.1016/j.cognition.2010.11.011.
Abstract
A theoretical debate in artificial grammar learning (AGL) concerns the learnability of hierarchical structures. Recent studies using an AnBn grammar draw conflicting conclusions (Bahlmann & Friederici, 2006; De Vries et al., 2008). We argue that two conditions crucially affect learning AnBn structures: sufficient exposure to zero-level-of-embedding (0-LoE) exemplars and a staged input. In two AGL experiments, learning was observed only when the training set was staged and contained 0-LoE exemplars. Our results may help explain how natural complex structures are learned from exemplars.
Poletiek, F. H. (2011). You can't have your hypothesis and test it: The importance of utilities in theories of reasoning. Behavioral and Brain Sciences, 34(2), 87-88. doi:10.1017/S0140525X10002980.