Harbusch, K., Kempen, G., & Vosse, T. (2008). A natural-language paraphrase generator for on-line monitoring and commenting incremental sentence construction by L2 learners of German. In Proceedings of WorldCALL 2008.
Abstract
Certain categories of language learners need feedback on the grammatical structure of sentences they wish to produce. In contrast with the usual NLP approach to this problem—parsing student-generated texts—we propose a generation-based approach that aims to prevent errors (“scaffolding”). In our ICALL system, students construct sentences by composing syntactic trees out of lexically anchored “treelets” via a graphical drag-and-drop user interface. A natural-language generator computes all grammatically well-formed sentences entailed by the student-composed tree and intervenes immediately when that tree does not belong to the set of well-formed alternatives. Feedback is based on comparisons between the student-composed tree and the well-formed set. Frequently occurring errors are handled in terms of “malrules.” The system (implemented in Java and C++) currently focuses on constituent order in German as L2.
Kempen, G., & Harbusch, K. (2008). Comparing linguistic judgments and corpus frequencies as windows on grammatical competence: A study of argument linearization in German clauses. In A. Steube (Ed.), The discourse potential of underspecified structures (pp. 179-192). Berlin: Walter de Gruyter.
Abstract
We present an overview of several corpus studies we carried out into the frequencies of argument NP orderings in the midfield of subordinate and main clauses of German. Comparing the corpus frequencies with grammaticality ratings published by Keller (2000), we observe a “grammaticality–frequency gap”: quite a few argument orderings with zero corpus frequency are nevertheless assigned medium-range grammaticality ratings. We propose an explanation in terms of a two-factor theory. First, we hypothesize that the grammatical induction component needs a sufficient number of exposures to a syntactic pattern to incorporate it into its repertoire of more or less stable rules of grammar. Moderately to highly frequent argument NP orderings are likely to have attained this status, but not their zero-frequency counterparts. This is why the latter argument sequences cannot be produced by the grammatical encoder and are absent from the corpora. Second, we assume that an extraneous (nonlinguistic) judgment process biases the ratings of moderately grammatical linear order patterns: confronted with such structures, the informants produce their own “ideal delivery” variant of the to-be-rated target sentence and evaluate the similarity between the two versions. A high similarity score yielded by this judgment then exerts a positive bias on the grammaticality rating—a score that should not be mistaken for an authentic grammaticality rating. We conclude that, at least in the linearization domain studied here, the goal of gaining a clear view of the internal grammar of language users is best served by a combined strategy in which grammar rules are founded on structures that elicit moderate to high grammaticality ratings and attain at least moderate usage frequencies.
Vosse, T. G., & Kempen, G. (2008). Parsing verb-final clauses in German: Garden-path and ERP effects modeled by a parallel dynamic parser. In B. Love, K. McRae, & V. Sloutsky (Eds.), Proceedings of the 30th Annual Conference of the Cognitive Science Society (pp. 261-266). Washington, DC: Cognitive Science Society.
Abstract
Experimental sentence comprehension studies have shown that superficially similar German clauses with verb-final word order elicit very different garden-path and ERP effects. We show that a computer implementation of the Unification Space parser (Vosse & Kempen, 2000) in the form of a localist-connectionist network can model the observed differences, at least qualitatively. The model embodies a parallel dynamic parser that, in contrast with existing models, does not distinguish between consecutive first-pass and reanalysis stages and does not use semantic or thematic roles. It does use structural frequency data and animacy information.
Kempen, G. (1996). Computational models of syntactic processing in human language comprehension. In T. Dijkstra, & K. De Smedt (Eds.), Computational psycholinguistics: Symbolic and subsymbolic models of language processing (pp. 192-220). London: Taylor & Francis.
Kempen, G. (1996). "De zwoele groei van den zinsbouw": De wonderlijke levende grammatica van Jac. van Ginneken uit De Roman van een Kleuter (1917). Bezorgd en van een nawoord voorzien door Gerard Kempen. In A. Foolen, & J. Noordegraaf (Eds.), De taal is kennis van de ziel: Opstellen over Jac. van Ginneken (1877-1945) (pp. 173-216). Münster: Nodus Publikationen.
Kempen, G. (1996). Human language technology can modernize writing and grammar instruction. In COLING '96: Proceedings of the 16th Conference on Computational Linguistics, Volume 2 (pp. 1005-1006). Stroudsburg, PA: Association for Computational Linguistics.
Kempen, G. (1996). Lezen, leren lezen, dyslexie: De auditieve basis van visuele woordherkenning. Nederlands Tijdschrift voor de Psychologie, 51, 91-100.
Kempen, G., & Janssen, S. (1996). Omspellen: Reuze(n)karwei of peule(n)schil? In H. Croll, & J. Creutzberg (Eds.), Proceedings of the 5e Dag van het Document (pp. 143-146). Projectbureau Croll en Creutzberg.
Kempen, G. (1996). Wetenschap op internet: Een voorstel voor de Nederlandse Psychonomie. Nieuwsbrief Nederlandse Vereniging voor Psychonomie, 3, 5-8.
De Smedt, K., & Kempen, G. (1996). Discontinuous constituency in Segment Grammar. In H. C. Bunt, & A. Van Horck (Eds.), Discontinuous constituency (pp. 141-163). Berlin: Mouton de Gruyter.