Pre-Wiring and Pre-Training: What does a neural network need to learn truly general identity rules?

Alhama, R. G., & Zuidema, W. (2016). Pre-Wiring and Pre-Training: What does a neural network need to learn truly general identity rules? In T. R. Besold, A. Bordes, & A. D'Avila Garcez (Eds.), CoCo 2016 Cognitive Computation: Proceedings of the Workshop on Cognitive Computation: Integrating neural and symbolic approaches 2016. CEUR Workshop Proceedings.
In an influential paper, Marcus et al. [1999] claimed that connectionist models
cannot account for human success at learning tasks that involved generalization
of abstract knowledge such as grammatical rules. This claim triggered a heated
debate, centered mostly around variants of the Simple Recurrent Network model
[Elman, 1990]. In our work, we revisit this unresolved debate and analyze the
underlying issues from a different perspective. We argue that, in order to simulate
human-like learning of grammatical rules, a neural network model should not be
used as a tabula rasa, but rather, the initial wiring of the neural connections and
the experience acquired prior to the actual task should be incorporated into the
model. We present two methods that aim to provide such an initial state: a manipulation
of the initial connections of the network in a cognitively plausible manner
(concretely, by implementing a "delay-line" memory), and a pre-training algorithm
that incrementally challenges the network with novel stimuli. We implement these
techniques in an Echo State Network [Jaeger, 2001], and we show that only when both
techniques are combined is the ESN able to learn truly general identity rules.
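
To make the "delay-line" pre-wiring idea concrete, the sketch below builds a minimal Echo State Network whose reservoir is hard-wired as a chain that copies the input forward one block per time step, so recent symbols stay readable for several steps. This is an illustrative reconstruction, not the authors' implementation; the dimensions, the ridge-regression readout, the parameter values, and the toy ABB sequence are all assumptions made for the example.

```python
import numpy as np

# Illustrative sketch (not the paper's code): an ESN reservoir pre-wired
# as a delay line. The input enters the first block of units; at every
# time step each stored copy is shifted one block further down the chain.

rng = np.random.default_rng(0)

n_in = 4        # dimensionality of one input symbol (assumed encoding size)
n_delays = 3    # how many past time steps the delay line retains (assumed)
n_res = n_in * n_delays

# Block-shift reservoir matrix: unit block d receives the state of block d-1.
W_res = np.zeros((n_res, n_res))
for d in range(1, n_delays):
    W_res[d * n_in:(d + 1) * n_in, (d - 1) * n_in:d * n_in] = np.eye(n_in)

# Input weights: the current symbol is written only into the first block.
W_in = np.zeros((n_res, n_in))
W_in[:n_in, :] = np.eye(n_in)

def run_reservoir(inputs):
    """Drive the delay-line reservoir with a sequence and collect states."""
    x = np.zeros(n_res)
    states = []
    for u in inputs:
        x = np.tanh(W_res @ x + W_in @ u)
        states.append(x.copy())
    return np.array(states)

def train_readout(states, targets, ridge=1e-4):
    """Ridge-regression readout (a common ESN training choice, assumed here)."""
    S, Y = states, targets
    return np.linalg.solve(S.T @ S + ridge * np.eye(S.shape[1]), S.T @ Y)

# Toy ABB-style sequence of random symbol vectors, for illustration only.
A, B = rng.standard_normal(n_in), rng.standard_normal(n_in)
seq = np.array([A, B, B])
states = run_reservoir(seq)
W_out = train_readout(states[:-1], seq[1:])   # predict the next symbol
print("predicted third symbol:", states[1] @ W_out)
```

With this wiring, the reservoir state at time t holds (tanh-squashed) copies of the last few inputs, so a readout can compare the current symbol with earlier ones, which is the kind of access that detecting an identity pattern such as ABB requires.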
Publication type: Proceedings paper
Publication date: 2016
