In light of recent developments in connectionist natural language processing (Elman, 1990; Bates, 1995), Chomsky's (1965) "poverty of the stimulus" argument has been challenged and the existence of "language universals" (built-in, innate grammatical knowledge) has been questioned. We investigate whether neural network structures that may represent elements of a grammar arise by chance in biologically plausible, sparsely connected neural networks, and whether learning can combine these structures into representations of sufficient complexity for real-world applications.