Version: v1, Published online: 1998
Connectionism is an approach to computation that uses connectionist networks. A connectionist network is composed of information-processing units (or nodes); typically, many units process information simultaneously, giving rise to massively ‘parallel distributed processing’. Units process information only locally: they respond only to their specific input lines by changing or retaining their activation values; and they causally influence the activation values of their output units by transmitting amounts of activation along connections of various weights or strengths. As a result of such local unit processing, networks themselves can behave in rule-like ways to compute functions.
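The local processing just described can be sketched in a few lines of code. In this illustrative sketch (the function and variable names are ours, not drawn from any particular connectionist model), each unit responds only to its own input lines: it takes a weighted sum of the activations arriving on those lines and passes the result through a squashing function to yield its new activation value, which it would then transmit along its output connections.

```python
import math

def unit_activation(inputs, weights, bias=0.0):
    """Compute one unit's activation from its input lines.

    inputs  : activation values arriving on the unit's input lines
    weights : connection strengths on those lines
    bias    : the unit's own threshold term
    """
    # Local processing: the unit sees only its weighted inputs...
    net = sum(i * w for i, w in zip(inputs, weights)) + bias
    # ...and maps the net input to an activation value in (0, 1)
    # via a logistic squashing function.
    return 1.0 / (1.0 + math.exp(-net))

# Parallel distributed processing in miniature: two units respond
# simultaneously to the same input lines, each via its own weights.
layer_weights = [[0.8, -0.3], [0.2, 0.9]]
outputs = [unit_activation([1.0, 0.5], w) for w in layer_weights]
```

Although each unit performs only this simple local computation, a network of many such units, wired with suitable weights, can compute functions in the rule-like manner the article describes.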
The study of connectionist computation has grown rapidly since the early 1980s and now extends to every area of cognitive science. For the philosophy of psychology, the primary interest of connectionist computation is its potential role in the computational theory of cognition – the theory that cognitive processes are computational. Networks are employed in the study of perception, memory, learning and categorization; and it has been claimed that connectionism has the potential to yield an alternative to the classical view of cognition as rule-governed symbol manipulation.
Since cognitive capacities are realized in the central nervous system, perhaps the most attractive feature of the connectionist approach to cognitive modelling is the neural-like aspects of network architectures. The members of a certain family of connectionist networks, artificial neural networks, have proved to be a valuable tool for investigating information processing within the nervous system. In artificial neural networks, units are neuron-like; connections, axon-like; and the weights of connections function in ways analogous to synapses.
Another attraction is that connectionist networks, with their units sensitive to varying strengths of multiple inputs, carry out in natural ways ‘multiple soft constraint satisfaction’ tasks – assessing the extent to which a number of non-mandatory, weighted constraints are satisfied. Tasks of this sort occur in motor control, early vision, memory, categorization and pattern recognition. Moreover, typical networks can re-programme themselves by adjusting the weights of the connections among their units, thereby engaging in a kind of ‘learning’; and they can do so even on the basis of the sorts of noisy and/or incomplete data people typically encounter.
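The weight-adjustment ‘learning’ just mentioned can be illustrated with one standard technique, the delta rule (also known as the least-mean-squares rule); this is a sketch of one such rule under our own illustrative names, not the unique connectionist learning procedure. After each sample, every connection weight is nudged in proportion to the unit's error and to the activation on that input line, so the unit gradually re-programmes itself to fit the data even when the targets are noisy.

```python
def train_unit(samples, lr=0.1, epochs=50):
    """Adjust a linear unit's weights to fit (inputs, target) pairs.

    samples : list of (input-line activations, desired output)
    lr      : learning rate (size of each weight adjustment)
    """
    n = len(samples[0][0])
    weights = [0.0] * n
    for _ in range(epochs):
        for inputs, target in samples:
            output = sum(i * w for i, w in zip(inputs, weights))
            error = target - output
            # Delta rule: nudge each weight in proportion to the error
            # and to the activation on its input line.
            for j in range(n):
                weights[j] += lr * error * inputs[j]
    return weights

# Noisy observations of the relation target = 1.0*x0 + 0.5*x1:
noisy_samples = [([1.0, 0.0], 1.02), ([0.0, 1.0], 0.49),
                 ([1.0, 1.0], 1.51), ([0.5, 0.5], 0.74)]
learned = train_unit(noisy_samples)  # weights settle near [1.0, 0.5]
```

The learned weights approximate the underlying relation despite the noise in the targets, which is the point of the article's remark about noisy and incomplete data.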
The potential role of connectionist architectures in the computational theory of cognition is, however, an open question. One possibility is that cognitive architecture is a ‘mixed architecture’, with classical and connectionist modules. But the most widely discussed view is that cognitive architecture is thoroughly connectionist. The leading challenge to this view is that an adequate cognitive theory must explain high-level cognitive phenomena such as the systematicity of thought (someone who can think ‘The dog chases the cat’ can also think ‘The cat chases the dog’), its productivity (our ability to think a potential infinity of thoughts) and its inferential coherence (people can infer ‘p’ from ‘p and q’). It has been argued that a connectionist architecture could explain such phenomena only if it implements a classical, language-like symbolic architecture. Whether this is so, however, and, indeed, even whether there are such phenomena to be explained, are currently subjects of intense debate.
McLaughlin, Brian P. (1998) ‘Connectionism’, Routledge Encyclopedia of Philosophy, Taylor and Francis, doi:10.4324/9780415249126-W010-1. https://www.rep.routledge.com/articles/thematic/connectionism/v-1.