Abstract: Traditionally, perceptual learning in humans and classical conditioning in animals have been considered two very different
research areas, with separate problems, paradigms, and explanations. However, a number of themes common to these fields of
research emerge when they are approached from the more general concept of representational learning. To demonstrate this,
I present results of several learning experiments with human adults and infants, exploring how internal representations of
complex unknown visual patterns might emerge in the brain. I provide evidence that this learning cannot be fully captured
by any simple pairwise associative learning scheme, but is better explained by a probabilistic inference process called Bayesian model averaging, in which the brain is assumed to formulate the most likely chunking/grouping of its previous experience into independent
representational units. Such a generative model attempts to represent the entire world of stimuli with optimal ability to
generalize to likely scenes in the future. I review the evidence showing that a similar philosophy and generative scheme of
representation has successfully described a wide range of experimental data in the domain of classical conditioning in animals.
These convergent findings suggest that statistical theories of representational learning might help to link human perceptual
learning and animal classical conditioning results into a coherent framework.
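The core idea of Bayesian model averaging over candidate chunkings can be illustrated with a toy sketch. The example below is not from the paper's experiments: the scenes, the two candidate models (one treating shape pairs AB and CD as chunks, one treating all four shapes as independent), and the fixed presence probabilities are all illustrative assumptions.

```python
# Toy sketch of Bayesian model averaging over two hypothetical
# "chunking" models of four shapes A, B, C, D. Scenes, models,
# and parameters are illustrative, not the paper's actual data.

SHAPES = "ABCD"

def lik_chunk(scene, p=0.5, chunks=("AB", "CD")):
    """Scene likelihood when whole chunks are the independent units."""
    total = 1.0
    for chunk in chunks:
        present = set(chunk) <= scene
        absent = not (set(chunk) & scene)
        if present:
            total *= p
        elif absent:
            total *= 1 - p
        else:
            return 0.0  # a chunk appearing only partially is impossible
    return total

def lik_indep(scene, q=0.5):
    """Scene likelihood when each shape is an independent unit."""
    total = 1.0
    for s in SHAPES:
        total *= q if s in scene else 1 - q
    return total

# Observed scenes: pairs always co-occur, which favors the chunk model.
data = [set("AB"), set("AB"), set("CD"), set("CD"), set("ABCD")]
models = {"chunk": lik_chunk, "indep": lik_indep}
prior = {"chunk": 0.5, "indep": 0.5}

# Posterior over models: P(m | data) is proportional to
# P(m) * product over scenes of P(scene | m).
post = {}
for name, lik in models.items():
    weight = prior[name]
    for scene in data:
        weight *= lik(scene)
    post[name] = weight
z = sum(post.values())
post = {m: v / z for m, v in post.items()}  # chunk model dominates

# Model-averaged prediction for a novel scene: shape A appearing alone.
test = set("A")
p_avg = sum(post[m] * models[m](test) for m in models)
```

Because the chunked model assigns the training scenes far higher likelihood, it receives nearly all the posterior mass, and the averaged prediction assigns a lone "A" only the small residual probability contributed by the independent-shapes model, mirroring the generalization behavior the abstract attributes to model averaging.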