Goal: to find a low-dimensional representation of the data.
In general, the data does not lie perfectly on a linear subspace, so some information is lost when the data is compressed. The problem is to find the compression direction that loses the least information.
The l1 direction corresponds to the direction of maximum variance in the data: projecting onto it discards the least information.
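A minimal sketch of this idea: the least-loss compression direction is the top eigenvector of the data's covariance matrix (the first principal component). The synthetic data and variable names below are illustrative, not from the source.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic 2-D data that lies near (not exactly on) a line.
X = rng.normal(size=(500, 1)) @ np.array([[3.0, 1.0]])
X += 0.3 * rng.normal(size=X.shape)
X -= X.mean(axis=0)  # center the data

# Direction of least information loss = eigenvector of the
# covariance matrix with the largest eigenvalue.
cov = X.T @ X / len(X)
eigvals, eigvecs = np.linalg.eigh(cov)   # ascending eigenvalues
w = eigvecs[:, -1]                       # first principal component

# Compress: project onto w. Decompress: reconstruct from the projection.
z = X @ w
X_hat = np.outer(z, w)
mse = np.mean((X - X_hat) ** 2)          # information lost by compression
```

The reconstruction error equals the variance in the discarded direction, which is exactly what choosing the top eigenvector minimizes.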
An autoassociative network is a network whose inputs and targets are the same. That is, the net must learn a mapping from each input to itself.
Why do this? When the number of hidden nodes is smaller than the number of input nodes, the network is forced to learn an efficient low-dimensional representation of the data.
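A minimal sketch of such a bottleneck network, assuming a linear autoassociative net trained by gradient descent on squared reconstruction error (the toy data, sizes, and learning rate are assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
# Toy data: 10-D inputs that actually vary along only 3 directions.
basis = rng.normal(size=(3, 10))
X = rng.normal(size=(200, 3)) @ basis
X -= X.mean(axis=0)

n_in, n_hidden = 10, 3                         # bottleneck: fewer hidden than input nodes
W1 = 0.1 * rng.normal(size=(n_in, n_hidden))   # encoder weights
W2 = 0.1 * rng.normal(size=(n_hidden, n_in))   # decoder weights
lr = 0.02

for step in range(5000):
    H = X @ W1          # hidden (compressed) representation
    Y = H @ W2          # reconstruction; the targets are the inputs themselves
    err = Y - X
    # Gradient descent on mean squared reconstruction error.
    W2 -= lr * (H.T @ err) / len(X)
    W1 -= lr * (X.T @ (err @ W2.T)) / len(X)

mse = np.mean((X @ W1 @ W2 - X) ** 2)
```

Because the data truly has only 3 directions of variation, the 3-node bottleneck can reconstruct it almost perfectly; with fewer hidden nodes the net would be forced to keep only the highest-variance directions.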
Trained on randomly selected patches of an image (150,000 training steps). It was then tested on the entire image, patch by patch, using the full set of non-overlapping patches. See "Fundamentals of Artificial Neural Networks", Hassoun, pp. 247-253.
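The two sampling schemes above can be sketched as follows. The image size and patch size here are assumptions for illustration; the source does not specify Hassoun's exact setup.

```python
import numpy as np

rng = np.random.default_rng(2)
img = rng.random((64, 64))   # stand-in for the image (assumed size)
p = 8                        # assumed patch size: 8x8 pixels

def random_patch(img, p, rng):
    """Flattened patch at a random position (patches may overlap)."""
    r = rng.integers(0, img.shape[0] - p + 1)
    c = rng.integers(0, img.shape[1] - p + 1)
    return img[r:r + p, c:c + p].reshape(-1)

# Training set: randomly positioned (possibly overlapping) patches.
train = np.stack([random_patch(img, p, rng) for _ in range(1000)])

# Test set: tile the image with every non-overlapping patch.
test = np.stack([
    img[r:r + p, c:c + p].reshape(-1)
    for r in range(0, img.shape[0], p)
    for c in range(0, img.shape[1], p)
])
```

The non-overlapping tiling covers each pixel exactly once, so reconstructing every test patch reconstructs the whole image.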