What is the origin of autoencoder neural networks?

I searched on Google, Wikipedia, Google Scholar, and more, but I could not find the origin of autoencoders. Perhaps it’s one of those concepts that evolved very gradually, and it’s impossible to trace back a clear starting point, but I would still like to find some kind of summary of the main steps of their development.

The chapter about autoencoders in Ian Goodfellow, Yoshua Bengio and Aaron Courville’s Deep Learning book says:

The idea of autoencoders has been part of the historical landscape of
neural networks for decades (LeCun, 1987; Bourlard and Kamp, 1988;
Hinton and Zemel, 1994). Traditionally, autoencoders were used for
dimensionality reduction or feature learning.

This presentation by Pascal Vincent says:

Denoising using classical autoencoders was actually introduced much
earlier (LeCun, 1987; Gallinari et al., 1987), as an alternative to
Hopfield networks (Hopfield, 1982).

This seems to imply that “classical autoencoders” existed before 1987: LeCun and Gallinari used them but did not invent them. Yet I can find no trace of “classical autoencoders” earlier than 1987.

Any ideas?


According to the history provided in Schmidhuber, “Deep learning in neural networks: an overview,” Neural Networks (2015), auto-encoders were proposed as a method for unsupervised pre-training in Ballard, “Modular learning in neural networks,” Proceedings AAAI (1987). It’s not clear if that’s the first time auto-encoders were used, however; it’s just the first time that they were used for the purpose of pre-training ANNs.

As the introduction to the Schmidhuber article makes clear, it’s somewhat difficult to attribute all of the ideas used in ANNs because the literature is diverse and terminology has evolved over time.

Source: Link, Question Author: MiniQuark, Answer Author: Sycorax