This post is a quick timeline of key historical dates in the evolution of deep learning. Without further ado, here are the important dates and what happened on each in relation to deep learning:
| Year | Details/Paper Information | Who's who |
|---|---|---|
| 1943 | An artificial neuron was proposed as a computational model of the "nerve net" in the brain.<br>Paper: "A logical calculus of the ideas immanent in nervous activity," Bulletin of Mathematical Biophysics, volume 5, 1943. | Warren McCulloch, Walter Pitts |
| 1958 | The Perceptron was introduced. It mimicked the neural structure of the brain and showed an ability to learn.<br>Paper: Frank Rosenblatt, "The perceptron: A probabilistic model for information storage and organization in the brain," Psychological Review, volume 65, number 6, 1958. | Frank Rosenblatt |
| 1959–1960 | An early real-world neural network application was developed: reducing noise on phone lines.<br>Reference: Andrew Goldstein, "Bernard Widrow oral history," IEEE Global History Network, 1997. | Bernard Widrow, Ted Hoff |
| 1969 | Proved mathematically that the Perceptron could only perform very basic (linearly separable) tasks. Published in their book Perceptrons, which also discussed the challenges of training multi-layer neural networks.<br>Book: Marvin Minsky and Seymour A. Papert, Perceptrons: An Introduction to Computational Geometry, MIT Press, January 1969. | Marvin Minsky, Seymour Papert |
| 1982 | Popularized the "Hopfield" network, one of the first recurrent neural networks (RNNs). | John Hopfield |
| 1986 | Solved the challenge of training multi-layer neural networks using the backpropagation training algorithm.<br>Paper: David E. Rumelhart, Geoffrey E. Hinton, and Ronald J. Williams, "Learning representations by back-propagating errors," Nature, volume 323, October 1986; for a discussion of Linnainmaa's earlier role, see Juergen Schmidhuber, "Who Invented Backpropagation?" | Geoffrey Hinton, David Rumelhart, Ronald Williams |
| 1997 | Introduced long short-term memory (LSTM), greatly improving the efficiency and practical usefulness of recurrent neural networks.<br>Paper: Sepp Hochreiter and Juergen Schmidhuber, "Long short-term memory," Neural Computation, volume 9, number 8, December 1997. | Sepp Hochreiter, Juergen Schmidhuber |
| 1998 | Applied neural networks to image recognition tasks and defined the concept of convolutional neural networks.<br>Paper: Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner, "Gradient-based learning applied to document recognition," Proceedings of the IEEE, November 1998. | Yann LeCun, Léon Bottou, Yoshua Bengio, Patrick Haffner |
| 2012 | Highlighted the power of deep learning by showing significant results in the well-known ImageNet competition.<br>Paper: Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton, "ImageNet classification with deep convolutional neural networks," NIPS'12: Proceedings of the 25th International Conference on Neural Information Processing Systems, 2012. | Geoffrey Hinton and two of his students (Alex Krizhevsky, Ilya Sutskever) |
| 2012–2013 | Breakthrough work on large-scale image recognition at Google Brain. | Jeffrey Dean, Andrew Ng |
| 2014 | Introduced generative adversarial networks (GANs).<br>Paper: Ian J. Goodfellow et al., "Generative adversarial networks," arXiv, June 2014. | Ian Goodfellow |
| 2014 | Published the classic textbook on reinforcement learning.<br>Book: Richard S. Sutton and Andrew G. Barto, Reinforcement Learning: An Introduction, MIT Press, 2014. | Richard S. Sutton, Andrew G. Barto |
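To make the earliest entries on the timeline concrete, here is a minimal sketch of Rosenblatt's perceptron learning rule; this is my own illustration, not code from any of the papers above. It trains a single artificial neuron on the linearly separable AND function:

```python
def train_perceptron(samples, epochs=10, lr=0.1):
    """Rosenblatt-style perceptron: step activation + error-driven updates."""
    w = [0.0, 0.0]  # weights, one per input
    b = 0.0         # bias
    for _ in range(epochs):
        for x, target in samples:
            # Step activation: the neuron "fires" if the weighted sum exceeds 0
            pred = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
            err = target - pred
            # Perceptron update rule: nudge weights toward the target output
            w[0] += lr * err * x[0]
            w[1] += lr * err * x[1]
            b += lr * err
    return w, b

# Truth table for AND, which is linearly separable
AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(AND)
for x, target in AND:
    pred = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
    print(x, "->", pred, "(target:", target, ")")
```

Swapping in the XOR truth table shows the limitation Minsky and Papert pointed out in 1969: XOR is not linearly separable, so a single-layer perceptron can never learn it; overcoming that required the multi-layer networks trained by backpropagation in 1986.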