Iconic (toy) models of neural networks

My physics professors in grad school, as well as the Nobel laureate Feynman, would always present what they called toy models to illustrate basic concepts and methods in physics, such as the harmonic oscillator, pendulum, spinning top, and black box.

What toy models are used to illustrate the basic concepts and methods underlying the application of neural networks? (Please provide references.)

By a toy model I mean a particularly simple, minimally sized network applied to a highly constrained problem, through which basic methods can be presented and one’s understanding tested and enhanced by actual implementation: writing the basic code and, ideally, checking some of the math by hand or with a symbolic math application.


One of the most classical toy models is the perceptron in two dimensions, which dates back to the 1950s. It is a good example because it is a launching pad for more modern techniques:

1) Not everything is linearly separable (hence the need for nonlinear activations, multiple layers, kernel methods, etc.).

2) The perceptron won’t converge if the data is not linearly separable (motivating continuous measures of error such as the logistic/softmax loss, learning-rate decay, etc.).

3) While there are infinitely many hyperplanes that split the data, some are clearly more desirable than others (maximum-margin separation, SVMs, etc.).
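The classic perceptron update rule is simple enough to implement in a few lines. Here is a minimal sketch in pure Python on a made-up linearly separable 2-D data set (the points, labels, and function names are illustrative, not from the original answer):

```python
def perceptron_train(points, labels, epochs=100, lr=1.0):
    """Learn weights w = (w0, w1) and bias b for the classifier sign(w.x + b)."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        errors = 0
        for (x0, x1), y in zip(points, labels):  # labels y are in {-1, +1}
            pred = 1 if w[0] * x0 + w[1] * x1 + b > 0 else -1
            if pred != y:  # classic perceptron rule: update only on mistakes
                w[0] += lr * y * x0
                w[1] += lr * y * x1
                b += lr * y
                errors += 1
        if errors == 0:  # converged: a full pass with no mistakes
            break
    return w, b

# Two linearly separable clusters, around (0, 0) and (3, 3)
points = [(0, 0), (0, 1), (1, 0), (3, 3), (3, 4), (4, 3)]
labels = [-1, -1, -1, 1, 1, 1]
w, b = perceptron_train(points, labels)
preds = [1 if w[0] * x0 + w[1] * x1 + b > 0 else -1 for (x0, x1) in points]
print(preds)  # [-1, -1, -1, 1, 1, 1] — matches labels after convergence
```

On separable data like this, the loop terminates as soon as one pass makes no mistakes; on non-separable data (e.g. XOR), the `errors == 0` exit is never reached, which is exactly point 2 above.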

For multilayer neural networks, you might like the toy classification examples that come with this visualization.
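The standard toy demonstration of why a hidden layer helps is XOR, which no single perceptron can compute. As a sketch, here is a hand-wired two-layer network with step activations; the weights are chosen by hand (not learned) purely to illustrate the idea:

```python
def step(z):
    # Hard threshold activation, as in the original perceptron
    return 1 if z > 0 else 0

def xor_net(x0, x1):
    # Hidden unit h0 fires for "x0 OR x1"; h1 fires for "x0 AND x1"
    h0 = step(x0 + x1 - 0.5)
    h1 = step(x0 + x1 - 1.5)
    # Output fires for "OR but not AND", which is XOR
    return step(h0 - h1 - 0.5)

print([xor_net(a, b) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]])  # [0, 1, 1, 0]
```

The hidden layer remaps the four inputs into a space where the classes become linearly separable, which is the core lesson of the multilayer toy examples.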

For convolutional neural nets, MNIST is the classical gold standard, with cute visualizations here and here.
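Before tackling MNIST, the single core operation of a CNN, a 2-D convolution producing a feature map, can be checked by hand. The sketch below uses pure Python with an illustrative vertical-edge kernel and a tiny made-up "image":

```python
def conv2d(image, kernel):
    """Valid (no-padding) 2-D cross-correlation, as used in CNN layers."""
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    out = [[0] * out_w for _ in range(out_h)]
    for i in range(out_h):
        for j in range(out_w):
            out[i][j] = sum(
                image[i + a][j + b] * kernel[a][b]
                for a in range(kh) for b in range(kw)
            )
    return out

# 3x5 image with a single bright column in the middle
image = [[0, 0, 1, 0, 0]] * 3
# Sobel-like vertical edge detector
kernel = [[-1, 0, 1],
          [-1, 0, 1],
          [-1, 0, 1]]
fmap = conv2d(image, kernel)
print(fmap)  # [[3, 0, -3]]: positive at the rising edge, negative at the falling edge
```

Working one output cell of a small example like this by hand, then comparing against the code, is exactly the kind of check the question asks for.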

For RNNs, a really simple problem they can solve is binary addition, which requires memorizing 4 patterns.
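To see why binary addition is a sequence task with memory, note that the network must carry one hidden bit (the carry) from step to step. This hand-coded version shows the target behavior a tiny RNN has to learn (the function name and examples are illustrative):

```python
def add_binary(a_bits, b_bits):
    """Add two binary numbers given as bit lists, least-significant bit first."""
    carry = 0  # plays the role of the RNN's hidden state
    out = []
    for a, b in zip(a_bits, b_bits):
        s = a + b + carry
        out.append(s % 2)   # output bit at this time step
        carry = s // 2      # hidden state passed to the next step
    out.append(carry)       # final carry-out
    return out

# 6 (110) + 3 (011), written least-significant bit first:
print(add_binary([0, 1, 1], [1, 1, 0]))  # [1, 0, 0, 1], i.e. 9
```

At each time step the input is one of the four bit pairs (0,0), (0,1), (1,0), (1,1), so the per-step pattern set is tiny, yet the output can depend on arbitrarily distant earlier steps through the carry, which is what makes it a good recurrent toy problem.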

Source: Link, Question Author: Tom Copeland, Answer Author: Sycorax
