# Can a neural network with only $1$ hidden layer solve any problem?

Can a network with a single hidden layer of $$N$$ neurons (with $$N$$ finite, but as large as we like) approximate any arbitrary function, so that the network’s error approaches $$0$$ as $$N$$ approaches $$\infty$$?

• “Arbitrary functions”, if you wish to go to a more “mathematical” class of problems, can be very weird. The Ackermann function is a nice example of a function with a relatively benign definition which can be calculated readily by any kid with basic maths skills (and a lot of time), but which grows at a huge rate (much faster than exponential). It is not primitive recursive, that is, it cannot be computed by a program consisting only of for loops, i.e. loops where the number of iterations is known when the loop starts. A neural net, with its very simple structure of neurons computing weighted sums passed through a fixed activation, is restricted to primitive-recursive functions (at least over an unbounded domain) and cannot approximate it; the function is sketched in code after this list.
• As per the comments, and to come down to earth a bit, a simple $$\sin$$ function will do the job of providing a counterexample regarding universal approximation theorems (UATs) as well, at least over an unbounded domain; see the short argument after this list.
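
For reference, here is a minimal sketch of the Ackermann function in Python (plain recursion; the function name and the use of `functools.lru_cache` for memoization are just illustrative choices, not part of the original definition):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def ackermann(m: int, n: int) -> int:
    """Ackermann function: a benign-looking recursion with explosive growth."""
    if m == 0:
        return n + 1
    if n == 0:
        return ackermann(m - 1, 1)
    return ackermann(m - 1, ackermann(m, n - 1))

# Small inputs are easy to evaluate by hand...
print(ackermann(2, 3))  # 9
print(ackermann(3, 3))  # 61
# ...but ackermann(4, 2) = 2**65536 - 3 already has 19729 decimal digits,
# growth that no fixed-size network can track over an unbounded domain.
```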
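
And a short version of the $$\sin$$ argument, assuming a sigmoidal activation $$\sigma$$ (i.e. $$\sigma(t)\to 0$$ as $$t\to-\infty$$ and $$\sigma(t)\to 1$$ as $$t\to+\infty$$): a single-hidden-layer network computes

$$f(x) = \sum_{i=1}^{N} c_i\,\sigma(w_i x + b_i).$$

As $$x \to \infty$$, each term converges to a constant ($$0$$ or $$c_i$$, depending on the sign of $$w_i$$; terms with $$w_i = 0$$ are constant anyway), so $$f(x)$$ converges to some constant $$L$$. But $$\sin x$$ keeps oscillating between $$-1$$ and $$1$$, hence

$$\sup_{x\in\mathbb{R}} |f(x) - \sin x| \ge 1$$

for every finite $$N$$. This does not contradict the UATs: they only guarantee approximation on compact sets, e.g. on a bounded interval $$[-M, M]$$.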