Can a network with a single hidden layer of $N$ neurons (where $N < \infty$) approximate any arbitrary function, so that the network's error approaches $0$ as $N$ approaches $\infty$?
The question asks about “arbitrary functions” and “any problem”; the accepted answer talks only about continuous functions.
The answer to the question as stated now, in both versions, is clearly “no”. Some fun counterexamples:
- “Any problem” includes the Entscheidungsproblem, which Turing famously proved unsolvable.
- “Arbitrary functions”, if you wish to go to a more “mathematical” class of problems, can be very weird. The Ackermann function is a nice example: it has a relatively benign definition and can be calculated readily by any kid with basic maths skills (and a lot of time), but it grows at a huge rate (much faster than exponential). It is not primitive recursive, that is, it cannot be computed by a program consisting only of `for` loops, i.e. loops where the number of iterations is known at the start of the loop. A neural net, with its very simple structure of linear summing neurons and some multiplications, is restricted to primitive-recursive functions (at least over an unbounded domain) and cannot approximate it.
- There are discontinuous functions which certainly serve as counterexamples too, for example the Dirichlet function (the indicator function of the rationals).
- As per the comments, and to come down to earth a bit, a simple `sin` function will do the job of providing a counterexample regarding UATs (universal approximation theorems) as well: those theorems guarantee approximation only on compact sets, and a finite network with bounded activations cannot track the oscillations of `sin` over the whole real line.
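To make the Ackermann growth concrete, here is a minimal sketch (memoized recursive Python; the function name and the printed range are just illustrative choices):

```python
import sys
from functools import lru_cache

sys.setrecursionlimit(10000)  # the recursion gets deep even for tiny inputs

@lru_cache(maxsize=None)
def ackermann(m, n):
    # Standard two-argument Ackermann-Peter definition
    if m == 0:
        return n + 1
    if n == 0:
        return ackermann(m - 1, 1)
    return ackermann(m - 1, ackermann(m, n - 1))

# A(3, n) = 2^(n+3) - 3, already exponential in n
print([ackermann(3, n) for n in range(6)])  # [5, 13, 29, 61, 125, 253]
```

Already at $m=4$ the values explode: $A(4,2)$ has nearly 20,000 decimal digits, so no fixed-depth nest of bounded `for` loops can keep up.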
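The `sin` point can be checked numerically with a minimal sketch (NumPy, random untrained weights; the names `net`, `w`, `b`, `v` are purely illustrative). Far from the origin, every sigmoid unit is saturated, so the whole single-hidden-layer network is essentially constant while `sin` keeps oscillating:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 50  # hidden units
w, b, v = rng.normal(size=N), rng.normal(size=N), rng.normal(size=N)

def sigmoid(z):
    # numerically stable logistic function (avoids overflow in exp)
    p = np.exp(-np.abs(z))
    return np.where(z >= 0, 1.0 / (1.0 + p), p / (1.0 + p))

def net(x):
    # single hidden layer: sum_i v_i * sigmoid(w_i * x + b_i)
    return sigmoid(np.outer(x, w) + b) @ v

x = np.linspace(1e6, 1e6 + 20, 1000)   # far outside any bounded training range
print(np.ptp(net(x)))      # essentially 0: all units saturated, net is flat
print(np.ptp(np.sin(x)))   # about 2: sin still oscillates with full amplitude
```

So on any unbounded interval the sup-norm error of a fixed finite net against `sin` stays bounded away from $0$, no matter how the weights are chosen.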