Can a neural network with only $1$ hidden layer solve any problem?

Can a network with a single hidden layer of $N$ neurons (where $N$ may be arbitrarily large) approximate any arbitrary function, so that the network's error approaches $0$ as $N$ approaches $\infty$?

Answer

The question asks about “arbitrary functions” and “any problem”; the accepted answer talks only about continuous functions.

The answer to the question as it is currently stated, in both its versions (the title and the body), is clearly “no”. Some fun counterexamples:

  • “Any problem” includes the Entscheidungsproblem, which Turing famously proved to be unsolvable.
  • “Arbitrary functions”, if you wish to move to a more “mathematical” class of problems, can be very weird. The Ackermann function is a nice example: it has a relatively benign definition and can be calculated readily by any kid with basic maths skills (and a lot of time), but it grows at a huge rate, much faster than exponential. It is not primitive recursive, i.e. it cannot be computed by a program consisting only of for loops in which the number of iterations of every loop is known when the loop starts. A neural net, with its very simple structure of linearly summing neurons and some multiplications, is restricted to primitive-recursive functions (at least over an unbounded domain) and cannot approximate it. (A minimal sketch of the function follows this list.)
  • There are discontinuous functions which work well as counterexamples too, for example the Dirichlet function (the indicator function of the rationals, which is nowhere continuous).
  • As per the comments, and to come down to earth a bit, a simple $\sin$ function provides a counterexample with respect to the universal approximation theorem (UAT) as well: the theorem only guarantees approximation on compact sets, and a finite single-hidden-layer network with sigmoidal units flattens out for large inputs while $\sin$ keeps oscillating, so its error over an unbounded domain cannot go to $0$. (See the second sketch below.)
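
To make the Ackermann point concrete, here is a minimal Python sketch (not part of the original answer; the printed values are only illustrative):

```python
def ackermann(m, n):
    """Ackermann function: total and computable, but not primitive recursive."""
    if m == 0:
        return n + 1
    if n == 0:
        return ackermann(m - 1, 1)
    return ackermann(m - 1, ackermann(m, n - 1))

# Small arguments are already explosive: ackermann(4, 1) = 65533 and
# ackermann(4, 2) has 19729 decimal digits, so stop well before that.
print(ackermann(2, 3))  # 9
print(ackermann(3, 3))  # 61
```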
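
And for the $\sin$ counterexample, a minimal numpy sketch (again not part of the original answer; the weights are random, since the saturation argument does not depend on training): a single hidden layer of finitely many tanh units produces an output that flattens to a constant as $|x|$ grows, so it cannot track $\sin$ over an unbounded domain.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 50                                            # any finite number of hidden units
w, b = rng.normal(size=N), rng.normal(size=N)     # hidden-layer weights and biases
v, c = rng.normal(size=N), rng.normal()           # output weights and bias

def shallow_net(x):
    """Single hidden layer of N tanh units: f(x) = c + sum_k v_k * tanh(w_k * x + b_k)."""
    return c + np.tanh(np.outer(x, w) + b) @ v

# Far from the origin every tanh unit saturates, so the network output is
# (almost) constant while sin(x) keeps oscillating between -1 and 1.
for x in (10.0, 100.0, 1_000.0, 10_000.0):
    print(f"x = {x:>8}: net = {shallow_net([x])[0]: .4f}, sin = {np.sin(x): .4f}")
```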

Attribution
Source: Link, Question Author: Snowflake, Answer Author: AnoE