Are experts harmful?

I’m reading “The Role of Chess in Artificial Intelligence Research” (pdf) and interestingly, it says:

Experience […] suggests that inputs from chess experts, while generally useful, cannot be trusted completely.

A good example of this is Deep Thought’s evaluation function. Several changes by capable human chess experts failed to produce significant improvements and occasionally even affected the machine’s performance negatively.

Here, human experts, along with their expertise, introduced their own prejudices into the program. One way of solving this problem is to limit the type and the amount of expert input allowed into the program; in other words, having an almost “knowledge-free” machine.

  • How true is that in modern research and practice?
  • Is that a big problem, or just something specific to the game of chess?

Answer

I think this is more about engineering problem solving. Most successful engineering projects do not duplicate an expert’s reasoning or the expert’s nature exactly; they solve the problem in a different way.

For example, washing machines clean clothes using a different technique than humans do, and airplanes fly using different dynamics than birds.

If you are trying to duplicate expert reasoning, their input is everything. But if you are solving the same problem using different techniques (fast search, huge memory, …), their input is merely helpful.
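
As a rough illustration of the “fast search plus almost knowledge-free evaluation” idea, here is a minimal sketch (not Deep Thought’s actual method) that assumes the third-party `python-chess` package. The evaluation is nothing but a raw material count, with no hand-tuned positional heuristics from an expert; the playing strength comes from the brute-force search on top of it.

```python
# Sketch: brute-force negamax search over a deliberately "knowledge-free"
# evaluation (material count only). Assumes the python-chess package.
import chess

# Plain material values; no positional "expert knowledge" at all.
PIECE_VALUES = {
    chess.PAWN: 1, chess.KNIGHT: 3, chess.BISHOP: 3,
    chess.ROOK: 5, chess.QUEEN: 9, chess.KING: 0,
}


def evaluate(board: chess.Board) -> int:
    """Material balance from the side-to-move's point of view."""
    score = 0
    for piece in board.piece_map().values():
        value = PIECE_VALUES[piece.piece_type]
        score += value if piece.color == board.turn else -value
    return score


def negamax(board: chess.Board, depth: int) -> int:
    """Fixed-depth negamax: the 'fast search' side of the trade-off."""
    if board.is_checkmate():
        return -10_000  # the side to move has been mated
    if depth == 0 or board.is_game_over():
        return evaluate(board)
    best = -10_000
    for move in board.legal_moves:
        board.push(move)
        best = max(best, -negamax(board, depth - 1))
        board.pop()
    return best


if __name__ == "__main__":
    board = chess.Board()
    board.push_san("e4")
    print("Score after 1.e4 at depth 2:", negamax(board, 2))
```

Expert input could still tune the piece values or add positional terms, but the program does not depend on reproducing an expert’s reasoning, which is the point of the answer above.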

Attribution
Source: Link, Question Author: andreister, Answer Author: Glen_b
