What’s the relation between game theory and reinforcement learning?

I’m interested in (Deep) Reinforcement Learning (RL). Before diving into this field, should I take a course in Game Theory (GT)?

How are GT and RL related?

Answer

In Reinforcement Learning (RL) it is common to assume an underlying Markov Decision Process (MDP). The goal of RL is then
to learn a good policy for this MDP, which is typically only partially specified (e.g., the transition probabilities and rewards are not known to the learner in advance). MDPs can have different objectives, such as total, average, or discounted reward, with discounted reward being the most common assumption in RL. There are well-studied extensions of MDPs to two-player (i.e., game) settings; see, e.g.,

Filar, Jerzy, and Koos Vrieze. Competitive Markov decision processes. Springer Science & Business Media, 2012.
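
To make the discounted-reward objective concrete, here is a minimal value-iteration sketch on a toy MDP (in Python). Everything in it, i.e., the two states, the transition probabilities, the rewards, and the discount factor gamma = 0.9, is invented for illustration and is not taken from the reference above.

    import numpy as np

    # Toy MDP, fully specified here for illustration; in RL the agent would
    # instead have to learn about P and R from experience.
    # States: 0, 1. Actions: 0, 1.
    # P[s, a, s'] = transition probability, R[s, a] = expected immediate reward.
    P = np.array([
        [[0.9, 0.1], [0.2, 0.8]],   # transitions from state 0
        [[0.0, 1.0], [0.5, 0.5]],   # transitions from state 1
    ])
    R = np.array([
        [1.0, 0.0],                 # rewards in state 0
        [0.0, 2.0],                 # rewards in state 1
    ])
    gamma = 0.9                     # discount factor (assumed value)

    # Value iteration: repeatedly apply the Bellman optimality backup
    #   V(s) <- max_a [ R(s, a) + gamma * sum_{s'} P(s, a, s') V(s') ],
    # which is a sup-norm contraction for gamma < 1.
    V = np.zeros(2)
    for _ in range(1000):
        Q = R + gamma * (P @ V)     # Q[s, a]; P @ V sums over next states s'
        V_new = Q.max(axis=1)
        if np.max(np.abs(V_new - V)) < 1e-8:
            break
        V = V_new

    policy = Q.argmax(axis=1)       # greedy policy w.r.t. the converged values
    print("V* =", V, "greedy policy =", policy)

The backup being a sup-norm contraction (the Banach fixed point argument mentioned later in the answer) is what guarantees this loop converges to the optimal discounted values.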

There is an underlying theory shared by MDPs and their extensions to two-player (zero-sum) games, e.g., the Banach fixed point theorem, Value Iteration, Bellman Optimality, and Policy Iteration/Strategy Improvement; a minimal sketch of how the Value Iteration backup carries over to the two-player case follows the list below. However, despite these close connections between MDPs (and thus RL) and this specific class of games:

  • you can learn about RL (and MDPs) directly, without GT as a prerequisite;
  • in any case, you would not learn about this material in the majority of GT courses, which normally focus on, e.g., strategic-form, extensive-form, and repeated games, but not on the state-based infinite games that generalize MDPs.
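
For contrast with the single-agent backup, here is a minimal sketch of how Value Iteration extends to a two-player zero-sum setting of the kind covered by Filar and Vrieze. To keep it short, it assumes a turn-based game (each state is controlled by exactly one player), which avoids having to solve a matrix game at each state; all states, transitions, and rewards are again invented for illustration.

    import numpy as np

    # Toy turn-based zero-sum stochastic game (all numbers invented).
    # States 0 and 1 belong to the maximizer, state 2 to the minimizer.
    # P[s][a] is a distribution over next states; R[s][a] is the reward
    # the minimizer pays the maximizer.
    P = {
        0: [np.array([0.8, 0.1, 0.1]), np.array([0.1, 0.1, 0.8])],
        1: [np.array([0.2, 0.6, 0.2]), np.array([0.3, 0.3, 0.4])],
        2: [np.array([0.5, 0.5, 0.0]), np.array([0.1, 0.0, 0.9])],
    }
    R = {
        0: [1.0, 0.0],
        1: [0.5, 2.0],
        2: [0.0, -1.0],
    }
    max_states = {0, 1}   # states where the maximizer chooses the action
    gamma = 0.9           # discount factor (assumed value)

    # Value iteration for the game: the backup is a max over actions at the
    # maximizer's states and a min over actions at the minimizer's states;
    # it is still a sup-norm contraction, so the same fixed point argument applies.
    V = np.zeros(3)
    for _ in range(1000):
        V_new = np.empty(3)
        for s in P:
            backups = [R[s][a] + gamma * (P[s][a] @ V) for a in range(len(P[s]))]
            V_new[s] = max(backups) if s in max_states else min(backups)
        if np.max(np.abs(V_new - V)) < 1e-8:
            break
        V = V_new

    print("Game value per state:", V)

In the general (simultaneous-move) zero-sum case, the max/min in the backup is replaced by the value of a matrix game at each state, but the overall structure of the iteration stays the same.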

Attribution
Source: Link, Question Author: Kiuhnm, Answer Author: Rahul Savani
