# RNN vs Kalman filter: learning the underlying dynamics?

Having recently become interested in Kalman filters and recurrent neural networks, it appears to me that the two are closely related, yet I can't find sufficiently relevant literature:

In a Kalman filter, the set of equations is:
$$x_{k} = Ax_{k-1} + Bu_{k} + w_{k-1}$$
$$z_k = Hx_k + v_k$$

with $$x$$ the state and $$z$$ the measurement.
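As a concrete illustration, the state-space model above can be simulated directly. The matrices and noise scales below are hypothetical choices, not from the question:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 2-D dynamics; A, B, H are illustrative choices.
A = np.array([[0.99, -0.1], [0.1, 0.99]])  # state transition
B = np.array([[0.0], [1.0]])               # control input matrix
H = np.array([[1.0, 0.0]])                 # we only measure the first component

x = np.zeros(2)
states, measurements = [], []
for k in range(100):
    u = np.array([np.sin(0.1 * k)])       # known control input u_k
    w = rng.normal(scale=0.01, size=2)    # process noise w_{k-1}
    v = rng.normal(scale=0.1, size=1)     # measurement noise v_k
    x = A @ x + B @ u + w                 # x_k = A x_{k-1} + B u_k + w_{k-1}
    z = H @ x + v                         # z_k = H x_k + v_k
    states.append(x)
    measurements.append(z)

states = np.array(states)              # true states, shape (100, 2)
measurements = np.array(measurements)  # noisy observations, shape (100, 1)
```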

In an Elman RNN (from here), the relation between the layers is:
$$h_{k} = \sigma_h (Uh_{k-1} + Wx_{k} + b)$$
$$y_k = \sigma_y (Vh_k + c)$$

with $$x$$ the input layer, $$h$$ the hidden layer, $$y$$ the output layer, and $$\sigma_h, \sigma_y$$ the activation functions for the layers.
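A single Elman update with those shapes can be sketched as follows; tanh and the logistic function are example choices for $$\sigma_h$$ and $$\sigma_y$$, not prescribed by the question:

```python
import numpy as np

def elman_step(x_k, h_prev, U, W, b, V, c):
    """One Elman RNN step: returns (h_k, y_k)."""
    h_k = np.tanh(U @ h_prev + W @ x_k + b)       # hidden update, sigma_h = tanh
    y_k = 1.0 / (1.0 + np.exp(-(V @ h_k + c)))    # output, sigma_y = logistic
    return h_k, y_k
```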

It’s clear that the two sets of equations are the same, modulo the activations. The analogy here seems to be the following: the output layer corresponds to the measured state, and the hidden layer is the true state, driven by a process $$x$$, which is the input layer.

• First question: is the analogy viable? And how can we interpret the activations?

• Second question: in a Kalman filter, the $$A$$ matrix describes the underlying dynamics of the state $$x$$. Since training an RNN learns the weight matrices such as $$W$$ and $$U$$, is an RNN able to learn the dynamics of the underlying state? I.e., once my RNN is trained, can I look at the coefficients of my network to infer the dynamics behind my data?

(I’m going to try the experiment on artificially generated data, to see whether this works, and will update as soon as it’s done.)
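A minimal version of that experiment, assuming fully observed states and linear dynamics, so that ordinary least squares stands in for RNN training (a trained RNN's $$U$$ would play the analogous role, but is only identifiable up to a change of basis of the hidden state):

```python
import numpy as np

rng = np.random.default_rng(1)

# Ground-truth dynamics (hypothetical example matrix, stable spectrum).
A_true = np.array([[0.9, 0.2], [-0.1, 0.8]])

# Generate a trajectory x_k = A x_{k-1} + w_{k-1}, no control input.
X = [np.array([1.0, 0.0])]
for _ in range(500):
    X.append(A_true @ X[-1] + rng.normal(scale=0.01, size=2))
X = np.array(X)

# Fit A_hat by least squares on one-step transitions: X[1:] ≈ X[:-1] @ A_hat.T
A_hat = np.linalg.lstsq(X[:-1], X[1:], rcond=None)[0].T

# A_hat should be close to A_true: the dynamics are recovered from data.
print(np.max(np.abs(A_hat - A_true)))
```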

EDIT: I wish I had access to this paper.

## Answer

Yes, indeed they are related, because both are used to predict an output $$y_{n}$$ and a state $$s_{n}$$ at time step $$n$$ based on the current observation $$x_{n}$$ and the previous state $$s_{n-1}$$; i.e., both represent a function $$F$$ such that $$F(x_{n}, s_{n-1}) = (y_{n}, s_{n})$$.
The advantage of the RNN over the Kalman filter is that the RNN architecture can be arbitrarily complex (in number of layers and neurons) and its parameters are learnt, whereas the algorithm of the Kalman filter (including the form of its update equations) is fixed.

Recurrent neural networks are more general than the Kalman filter; one could actually train an RNN to simulate a Kalman filter.
However, neural nets are black-box models, and their weights and activations are very often not interpretable (especially in the deeper layers).

In the end, neural nets are optimized only to make the best predictions, not to have "interpretable" parameters.
Nowadays, if you work on time series, have enough data, and want the best accuracy, an RNN is the preferred approach.

Attribution
Source: Link, Question Author: Naptzer, Answer Author: Ismael EL ATIFI