# Using RNN (LSTM) for predicting the timeseries vectors (Theano)

I have a very simple problem, but I cannot find the right tool to solve it.

I have some sequences of vectors, all of the same length. Now I would like to train an LSTM RNN on a training sample of these sequences and then have it predict a new sequence of vectors of length $n$ based on several priming vectors.
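To make the setup concrete, here is a toy sketch of the data layout I have in mind (all shapes and sizes are hypothetical, just for illustration):

```python
import numpy as np

# toy dataset: 1000 sequences, each a series of 20 vectors of dimension 12
n_sequences, seq_len, vec_dim = 1000, 20, 12
data = np.random.rand(n_sequences, seq_len, vec_dim)

# split each sequence: the first n_prime vectors prime the network,
# which should then predict the remaining n vectors
n_prime = 15
priming = data[:, :n_prime, :]   # shape (1000, 15, 12)
targets = data[:, n_prime:, :]   # shape (1000, 5, 12)
```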

I cannot find a simple implementation that does this. My base language is Python, but anything that doesn't take days to install will do.

I tried to use Lasagne, but its RNN implementation is not ready yet and lives in the separate package nntools. I tried the latter anyway, but I can't figure out how to train it, prime it with some test vectors, and let it predict the new one(s). Blocks has the same problem: no documentation is available for LSTM RNNs, although there seem to be some classes and functions that could work (e.g. blocks.bricks.recurrent).

There are several implementations of LSTM RNNs in Theano, such as GroundHog, theano-rnn, theano_lstm, and code accompanying some papers, but none of them has a tutorial or guide for doing what I want.

The only usable solution I've found is PyBrain. Unfortunately, it lacks the features of Theano (mainly GPU computation) and is orphaned (no new features or support).

Does anyone know where I could find what I'm asking for: an easy-to-work-with LSTM RNN for predicting sequences of vectors?

Edit:

I tried Keras like this:

```python
from keras.models import Sequential
from keras.layers.core import Dense, Dropout, Activation
from keras.layers.embeddings import Embedding
from keras.layers.recurrent import LSTM

model = Sequential()
model.regularizers = []
model.add(LSTM(256, 128, activation='sigmoid',
               inner_activation='hard_sigmoid'))
model.compile(loss='mean_squared_error', optimizer='rmsprop')
```


but I'm getting this error when trying to fit it with `model.fit(X_train, y_train, batch_size=16, nb_epoch=10)`:

```
IndexError: index 800 is out of bounds for axis 1 with size 12
```

while X_train and y_train are arrays of arrays (each of length 12), e.g. `[[i for i in range(12)] for j in range(1000)]`.
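One likely culprit (an assumption on my part, not confirmed by the traceback alone): Keras LSTM layers expect 3-D input of shape `(samples, timesteps, features)`, whereas the `X_train` above is only 2-D. A minimal numpy sketch of the reshape that would be needed before calling `model.fit`:

```python
import numpy as np

# the 2-D toy data from above: 1000 samples, each an array of length 12
X_train = np.array([[i for i in range(12)] for j in range(1000)], dtype='float32')

# An LSTM layer expects 3-D input: (samples, timesteps, features).
# One option is to treat each length-12 array as 12 timesteps of 1 feature:
X_train_3d = X_train.reshape((X_train.shape[0], 12, 1))

print(X_train_3d.shape)  # (1000, 12, 1)
```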

I finally found a way and documented it on my blog here.

There is a comparison of several frameworks and also one implementation in Keras.