We provide our implementation of skeleton-based action recognition using recurrent neural networks (RNNs). The scripts target the NTU RGB+D dataset (https://github.com/shahroudy/NTURGB-D), the largest dataset for this task.
For more details, please refer to our paper: Hongsong Wang and Liang Wang, "Modeling Temporal Dynamics and Spatial Configurations of Actions Using Two-Stream Recurrent Neural Networks", IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017.
Our code is based on Lasagne (http://lasagne.readthedocs.io/en/latest/).
Note that the scripts implement the temporal RNN; the spatial RNN and the two-stream RNN can be implemented in a similar way.
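As a rough illustration (not the exact preprocessing used in our scripts), the main difference between the two streams is which axis of the skeleton tensor the RNN treats as its time axis. The shapes below assume a clip of `T` frames with `J` 3D joints, as in NTU RGB+D (25 joints per body):

```python
import numpy as np

# Hypothetical skeleton clip: T frames, J joints, 3D coordinates.
T, J = 100, 25
clip = np.random.randn(T, J, 3)

# Temporal RNN: one step per frame; the features of a step are the
# concatenated joint coordinates of that frame.
temporal_input = clip.reshape(T, J * 3)  # shape (T, J*3)

# Spatial RNN: one step per joint (in some traversal order); the
# features of a step are that joint's coordinates over all frames.
spatial_input = clip.transpose(1, 0, 2).reshape(J, T * 3)  # shape (J, T*3)
```

A two-stream model would feed each arrangement to its own RNN and fuse the two class-score outputs.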