This repository contains the code and pre-trained models for the paper "From skeleton sequences to enhanced motion maps: a new representation for 3D human action recognition with deep convolutional neural networks" by Huy Hieu Pham, Houssam Salmane, Louahdi Khoudour, Alain Crouzil, Pablo Zegers, and Sergio A. Velastin, submitted to the 16th International Conference on Image Analysis and Recognition, 2019.
Requirements
To train the deep neural networks used in our experiments, download and install Keras with TensorFlow as the backend and place the image data in the corresponding folders. Then navigate to the working folder and train a DenseNet on the GPU with the following command:
python file_name.py
For example:
python DenseNet-MSR-Action3D-AS1.py
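The training scripts above follow the standard Keras workflow of building a DenseNet classifier and fitting it on motion-map images. The sketch below illustrates that workflow with tf.keras; the image size, directory-free placeholder data, and DenseNet-121 variant are assumptions for illustration, not the paper's exact configuration (MSR-Action3D subset AS1 has 8 action classes).

```python
# Minimal sketch of training a DenseNet on motion-map images with tf.keras.
# Image size and the random placeholder data are assumptions; the paper's
# actual scripts, input resolution, and DenseNet depth may differ.
import numpy as np
import tensorflow as tf

NUM_CLASSES = 8          # MSR-Action3D AS1 has 8 action classes
IMG_SIZE = (32, 32)      # placeholder resolution; real motion maps may be larger

# Build a DenseNet-121 classifier trained from scratch (no pretrained weights).
model = tf.keras.applications.DenseNet121(
    weights=None,
    input_shape=IMG_SIZE + (3,),
    classes=NUM_CLASSES,
)
model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])

# Random placeholder arrays standing in for motion-map images and labels.
x = np.random.rand(4, *IMG_SIZE, 3).astype("float32")
y = tf.keras.utils.to_categorical(
    np.random.randint(NUM_CLASSES, size=4), NUM_CLASSES)

model.fit(x, y, epochs=1, batch_size=2, verbose=0)
```

In practice you would replace the random arrays with a data loader (e.g. `tf.keras.utils.image_dataset_from_directory`) pointed at the image folders mentioned above.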
Experimental results
(Results will be added upon publication.)