Continual Spatio-Temporal Graph Convolutional Networks


Official codebase for "Online Skeleton-based Action Recognition with Continual Spatio-Temporal Graph Convolutional Networks", including:

  • Models: CoST-GCN, CoAGCN, CoS-TR, and more ... (see Models section for full overview).

  • Datasets: NTU RGB+D 60, NTU RGB+D 120, and Kinetics Skeleton 400.

Abstract

Graph-based reasoning over skeleton data has emerged as a promising approach for human action recognition. However, the application of prior graph-based methods, which predominantly employ whole temporal sequences as their input, to the setting of online inference entails considerable computational redundancy. In this paper, we tackle this issue by reformulating the Spatio-Temporal Graph Convolutional Neural Network as a Continual Inference Network, which can perform step-by-step predictions in time without repeat frame processing. To evaluate our method, we create a continual version of ST-GCN, CoST-GCN, alongside two derived methods with different self-attention mechanisms, CoAGCN and CoS-TR. We investigate weight transfer strategies and architectural modifications for inference acceleration, and perform experiments on the NTU RGB+D 60, NTU RGB+D 120, and Kinetics Skeleton 400 datasets. Retaining similar predictive accuracy, we observe up to 109x reduction in time complexity, on-hardware accelerations of 26x, and reductions in maximum allocated memory of 52% during online inference.


Fig. 1: Continual Spatio-temporal Graph Convolution Blocks consist of an in-time Graph Convolution followed by an across-time Continual Convolution (here a kernel size of three is depicted). The residual connection is delayed to ensure temporal alignment with the continual temporal convolution that is weight-compatible with non-continual networks.
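The across-time Continual Convolution in Fig. 1 can be illustrated with a minimal step-wise temporal convolution: a ring buffer holds the last `kernel_size` frames, so each incoming frame produces one output without reprocessing the whole sequence. This is a simplified NumPy sketch, not the repository's implementation; the class and variable names are hypothetical.

```python
import numpy as np

class ContinualTemporalConv:
    """Sketch of a step-wise (continual) temporal convolution.

    Buffers the most recent `kernel_size` frames so that each new frame
    yields one output step, avoiding repeat frame processing.
    """

    def __init__(self, weights):
        # weights: (kernel_size, channels) - one weight per temporal tap and channel
        self.weights = np.asarray(weights, dtype=float)
        self.kernel_size = self.weights.shape[0]
        self.buffer = []  # most recent frames, oldest first

    def step(self, frame):
        """Feed one frame; return an output once the receptive field is full."""
        self.buffer.append(np.asarray(frame, dtype=float))
        if len(self.buffer) < self.kernel_size:
            return None  # still filling the temporal receptive field
        out = sum(w * x for w, x in zip(self.weights, self.buffer))
        self.buffer.pop(0)  # slide the temporal window by one step
        return out
```

Note the transient phase: the first `kernel_size - 1` steps produce no output, which is why the residual connection in Fig. 1 must be delayed to stay temporally aligned.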

Fig. 2: Accuracy/complexity trade-off on NTU RGB+D 60 X-Sub for ⬥ Continual and ■ prior methods during online inference. Numbers denote streams for each method. *Architecture modification with stride one and no padding.

Setup

Installation

  • Clone this repository and enter it:
    git clone https://github.com/LukasHedegaard/continual-skeletons.git
    cd continual-skeletons
  • Optionally create and activate conda environment:
    conda create --name continual-skeletons python=3.8
    conda activate continual-skeletons
  • Install as editable module
    pip install -e .[dev]

Repository structure

The repository is structured as follows:

root
|- datasets/     # Dataset loaders
|- models/       # Individual models and shared base-code
    |- ...
    |- st_gcn/       # Baseline model
    |- cost_gcn/     # Continual version of model
    |- st_gcn_mod/   # Modified baseline with stride one and no padding
    |- cost_gcn_mod/ # Continual version of modified baseline model
        |- cost_gcn_mod.py  # Python entry-point
        |- scripts/         # Scripts used to achieve results from paper. Please run from root.
            |- evaluate_ntu60.py
            |- evaluate_ntu120.py
            |- evaluate_kinetics.py
            |- ...
|- tests/     # Unit tests for custom modules
|- weights/   # Place pretrained weights here
|- preds/     # Place extracted predictions here to perform multi-stream eval
|- Makefile   # Commands for testing, linting, cleaning.
|- .env       # Modify path to your dataset here, i.e. DATASETS_PATH=/my/path

Dataset preparation

Coming up.

Models

Individual folders with relevant scripts are available under /models for the following models:

To see an overview of available commands for a model, check the help, e.g.:

python models/cost_gcn/cost_gcn.py --help

The commands used to produce the paper results are found in the associated scripts folder, e.g.:

python models/cost_gcn/scripts/evaluate_ntu60.py

Pretrained weights

Trained model weights are available here.

Experiments and results

To reproduce results:

  • Prepare datasets
    • Download and preprocessing guidelines coming up
    • Add DATASETS_PATH=/your/dataset/path to .env.
  • Download pretrained weights and place them in weights/.
  • Run the evaluation script. For instance, to evaluate the CoST-GCN* model (the modified architecture with stride one and no padding) on NTU RGB+D 120 and save its predictions, the command would be:
    python models/cost_gcn_mod/scripts/evaluate_ntu120.py
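Saved predictions in preds/ can then be combined for multi-stream evaluation (the stream counts annotated in Fig. 2). A common approach, sketched below under the assumption that each stream's class scores are available as an array of shape (samples, classes), is to average the per-stream scores before taking the argmax; the function name and file layout are illustrative, not the repository's API.

```python
import numpy as np

def fuse_streams(stream_scores):
    """Average class-score arrays from several streams and predict labels.

    stream_scores: list of arrays, each of shape (num_samples, num_classes),
    e.g. scores from a joint stream and a bone stream.
    """
    fused = np.mean(np.stack(stream_scores, axis=0), axis=0)
    return fused.argmax(axis=1)

# Toy two-stream example with 2 samples and 2 classes:
joint = np.array([[0.6, 0.4], [0.2, 0.8]])
bone = np.array([[0.4, 0.6], [0.3, 0.7]])
labels = fuse_streams([joint, bone])
```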

Benchmark

NTU RGB+D 60

NTU RGB+D 120

Kinetics Skeleton 400

Citation

@article{hedegaard2021costgcn,
  title={Online Skeleton-based Action Recognition with Continual Spatio-Temporal Graph Convolutional Networks},
  author={Lukas Hedegaard and Negar Heidari and Alexandros Iosifidis},
  journal={preprint, arXiv:2203.11009},
  year={2022}
}