We have implemented a convolutional two-stream network for action recognition, covering two cases:
- Two-stream average fusion at the softmax layer.
- Two-stream fusion at a convolutional layer.
We use the UCF101 dataset for this project. The temporal stream uses optical flow images to capture motion information, while the spatial stream uses RGB frames to capture appearance information. The pre-processed RGB frames and flow images can be downloaded from feichtenhofer/twostreamfusion:
- RGB images
```
wget http://ftp.tugraz.at/pub/feichtenhofer/tsfusion/data/ucf101_jpegs_256.zip.001
wget http://ftp.tugraz.at/pub/feichtenhofer/tsfusion/data/ucf101_jpegs_256.zip.002
wget http://ftp.tugraz.at/pub/feichtenhofer/tsfusion/data/ucf101_jpegs_256.zip.003
cat ucf101_jpegs_256.zip* > ucf101_jpegs_256.zip
unzip ucf101_jpegs_256.zip
```
- Optical Flow
```
wget http://ftp.tugraz.at/pub/feichtenhofer/tsfusion/data/ucf101_tvl1_flow.zip.001
wget http://ftp.tugraz.at/pub/feichtenhofer/tsfusion/data/ucf101_tvl1_flow.zip.002
wget http://ftp.tugraz.at/pub/feichtenhofer/tsfusion/data/ucf101_tvl1_flow.zip.003
cat ucf101_tvl1_flow.zip* > ucf101_tvl1_flow.zip
unzip ucf101_tvl1_flow.zip
```
In both cases we use a stack of 10 RGB frames as input to the spatial stream and a stack of 50 optical flow frames as input to the temporal stream. For a batch size of 4, a typical spatial loader will look like the image below:
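The frame stacking described above can be sketched as follows. This is a minimal illustration with numpy and dummy frames, not the project's actual loader; the function name and shapes are assumptions consistent with stacking along the channel axis.

```python
import numpy as np

def stack_rgb_frames(frames, stack_size=10):
    """Stack `stack_size` RGB frames of shape (H, W, 3) into a single
    (stack_size * 3, H, W) channel-stacked network input."""
    assert len(frames) >= stack_size
    chosen = frames[:stack_size]
    # each frame: (H, W, 3) -> (3, H, W), then concatenate along channels
    return np.concatenate([f.transpose(2, 0, 1) for f in chosen], axis=0)

# a batch of 4 clips, each built from 10 dummy 224x224 RGB frames
batch = np.stack([
    stack_rgb_frames([np.zeros((224, 224, 3)) for _ in range(10)])
    for _ in range(4)
])
print(batch.shape)  # (4, 30, 224, 224)
```

With a batch size of 4, the spatial input tensor therefore has 30 channels (10 frames x 3 color channels) per sample.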
We use a VGG-19 model pre-trained on ImageNet for both streams.
- Note: to transform the first-layer weights of each ConvNet, we first average the pre-trained weights across the three RGB channels and then replicate this average across the number of input channels of that ConvNet.
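The weight transformation in the note above can be sketched as follows. This is a hedged numpy illustration, not the project's code; it assumes the 50 stacked flow frames map to 50 input channels and uses random weights in place of the real ImageNet-pretrained VGG-19 filters.

```python
import numpy as np

def transform_first_conv_weights(rgb_weights, target_in_channels):
    """Adapt pre-trained first-layer weights of shape (out, 3, kH, kW)
    to `target_in_channels` input channels: average over the RGB axis,
    then replicate that mean for every new input channel."""
    mean = rgb_weights.mean(axis=1, keepdims=True)      # (out, 1, kH, kW)
    return np.repeat(mean, target_in_channels, axis=1)  # (out, C, kH, kW)

# VGG-19's first conv layer has 64 filters over 3 input channels;
# stand-in random values here instead of the real pre-trained weights
w_rgb = np.random.randn(64, 3, 3, 3)
w_flow = transform_first_conv_weights(w_rgb, 50)  # 50-channel flow stack
print(w_flow.shape)  # (64, 50, 3, 3)
```

Every input channel of the transformed layer starts from the same RGB-averaged filter, so the pre-trained appearance features transfer to the flow input.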
The architecture for this case is shown in the figure below:
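Average fusion at the softmax layer can be sketched in a few lines. This is a minimal numpy illustration under the assumption that each stream produces class logits; the function names and dummy logits are not from the project.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def average_fusion(spatial_logits, temporal_logits):
    """Average the two streams' softmax scores and predict the argmax class."""
    fused = (softmax(spatial_logits) + softmax(temporal_logits)) / 2
    return fused.argmax(axis=-1)

# dummy logits over 3 classes for a single clip
spatial = np.array([[2.0, 0.5, 0.1]])
temporal = np.array([[0.2, 2.5, 0.1]])
pred = average_fusion(spatial, temporal)
print(pred)  # index of the class with the highest averaged score
```

Because the streams are only combined at the very end, each ConvNet can be trained independently and fused at test time.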
The architecture for this case is shown in the figure below. The ConvNets are replaced by VGG models pre-trained on ImageNet.
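One common way to fuse two streams at a convolutional layer is to concatenate their feature maps along the channel axis and apply a 1x1 convolution, which lets the network learn correspondences between appearance and motion channels. The sketch below illustrates this with numpy; the shapes, weights, and function name are illustrative assumptions, not the project's implementation.

```python
import numpy as np

def conv_fusion(spatial_feat, temporal_feat, w, b):
    """Fuse two conv feature maps of shape (N, C, H, W): concatenate
    along channels, then apply a 1x1 convolution with weights
    w of shape (C_out, 2C, 1, 1) and bias b of shape (C_out,)."""
    x = np.concatenate([spatial_feat, temporal_feat], axis=1)  # (N, 2C, H, W)
    # a 1x1 convolution is a per-pixel linear map over the channel axis
    out = np.einsum('nchw,oc->nohw', x, w[:, :, 0, 0])
    return out + b[None, :, None, None]

# dummy VGG-like feature maps: 512 channels at 7x7 resolution
spatial_feat = np.random.randn(2, 512, 7, 7)
temporal_feat = np.random.randn(2, 512, 7, 7)
w = np.random.randn(512, 1024, 1, 1) * 0.01
b = np.zeros(512)
fused = conv_fusion(spatial_feat, temporal_feat, w, b)
print(fused.shape)  # (2, 512, 7, 7)
```

Unlike softmax-level averaging, fusing at a convolutional layer lets the rest of the network be trained jointly on the combined feature map.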
- Please modify the RGB and flow dataset paths to fit the location of the UCF101 dataset on your device.
- If you want to change the number of frames in the RGB stack, modify the frame-selection code and select the frames you want in the stack. You can also introduce randomness into how the frames are chosen for stacking.
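Frame selection with optional randomness, as described above, might be sketched like this. The function name and the evenly-spaced default are assumptions for illustration, not the project's actual sampling scheme.

```python
import random

def choose_frame_indices(num_frames, stack_size=10, randomize=False):
    """Pick `stack_size` frame indices from a clip of `num_frames` frames.
    Deterministic mode: evenly spaced indices across the clip.
    Random mode: a sorted sample without replacement."""
    if randomize:
        return sorted(random.sample(range(num_frames), stack_size))
    step = num_frames / stack_size
    return [int(i * step) for i in range(stack_size)]

# deterministic: evenly spaced indices from a 100-frame clip
print(choose_frame_indices(100))            # [0, 10, 20, ..., 90]
# random: 10 distinct indices in temporal order
print(choose_frame_indices(100, randomize=True))
```

Keeping the sampled indices sorted preserves temporal order inside the stack, which matters when frames are concatenated along the channel axis.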
- For the first 20 classes of the UCF101 dataset:

| Network | Acc. |
|---|---|
| Spatial CNN | 91.96% |
| Motion CNN | 97.30% |
| Average fusion | 99.01% |
- For all 101 classes of the UCF101 dataset:

| Network | Acc. |
|---|---|
| Spatial CNN | 48.64% |
| Motion CNN | 51.17% |
| Average fusion | 62.13% |
- For the first 20 classes of the UCF101 dataset, we get an accuracy of 96.01%.
- For all 101 classes of the UCF101 dataset, we get an accuracy of 68.23%.