A set of projects that illustrate different approaches to Optical Flow.
Optical Flow explores three techniques to tackle the tracking problem:
- Feature_Tracking.ipynb illustrates how to detect and track features across consecutive images.
- Sparse_Optical_Flow.ipynb illustrates how to use sparse optical flow on images and videos.
- Dense_Optical_Flow.ipynb illustrates how to use dense optical flow on images and videos. A minimal OpenCV sketch of the sparse and dense approaches follows this list.
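The sparse and dense approaches can be sketched with OpenCV in a few lines. The snippet below is only a minimal illustration, not the notebooks' exact code; the frame file names and parameter values are placeholders.

```python
import cv2
import numpy as np

# Two consecutive grayscale frames (placeholder file names).
prev = cv2.imread("frame1.png", cv2.IMREAD_GRAYSCALE)
curr = cv2.imread("frame2.png", cv2.IMREAD_GRAYSCALE)

# Sparse optical flow: track Shi-Tomasi corners with pyramidal Lucas-Kanade.
corners = cv2.goodFeaturesToTrack(prev, maxCorners=200, qualityLevel=0.01, minDistance=7)
next_pts, status, err = cv2.calcOpticalFlowPyrLK(prev, curr, corners, None,
                                                 winSize=(21, 21), maxLevel=3)
tracked = next_pts[status.ravel() == 1]          # points successfully tracked

# Dense optical flow: Farneback estimates a flow vector for every pixel.
flow = cv2.calcOpticalFlowFarneback(prev, curr, None,
                                    pyr_scale=0.5, levels=3, winsize=15,
                                    iterations=3, poly_n=5, poly_sigma=1.2, flags=0)

# Visualize the dense flow as an HSV image (hue = direction, value = magnitude).
mag, ang = cv2.cartToPolar(flow[..., 0], flow[..., 1])
hsv = np.zeros((*prev.shape, 3), dtype=np.uint8)
hsv[..., 0] = ang * 180 / np.pi / 2
hsv[..., 1] = 255
hsv[..., 2] = cv2.normalize(mag, None, 0, 255, cv2.NORM_MINMAX)
cv2.imwrite("flow_vis.png", cv2.cvtColor(hsv, cv2.COLOR_HSV2BGR))
```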
FlowNet illustrates Deep Learning for Optical Flow by implementing the FlowNet algorithm in PyTorch and training the models on the KITTI dataset. The goal is to output the optical flow between two input images.
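As a rough idea of what the FlowNetSimple variant does, the toy sketch below stacks the two images channel-wise, contracts them with strided convolutions, and expands back to a two-channel (u, v) flow map. It is heavily simplified and is not the repository's implementation or the full FlowNet architecture (which adds skip connections and multi-scale loss terms).

```python
import torch
import torch.nn as nn

class TinyFlowNet(nn.Module):
    """Heavily simplified FlowNetS-style network: stack the two images,
    contract with strided convolutions, then expand to a 2-channel flow map."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(6, 64, 7, stride=2, padding=3), nn.ReLU(inplace=True),
            nn.Conv2d(64, 128, 5, stride=2, padding=2), nn.ReLU(inplace=True),
            nn.Conv2d(128, 256, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(256, 128, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(64, 2, 4, stride=2, padding=1),   # 2 channels: (u, v) flow
        )

    def forward(self, img1, img2):
        x = torch.cat([img1, img2], dim=1)   # FlowNetSimple: concatenate along channels
        return self.decoder(self.encoder(x))

# Example: two 3x256x256 images produce a 2x256x256 flow field.
flow = TinyFlowNet()(torch.randn(1, 3, 256, 256), torch.randn(1, 3, 256, 256))
print(flow.shape)  # torch.Size([1, 2, 256, 256])
```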
RAFT explores the RAFT deep network architecture for optical flow.
Here is the same skateboarder video used above to illustrate sparse and dense optical flow, this time processed with RAFT:
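For experimenting with RAFT outside the original authors' code, torchvision ships a pretrained implementation (the weights enum below assumes torchvision >= 0.13). The sketch uses random tensors as stand-ins for two video frames; image height and width must be divisible by 8.

```python
import torch
from torchvision.models.optical_flow import raft_large, Raft_Large_Weights
from torchvision.utils import flow_to_image

# Pretrained RAFT from torchvision.
weights = Raft_Large_Weights.DEFAULT
model = raft_large(weights=weights).eval()
preprocess = weights.transforms()

# img1, img2: float tensors of shape (N, 3, H, W) in [0, 1]; random placeholders here.
img1 = torch.rand(1, 3, 360, 640)
img2 = torch.rand(1, 3, 360, 640)
img1, img2 = preprocess(img1, img2)

with torch.no_grad():
    # RAFT is iterative; the last element of the returned list is the final flow estimate.
    flow = model(img1, img2)[-1]            # (N, 2, H, W)

flow_rgb = flow_to_image(flow)              # color-coded flow for visualization
```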
Visual SLAM shows an example of Visual SLAM (Simultaneous Localization and Mapping) using visual features.
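The front end of a feature-based visual SLAM system boils down to estimating the relative camera pose between frames from matched features. The following is a minimal two-view sketch with OpenCV (ORB features, essential matrix, RANSAC); it covers only the pose-estimation step, not a full SLAM pipeline with mapping or loop closure, and the intrinsic matrix K is a placeholder.

```python
import cv2
import numpy as np

def relative_pose(img1, img2, K):
    """Estimate the relative camera rotation R and (unit-scale) translation t
    between two grayscale frames using ORB features and the essential matrix."""
    orb = cv2.ORB_create(2000)
    kp1, des1 = orb.detectAndCompute(img1, None)
    kp2, des2 = orb.detectAndCompute(img2, None)

    # Brute-force Hamming matching for binary ORB descriptors.
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)
    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

    # Essential matrix with RANSAC, then decompose into R, t.
    E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
    return R, t

# Placeholder intrinsics (focal length and principal point in pixels).
K = np.array([[718.856, 0.0, 607.19], [0.0, 718.856, 185.22], [0.0, 0.0, 1.0]])
```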
The KITTI Vision Benchmark Suite
The stereo 2015 / flow 2015 / scene flow 2015 benchmark consists of 200 training scenes and 200 test scenes (4 color images per scene, saved in lossless png format).
Geiger, A., Lenz, P. and Urtasun, R., 2015. Optical Flow Evaluation 2015. The KITTI Vision Benchmark Suite: a project of Karlsruhe Institute of Technology and Toyota Technological Institute at Chicago.
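The KITTI flow ground truth is stored as 16-bit, 3-channel PNGs in which the first two channels encode u and v as (uint16 - 2^15) / 64 and the third channel marks valid pixels. A minimal reader along those lines (the file path is a placeholder) is sketched below.

```python
import cv2
import numpy as np

def read_kitti_flow(path):
    """Read a KITTI flow ground-truth PNG into (flow, valid).

    KITTI stores flow as 16-bit RGB: u and v are (uint16 - 2**15) / 64.0
    in the first two channels, and the third channel is a validity mask.
    """
    # IMREAD_UNCHANGED keeps the 16-bit depth; OpenCV loads channels as BGR.
    img = cv2.imread(path, cv2.IMREAD_UNCHANGED).astype(np.float32)
    b, g, r = img[..., 0], img[..., 1], img[..., 2]   # BGR -> (valid, v, u)
    flow = np.stack([(r - 2**15) / 64.0, (g - 2**15) / 64.0], axis=-1)
    valid = b > 0
    return flow, valid

# Example (placeholder path into the KITTI 2015 training set):
# flow, valid = read_kitti_flow("training/flow_occ/000000_10.png")
```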
HD1K Benchmark Suite
The HD1K Benchmark Suite is an autonomous driving dataset and benchmark for optical flow. The public training dataset contains:
- More than 1000 frames at 2560x1080 with diverse lighting and weather scenarios
- Reference data with error bars for optical flow
- Evaluation masks for dynamic objects
- Specific robustness evaluation on challenging scenes
The "Flying Chairs" Dataset
The "Flying Chairs" are a synthetic dataset with optical flow ground truth. It consists of 22872 image pairs and corresponding flow fields. Images show renderings of 3D chair models moving in front of random backgrounds from Flickr. Motions of both the chairs and the background are purely planar.
Scene Flow Datasets: FlyingThings3D, Driving, Monkaa
The Scene Flow Datasets collection contains more than 39000 stereo frames at 960x540 pixel resolution, rendered from various synthetic sequences; the datasets are described by Mayer et al. (2016).
MPI Sintel Flow Dataset
The MPI Sintel Flow Dataset is a dataset for the evaluation of optical flow, derived from the open source 3D animated short film Sintel.
Papers
- Dosovitskiy, A., Fischer, P., Ilg, E., Hausser, P., Hazirbas, C., Golkov, V., Van Der Smagt, P., Cremers, D. and Brox, T., 2015. FlowNet: Learning optical flow with convolutional networks. In Proceedings of the IEEE international conference on computer vision (pp. 2758-2766).
- Teed, Z. and Deng, J., 2020, August. RAFT: Recurrent all-pairs field transforms for optical flow. In European conference on computer vision (pp. 402-419). Springer, Cham.
- Okafuji, Y. and Fukao, T., 2021. Theoretical interpretation of drivers’ gaze strategy influenced by optical flow. Scientific Reports, 11(1), 2389, pp. 1-14. https://doi.org/10.1038/s41598-021-82062-1
- Ullah, A., Ahmad, J., Muhammad, K., Sajjad, M. and Baik, S.W., 2017. Action recognition in video sequences using deep bi-directional LSTM with CNN features. IEEE Access, 6, pp. 1155-1166.
- Zhu, Y., Lan, Z., Newsam, S. and Hauptmann, A., 2018, December. Hidden two-stream convolutional networks for action recognition. In Asian conference on computer vision (pp. 363-378). Springer, Cham.
- Wikipedia. Taylor Series.
- Wikipedia. Lucas–Kanade method.
- Wikipedia. Video detection and ranging (VIDAR).
- Chuan-en Lin, 2019. Introduction to Motion Estimation with Optical Flow.
- Ilg, E., Mayer, N., Saikia, T., Keuper, M., Dosovitskiy, A. and Brox, T., 2017. FlowNet 2.0: Evolution of optical flow estimation with deep networks. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 2462-2470).
- Ziyun Li, 2017. A Brief Review of FlowNet. Towards Data Science.
- Ranjan, A. and Black, M.J., 2017. Optical flow estimation using a spatial pyramid network. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 4161-4170).
- Mayer, N., Ilg, E., Hausser, P., Fischer, P., Cremers, D., Dosovitskiy, A. and Brox, T., 2016. A large dataset to train convolutional networks for disparity, optical flow, and scene flow estimation. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 4040-4048).
Courses
- Fei‐Fei Li, 2011. Tracking motion features – optical flow. Stanford Vision Lab.
- Derek Hoiem. Feature Tracking and Optical Flow. Computer Vision CS 543 / ECE 549, University of Illinois.
Software Implementations
- Ruoteng Li 李若腾. Optical Flow Toolkit.
- Clement Pinard. FlowNetPytorch. FlowNet implementation in PyTorch.
- Jeremy Cohen. Master Optical Flow. Think Autonomous course on Optical Flow.
- Zachary Teed and Jia Deng. Source code from the paper RAFT: Recurrent All-Pairs Field Transforms for Optical Flow.
- Chuan-en Lin. Source code for the article 'Introduction to Motion Estimation with Optical Flow'.