A deep learning library for video understanding research.
Check the website for more information.
PyTorchVideo is a deep learning library with a focus on video understanding work. PyTorchVideo provides reusable, modular, and efficient components needed to accelerate video understanding research. PyTorchVideo is built on PyTorch and supports different deep learning video components such as video models, video datasets, and video-specific transforms.
Key features include:
- Based on PyTorch: Built using PyTorch, which makes it easy to use all of the PyTorch-ecosystem components.
- Reproducible Model Zoo: A variety of state-of-the-art pretrained video models and their associated benchmarks, ready to use. Complementing the model zoo, PyTorchVideo comes with extensive data loaders supporting different datasets.
- Efficient Video Components: Video-focused components that are fast, efficient, and easy to use, with support for accelerated inference on hardware.
Install PyTorchVideo inside a conda environment (Python >= 3.7) with
pip install pytorchvideo
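Once installed, a pretrained model from the model zoo can be loaded through `torch.hub`. A minimal sketch is shown below; it assumes `torch` is installed and that network access is available on the first call (weights are cached afterwards). The `slowfast_r50` entry point is one of the models published in the PyTorchVideo model zoo.

```python
import torch

# Load a pretrained SlowFast model from the PyTorchVideo model zoo via torch.hub.
# The first call downloads the repository and weights; later calls use the cache.
model = torch.hub.load(
    "facebookresearch/pytorchvideo", "slowfast_r50", pretrained=True
)

# Switch to inference mode before running predictions.
model = model.eval()
```

From here the model can be applied to clips prepared with PyTorchVideo's data loaders and transforms.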
For detailed instructions please refer to INSTALL.md.
PyTorchVideo is released under the Apache 2.0 License.
Get started with PyTorchVideo by trying out one of our tutorials or by running the examples in the tutorials folder.
We provide a large set of baseline results and trained models available for download in the PyTorchVideo Model Zoo.
Here is the growing list of PyTorchVideo contributors in alphabetical order (let us know if you would like to be added): Aaron Adcock, Amy Bearman, Bernard Nguyen, Bo Xiong, Chengyuan Yan, Christoph Feichtenhofer, Dave Schnizlein, Haoqi Fan, Heng Wang, Jackson Hamburger, Jitendra Malik, Kalyan Vasudev Alwala, Matt Feiszli, Nikhila Ravi, Ross Girshick, Tullie Murrell, Wan-Yen Lo, Weiyao Wang, Yanghao Li, Yilei Li, Zhengxing Chen, Zhicheng Yan.
We welcome new contributions to PyTorchVideo and we will be actively maintaining this library! Please refer to CONTRIBUTING.md
for full instructions on how to run the code, tests and linter, and submit your pull requests.