A model compression and acceleration toolbox based on PyTorch.

Introduction

Sparsebit is a toolkit with pruning and quantization capabilities. It is designed to help researchers compress and accelerate neural network models by modifying only a few lines of code in an existing PyTorch project.

Quantization

Quantization converts full-precision parameters into low-bit representations, which compresses and accelerates the model without changing its structure. This toolkit supports the two common quantization paradigms, Post-Training Quantization (PTQ) and Quantization-Aware Training (QAT), with the following features:

  • Built on torch.fx, Sparsebit operates on a QuantModel, in which each operation becomes a QuantModule.
  • Sparsebit can easily be extended by users to accommodate their own research. Users can register their own implementations of key objects such as QuantModule, Quantizer and Observer.
  • Exporting QDQ-ONNX is supported; the exported model can be loaded and deployed by backends such as TensorRT and ONNX Runtime.
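The two ideas above can be sketched in a few lines of plain PyTorch. This is a hypothetical illustration, not Sparsebit's API: `TinyNet` and `fake_quantize` are invented names, torch.fx tracing exposes each operation as a graph node (the mechanism behind QuantModel), and symmetric uniform quantization is the basic operation behind turning full-precision weights into low-bit ones.

```python
import torch
import torch.fx

class TinyNet(torch.nn.Module):
    # hypothetical example network, not part of Sparsebit
    def __init__(self):
        super().__init__()
        self.conv = torch.nn.Conv2d(3, 8, kernel_size=3)
        self.relu = torch.nn.ReLU()

    def forward(self, x):
        return self.relu(self.conv(x))

# torch.fx traces the model into a graph; each module call becomes a node,
# which is what allows a toolkit to swap ops for quantized counterparts
graph_module = torch.fx.symbolic_trace(TinyNet())
module_nodes = [n.target for n in graph_module.graph.nodes if n.op == "call_module"]
print(module_nodes)  # ['conv', 'relu']

def fake_quantize(w: torch.Tensor, num_bits: int = 8) -> torch.Tensor:
    # symmetric uniform quantization: round onto a low-bit grid, then dequantize
    qmax = 2 ** (num_bits - 1) - 1            # 127 for 8 bits
    scale = w.abs().max() / qmax
    return torch.clamp(torch.round(w / scale), -qmax - 1, qmax) * scale

w = graph_module.conv.weight.data
w_q = fake_quantize(w)
```

A PTQ flow calibrates `scale` from observed activation statistics instead of taking it directly from the tensor's max, while QAT simulates this rounding during training so the network can adapt to it.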

Pruning

About to be released.

Resources

Documentations

Detailed usage and development guidance can be found in the documentation: docs

CV-Master

  • We maintain a public course on quantization at Bilibili, introducing the basics of quantization and our latest work. Interested users can join the course: video
  • To help users better understand and apply model-compression techniques, we designed homework assignments based on Sparsebit. Interested users can complete them on their own: quantization_homework

Join Us

  • You are welcome to join our team (or become an intern) if you are interested in Quantization, Pruning, Distillation, Self-Supervised Learning or Model Deployment.
  • Submit your resume to: sunpeiqin@megvii.com

Acknowledgement

Sparsebit was inspired by several excellent open-source projects, and we are grateful to them.

License

Sparsebit is released under the Apache 2.0 license.
