This repository contains the code for reproducing the results of the paper *Paying more attention to Snapshot of Iterative Pruning: Improving Model Compression via Ensemble Distillation* (BMVC 2020).
In short, the paper proposes to leverage the snapshots of iterative pruning to construct ensembles and distill knowledge from them. To stimulate diversity between snapshots, we retrain the pruned networks with a One-cycle schedule, so each snapshot is encouraged to converge to a different optimal solution.
The algorithm is summarized below:
1. Train the baseline network to completion.
2. Prune redundant weights (based on some criterion).
3. Retrain with a One-cycle learning-rate schedule.
4. Repeat steps 2 and 3 until the desired compression ratio is reached.
5. Distill knowledge from the ensemble to the desired network.
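The following is a minimal PyTorch sketch of steps 2–4, assuming a trained `model`, a `train_loader`, and a round count `num_rounds` chosen to reach the target compression ratio; it uses simple magnitude pruning via `torch.nn.utils.prune` and `OneCycleLR` for retraining, as an illustration rather than the repository's exact scripts.

```python
import copy
import torch
import torch.nn.functional as F
import torch.nn.utils.prune as prune
from torch.optim.lr_scheduler import OneCycleLR

def prune_smallest_weights(model, amount=0.2):
    # Magnitude criterion: mask the smallest 20% of the remaining weights in
    # each conv/linear layer; the mask stays applied during retraining.
    for module in model.modules():
        if isinstance(module, (torch.nn.Conv2d, torch.nn.Linear)):
            prune.l1_unstructured(module, name="weight", amount=amount)

def retrain_one_cycle(model, train_loader, epochs=60, max_lr=0.1, device="cuda"):
    # Retrain the pruned network with a One-cycle learning-rate schedule,
    # which pushes each snapshot toward a different optimum.
    optimizer = torch.optim.SGD(model.parameters(), lr=max_lr,
                                momentum=0.9, weight_decay=5e-4)
    scheduler = OneCycleLR(optimizer, max_lr=max_lr, epochs=epochs,
                           steps_per_epoch=len(train_loader))
    model.train()
    for _ in range(epochs):
        for images, targets in train_loader:
            images, targets = images.to(device), targets.to(device)
            optimizer.zero_grad()
            loss = F.cross_entropy(model(images), targets)
            loss.backward()
            optimizer.step()
            scheduler.step()

# One snapshot per prune/retrain round; together the snapshots form the ensemble teacher.
snapshots = []
for _ in range(num_rounds):  # num_rounds is hypothetical, set by the target compression ratio
    prune_smallest_weights(model)
    retrain_one_cycle(model, train_loader)
    snapshots.append(copy.deepcopy(model).eval())
```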
Please check out `example.ipynb` for detailed instructions on reproducing the results on CIFAR. Instructions for running experiments on Tiny-ImageNet may be added later.
We also provide scripts for repeated pruning and knowledge distillation (see Sec. 5 in the Colab example). Disclaimer: you might have to set the `checkpoint_paths` variable in `ensemble_finetune.py` to the appropriate paths (that is, in `cifar/filter_pruning/ensemble_finetune.py`, `cifar/weight_pruning/ensemble_finetune.py`, ..., depending on your chosen method/dataset).
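As a rough illustration of what `checkpoint_paths` and the distillation step involve (the file names below are hypothetical placeholders, and the exact temperature and loss weighting in `ensemble_finetune.py` may differ), one can average the snapshots' soft predictions and apply a standard Hinton-style distillation loss:

```python
import torch
import torch.nn.functional as F

# Hypothetical snapshot checkpoints saved after each prune/retrain round;
# point checkpoint_paths at your own files before running ensemble_finetune.py.
checkpoint_paths = [
    "checkpoints/snapshot_round_1.pth",
    "checkpoints/snapshot_round_2.pth",
    "checkpoints/snapshot_round_3.pth",
]

def distillation_loss(student_logits, teacher_logits_list, targets, T=4.0, alpha=0.9):
    # The ensemble's averaged soft targets act as the teacher; the student is
    # trained on a mix of the soft (KL) and hard (cross-entropy) losses.
    teacher_probs = torch.stack(
        [F.softmax(logits / T, dim=1) for logits in teacher_logits_list]
    ).mean(dim=0)
    soft = F.kl_div(F.log_softmax(student_logits / T, dim=1), teacher_probs,
                    reduction="batchmean") * (T * T)
    hard = F.cross_entropy(student_logits, targets)
    return alpha * soft + (1.0 - alpha) * hard
```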
PFEC and MWP stand for *Pruning Filters for Efficient ConvNets* and *Learning both Weights and Connections for Efficient Neural Networks*, respectively.
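For context, PFEC scores whole convolutional filters by the L1 norm of their weights and removes the lowest-scoring ones, whereas MWP prunes individual weights by magnitude. A minimal sketch of the PFEC ranking criterion, assuming a standard `torch.nn.Conv2d`, might look like this:

```python
import torch

def rank_filters_by_l1(conv: torch.nn.Conv2d):
    # PFEC-style criterion: score each output filter by the L1 norm of its
    # kernel weights; filters with the smallest norms are pruned first.
    scores = conv.weight.data.abs().sum(dim=(1, 2, 3))  # one score per output channel
    return torch.argsort(scores)  # ascending: pruning candidates come first
```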
The code is mostly adapted from Eric-mingjie's repository.