PyTorch implementation of Learning by Aligning (ICCV 2021)

This is an official PyTorch implementation of the paper "Learning by Aligning: Visible-Infrared Person Re-identification using Cross-Modal Correspondences", ICCV 2021.

For more details, visit our project site or see our paper.

Requirements

  • Python 3.8
  • PyTorch 1.7.1
  • GPU memory >= 11GB
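
One possible way to set up an environment that satisfies the requirements above (the use of conda and the torchvision version are assumptions, not specified by the repository):

conda create -n lba python=3.8 -y
conda activate lba
pip install torch==1.7.1 torchvision==0.8.2

Pick the PyTorch build (CUDA version) that matches your GPU driver.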

Getting started

First, clone our git repository.

git clone https://github.com/cvlab-yonsei/LbA.git
cd LbA

Docker

We provide a Dockerfile to help reproduce our results easily.
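
A minimal sketch of building and running the image (the image name lba, the mount points, and the use of bash are illustrative assumptions, not fixed by the repository):

docker build -t lba .
docker run --gpus all -it --shm-size=8g \
    -v /path/to/SYSU-MM01:/data/SYSU-MM01 \
    -v "$(pwd)":/workspace \
    lba /bin/bash

The --gpus all flag requires the NVIDIA Container Toolkit on the host; mount the dataset at whatever path you set in train.py and test.py.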

Prepare datasets

  • SYSU-MM01: download from this link.
    • For SYSU-MM01, you need to preprocess the .jpg files into .npy files by running:
      • python utils/pre_preprocess_sysu.py --data_dir /path/to/SYSU-MM01
    • Modify the dataset directory in the lines below accordingly (a concrete walk-through follows this list).
      • L63 of train.py
      • L54 of test.py
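
As a concrete walk-through of the steps above, assuming the dataset archive was extracted to /path/to/SYSU-MM01 (the path is illustrative):

python utils/pre_preprocess_sysu.py --data_dir /path/to/SYSU-MM01

Afterwards, set the dataset directory in L63 of train.py and L54 of test.py to the same path before training or testing.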

Train

  • Run python train.py --method full

  • Important:

    • The performance reported during training does not reflect the exact performance of your model, due to 1) the evaluation protocols of the datasets and 2) random seed configurations.
    • Make sure you separately run test.py to obtain the correct results to report in your paper (see the example below).
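
For example, the full workflow of training and then evaluating the model looks like this (the commands are the ones given in this section and in Test below):

python train.py --method full
python test.py --method full

The results table below also lists a baseline variant; whether it is selected with --method baseline is an assumption, so check the argument parser in train.py to confirm.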

Test

  • Run python test.py --method full
  • The results should be around:
dataset     method    mAP    rank-1
SYSU-MM01   baseline  49.54  50.43
SYSU-MM01   full      54.14  55.41

Pretrained weights

  • Download [SYSU-MM01]
  • The results should be:
dataset     method  mAP    rank-1
SYSU-MM01   full    55.22  56.31

Bibtex

@inproceedings{park2021learning,
  title={Learning by Aligning: Visible-Infrared Person Re-identification using Cross-Modal Correspondences},
  author={Park, Hyunjong and Lee, Sanghoon and Lee, Junghyup and Ham, Bumsub},
  booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision},
  pages={12046--12055},
  year={2021}
}

Credits

Our implementation is based on Mang Ye's code here.
