This is the official repository for the paper Closed-form Sample Probing for Learning Generative Models in Zero-shot Learning, published at ICLR 2022.
Figure: Illustration of the proposed framework for the end-to-end sample probing of conditional generative models. At each training iteration, we take synthetic training examples for some subset of seen classes (probe-train classes) from the conditional generative models, train a closed-form solvable zero-shot learning model (sample probing ZSL model) over them and evaluate it on the real examples of a different subset of seen classes (probe-validation classes). The resulting cross-entropy loss of the probing model is used as a loss term for the generative model update.
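The probing step described above can be sketched as follows. This is a minimal, hedged illustration using NumPy, not the repository's implementation: all dimensions, the random stand-ins for generator samples and real features, and the choice of ridge regression onto class attributes as the closed-form probing model are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (all hypothetical): feature dim d, attribute dim a,
# samples per class, and counts of probe-train / probe-validation classes.
d, a, n_per_cls = 16, 8, 20
n_probe_train, n_probe_val = 5, 3

# Class attribute vectors for the two disjoint subsets of seen classes.
A_train = rng.normal(size=(n_probe_train, a))
A_val = rng.normal(size=(n_probe_val, a))

# Synthetic features for probe-train classes (random noise here stands in
# for the conditional generative model's samples).
y_syn = np.repeat(np.arange(n_probe_train), n_per_cls)
X_syn = rng.normal(size=(len(y_syn), d))

# Closed-form solvable probing ZSL model: ridge regression from features
# to class attributes, W = (X^T X + lam*I)^{-1} X^T A[y].
lam = 1e-2
T = A_train[y_syn]
W = np.linalg.solve(X_syn.T @ X_syn + lam * np.eye(d), X_syn.T @ T)

# Evaluate on real examples of the probe-validation classes: score each
# sample against the validation-class attributes and take the softmax
# cross-entropy. In the full framework this loss is differentiated through
# the closed-form solution and used to update the generator.
y_val = np.repeat(np.arange(n_probe_val), n_per_cls)
X_val = rng.normal(size=(len(y_val), d))
logits = (X_val @ W) @ A_val.T
logits -= logits.max(axis=1, keepdims=True)
log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
probe_loss = -log_probs[np.arange(len(y_val)), y_val].mean()
print(f"probe loss: {probe_loss:.3f}")
```

Because the probing model has a closed-form solution, the whole probe-train/probe-validation evaluation stays differentiable end-to-end when implemented in an autograd framework.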
Proposed data splits for all datasets (together with the finetuned features) can be found here. Please download the data folder and place it so that the resulting directory structure looks like this:
.
├── data # Proposed data | ### Downloaded and placed here ###
├── images # Images used in README
├── src # Source files
├── LICENSE
└── README.md
P.S. If the data cannot be reached using the link above, the exact same proposed splits can be found here (not including FLO or any finetuned features) and here (including all).
For installation details, please see the tfvaegan repository, since the provided scripts (under the scripts folder) train TF-VAEGAN models with and without sample probing.
Training logs for the validation and test phases of the results presented in the paper can be found under the logs folder.
Table: Generalized zero-shot learning scores of sample probing with alternative closed-form models, based on TF-VAEGAN baseline.
If you find this code useful in your research, please consider citing as follows:
@inproceedings{
cetin2022closedform,
title={Closed-form Sample Probing for Learning Generative Models in Zero-shot Learning},
author={Samet Cetin and Orhun Bu{\u{g}}ra Baran and Ramazan Gokberk Cinbis},
booktitle={International Conference on Learning Representations},
year={2022},
url={https://openreview.net/forum?id=ljxWpdBl4V}
}
The parts of the code related to generative model (TF-VAEGAN) training are taken/adapted from the tfvaegan repository.
This work was supported in part by TUBITAK Grant 119E597. The numerical calculations reported in this paper were partially performed at TUBITAK ULAKBIM, High Performance and Grid Computing Center (TRUBA resources).