One-Shot Indoor Sub-scene Classification
========================================

This project develops one-shot learning methods for indoor sub-scene classification, together with several network visualization techniques. The aim is to recognise which floor a robot is currently on.
Methods implemented:

- Siamese network: uses contrastive loss.
~~~
python trainSiamese.py
~~~
![Siamese net](siamese1.png)
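As a reference for what the contrastive loss computes, here is a minimal NumPy sketch (the margin value and batch layout are assumptions, not taken from `trainSiamese.py`):

```python
import numpy as np

def contrastive_loss(d, y, margin=1.0):
    """Contrastive loss over a batch of embedding pairs.

    d: Euclidean distances between pair embeddings, shape (N,)
    y: pair labels, 1 = similar, 0 = dissimilar, shape (N,)
    """
    # Similar pairs are pulled together (penalize distance);
    # dissimilar pairs are pushed apart up to `margin`.
    loss = y * d**2 + (1 - y) * np.maximum(margin - d, 0.0)**2
    return 0.5 * loss.mean()

# One similar pair at distance 0.2, one dissimilar pair at 0.3:
print(contrastive_loss(np.array([0.2, 0.3]), np.array([1, 0])))  # ≈ 0.1325
```

Note that a dissimilar pair already farther apart than the margin contributes zero loss, which is what stops the network from pushing easy negatives apart indefinitely.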
- Modified Siamese network: uses an identification loss in addition to the contrastive loss.
~~~
python trainModifiedSiamese.py
~~~
![Modified Siamese net (training net)](modified_siamese1.png)
At test time, a single branch with a softmax final layer is used with the trained weights.
See the comments in the code for changing the fc8 layer size and for training, testing, and visualization.
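A rough sketch of the modified objective (contrastive term on pair distances plus a softmax identification term on each branch's logits) is below; the weighting factor `lam` and the batch layout are assumptions for illustration, not values from `trainModifiedSiamese.py`:

```python
import numpy as np

def softmax_cross_entropy(logits, label):
    # Numerically stable softmax cross-entropy for a single example.
    z = logits - logits.max()
    log_probs = z - np.log(np.exp(z).sum())
    return -log_probs[label]

def combined_loss(d, y, logits_a, logits_b, labels_a, labels_b,
                  margin=1.0, lam=1.0):
    """Contrastive loss on pair distances plus an identification
    (softmax) loss on both branches; `lam` weights the two terms
    (assumed hyperparameter)."""
    contrastive = 0.5 * np.mean(
        y * d**2 + (1 - y) * np.maximum(margin - d, 0.0)**2)
    ident = np.mean(
        [softmax_cross_entropy(la, ya) for la, ya in zip(logits_a, labels_a)]
        + [softmax_cross_entropy(lb, yb) for lb, yb in zip(logits_b, labels_b)])
    return contrastive + lam * ident
```

The identification term is what makes the test-time setup possible: a single trained branch plus its softmax layer is already a classifier, so no pair is needed at inference.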
The aim is to visualize which parts of the image are important for the classification.
Methods considered:
- Occlusion heat map (Siamese and modified Siamese nets)
- Class saliency map (modified Siamese net)
- Excitation backprop (modified Siamese net)
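Of these, the occlusion heat map is the simplest to sketch: slide a masking patch over the image and record how much the classifier's score drops. A minimal NumPy version with a toy scoring function (the patch size, stride, and fill value are assumptions, not the settings used in this repo):

```python
import numpy as np

def occlusion_heatmap(img, score_fn, patch=4, stride=4, fill=0.0):
    """Slide an occluding patch over `img` and record the drop in the
    classifier's score; large drops mark regions the decision relies on."""
    H, W = img.shape[:2]
    base = score_fn(img)
    heat = np.zeros(((H - patch) // stride + 1, (W - patch) // stride + 1))
    for i, r in enumerate(range(0, H - patch + 1, stride)):
        for j, c in enumerate(range(0, W - patch + 1, stride)):
            occluded = img.copy()
            occluded[r:r + patch, c:c + patch] = fill
            heat[i, j] = base - score_fn(occluded)  # score drop
    return heat

# Toy example: the "classifier" just sums the top-left corner,
# so occluding that corner produces the biggest drop.
img = np.zeros((8, 8)); img[:4, :4] = 1.0
heat = occlusion_heatmap(img, lambda x: x[:4, :4].sum())
```

In the real pipeline the score function would be a forward pass through the trained branch, which is why this method works on both the plain and the modified Siamese nets without gradient access.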
Visualization evaluation metrics:
- ACG
- CCG
Scripts used:
- To train modified siamese use 'trainModifiedSiamese.py -> modifiedSiamese/SiameseTrainer.py'
- To visualize any network use 'visuModels.py -> modifiedSiamese/SiameseTrainer.py'
- To analyse visualized files (metrics) use 'visuModels.py -> modifiedSiamese/analyse_visu.py'
- To compute average metrics, first generate metrics with 'visuModels.py -> modifiedSiamese/analyse_visu.py', then use 'analyse_files.py'
- To generate images for the paper use 'gen_img.py -> modifiedSiamese/gen_images.py'
- To generate heatmaps for a specific setting use 'visuScene.py -> modifiedSiamese/SiameseTrainer.py'
- To explain a scene, generate object detections with 'yolo900', generate the scene visualization heatmap with 'visuModels.py -> modifiedSiamese/SiameseTrainer.py' or 'visuScene.py -> modifiedSiamese/SiameseTrainer.py', then use 'explainScene.py'