This repository reuses Matterport's Mask R-CNN implementation. Please refer to their repository here for further details on the core implementation. The pretrained weights were obtained from crowdAI's implementation, found here.
| mAP | F1 |
|---|---|
| 30.4 | 37.9 |
You can download the data via the following links:

Training data imagery:

```shell
aws s3 cp s3://spacenet-dataset/SpaceNet_Off-Nadir_Dataset/SpaceNet-Off-Nadir_Train/ . --exclude "*geojson.tar.gz" --recursive
```

Training data labels:

```shell
aws s3 cp s3://spacenet-dataset/SpaceNet_Off-Nadir_Dataset/SpaceNet-Off-Nadir_Train/geojson.tar.gz .
```
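The label download is a gzipped tarball. A minimal sketch for unpacking it with the Python standard library (the destination directory name is an assumption, not prescribed by this repo):

```python
import tarfile
from pathlib import Path

def extract_labels(archive: str, dest: str) -> list:
    """Unpack a .tar.gz label archive into dest and return the member names."""
    Path(dest).mkdir(parents=True, exist_ok=True)
    with tarfile.open(archive, "r:gz") as tar:
        names = [m.name for m in tar.getmembers()]
        tar.extractall(dest)  # geojson labels end up under dest/
    return names
```

For example, `extract_labels("geojson.tar.gz", "labels")` after the `aws s3 cp` above.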
Refer to the SpaceNet Off-Nadir challenge page (link) for more details.
The latest weights can be found here.
Sample results of using this model on Nadir (left, nadir angle = 13°), Off-Nadir (center, nadir angle = 27°), and Very Off-Nadir (right, nadir angle = 50°) images are shown below.
- Currently, the model requires the training data to be in JPG. By default, the images in the SpaceNet dataset are GeoTIFFs. You can do the conversion with `gdal_translate` from the GDAL library.
- Expected data format: MS COCO
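The GeoTIFF-to-JPG conversion can be scripted around the `gdal_translate` CLI. A sketch, assuming GDAL is installed and using hypothetical `train/images` / `train/images_jpg` directory names; adjust the flags (band selection via `-b`, scaling) to your imagery:

```python
import subprocess
from pathlib import Path

def tif_to_jpg_cmd(tif_path: Path, out_dir: Path) -> list:
    """Build a gdal_translate command converting one GeoTIFF to 8-bit JPEG.

    -ot Byte with -scale rescales pixel values into the 8-bit range;
    add -b options if your imagery has more than three bands.
    """
    out = out_dir / (tif_path.stem + ".jpg")
    return ["gdal_translate", "-of", "JPEG", "-ot", "Byte", "-scale",
            str(tif_path), str(out)]

if __name__ == "__main__":
    src = Path("train/images")       # hypothetical input directory
    dst = Path("train/images_jpg")   # hypothetical output directory
    if src.is_dir():
        dst.mkdir(parents=True, exist_ok=True)
        for tif in sorted(src.glob("*.tif")):
            subprocess.run(tif_to_jpg_cmd(tif, dst), check=True)
```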
- There is an issue with using the default cocoeval.py script to evaluate this dataset. Refer to this notebook for calculating metrics. Be aware of the area ranges and max detections in `cocoeval.py`. By default, the cocoeval script has the following configuration:
```python
# Area Ranges
[[0 ** 2, 1e5 ** 2],   # all
 [0 ** 2, 32 ** 2],    # small
 [32 ** 2, 96 ** 2],   # medium
 [96 ** 2, 1e5 ** 2]]  # large
# Max Detections
[1, 10, 100]
```
These area ranges and max-detection settings may be appropriate for natural images (as in the COCO dataset), but this is not the case for satellite images. Objects in satellite images are generally smaller and far more numerous. Depending on your use case and your test set, you may need to adjust these parameters for a better evaluation of your model.
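As an illustration of how such an adjustment might look: pycocotools exposes these settings as `params.areaRng`, `params.areaRngLbl`, and `params.maxDets` on a `COCOeval` instance. The thresholds below are assumptions to tune per dataset, not values used by this repository:

```python
# Illustrative settings for dense, small objects such as building footprints.
# These specific thresholds are assumptions, not taken from this repo.
AREA_RANGES = [
    [0 ** 2, 1e5 ** 2],   # all
    [0 ** 2, 16 ** 2],    # small: tighter bound for small buildings
    [16 ** 2, 48 ** 2],   # medium
    [48 ** 2, 1e5 ** 2],  # large
]
AREA_LABELS = ["all", "small", "medium", "large"]
MAX_DETS = [10, 100, 500]  # a satellite tile can contain hundreds of buildings

# With pycocotools, apply them before running evaluation:
# coco_eval = COCOeval(coco_gt, coco_dt, iouType="segm")
# coco_eval.params.areaRng = AREA_RANGES
# coco_eval.params.areaRngLbl = AREA_LABELS
# coco_eval.params.maxDets = MAX_DETS
# coco_eval.evaluate(); coco_eval.accumulate(); coco_eval.summarize()
```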