
MONAILabel App Exploration #1

Open
5 tasks done
che85 opened this issue Jun 4, 2021 · 6 comments

@che85
Member

che85 commented Jun 4, 2021

The goal is to make our trained model (currently custom code, independent of MONAI) usable within MONAILabel.

For now, we will only be porting one FCN input configuration. I need to do the following:

  • decouple the .pth model from pickled source code; only the state_dict will be required for restoring the learnable parameters (i.e., weights and biases) of the model
  • port custom code to the MONAI-based framework (e.g., transforms, supervised trainer), minimizing the use of custom code
  • do a training run with the same data and check the progress
  • load a "known to be working" state_dict and run inference on our 15 in-house testing datasets
  • move the working solution into the MONAILabel infrastructure
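The first checklist item (decoupling the .pth file from pickled source code) might look roughly like the sketch below. `FCN` here is only a placeholder standing in for the real V-Net architecture, and the file name is made up; the point is that once only the state_dict is saved, restoring the model needs just the class definition plus the weights file.

```python
import torch
import torch.nn as nn

# Placeholder network standing in for the real FCN/V-Net architecture.
class FCN(nn.Module):
    def __init__(self):
        super().__init__()
        # Two input channels: mid-systolic frame + annulus distance map.
        self.conv = nn.Conv3d(2, 1, kernel_size=3, padding=1)

    def forward(self, x):
        return self.conv(x)

# One-time conversion of an old pickled checkpoint (saved with
# torch.save(model, ...)) would look like:
#   old_model = torch.load("old_pickled_model.pth", map_location="cpu")
#   torch.save(old_model.state_dict(), "weights_only.pth")

# From then on, only the weights are serialized:
model = FCN()
torch.save(model.state_dict(), "weights_only.pth")

# Restoring needs no pickled source code, just the class + state_dict:
restored = FCN()
restored.load_state_dict(torch.load("weights_only.pth", map_location="cpu"))
restored.eval()
```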

@mattjolley Which input configuration do you think would be best for this? I would keep it simple (for example mid-systolic frame with annulus input and maybe commissural points?)

@che85 che85 self-assigned this Jun 4, 2021
@che85 che85 added the enhancement New feature or request label Jun 4, 2021
@mattjolley

mattjolley commented Jun 4, 2021

@che85

I would implement two if you can:

  • Mid-systolic frame with annulus input
  • Mid-systolic frame with annulus and commissures

Those are the most useful in production.

@che85
Member Author

che85 commented Jun 18, 2021

Everyone,
I have been working on porting most of our custom code.

While porting the code and running MONAI-based tests, I found that the MONAI version was not performing quite as well as our custom training loop, and I am still trying to figure out which differences between the two could cause this.

For now, I only added a configuration for Mid-systolic frame with annulus input.

Training (data)

  • Total number of datasets: 148
  • Volume dimensions: 224 × 224 × 224
  • Voxel spacing: 0.25 or finer (depending on the ground-truth average height of the valve in voxels, with a minimum height set to 6 voxels)
  • Number of datasets (training): 120
  • Number of datasets (validation): 13
  • Number of datasets (testing): 15
  • Mixed-precision training (https://github.com/NVIDIA/apex)
  • RAdam optimizer (https://github.com/LiyuanLucasLiu/RAdam)
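One plausible reading of the voxel-spacing rule above (refine below 0.25 only when the valve would otherwise span fewer than 6 voxels) can be sketched in a few lines. This is purely illustrative, not the project's actual resampling code; the function name and the assumption that valve height and spacing share the same physical unit are mine.

```python
def choose_voxel_spacing(valve_height, default_spacing=0.25, min_height_voxels=6):
    """Pick an isotropic voxel spacing no coarser than `default_spacing`,
    refined further if the valve would otherwise span fewer than
    `min_height_voxels` voxels.  Illustrative reading of the rule above;
    `valve_height` is assumed to be in the same unit as the spacing.
    """
    spacing = default_spacing
    if valve_height / spacing < min_height_voxels:
        # Shrink the spacing so the valve spans exactly min_height_voxels.
        spacing = valve_height / min_height_voxels
    return spacing

# A tall valve keeps the default spacing; a short one forces finer sampling.
tall = choose_voxel_spacing(10.0)   # 10 / 0.25 = 40 voxels >= 6 -> 0.25
short = choose_voxel_spacing(1.2)   # 1.2 / 0.25 = 4.8 voxels < 6 -> 0.2
```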
The very first V-Net we used was based on a V-Net introduced by NVIDIA at a Deep Learning workshop at the RSNA in 2018.

V-Net input data:

  • mid-systolic phase frame of the 3D-Echo sequence
  • user-annotated annulus contour curve
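The two inputs above are combined into one multi-channel volume for the network; a minimal sketch of that stacking, using SciPy's Euclidean distance transform for the annulus channel, could look like the following. The function name and array shapes are illustrative, not the project's code.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def build_vnet_input(image, annulus_mask):
    """Stack the mid-systolic frame with a distance transform of the
    annulus mask into a 2-channel, channel-first volume, mirroring the
    two V-Net inputs described above.  Illustrative sketch only."""
    # Distance from every voxel to the nearest annulus voxel.
    dist = distance_transform_edt(annulus_mask == 0)
    return np.stack([image.astype(np.float32), dist.astype(np.float32)])

# Tiny toy volume with a single annulus voxel at the center.
vol = np.zeros((8, 8, 8), dtype=np.float32)
mask = np.zeros((8, 8, 8), dtype=np.uint8)
mask[4, 4, 4] = 1
x = build_vnet_input(vol, mask)
```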

Custom Code

link: https://github.com/JolleyLabCHOP/DeepHeart/tree/main/custom_code
based on https://github.com/victoresque/pytorch-template

Everything you need to know about the training process is collected in the following configuration file: https://github.com/JolleyLabCHOP/DeepHeart/blob/main/custom_code/config.json

Tensorboard output (03/2020)

From our old Tensorboard outputs (luckily we saved them), you can see that training converges quickly and we get good results within a few hours. Training at the time used CUDA 10 and PyTorch 1.4.


Tensorboard visualizations (coronal view)

(top row: ground-truth segmentation; middle row: mid-systolic image and distance-transformed annulus; bottom row: predicted segmentation)
Note: slices are not synchronized between the displayed rows

Tensorboard output (06/2021)

We decided to run the custom code again with the new CUDA 11 and PyTorch 1.8, and it still seems to be working very well. One major difference is the increased training time (about 2.5×).


MONAI

Most of our code is ported to MONAI.

A few additional source files are provided next to the main Jupyter notebook.

link: https://github.com/JolleyLabCHOP/DeepHeart/blob/main/MONAI/MONAI_based_training.ipynb

Tensorboard output (06/2021)

I didn't invest much time in recreating the Tensorboard visualizations we had initially implemented in our custom code, but I am pretty sure it would be straightforward to do.


I will probably give it another try to let it run longer over the weekend.

It might just be something really simple that I don't see.

We could probably implement the MONAILabel app with our custom code; my only concern is the large overhead that custom code would add.

MONAI seems great and I am looking forward to learning more and working with you guys to make this work.

@che85
Member Author

che85 commented Jun 22, 2021

I have made some changes to the MONAI-based training (see a1d5b47), and it is now performing as I would expect, with a validation mean Dice of ~0.76 using the mid-systolic image + annulus input.
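For reference, the mean Dice reported here is the standard overlap measure 2·|P ∩ G| / (|P| + |G|) between predicted and ground-truth segmentations; a minimal NumPy version (not the MONAI metric actually used in training) looks like this:

```python
import numpy as np

def dice_score(pred, gt, eps=1e-8):
    """Binary Dice coefficient: 2*|pred & gt| / (|pred| + |gt|).
    Plain-NumPy illustration of the metric; training itself would use
    MONAI's built-in Dice metric."""
    pred = np.asarray(pred).astype(bool)
    gt = np.asarray(gt).astype(bool)
    inter = np.logical_and(pred, gt).sum()
    return (2.0 * inter) / (pred.sum() + gt.sum() + eps)

# Half-overlapping toy masks give Dice = 0.5.
partial = dice_score([1, 1, 0, 0], [1, 0, 1, 0])
perfect = dice_score([1, 1, 0, 0], [1, 1, 0, 0])
```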


I think if we can implement this as a start in MONAILabel it would be perfect.

We should talk about the MONAILabel custom app UI. For this use case, we will need the annulus curve as an input. When segmenting the heart, our research assistant(s) create the annulus contour by placing a point and rotating the volume 15 degrees for every click until a full rotation is achieved. The resulting curve is interpolated and saved as a VTK model.
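The click-and-rotate workflow above yields 360/15 = 24 annulus points per full rotation, which are then interpolated into a closed curve. A toy sketch of that geometry (points on a circle, piecewise-linear densification instead of the real smooth interpolation and VTK export) could look like:

```python
import numpy as np

def annulus_points(n_clicks=24, radius=15.0):
    """One point per 15-degree rotation (24 clicks for a full turn).
    Here the points lie on a circle for illustration; the real clicks
    come from the rotated 3D-Echo views."""
    angles = np.deg2rad(np.arange(n_clicks) * 360.0 / n_clicks)
    return np.stack([radius * np.cos(angles),
                     radius * np.sin(angles),
                     np.zeros(n_clicks)], axis=1)

def interpolate_closed(points, samples_per_segment=10):
    """Densify the closed contour piecewise-linearly; the actual
    pipeline uses a smooth interpolation before saving a VTK model."""
    dense = []
    n = len(points)
    for i in range(n):
        a, b = points[i], points[(i + 1) % n]  # wrap around to close the loop
        for t in np.linspace(0.0, 1.0, samples_per_segment, endpoint=False):
            dense.append((1 - t) * a + t * b)
    return np.array(dense)

clicks = annulus_points()
curve = interpolate_closed(clicks)
```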

@mattjolley

Great work @che85

I hope you can implement this during slicerweek and iterate from there.

@SachidanandAlle

SachidanandAlle commented Jun 22, 2021

https://github.com/Project-MONAI/MONAILabel/tree/main/sample-apps/deepedit_left_atrium

This can be a reference for creating a sample app for MONAILabel.

The infer task can be abstracted here, through simple pre/post transforms:
https://github.com/Project-MONAI/MONAILabel/blob/main/sample-apps/deepedit_left_atrium/lib/infer.py

The train task can be abstracted here:
https://github.com/Project-MONAI/MONAILabel/blob/main/sample-apps/deepedit_left_atrium/lib/train.py

Initialization of these infer/train tasks can happen in:
https://github.com/Project-MONAI/MONAILabel/blob/main/sample-apps/deepedit_left_atrium/main.py

Once you have the app ready:
https://github.com/Project-MONAI/MONAILabel#installation

monailabel start_server --app /workspace/apps/deepedit_left_atrium --studies /workspace/datasets/Task02_Heart/imagesTr

If you are expecting additional inputs through 3D Slicer, the simplest way is to extend this plugin:
https://github.com/Project-MONAI/MONAILabel/tree/main/plugins/slicer

All the REST APIs can be seen at:
http://127.0.0.1:8000/#/

For example, if you want to pass additional info to infer, you can include it in the JSON params:
http://127.0.0.1:8000/#/Infer/run_inference_infer__model__post
(For example, DeepGrow passes foreground/background points, captured through mouse clicks, as fiducials.)

If you need to pass binary input (like a label) to the infer API, you can do that as well; but I guess you want to pass additional points/data captured from the annulus curve through params, as mentioned in the comment above.
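Passing the annulus points through the infer request's params could be as simple as serializing them as JSON alongside the image reference. The key names below (`annulus_points`, `frame`) are hypothetical, a sketch of the payload shape rather than MONAILabel's actual schema:

```python
import json

# Hypothetical payload for MONAILabel's POST /infer/{model} endpoint:
# the image is referenced by id, and extra inputs (here, annulus curve
# points) ride along in the "params" JSON, similar to how DeepGrow
# passes fiducial clicks.  Key names are illustrative.
params = {
    "annulus_points": [[12.1, -3.4, 8.0], [11.7, -2.9, 8.2]],  # example coords
    "frame": "mid-systolic",
}
payload = {"image": "case_001", "params": json.dumps(params)}
```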

@diazandr3s

diazandr3s commented Jun 25, 2021

> I have made some changes to the MONAI based training (see a1d5b47) and now it's performing as I would expect it to with a validation mean dice ~ 0.76 for using mid-systolic-image + annulus input.
>
> I think if we can implement this as a start in MONAILabel it would be perfect.
>
> We should talk about the MONAILabel custom app UI. For this use case, we will need the annulus curve as an input. When segmenting the heart, our research assistant(s) create the annulus contour by rotating the volume 15 degrees for every click until a full rotation was achieved. The resulting curve is interpolated and saved as a vtk model.

Happy to help with the implementation in MONAI Label and further improvement of this App
