MONAILabel App Exploration #1
I would implement two if you can:
- Mid-systolic frame with annulus input
Those are the most useful in production.
Everyone, while porting the code and running MONAI-based tests, I found that it was not performing quite as well as our custom training loop implementation, and I am still trying to figure out which differences between the MONAI-based training and the custom code could cause this. For now, I only added one configuration.

Training (data)
Total number of datasets: 148
Number of datasets (training): 120

V-Net input data:
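For reference, a minimal sketch of what a two-channel V-Net setup could look like in MONAI, assuming the channel layout discussed in this thread (mid-systolic image plus distance-transformed annulus); the actual network and patch settings live in the project's config.json and are not reproduced here:

```python
import torch
from monai.networks.nets import VNet

# Assumed channel layout: channel 0 = mid-systolic image,
# channel 1 = distance-transformed annulus (see the thread above).
# out_channels=2 assumes background + one foreground label; adjust to the
# labels actually used in the custom configuration.
model = VNet(spatial_dims=3, in_channels=2, out_channels=2)

# Dummy two-channel volume just to show the expected tensor shape
# (batch, channels, D, H, W). MONAI's V-Net downsamples by a factor of 16,
# so spatial sizes should be divisible by 16.
x = torch.rand(1, 2, 96, 96, 96)
y = model(x)
print(y.shape)  # torch.Size([1, 2, 96, 96, 96])
```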
Custom code
Link: https://github.com/JolleyLabCHOP/DeepHeart/tree/main/custom_code
Everything you would need to know about the training process is collected in the following configuration file: https://github.com/JolleyLabCHOP/DeepHeart/blob/main/custom_code/config.json

Tensorboard output (03/2020)
From our old Tensorboard outputs (luckily we saved them), you can see that it converges quickly and we get good results within a few hours. Training at the time was based on CUDA 10 and PyTorch 1.4.
Tensorboard visualizations (coronal view): top row: ground-truth segmentation; middle row: mid-systolic image and distance-transformed annulus; bottom row: predicted segmentation.

Tensorboard output (06/2021)
We decided to run the custom code again with the new CUDA 11 and PyTorch 1.8, and it still seems to be working very well. One major difference is the increased training time (2.5x).

MONAI
Most of our code is ported to MONAI. A few additional source files are provided next to the main Jupyter notebook.
Link: https://github.com/JolleyLabCHOP/DeepHeart/blob/main/MONAI/MONAI_based_training.ipynb

Tensorboard output (06/2021)
I didn't invest much time in creating visualizations in Tensorboard as we had initially implemented in our custom code, but I am pretty sure it's straightforward to do. I will probably give it another try and let it run longer over the weekend. It might just be something really simple that I don't see. We could probably implement the MONAILabel app with our custom code; my only concern would be the huge overhead of our custom code. MONAI seems great and I am looking forward to learning more and working with you guys to make this work.
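For context, a heavily condensed sketch of the kind of plain MONAI training loop being compared against the custom code here; file paths, transforms, and hyperparameters are placeholders, not the values from config.json or the linked notebook:

```python
import torch
from monai.data import DataLoader, Dataset
from monai.losses import DiceLoss
from monai.networks.nets import VNet
from monai.transforms import Compose, EnsureChannelFirstd, LoadImaged, ScaleIntensityd, ToTensord

# Placeholder file list; the real split (120 training datasets out of 148)
# comes from the project's own data handling.
train_files = [{"image": "img_000.nii.gz", "label": "seg_000.nii.gz"}]

transforms = Compose([
    LoadImaged(keys=["image", "label"]),
    EnsureChannelFirstd(keys=["image", "label"]),
    ScaleIntensityd(keys="image"),
    ToTensord(keys=["image", "label"]),
])

loader = DataLoader(Dataset(train_files, transforms), batch_size=1, shuffle=True)

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
# One input channel here for brevity; the mid-systolic + annulus
# configuration discussed above would use in_channels=2.
model = VNet(spatial_dims=3, in_channels=1, out_channels=2).to(device)
loss_fn = DiceLoss(to_onehot_y=True, softmax=True)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

for epoch in range(300):  # epoch count is a placeholder
    model.train()
    for batch in loader:
        images, labels = batch["image"].to(device), batch["label"].to(device)
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        optimizer.step()
```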
I have made some changes to the MONAI-based training (see a1d5b47) and now it's performing as I would expect, with a validation mean Dice of ~0.76 using the mid-systolic image + annulus input. I think if we can implement this as a start in MONAILabel, it would be perfect. We should talk about the MONAILabel custom app UI. For this use case, we will need the annulus curve as an input. When segmenting the heart, our research assistant(s) create the annulus contour by rotating the volume 15 degrees for every click until a full rotation is achieved. The resulting curve is interpolated and saved as a VTK model.
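To make the annulus input concrete: one way to turn such a curve into a network input channel is a distance transform of the rasterized annulus points. The sketch below uses scipy and is only an illustration of the idea, not the project's actual preprocessing; it assumes the curve points have already been converted to voxel coordinates.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def annulus_distance_channel(annulus_points_ijk, volume_shape):
    """Rasterize annulus curve points into a binary mask and return, for every
    voxel, the Euclidean distance to the nearest annulus point.

    annulus_points_ijk: (N, 3) integer voxel coordinates sampled densely along
    the interpolated annulus curve (hypothetical input format).
    volume_shape: shape of the mid-systolic image volume.
    """
    mask = np.zeros(volume_shape, dtype=np.uint8)
    idx = np.clip(annulus_points_ijk, 0, np.array(volume_shape) - 1)
    mask[idx[:, 0], idx[:, 1], idx[:, 2]] = 1

    # distance_transform_edt measures distance to the nearest zero voxel,
    # so invert the mask to get distance-to-curve everywhere.
    distance = distance_transform_edt(1 - mask)

    # Normalize so the channel has a bounded range before stacking it
    # with the image channel.
    return (distance / distance.max()).astype(np.float32)

# Hypothetical usage: stack with the mid-systolic image as a second channel.
# image = ...  # (D, H, W) mid-systolic frame
# annulus = annulus_distance_channel(points_ijk, image.shape)
# net_input = np.stack([image, annulus], axis=0)  # (2, D, H, W)
```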
Great work @che85! I hope you can implement this during slicerweek and iterate from there.
https://github.com/Project-MONAI/MONAILabel/tree/main/sample-apps/deepedit_left_atrium
This can be a reference for creating a sample app for MONAILabel. The infer task can be abstracted there through simple pre/post transforms, the train task can be abstracted in the same way, and the init for these infer/train tasks happens in the app's main file. Once you have the app ready, you can point the MONAILabel server at it.
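As a rough illustration of the "simple pre/post transforms" such an infer task returns, here is a generic sketch using standard MONAI transforms; the specific transform choices are placeholders, not the ones used in deepedit_left_atrium or DeepHeart, and the exact task class and method names should be taken from the linked sample app.

```python
from monai.transforms import (
    Activationsd,
    AsDiscreted,
    Compose,
    EnsureChannelFirstd,
    LoadImaged,
    ScaleIntensityd,
    ToNumpyd,
    ToTensord,
)

# Pre-transforms: turn the incoming image (and, later, the annulus channel)
# into the tensor the network expects.
pre_transforms = Compose([
    LoadImaged(keys="image"),
    EnsureChannelFirstd(keys="image"),
    ScaleIntensityd(keys="image"),
    ToTensord(keys="image"),
])

# Post-transforms: turn the raw network output into a discrete label map
# that can be sent back to the client (e.g. 3D Slicer).
post_transforms = Compose([
    Activationsd(keys="pred", softmax=True),
    AsDiscreted(keys="pred", argmax=True),
    ToNumpyd(keys="pred"),
])
```

In the sample app, lists like these are what the infer task hands back from its pre/post transform hooks, while the train task does the analogous thing for training transforms.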
In case you are expecting additional inputs through 3D Slicer, the simple way is to extend the MONAILabel Slicer plugin. All the REST APIs can be seen in the server's API docs. For example, if you are looking to pass additional info to infer, you can pass it in the JSON params. If you need to pass binary data (like a label) as input, you can pass that in the infer API as well; but I guess you are looking to pass additional points/data captured from the annulus curve through params, as mentioned in the comment above.
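For illustration, a hedged sketch of passing extra information (such as annulus points) to the infer endpoint via JSON params from Python. The endpoint path, query parameter, and form field names below reflect my understanding of the MONAILabel server's REST API and should be double-checked against the running server's API docs; the model name, image id, and point values are hypothetical.

```python
import json
import requests

server = "http://127.0.0.1:8000"   # assumed local MONAILabel server
model = "deepheart"                # hypothetical model/task name registered by the app
image_id = "case_001"              # image id known to the server's datastore

# Extra inputs travel as a JSON string in the "params" form field; here the
# annulus points are a made-up list of coordinates.
params = {"annulus_points": [[12.3, -4.5, 67.8], [13.1, -3.9, 66.2]]}

response = requests.post(
    f"{server}/infer/{model}",
    params={"image": image_id},
    data={"params": json.dumps(params)},
)
response.raise_for_status()
# The response carries the resulting label data; its exact format depends on
# the server/app configuration.
print(response.headers.get("content-type"))
```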
Happy to help with the implementation in MONAI Label and the further improvement of this app.
- Currently no training is supported (ref: Project-MONAI/MONAILabel#154).
In order to use the trained model (custom code, independent of MONAI), we will for now only be porting one FCN input configuration. I need to do the following:
@mattjolley Which input configuration do you think would be best for this? I would keep it simple (for example mid-systolic frame with annulus input and maybe commissural points?)