-
Hello! I trained a DeepEdit model from scratch on a custom dataset. Now I would like to run inference over the test images. The server does find the model and the images, but when I try to run inference in Slicer I get the error "Failed to run inference in MONAI Label Server. Message:: Status 500; Response: Internal Server Error". I hope that somebody can help me, thank you!
Replies: 2 comments
-
Solved it. I had to change the config in /radiology/lib/configs to add the path to my custom trained model.
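For anyone else hitting this: a minimal sketch of the kind of change meant above. The directory and filename below are placeholders, and the attribute layout is an assumption; check the actual config class in `radiology/lib/configs` (e.g. `deepedit.py`) for the real names.

```python
import os

# Assumptions: model_dir and the checkpoint filename are hypothetical values;
# substitute the paths from your own app.
model_dir = "/path/to/radiology/model"
custom_checkpoint = "my_deepedit_model.pt"  # your custom trained weights

# In the config's init(), point the model path at your custom checkpoint
# instead of the default pretrained file, so the server can load it:
path = [os.path.join(model_dir, custom_checkpoint)]
print(path[0])
```

If the server can locate the checkpoint at this path, the 500 error on inference should go away.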
-
Hi @annkapopp, glad to see you managed to make it work. Another way of doing batch inference is to run the main file in the radiology app folder: https://github.com/Project-MONAI/MONAILabel/blob/main/sample-apps/radiology/main.py#L267 Just provide these arguments: https://github.com/Project-MONAI/MONAILabel/blob/main/sample-apps/radiology/main.py#L295-L297 This will run MONAI Label as any other Python script, without actually starting the server. Hope this helps,
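For reference, the invocation would look roughly like the sketch below. The flag names and values are assumptions; check the argparse block at the bottom of sample-apps/radiology/main.py (the linked lines) for the real arguments.

```shell
# Sketch only -- flag names are assumptions, verify against main.py:
#   --studies : folder containing the test images
#   --model   : name of the model config to use (e.g. deepedit)
#   --test    : which action to run (e.g. batch inference)
python main.py --studies /path/to/test_images --model deepedit --test batch_infer
```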