Welcome to the official PyTorch implementation of our research paper:
Unsupervised Domain Adaptation of Object Detection in Axial CT Images of Lumbar Vertebrae
LVDAN leverages advanced unsupervised domain adaptation techniques to enhance object detection performance in axial CT images of lumbar vertebrae. This model is designed to improve accuracy and robustness in medical imaging applications, addressing the challenges posed by domain shifts in data.
- High Accuracy: Achieve superior detection rates in challenging medical imaging scenarios.
- Robustness: Effectively adapts to variations in image quality and acquisition conditions.
- User-Friendly: Simplified installation and training processes for seamless integration into your workflow.
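A common building block in domain-adversarial adaptation, one standard unsupervised domain adaptation technique, is the gradient reversal layer: it is the identity on the forward pass but flips gradients on the backward pass, so the feature extractor learns to confuse a domain classifier. The sketch below is illustrative only; the exact LVDAN architecture is defined in the repository code.

```python
import torch

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; flips (and scales) gradients in backward."""

    @staticmethod
    def forward(ctx, x, lamb):
        ctx.lamb = lamb
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Reverse the gradient so the feature extractor is trained to
        # *confuse* the domain classifier sitting on top of this layer.
        return -ctx.lamb * grad_output, None

def grad_reverse(x, lamb=1.0):
    return GradReverse.apply(x, lamb)

# Tiny check: gradients through the layer come back negated and scaled.
feat = torch.ones(2, 3, requires_grad=True)
grad_reverse(feat, 0.5).sum().backward()
print(feat.grad)  # every entry is -0.5
```

In a detection pipeline, the domain classifier is typically attached to backbone feature maps through this layer, while the detection head trains normally on the labeled source domain.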
To get started, clone the repository and install the required dependencies in a Python environment (version >= 3.7) with PyTorch (version >= 1.13.1).
1. Create and activate a virtual environment:

        conda create -n yolo python=3.7
        conda activate yolo

2. Install the required packages:

        pip install ultralytics

3. Download the pretrained YOLOv8x model.
Follow these steps to set up the DA-training environment:
1. Clone the repository:

        git clone https://github.com/ElzatElham/LVDAN.git

2. Create and activate a new virtual environment:

        conda create -n LVDAN python=3.7
        conda activate LVDAN

3. Install the required packages:

        pip install -r requirements.txt

4. Install LVDAN:

        python setup.py install
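After installation, it is worth sanity-checking that the environment meets the version requirements stated above before launching domain-adaptive training:

```python
import torch

print(torch.__version__)          # the instructions above ask for >= 1.13.1
print(torch.cuda.is_available())  # True if a CUDA-capable GPU is visible
```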
To train and test the model, run:

    python Training.py
    python Testing.py

All training and testing parameters are defined in Training.py and Testing.py.
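Testing.py evaluates the detector; the overlap measure underlying standard detection metrics such as mAP is box IoU. The helper below is an illustrative sketch, not code from the repository:

```python
def box_iou(a, b):
    """Intersection over union of two axis-aligned boxes in (x1, y1, x2, y2) format."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

# Two 2x2 boxes overlapping in a 1x1 corner: intersection 1, union 7.
print(box_iou((0, 0, 2, 2), (1, 1, 3, 3)))  # 1/7 ≈ 0.1428...
```

A predicted box is typically counted as a true positive when its IoU with a ground-truth box exceeds a threshold (often 0.5).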
Additionally, the dataset used in the paper is open-sourced and can be accessed here: CTLV-DAOD.
By following these instructions, you can effectively utilize the LVDAN model for enhanced object detection in axial CT images, paving the way for improved diagnostic capabilities in medical imaging.