IB-cGAN

Code for paper:

A modality conversion approach to MV-DRs and KV-DRRs registration using information bottlenecked conditional generative adversarial network. Cong Liu et al.

Abstract

PURPOSE: As affordable equipment, electronic portal imaging devices (EPIDs) are widely used in radiation therapy departments to verify patients' positions for accurate radiotherapy. However, these devices tend to produce visually ambiguous and low-contrast planar digital radiographs under megavoltage x-ray (MV-DRs), which poses a tremendous challenge for clinicians performing multimodal registration between the MV-DRs and the kilovoltage digital reconstructed radiographs (KV-DRRs) generated from the planning computed tomography. Furthermore, the existence of strong appearance variations also puts accurate registration beyond the reach of current automatic algorithms.

METHODS: We propose a novel modality conversion approach to this task that first synthesizes KV images from MV-DRs and then registers the synthesized and real KV-DRRs. We focus on the synthesis technique and develop a conditional generative adversarial network with an information bottleneck extension (IB-cGAN) that takes MV-DRs and nonaligned KV-DRRs as inputs and outputs synthesized KV images. IB-cGAN is designed to address two main challenges in deep-learning-based synthesis: (a) training with a roughly aligned dataset that suffers from noisy correspondences; (b) making the synthesized images clinically meaningful so that they faithfully reflect the MV-DRs rather than the nonaligned KV-DRRs. Accordingly, IB-cGAN employs (a) an adversarial loss to provide training supervision at the semantic level rather than the imprecise pixel level; (b) an information bottleneck (IB) to constrain the information flowing from the nonaligned KV-DRRs.

RESULTS: We collected 2698 patient scans to train the model and 208 scans to test its performance. The qualitative results demonstrate that realistic KV images can be synthesized, allowing clinicians to perform visual registration. The quantitative results show that it significantly outperforms current non-modality-conversion methods by 22.37% (P = 0.0401) in terms of registration accuracy.

CONCLUSIONS: The modality conversion approach facilitates the downstream MV-KV registration for both clinicians and off-the-shelf registration algorithms. This approach could particularly benefit developing countries, where inexpensive EPIDs are widely used for image-guided radiation therapy.

The code and the PyTorch-trained model will be released after acceptance.
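
To make the two ingredients described in METHODS concrete, below is a minimal PyTorch-style sketch, not the released implementation: it assumes the nonaligned KV-DRR is encoded into an 8-element Gaussian latent z whose KL divergence to a standard normal acts as the information bottleneck, while a non-saturating adversarial loss supervises the synthesized KV image at the semantic level. All names (`KVEncoder`, `kl_to_standard_normal`, `beta`) are illustrative placeholders.

```python
# Minimal sketch (assumption, not the released code) of the IB-cGAN objective:
# an adversarial term for semantic-level supervision plus a KL-based
# information bottleneck on the latent z encoded from the nonaligned KV-DRR.
import torch
import torch.nn as nn
import torch.nn.functional as F

class KVEncoder(nn.Module):
    """Hypothetical encoder: maps a nonaligned KV-DRR to an 8-element Gaussian latent."""
    def __init__(self, z_dim=8):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(1, 16, 4, 2, 1), nn.ReLU(),
            nn.Conv2d(16, 32, 4, 2, 1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.to_mu = nn.Linear(32, z_dim)
        self.to_logvar = nn.Linear(32, z_dim)

    def forward(self, kv_drr):
        h = self.backbone(kv_drr)
        return self.to_mu(h), self.to_logvar(h)

def kl_to_standard_normal(mu, logvar):
    # Information-bottleneck penalty: KL(q(z | KV-DRR) || N(0, I)) upper-bounds
    # how much information z can carry about the nonaligned KV-DRR.
    return 0.5 * torch.mean(torch.sum(mu.pow(2) + logvar.exp() - 1.0 - logvar, dim=1))

def generator_loss(discriminator, fake_kv, mu, logvar, beta=0.1):
    # Non-saturating adversarial loss: supervision at the semantic level,
    # avoiding a pixel-wise loss over noisy (roughly aligned) correspondences.
    logits = discriminator(fake_kv)
    adv = F.binary_cross_entropy_with_logits(logits, torch.ones_like(logits))
    return adv + beta * kl_to_standard_normal(mu, logvar)  # beta trades fidelity vs. bottleneck
```

Here `beta` controls how tightly the bottleneck is closed; the actual architecture and weighting used in the paper may differ.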

We show some interesting videos below. Here z is a latent representation vector with 8 elements (a small code sketch of these sweeps follows the list).

  1. We manipulate all the elements of z from -1 to 1 simultaneously.

  2. We manipulate z for a specific patient.

a) Varying all the elements of z from -1 to 1 simultaneously for a specific patient.

b) Varying the 3rd element of z from -1 to 1, setting all other elements to 0.

c) Varying the 4th element of z from -1 to 1, setting all other elements to 0.

d) Varying the 5th element of z from -1 to 1, setting all other elements to 0.

e) Varying the 7th element of z from -1 to 1, setting all other elements to 0.

f) Varying the 8th element of z from -1 to 1, setting all other elements to 0.
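
As a rough illustration, the sweeps shown above could be produced with something like the following sketch; the `generator(mv_dr, z)` call signature is an assumption for illustration, not the repository's actual interface.

```python
# Hypothetical sketch of the latent sweeps shown in the videos above;
# `generator(mv_dr, z)` is an assumed interface, not the released API.
import torch

@torch.no_grad()
def sweep_latent(generator, mv_dr, element=None, steps=21, z_dim=8):
    """Synthesize KV frames while z moves from -1 to 1.

    element=None  -> vary all 8 elements together (video 1 and 2a).
    element=k     -> vary only the k-th element, holding the rest at 0 (videos 2b-2f).
    """
    frames = []
    for value in torch.linspace(-1.0, 1.0, steps):
        z = torch.zeros(1, z_dim)
        if element is None:
            z[:] = value            # all elements move in lockstep
        else:
            z[0, element] = value   # one element moves, the others stay 0
        frames.append(generator(mv_dr, z))
    return frames

# e.g. sweep_latent(G, mv_dr, element=2) would reproduce the 3rd-element sweep (0-based index).
```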
