listmode DL notebooks for PSMR 2024 (#225)
* init commit of first LM DL PET notebooks
* rename notebook
* ignore all notebooks (auto generated from jupytext)
* update README
* add notebook file and learning objectives
* add sirf wsl build help
* update help
* add first figure
* work on intro
* work on intro
* work on intro
* move solution into separate file
* add array notebook
* add LM recon mini example
* notebooks/Deep_Learning_listmode_PET/01_SIRF_listmode_recon.py
* work on LM recon notebook
* format notebook
* move solutions to snippets
* use inverse sens images in updates
* correct LM OSEM solution
* add test script for hessian in sinogram mode
* cosmetics
* add manual Hessian test
* add LM Hessian test
* use acq storage memory
* add skeleton for 3rd notebook
* work on 3rd notebook
* work on 3rd notebook
* add custom layer notebook
* structure notebooks
* structure notebooks
* work on notebook 4
* add 04/05 notebooks
* WIP
* add TODO
* add new figures
* add new figure
* wip
* update plots
* black reformat
* update figure
* clean up
* clean up
* clean up
* update TODO
* WIP to get 60min recons
* use 60min ref recon
* clean up
* clean up
* reformat
* add ipynb versions
* fix typos
Showing 42 changed files with 6,771 additions and 0 deletions.
@@ -0,0 +1,5 @@ | ||
*.ipynb | ||
lm_recons/* | ||
recons/* | ||
recons_1min/* | ||
recons_60min/* |
226 changes: 226 additions & 0 deletions
notebooks/Deep_Learning_listmode_PET/00_introduction.ipynb
@@ -0,0 +1,226 @@ | ||
{ | ||
"cells": [ | ||
{ | ||
"cell_type": "markdown", | ||
"id": "62b3400b", | ||
"metadata": {}, | ||
"source": [ | ||
"Introduction & Motivation\n", | ||
"=========================\n", | ||
"\n", | ||
"In this series of SIRF exercises, we will learn how to build and train a deep neural\n", | ||
"network for listmode PET reconstruction. As a concrete example,\n", | ||
"we will focus on unrolled Variational networks that can be trained in a supervised manner.\n", | ||
"The general architecture of such network is shown below.\n", | ||
"\n", | ||
"![](figs/osem_varnet.drawio.svg)\n", | ||
"\n", | ||
"The aim of an unrolled variational PET listmode network is to create \"high quality\" PET reconstructions\n", | ||
"from \"low-quality\" input listmode data using supervised training." | ||
] | ||
}, | ||
{ | ||
"cell_type": "markdown", | ||
"id": "9338c20c", | ||
"metadata": {}, | ||
"source": [ | ||
"Question\n", | ||
"--------\n", | ||
"\n", | ||
"Which (realistic) circumstances can lead to \"low-quality\" PET listmode data?\n", | ||
"How can we obtain paired \"high-quality\" PET reconstructions needed for supervised training?" | ||
] | ||
}, | ||
{ | ||
"cell_type": "markdown", | ||
"id": "dccdc765", | ||
"metadata": {}, | ||
"source": [ | ||
"Learning objectives of this notebook\n", | ||
"------------------------------------\n", | ||
"\n", | ||
"1. What is listmode PET reconstruction and why is it attractive for combining DL and reconstruction.\n", | ||
"2. Understanding architectures of unrolled reconstruction networks.\n", | ||
"3. Understanding the essential blocks of training a neural network in pytorch (model setup, data loading, gradient backpropagation)\n", | ||
" and what we are missing from pytorch to build an unrolled PET reconstruction network." | ||
] | ||
}, | ||
{ | ||
"cell_type": "markdown", | ||
"id": "3ad534e9", | ||
"metadata": { | ||
"lines_to_next_cell": 2 | ||
}, | ||
"source": [ | ||
"What is listmode PET reconstruction? Why is it attractive for combining DL and reconstruction?\n", | ||
"----------------------------------------------------------------------------------------------\n", | ||
"\n", | ||
"In listmode PET reconstruction, the emission data is stored in a list of events. Each event contains\n", | ||
"the detector numbers of the two detectors that detected the photon pair, and eventually also the\n", | ||
"arrival time difference between the two photons (time-of-flight or TOF).\n", | ||
"\n", | ||
"In contrast to histogrammed emission data (singoram mode), reconstruction of listmode data has the following advantages:\n", | ||
"1. For low and normal count acquisitions with modern TOF scanners, forward and back projections in listmode are usually faster\n", | ||
" compared to projections in sinogram mode. **Question: Why?**\n", | ||
"2. Storage of (low count) listmode data requires less memory compared to storing full TOF sinograms. **Question: Why?**\n", | ||
"3. Listmode data also preserves the timing information of the detected photon pairs." | ||
] | ||
}, | ||
{ | ||
"cell_type": "markdown", | ||
"id": "bdcd2c5e", | ||
"metadata": {}, | ||
"source": [ | ||
"Architecture of unrolled reconstruction networks\n", | ||
"------------------------------------------------\n", | ||
"\n", | ||
"Unrolled variational networks are a class of deep neural networks that are designed to solve inverse problems.\n", | ||
"The consist of a series of layers that are repeated multiple times.\n", | ||
"Each contains an update with respect to the data fidelity term (blue boxes in the figure above)\n", | ||
"and a regularization term (red boxes in the figure above).\n", | ||
"The latter can be represented by a neural network (e.g. a CNN) containing learnable parameters which are optimized\n", | ||
"during (supervised) training.\n", | ||
"\n", | ||
"There are many way of implementing the data fidelity update block.\n", | ||
"One simple possibility is to implement a gradient ascent step with respect to the Poisson log-likelihood.\n", | ||
"$$ x^+ = x_k + \\alpha \\nabla_x \\log L(y|x) ,$$\n", | ||
"where the Poisson log-likelihood is given by\n", | ||
"$$ \\log L(y|x) = \\sum_{i} y_i \\log(\\bar{y}_i(x)) - \\bar{y}_i(x) ,$$\n", | ||
"where $y$ is the measured emission sinogram, and $\\bar{y}(x) = Ax + s$ the expectation of the measured data given the current\n", | ||
"estimate of the image $x$ and a linear (affine) forward model $A$ including the mean of known additive contaminations (randoms and scatter) $s$.\n" | ||
] | ||
}, | ||
{ | ||
"cell_type": "markdown", | ||
"id": "c069ae51", | ||
"metadata": {}, | ||
"source": [ | ||
"Exercise 0.1\n", | ||
"------------\n", | ||
"\n", | ||
"Given the equations above, derive the update formula for the gradient of the Poisson log-likelihood (using sinogram data)\n", | ||
"\n", | ||
"(bonus question) How does the update formula change if we use listmode data instead of sinogram data?\n", | ||
"\n", | ||
"YOUR SOLUTION GOES IN HERE" | ||
] | ||
}, | ||
{ | ||
"cell_type": "code", | ||
"execution_count": null, | ||
"id": "dc9dff44", | ||
"metadata": {}, | ||
"outputs": [], | ||
"source": [ | ||
"# #TO SHOW THE SOLUTION, UNCOMMENT THE NEXT TO LINES AND RUN THE CELL\n", | ||
"# from IPython.display import Markdown, display\n", | ||
"# display(Markdown(\"snippets/solution_0_1.md\"))" | ||
] | ||
}, | ||
{ | ||
"cell_type": "markdown", | ||
"id": "061b92da", | ||
"metadata": { | ||
"lines_to_next_cell": 2 | ||
}, | ||
"source": [ | ||
"Training a neural network in pytorch\n", | ||
"------------------------------------\n", | ||
"\n", | ||
"Pytorch is a popular deep learning framework that provides a flexible and efficient way to build and train neural networks.\n", | ||
"The essential steps to train a neural network in pytorch are summarized in the train loop, see\n", | ||
"[here](https://pytorch.org/tutorials/beginner/basics/quickstart_tutorial.html#optimizing-the-model-parameters) for more details." | ||
] | ||
}, | ||
{ | ||
"cell_type": "code", | ||
"execution_count": null, | ||
"id": "84f00b91", | ||
"metadata": { | ||
"lines_to_next_cell": 2 | ||
}, | ||
"outputs": [], | ||
"source": [ | ||
"# DO NOT RUN THIS CELL - CODE SNIPPET ONLY\n", | ||
"import torch\n", | ||
"\n", | ||
"\n", | ||
"def train(\n", | ||
" dataloader: torch.utils.data.DataLoader,\n", | ||
" model: torch.nn.Module,\n", | ||
" loss_fn: torch.nn.Module,\n", | ||
" optimizer: torch.optim.Optimizer,\n", | ||
" device: torch.device,\n", | ||
"):\n", | ||
" model.train()\n", | ||
" # loop over the dataset and sample mini-batches\n", | ||
" for batch_num, (input_data_batch, target_image_batch) in enumerate(dataloader):\n", | ||
" # move input and target data to device\n", | ||
" input_data_batch = input_data_batch.to(device)\n", | ||
" target_image_batch = target_image_batch.to(device)\n", | ||
"\n", | ||
" # Compute prediction error\n", | ||
" predicted_image_batch = model(input_data_batch)\n", | ||
" loss = loss_fn(predicted_image_batch, target_image_batch)\n", | ||
"\n", | ||
" # calculate gradients using backpropagation\n", | ||
" loss.backward()\n", | ||
" # update model parameters\n", | ||
" optimizer.step()\n", | ||
" # reset gradients\n", | ||
" optimizer.zero_grad()\n", | ||
"\n", | ||
"\n", | ||
"# model and data loader to be defined\n", | ||
"my_model = myModel()\n", | ||
"my_data_loader = myDataLoader()\n", | ||
"\n", | ||
"# compute device - use cuda GPU if available\n", | ||
"dev = torch.device(\"cuda\" if torch.cuda.is_available() else \"cpu\")\n", | ||
"# the loss function we optimize during training\n", | ||
"my_loss_fn = torch.nn.MSELoss()\n", | ||
"# the optimizer we use to update the model parameters\n", | ||
"my_optimizer = torch.optim.Adam(my_model.parameters(), lr=1e-3)\n", | ||
"\n", | ||
"# run a single epoch of training\n", | ||
"train(my_data_loader, my_model, my_loss_fn, my_optimizer, dev)" | ||
] | ||
}, | ||
{ | ||
"cell_type": "markdown", | ||
"id": "c125902e", | ||
"metadata": {}, | ||
"source": [ | ||
"**The essential blocks for supervised training a neural network in pytorch are:**\n", | ||
"1. Sampling of mini-batches of input and target (label) images from the training dataset.\n", | ||
"2. Forward pass: Compute the prediction of the model given the input data.\n", | ||
"3. Compute the loss (error) between the prediction and the target images.\n", | ||
"4. Backward pass: Compute the gradient of the loss with respect to the model parameters using backpropagation.\n", | ||
"5. Update the model parameters using an optimizer.\n", | ||
"\n", | ||
"Fortunately, pytorch provides many high-level functions that simplify the implementation of all these steps.\n", | ||
"(e.g. pytorch's data loader classes, pytorch's convolutional layers and non-linear activation function, pytorch's\n", | ||
"autograd functionality for backpropagation of gradients, and optimizers like Adam)\n", | ||
"To train a listmode PET unrolled variational network, the only thing we need to implement ourselves\n", | ||
"is the forward pass of our model, including the data fidelity update blocks which are not directly available pytorch.\n", | ||
"\n", | ||
"**The aim of the remaining exercises is:**\n", | ||
"- to learn how to couple SIRF/STIR's PET listmode classes into a pytorch feedforward model\n", | ||
"- learn how to backpropagate gradients through our custom model\n", | ||
"\n", | ||
"**The following is beyond the scope of the exercises:**\n", | ||
"- training a real world unrolled variational listmode PET reconstruction network on a\n", | ||
" big amount of data" | ||
] | ||
} | ||
], | ||
"metadata": { | ||
"jupytext": { | ||
"cell_metadata_filter": "-all", | ||
"main_language": "python", | ||
"notebook_metadata_filter": "-all" | ||
} | ||
}, | ||
"nbformat": 4, | ||
"nbformat_minor": 5 | ||
} |
156 changes: 156 additions & 0 deletions
notebooks/Deep_Learning_listmode_PET/00_introduction.py
@@ -0,0 +1,156 @@ | ||
# %% [markdown] | ||
# Introduction & Motivation | ||
# ========================= | ||
# | ||
# In this series of SIRF exercises, we will learn how to build and train a deep neural | ||
# network for listmode PET reconstruction. As a concrete example, | ||
# we will focus on unrolled Variational networks that can be trained in a supervised manner. | ||
# The general architecture of such a network is shown below. | ||
# | ||
# ![](figs/osem_varnet.drawio.svg) | ||
# | ||
# The aim of an unrolled variational PET listmode network is to create "high quality" PET reconstructions | ||
# from "low-quality" input listmode data using supervised training. | ||
|
||
# %% [markdown] | ||
# Question | ||
# -------- | ||
# | ||
# Which (realistic) circumstances can lead to "low-quality" PET listmode data? | ||
# How can we obtain paired "high-quality" PET reconstructions needed for supervised training? | ||
|
||
# %% [markdown] | ||
# Learning objectives of this notebook | ||
# ------------------------------------ | ||
# | ||
# 1. Understanding what listmode PET reconstruction is and why it is attractive for combining DL and reconstruction. | ||
# 2. Understanding architectures of unrolled reconstruction networks. | ||
# 3. Understanding the essential blocks of training a neural network in pytorch (model setup, data loading, gradient backpropagation) | ||
# and what we are missing from pytorch to build an unrolled PET reconstruction network. | ||
|
||
# %% [markdown] | ||
# What is listmode PET reconstruction? Why is it attractive for combining DL and reconstruction? | ||
# ---------------------------------------------------------------------------------------------- | ||
# | ||
# In listmode PET reconstruction, the emission data is stored in a list of events. Each event contains | ||
# the detector numbers of the two detectors that detected the photon pair, and possibly also the | ||
# arrival time difference between the two photons (time-of-flight or TOF). A small, purely | ||
# illustrative sketch of such an event list is shown after the list of advantages below. | ||
# | ||
# In contrast to histogrammed emission data (sinogram mode), reconstruction of listmode data has the following advantages: | ||
# 1. For low and normal count acquisitions with modern TOF scanners, forward and back projections in listmode are usually faster | ||
# compared to projections in sinogram mode. **Question: Why?** | ||
# 2. Storage of (low count) listmode data requires less memory compared to storing full TOF sinograms. **Question: Why?** | ||
# 3. Listmode data also preserves the timing information of the detected photon pairs. | ||
|
||
|
||
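# %% [markdown]
# To make the idea of "a list of events" more concrete, the next cell sketches a purely
# hypothetical event record (two detector indices plus a TOF bin) using a numpy structured array.
# This is an illustration only - the field names, data types and sizes are made up here and do not
# correspond to the actual SIRF/STIR listmode format used in the later notebooks.

# %%
# purely illustrative sketch - NOT the SIRF/STIR listmode event format
import numpy as np

# each hypothetical event: (detector index 1, detector index 2, TOF bin)
event_dtype = np.dtype(
    [("det_1", np.uint16), ("det_2", np.uint16), ("tof_bin", np.int8)]
)

# a tiny made-up "acquisition" consisting of 5 events
events = np.array(
    [(12, 345, -2), (7, 501, 0), (88, 123, 3), (12, 345, 1), (400, 37, -1)],
    dtype=event_dtype,
)

print(events["det_1"], events["det_2"], events["tof_bin"])
print(f"memory used by {events.size} events: {events.nbytes} bytes")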
# %% [markdown] | ||
# Architecture of unrolled reconstruction networks | ||
# ------------------------------------------------ | ||
# | ||
# Unrolled variational networks are a class of deep neural networks that are designed to solve inverse problems. | ||
# They consist of a series of layers that are repeated multiple times. | ||
# Each layer contains an update with respect to the data fidelity term (blue boxes in the figure above) | ||
# and a regularization term (red boxes in the figure above). | ||
# The latter can be represented by a neural network (e.g. a CNN) containing learnable parameters which are optimized | ||
# during (supervised) training. | ||
# | ||
# There are many ways of implementing the data fidelity update block. | ||
# One simple possibility is to implement a gradient ascent step with respect to the Poisson log-likelihood. | ||
# $$ x^+ = x_k + \alpha \nabla_x \log L(y|x) ,$$ | ||
# where the Poisson log-likelihood is given by | ||
# $$ \log L(y|x) = \sum_{i} y_i \log(\bar{y}_i(x)) - \bar{y}_i(x) ,$$ | ||
# where $y$ is the measured emission sinogram, and $\bar{y}(x) = Ax + s$ is the expectation of the measured data given the current | ||
# estimate of the image $x$, a linear forward operator $A$, and the mean of the known additive contaminations (randoms and scatter) $s$, which makes the overall forward model affine. | ||
# | ||
# A minimal structural sketch of such an unrolled network in pytorch is shown further below, after Exercise 0.1. | ||
|
||
# %% [markdown] | ||
# Exercise 0.1 | ||
# ------------ | ||
# | ||
# Given the equations above, derive the explicit update formula by computing the gradient of the Poisson log-likelihood (using sinogram data). | ||
# | ||
# (bonus question) How does the update formula change if we use listmode data instead of sinogram data? | ||
|
||
# YOUR SOLUTION GOES IN HERE | ||
|
||
# %% | ||
# #TO SHOW THE SOLUTION, UNCOMMENT THE NEXT TWO LINES AND RUN THE CELL | ||
# from IPython.display import Markdown, display | ||
# display(Markdown(filename="snippets/solution_0_1.md")) | ||
|
||
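# %% [markdown]
# Before moving on, the next cell gives a minimal structural sketch (in pytorch) of the unrolled
# variational network described above: a data fidelity update alternating with a small CNN
# regularization block in every unrolled iteration. The data fidelity update is left as an abstract
# callable that has to be supplied from outside (e.g. a gradient ascent step on the Poisson
# log-likelihood), and the CNN is an arbitrary toy choice - this is a sketch of the architecture
# only, not the network that is trained in the later notebooks.

# %%
# structural sketch only - the data fidelity update callable is assumed to be supplied elsewhere
import torch


class UnrolledVarNet(torch.nn.Module):
    def __init__(self, data_fidelity_update, num_blocks: int = 4) -> None:
        super().__init__()
        # callable implementing x -> x + alpha * grad_x log L(y|x); not defined in this sketch
        self._data_fidelity_update = data_fidelity_update
        # one small CNN regularization block per unrolled iteration (toy choice)
        self._reg_blocks = torch.nn.ModuleList(
            torch.nn.Sequential(
                torch.nn.Conv3d(1, 8, 3, padding=1),
                torch.nn.PReLU(),
                torch.nn.Conv3d(8, 1, 3, padding=1),
            )
            for _ in range(num_blocks)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x has shape (batch, 1, depth, height, width)
        for reg_block in self._reg_blocks:
            x = self._data_fidelity_update(x)  # data fidelity update (blue boxes in the figure)
            x = x + reg_block(x)  # residual CNN regularization (red boxes in the figure)
        return x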
# %% [markdown] | ||
# Training a neural network in pytorch | ||
# ------------------------------------ | ||
# | ||
# Pytorch is a popular deep learning framework that provides a flexible and efficient way to build and train neural networks. | ||
# The essential steps to train a neural network in pytorch are summarized in the train loop, see | ||
# [here](https://pytorch.org/tutorials/beginner/basics/quickstart_tutorial.html#optimizing-the-model-parameters) for more details. | ||
|
||
|
||
# %% | ||
# DO NOT RUN THIS CELL - CODE SNIPPET ONLY | ||
import torch | ||
|
||
|
||
def train( | ||
dataloader: torch.utils.data.DataLoader, | ||
model: torch.nn.Module, | ||
loss_fn: torch.nn.Module, | ||
optimizer: torch.optim.Optimizer, | ||
device: torch.device, | ||
): | ||
model.train() | ||
# loop over the dataset and sample mini-batches | ||
for batch_num, (input_data_batch, target_image_batch) in enumerate(dataloader): | ||
# move input and target data to device | ||
input_data_batch = input_data_batch.to(device) | ||
target_image_batch = target_image_batch.to(device) | ||
|
||
# Compute prediction error | ||
predicted_image_batch = model(input_data_batch) | ||
loss = loss_fn(predicted_image_batch, target_image_batch) | ||
|
||
# calculate gradients using backpropagation | ||
loss.backward() | ||
# update model parameters | ||
optimizer.step() | ||
# reset gradients | ||
optimizer.zero_grad() | ||
|
||
|
||
# model and data loader still to be defined - myModel and myDataLoader are placeholders, not existing classes | ||
my_model = myModel() | ||
my_data_loader = myDataLoader() | ||
|
||
# compute device - use cuda GPU if available | ||
dev = torch.device("cuda" if torch.cuda.is_available() else "cpu") | ||
# the loss function we optimize during training | ||
my_loss_fn = torch.nn.MSELoss() | ||
# the optimizer we use to update the model parameters | ||
my_optimizer = torch.optim.Adam(my_model.parameters(), lr=1e-3) | ||
|
||
# run a single epoch of training | ||
train(my_data_loader, my_model, my_loss_fn, my_optimizer, dev) | ||
|
||
|
||
# %% [markdown] | ||
# **The essential blocks for supervised training a neural network in pytorch are:** | ||
# 1. Sampling of mini-batches of input and target (label) images from the training dataset. | ||
# 2. Forward pass: Compute the prediction of the model given the input data. | ||
# 3. Compute the loss (error) between the prediction and the target images. | ||
# 4. Backward pass: Compute the gradient of the loss with respect to the model parameters using backpropagation. | ||
# 5. Update the model parameters using an optimizer. | ||
# | ||
# Fortunately, pytorch provides many high-level functions that simplify the implementation of all these steps | ||
# (e.g. pytorch's data loader classes, convolutional layers and non-linear activation functions, pytorch's | ||
# autograd functionality for backpropagation of gradients, and optimizers like Adam). | ||
# To train a listmode PET unrolled variational network, the only thing we need to implement ourselves | ||
# is the forward pass of our model, including the data fidelity update blocks, which are not directly | ||
# available in pytorch (a generic sketch of how such a custom operation can be wrapped is shown at the | ||
# end of this notebook). | ||
# | ||
# **The aims of the remaining exercises are:** | ||
# - to learn how to couple SIRF/STIR's PET listmode classes into a pytorch feedforward model | ||
# - to learn how to backpropagate gradients through our custom model | ||
# | ||
# **The following is beyond the scope of these exercises:** | ||
# - training a real-world unrolled variational listmode PET reconstruction network on a | ||
#   large amount of data |
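# %% [markdown]
# As a preview of what "implementing the forward pass ourselves" means in practice, the final cell
# shows the generic pytorch pattern for wrapping an operation that pytorch cannot differentiate
# automatically (for example a SIRF/STIR projection) into a custom torch.autograd.Function.
# To keep this sketch self-contained, the wrapped operator is a plain matrix multiplication - the
# actual coupling to SIRF/STIR listmode objects is the topic of the following notebooks.

# %%
# generic custom autograd Function pattern - illustrated with a plain matrix,
# not with an actual SIRF/STIR operator
import torch


class LinearOperatorLayer(torch.autograd.Function):
    """applies x -> A x in the forward pass and A^T g in the backward pass"""

    @staticmethod
    def forward(ctx, x: torch.Tensor, A: torch.Tensor) -> torch.Tensor:
        # save what is needed to compute the gradient later
        ctx.save_for_backward(A)
        return A @ x

    @staticmethod
    def backward(ctx, grad_output: torch.Tensor):
        (A,) = ctx.saved_tensors
        # gradient w.r.t. x is A^T grad_output; no gradient w.r.t. A itself
        return A.T @ grad_output, None


# quick check that gradients flow through the custom layer
A = torch.randn(3, 2)
x = torch.randn(2, requires_grad=True)
y = LinearOperatorLayer.apply(x, A)
y.sum().backward()
print(x.grad)  # equals A.T @ torch.ones(3)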