diff --git a/notebooks/Synergistic/MR-Joint-TV.ipynb b/notebooks/Synergistic/MR-Joint-TV.ipynb deleted file mode 100755 index c3370631..00000000 --- a/notebooks/Synergistic/MR-Joint-TV.ipynb +++ /dev/null @@ -1,490 +0,0 @@ -{ - "cells": [ - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "# Demonstration of synergistic MR reconstruction with CCP PET-MR Software\n", - "\n", - "This demonstration shows how to implement a joint-TV approach for reconstructing T1 and T2 data." - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "# Work in progress\n", - "This notebook does not yet do what it's supposed to do. We use the `L2NormSquared` CIL function below, but that\n", - "computes the L2 norm of the whole image, while it needs to be done voxel-wise. Therefore, the code below does *not* compute (joint) TV.\n", - "\n", - "In addition, this notebook uses data that is not publicly available yet." - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "First version: 6th of November 2019\n", - "Authors: Kris Thielemans, Christoph Kolbitsch, Johannes Mayer\n", - "\n", - "CCP PETMR Synergistic Image Reconstruction Framework (SIRF). \n", - "Copyright 2015 - 2017 Rutherford Appleton Laboratory STFC. \n", - "Copyright 2015 - 2019 University College London. \n", - "Copyright 2015 - 2019 Physikalisch-Technische Bundesanstalt.\n", - "\n", - "This is software developed for the Collaborative Computational\n", - "Project in Positron Emission Tomography and Magnetic Resonance imaging\n", - "(http://www.ccppetmr.ac.uk/).\n", - "\n", - "Licensed under the Apache License, Version 2.0 (the \"License\");\n", - "you may not use this file except in compliance with the License.\n", - "You may obtain a copy of the License at\n", - " http://www.apache.org/licenses/LICENSE-2.0\n", - "Unless required by applicable law or agreed to in writing, software\n", - "distributed under the License is distributed on an \"AS IS\" BASIS,\n", - "WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n", - "See the License for the specific language governing permissions and\n", - "limitations under the License."
- ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "#%% make sure figures appear inline and animations work\n", - "%matplotlib notebook\n", - "\n", - "# Setup the working directory for the notebook\n", - "import notebook_setup\n", - "from sirf_exercises import cd_to_working_dir\n", - "cd_to_working_dir('Synergistic', 'MR_joint_tv')" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "__version__ = '0.1.0'\n", - "\n", - "# import engine module\n", - "import sirf.Gadgetron as pMR\n", - "from sirf.Utilities import examples_data_path\n", - "\n", - "# import further modules\n", - "import os, numpy\n", - "\n", - "import matplotlib.pyplot as plt\n", - "import matplotlib.animation as animation" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "# This is just an auxiliary function\n", - "def norm_array( arr ):\n", - " min_a = abs(arr).min()\n", - " max_a = abs(arr).max()\n", - " \n", - " return (arr - min_a)/(max_a - min_a)" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "Find MR data with multiple sequences (note: this data is not yet publicly available)" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "os.chdir('johannes')\n", - "ls" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "fsT1=pMR.AcquisitionData('Simul_exhale_CV_nav_cart_128Cube_FLASH_T1.h5')\n", - "fsT2=pMR.AcquisitionData('Simul_exhale_CV_nav_cart_128Cube_FLASH_T2.h5')\n", - "usT1=pMR.AcquisitionData('Simul_exhale_CV_nav_cart_128Cube_GRAPPA4_REF48_FLASH_T1.h5')\n", - "usT2=pMR.AcquisitionData('Simul_exhale_CV_nav_cart_128Cube_GRAPPA4_REF48_FLASH_T2.h5')" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "### Undersampled Reconstruction\n", - "#### Goals of this notebook:\n", - "\n", - "- implement a joint-TV approach for undersampled MR reconstruction where a fully-sampled image (e.g. T1) is available\n", - "- extend this to a joint-TV synergistic approach when both are undersampled\n" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "# Read in fully sampled T1 data\n", - "fully_acq_data = fsT1\n", - "prep_full_data = pMR.preprocess_acquisition_data(fully_acq_data)\n", - "recon=pMR.FullySampledReconstructor()\n", - "recon.set_input(prep_full_data)\n", - "recon.process()\n", - "fs_T1image=recon.get_output()" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "# display\n", - "fs_image_array = fs_T1image.as_array()\n", - "fs_image_array = norm_array(fs_image_array)\n", - "\n", - "fig = plt.figure()\n", - "plt.set_cmap('gray')\n", - "ax = fig.add_subplot(1,1,1)\n", - "ax.imshow( abs(fs_image_array[64,:,:]), vmin=0, vmax=1)\n", - "ax.set_title('Fully sampled reconstruction')\n", - "ax.axis('off')" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "# LOADING AND PREPROCESSING DATA FOR THE T1 undersampled data\n", - "# check later if this works with the T2 data!
\n", - "acq_data = usT1\n", - "preprocessed_data = pMR.preprocess_acquisition_data(acq_data)\n", - "preprocessed_data.sort()\n" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "#%% RETRIEVE K-SPACE DATA\n", - "k_array = preprocessed_data.as_array()\n", - "print('Size of k-space %dx%dx%d' % k_array.shape)\n", - "num_channels=k_array.shape[1]" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "# optionally add some noise\n", - "scale=numpy.linalg.norm(k_array)/k_array.size*100\n", - "k_array = k_array + scale*(numpy.random.randn(k_array.shape[0],k_array.shape[1],k_array.shape[2]) + 1j*numpy.random.randn(k_array.shape[0],k_array.shape[1],k_array.shape[2]))\n", - "preprocessed_data.fill(k_array)" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "### Coil Sensitivity Map computation" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "csm = pMR.CoilSensitivityData()\n", - "csm.smoothness = 80\n", - "csm.calculate(preprocessed_data)\n", - "csm_array = numpy.squeeze(csm.as_array(0))\n", - "\n", - "csm_array = csm_array.transpose([1,0,2,3])\n", - "\n", - "fig = plt.figure()\n", - "plt.set_cmap('jet')\n", - "for c in range(csm_array.shape[1]):\n", - " ax = fig.add_subplot(2,num_channels/2,c+1)\n", - " ax.imshow(abs(csm_array[64,c,:,:]))\n", - " ax.set_title('Coil '+str(c+1))\n", - " ax.axis('off')\n", - "plt.set_cmap('gray')" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "### AcquisitionModel" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "# NOW WE GENERATE THE ACQUISITION MODEL\n", - "E = pMR.AcquisitionModel(preprocessed_data, fs_T1image)" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "# to supply coil info to the acquisition model we use the dedicated method\n", - "E.set_coil_sensitivity_maps(csm)\n", - "\n", - "# Now we can hop back from k-space into image space in just one line:\n", - "aq_model_image = E.backward( preprocessed_data )" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "aq_model_image_array = norm_array(aq_model_image.as_array())\n", - "\n", - "fig = plt.figure()\n", - "plt.set_cmap('gray')\n", - "ax = fig.add_subplot(1,1,1)\n", - "ax.imshow(abs(aq_model_image_array[64,:,:]))\n", - "ax.set_title('Result Backward Method of E ')\n", - "ax.axis('off')" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "# Image Reconstruction as an Inverse Problem\n", - "## Iterative Parallel Imaging Reconstruction\n", - "\n", - "\n", - "The task of image reconstruction boils down to optimizing the following function:\n", - "$$ \\mathcal{C}(x) = \\frac{1}{2} \\bigl{|} \\bigl{|} E \\, x - y \\bigr{|} \\bigr{|}_2^2 + \\beta R(x, x_{side})\\\\\n", - "\\tilde{x} = \\min_x \\mathcal{C}(x)\n", - "$$\n", - "where $x_{side}$ is an image that acts as side-information, and $R$ is a penalty that encodes structural similarity.\n", - "\n", - "Here we will use a joint-TV type of prior, where\n", - "$$R(x,x_{side}) = \\sqrt{ |\\nabla x|^2 + |\\nabla x_{side}|^2 + \\eta}$$" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## Define the gradient operations using CIL\n", - "We will define an operator that computes the square of 
the l2-norm of the image gradient:\n", - "$$ qp(x) = |\nabla x|^2$$\n", - "\n", - "We will use the `L2NormSquared` function for $|\cdot|^2$\n" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "from ccpi.optimisation.operators import GradientSIRF\n", - "from ccpi.optimisation.functions import L2NormSquared, FunctionOperatorComposition\n", - "\n", - "# currently need to work around a small CIL bug for SIRF ImageData. Please ignore!\n", - "def new_gradient(self,x):\n", - " tmp = self.operator.direct(x)\n", - " tmp = self.function.gradient(tmp)\n", - " grad_func = self.operator.adjoint(tmp)\n", - " return grad_func\n", - "setattr(FunctionOperatorComposition, 'gradient', new_gradient)\n", - "\n", - "# function for the square of the l2 norm\n", - "l2Squared=L2NormSquared()\n", - "# set up operator for the image gradient\n", - "imageGradient=GradientSIRF(fs_T1image)\n", - "\n", - "qp = FunctionOperatorComposition(l2Squared,imageGradient)" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## Objective function\n", - "We will first precompute the term with $x_{side}$" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "xside=fs_T1image\n", - "sq_norm_side=qp(xside)" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "We can write the total objective function in terms of the above operators. Check that the gradient calculation is correct!" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "def obj_fun(y,E,x,beta,sq_norm_side):\n", - " return l2Squared(E.forward(x)-y)/2 + beta*numpy.sqrt(qp(x)+sq_norm_side+.1)\n", - "\n", - "def obj_fun_gradient(y,E,x,beta,sq_norm_side):\n", - " return E.backward(E.forward(x)-y) + beta*qp.gradient(x)/numpy.sqrt(qp(x)+sq_norm_side+.1)/2" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "Set up the data for the reconstruction.
We will initialise by just using an inverse FFT and summing the data from the coils" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "y = preprocessed_data\n", - "init_image = E.backward(y)" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "We now do a very simple gradient descent implementation (with fixed step-size)" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "step_size = .1\n", - "beta=1\n", - "num_iters = 5\n", - "current_image = init_image.clone()\n", - "obj_fun_values = [obj_fun(y,E,current_image,beta,sq_norm_side)]" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "for k in range(num_iters):\n", - " current_image = current_image - step_size * obj_fun_gradient(y,E,current_image,beta,sq_norm_side)\n", - " print('iter '+str(k))\n", - " # maybe save objective function values (but it will slow us down)\n", - " #current_obj_fun_value = obj_fun(y,E,current_image,beta,sq_norm_side)\n", - " #obj_fun_values.append(current_obj_fun_value)" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "plt.figure()\n", - "plt.imshow(abs(current_image.as_array()[64,:,:]))" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "Plot the difference image to see if we actually improved our initial estimate" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "plt.figure()\n", - "plt.imshow(abs(current_image.as_array()[64,:,:] - init_image.as_array()[64,:,:]))" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "Plot the objective function values (if you saved them)" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "plt.figure()\n", - "plt.plot(obj_fun_values)" - ] - } - ], - "metadata": { - "kernelspec": { - "display_name": "Python 2", - "language": "python", - "name": "python2" - }, - "language_info": { - "codemirror_mode": { - "name": "ipython" - }, - "file_extension": ".py", - "mimetype": "text/x-python", - "name": "python", - "nbconvert_exporter": "python" - } - }, - "nbformat": 4, - "nbformat_minor": 2 -}
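The deleted notebook's "Work in progress" note explains that its `L2NormSquared`-based penalty is not the voxel-wise joint TV it aims for. Purely as an illustrative aside (not part of the notebook or of the diff above), below is a minimal numpy sketch of what a voxel-wise smoothed joint-TV penalty and its gradient could look like for real-valued 3D arrays. The helper names (grad, grad_adjoint, joint_tv, joint_tv_gradient), the forward-difference boundary handling, and the test volumes are all assumptions for illustration, not SIRF or CIL API.

import numpy

def grad(x):
    # forward finite differences along each axis of a 3D array (zero at the far edge)
    g = numpy.zeros((3,) + x.shape, dtype=x.dtype)
    g[0, :-1, :, :] = x[1:, :, :] - x[:-1, :, :]
    g[1, :, :-1, :] = x[:, 1:, :] - x[:, :-1, :]
    g[2, :, :, :-1] = x[:, :, 1:] - x[:, :, :-1]
    return g

def grad_adjoint(g):
    # adjoint of grad (a negative divergence), matching the boundary handling above
    x = numpy.zeros(g.shape[1:], dtype=g.dtype)
    x[0, :, :] -= g[0, 0, :, :]
    x[1:-1, :, :] += g[0, :-2, :, :] - g[0, 1:-1, :, :]
    x[-1, :, :] += g[0, -2, :, :]
    x[:, 0, :] -= g[1, :, 0, :]
    x[:, 1:-1, :] += g[1, :, :-2, :] - g[1, :, 1:-1, :]
    x[:, -1, :] += g[1, :, -2, :]
    x[:, :, 0] -= g[2, :, :, 0]
    x[:, :, 1:-1] += g[2, :, :, :-2] - g[2, :, :, 1:-1]
    x[:, :, -1] += g[2, :, :, -2]
    return x

def joint_tv(x, x_side, eta=1e-6):
    # voxel-wise smoothed joint TV: sum over voxels of sqrt(|grad x|^2 + |grad x_side|^2 + eta)
    mag2 = numpy.sum(grad(x)**2 + grad(x_side)**2, axis=0)
    return numpy.sum(numpy.sqrt(mag2 + eta))

def joint_tv_gradient(x, x_side, eta=1e-6):
    # gradient w.r.t. x: grad_adjoint( grad(x) / sqrt(|grad x|^2 + |grad x_side|^2 + eta) )
    gx = grad(x)
    mag2 = numpy.sum(gx**2 + grad(x_side)**2, axis=0)
    return grad_adjoint(gx / numpy.sqrt(mag2 + eta))

# quick check on hypothetical random volumes
x = numpy.random.rand(8, 8, 8)
x_side = numpy.random.rand(8, 8, 8)
print(joint_tv(x, x_side), joint_tv_gradient(x, x_side).shape)

In this sketch the gradient follows from the chain rule applied per voxel, which is the voxel-wise analogue of the scalar obj_fun_gradient expression used in the deleted notebook.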