
Suggestions for MAPEM and reconstruct_measured_data #140

Merged · 2 commits · Jun 28, 2021
22 changes: 12 additions & 10 deletions notebooks/PET/MAPEM.ipynb
@@ -254,7 +254,7 @@
"# Implement MAP-EM!\n",
"Actually, we will implement MAP-OSEM as that's easy to do.\n",
"\n",
"The lines below (almost) implement MAP-OSEM with a prior which just smooths along the horizontal direction. Either evaluate, and/or extend to 2D, and/or increase the speed of your Python code.\n",
"The lines below (almost) implement MAP-OSEM with a prior which just smooths along the horizontal direction.\n",
Member Author:
I'm not convinced that the current compute_xreg would actually be made much faster by tweaking the implementation - and the use of jit would make it unpredictable. I don't know if it's a good idea to suggest that the participants try to optimise it.

Extending to 2D is a good suggestion, but it's already below in the "what now?" section.

Member:
ok!

"\n",
"Please note: this code was written to be simple. It is not terribly safe, very slow, and doesn't use best programming practices (it uses global variables, has no docstrings, no testing of validate of input, etc)."
]
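(For orientation: with De Pierro's decoupling, the quadratically-penalised MAP-EM update reduces to the positive root of a per-voxel quadratic. A sketch, up to constant factors that depend on how the prior is normalised, with $s_j$ the sensitivity, $x_j^{\mathrm{EM}}$ the usual EM update and $x_j^{\mathrm{reg}}$ the weighted-smoothing term defined below:

$$2\beta x_j^2 + \left(s_j - 2\beta x_j^{\mathrm{reg}}\right)x_j - s_j x_j^{\mathrm{EM}} = 0 \quad\Rightarrow\quad x_j^{\mathrm{new}} = \frac{2\beta x_j^{\mathrm{reg}} - s_j + \sqrt{\left(s_j - 2\beta x_j^{\mathrm{reg}}\right)^2 + 8\beta s_j x_j^{\mathrm{EM}}}}{4\beta}.$$

Taking the positive root guarantees non-negativity of the update.)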
@@ -263,8 +263,10 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"## define weights as an array\n",
"Note that code further below assumes that this has 3 elements\n"
"## Define weights as an array\n",
"Wang&Qi define their algorithm for non-local weights, which means they use an algorithm to define the weighted smoothing of voxel $j$ from voxel $k$, $w_{jk}$. We will just implement a simpler local smoothing, which means weights are defined only for neighbours. This is equivalent of convolving with a kernel.\n",
"\n",
"The code further below assumes that we have a one-dimensional kernel of length 3 - the weights of which will correspond to smoothing in the previous voxel in the $x$ direction, smoothing with the current voxel ($i=j$), and smoothing with the voxel in the next $x$ direction. Let's just define a simple uniform kernel.\n"
]
},
{
@@ -273,7 +275,7 @@
"metadata": {},
"outputs": [],
"source": [
"w=numpy.array([1.,0.,1.])\n",
"w=numpy.array([1.,1.,1.])\n",
"# normalise to have sum 1\n",
"w/=w.sum()"
]
@@ -292,7 +294,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"We will use the \"Just In Time\" compilation feature if `numba` here to speed-up the following function. (There are faster ways to write it, but this might be easier to understand)."
"We will use the \"Just In Time\" (jit) compilation feature of `numba` here to speed-up the following function."
Member Author:
Again, I'm not sure that this actually is that slow. I think jit would have every reason to be able to optimise this well.

Member:
sure

]
},
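(The jitted function itself sits outside the visible part of the diff. A minimal sketch of what a 1D weighted smoothing along $x$ could look like under these assumptions, using the global weights `w` defined above; this is an illustration, not the notebook's actual code:

```python
import numba
import numpy

@numba.jit(nopython=True)
def compute_xreg(image):
    # x_reg[j] = 1/2 * sum_k w[k] * (x[j] + x[k]), summing over the
    # previous, current and next voxel in the x direction
    out = numpy.zeros_like(image)
    nz, ny, nx = image.shape
    for z in range(nz):
        for y in range(ny):
            for x in range(1, nx - 1):
                for dx in range(-1, 2):
                    out[z, y, x] += w[dx + 1] * (image[z, y, x] + image[z, y, x + dx])
    return out / 2
```

The global `w` is deliberately kept, matching the notebook's stated use of global variables; note that numba freezes it at compile time, so redefining `w` requires re-jitting.)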
{
@@ -340,7 +342,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"## define some functions for testing"
"## Define some functions for testing"
]
},
{
@@ -429,8 +431,8 @@
"### experiment with setting beta\n",
"what happens if you increase/decreate beta?\n",
"\n",
"### optimise the `xreg` code\n",
"generalise it to 2D, make it faster\n",
"### generalise the `xreg` code\n",
"generalise it to 2D (or even 3D)\n",
"### check what subsets do\n",
"increase the number of subsets and see if the algorithm breaks down, or will it do a limit cycle like OSEM?\n",
"### compare with another algorithm solving the same penalised reconstruction\n",
@@ -440,9 +442,9 @@
" - [user_osmaposl.py](https://github.com/SyneRBI/SIRF/blob/v3.1.0/examples/Python/PET/user_osmaposl.py) provides a basic implementation of the OSL algorithm.\n",
"- The relaxed version of the Ordered Subsets Separable Paraboloidal Surrogate (OS-SPS with \"precomputed denominator\") from Ahn and Fessler (`OSSPSReconstructor`). This is a convergent subset algorithm for any prior that allows constructing a separable quadratic surrogate (see above). It is illustrated in [ossps_reconstruction.py](https://github.com/SyneRBI/SIRF/blob/v3.1.0/examples/Python/PET/ossps_reconstruction.py). SIRF 3.1 only provides the STIR wrapper, and does not yet include a SIRF implementation yet. Therefore it can currently only be used with `sirf.STIR` penalties.\n",
"\n",
"Of course, given the objective function, it is possible to use other optimisation algorithms, suchas those provided in SciPy and CIL. Remember that you will likely need to use a constrained optimisation algorithm, taking the positivity into account. (Even when doing that, some generic algorithm will fail if there is no background term in the acquisition model, as the likelihood can become ill-defined (i.e. infinity) due to the presence of the logarithm)\n",
"Of course, given the objective function, it is possible to use other optimisation algorithms, such as those provided in SciPy and CIL. Remember that you will likely need to use a constrained optimisation algorithm, taking the positivity into account. (Even when doing that, some generic algorithm will fail if there is no background term in the acquisition model, as the likelihood can become ill-defined (i.e. infinity) due to the presence of the logarithm)\n",
"### use the log-cosh penalty\n",
"You can do that in 2 ways: using the original De Pierro formulation (i.e. a numerical optimisation) or the Wang&Qi version (i.e. use their formulas to find a quadratic surrogate for the penalty). This would lead to an interesting investigation of using quadratic surrogates works well for very edge preserving penalties."
"You can do that in 2 ways: using the original De Pierro formulation (i.e. a numerical optimisation) or the Wang&Qi version (i.e. use their formulas to find a quadratic surrogate for the penalty). This would lead to an interesting investigation of using quadratic surrogates with edge preserving penalties."
]
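(For reference, one common parameterisation of the log-cosh penalty, with scale parameter $\delta$, is

$$\psi_\delta(t) = \delta^2 \log\cosh\left(\frac{t}{\delta}\right),$$

which is approximately quadratic for $|t| \ll \delta$ and nearly linear for $|t| \gg \delta$ - the behaviour that makes it edge-preserving. Check the De Pierro and Wang&Qi papers for the exact form used there.)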
}
],
24 changes: 17 additions & 7 deletions notebooks/PET/reconstruct_measured_data.ipynb
@@ -102,6 +102,8 @@
"outputs": [],
"source": [
"# convert the Siemens Normalisation into STIR format, some keys slightly differ\n",
"# $data_path/20170809_NEMA[_MUMAP]_UCL.n.hdr are the extracted headers from the scanner\n",
"# norm.n.hdr and umap.v.hdr are the output converted files\n",
"!convertSiemensInterfileToSTIR.sh $data_path/20170809_NEMA_UCL.n.hdr norm.n.hdr\n",
"!convertSiemensInterfileToSTIR.sh $data_path/20170809_NEMA_MUMAP_UCL.v.hdr umap.v.hdr\n",
"\n",
@@ -111,9 +113,15 @@
"!sed -i.bak \"s/\\r\\([^\\n]\\)/\\r\\n\\1/g\" umap.v.hdr\n",
"\n",
"# Now add absolute data path to the header file\n",
"sed_cmd = 's#\\(!name of data file:=\\)#\\\\1{}/#'.format(data_path)\n",
"os.system(\"sed -i.bak -e '{}' umap.v.hdr\".format(sed_cmd))\n",
"os.system(\"sed -i.bak -e '{}' norm.n.hdr\".format(sed_cmd))"
"# This command prepends the data path to the data file so that the header in our working folder points to the data\n",
"# You likely won't need to do this for your own data if the data file is in the same directory.\n",
"!sed -i.bak2 -e \"s#\\(!name of data file:=\\)#\\\\1${data_path}/#\" umap.v.hdr\n",
"!sed -i.bak2 -e \"s#\\(!name of data file:=\\)#\\\\1${data_path}/#\" norm.v.hdr\n",
Member Author:
All this sed-ing is probably already confusing to the reader.

I've made the two sed lines more similar to one another, rather than confusingly using Jupyter shell commands for the first one and os.system calls for the second.

Member:
I'm assuming you checked it still worked ok?

Of course, they can be combined into one, but that probably doesn't help the user.

Could add a note that this is needed due to limitations of the current convert* scripts in STIR, if you think that'd help.

Member Author:
Yes, tested.

> this is needed due to limitations of the current convert* scripts

Well, normally you'd run this and still have the file in the same directory - I don't think we'd want the convert* scripts to write absolute paths (a pain for network directories, moving data, etc.) anyway. So I think the current comment communicates that.

"\n",
"# Advanced: if you'd like to have a look at what changed in the umap, uncomment below\n",
"# Lines starting with < are the original Siemens\n",
"# and lines starting with > are the STIR converted\n",
"# !diff $data_path/20170809_NEMA_MUMAP_UCL.v.hdr umap.v.hdr"
Member Author:
This is probably quite interesting to participants who are familiar enough with diff

]
},
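(For readers less comfortable with sed, the same header edit can be done in plain Python. A minimal sketch under the same assumptions; `prepend_data_path` is a hypothetical helper, not part of the notebook, and `data_path` is the variable defined earlier:

```python
def prepend_data_path(header_file, data_path):
    # Prepend the absolute data path to the 'name of data file' key,
    # so the header in the working folder points at the original data file.
    with open(header_file) as f:
        text = f.read()
    text = text.replace('!name of data file:=',
                        '!name of data file:={}/'.format(data_path))
    with open(header_file, 'w') as f:
        f.write(text)

prepend_data_path('umap.v.hdr', data_path)
prepend_data_path('norm.n.hdr', data_path)
```
)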
{
@@ -129,7 +137,7 @@
"metadata": {},
"outputs": [],
"source": [
"!cp $SIRF_PATH/../STIR/scripts/IO/convertSiemensInterfileToSTIR.sh $SIRF_INSTALL_PATH/bin"
"# !cp $SIRF_PATH/../STIR/scripts/IO/convertSiemensInterfileToSTIR.sh $SIRF_INSTALL_PATH/bin"
Member Author:
Commented it out so that users with trigger-happy fingers don't run it unnecessarily (although it will be needed for the course).

Member:
Interesting - is it going to fail on JupyterHub (and Docker) due to the read-only location? Better to write it to the current dir?

Member Author:
Hmmm... I don't have a JupyterHub account to test with. Can you give me one?

Member Author:
But I think that can be a separate issue

]
},
{
@@ -734,9 +742,11 @@
"se.set_attenuation_image(attn_image)\n",
"se.set_randoms(randoms)\n",
"se.set_asm(asm_norm)\n",
"# Unfortunately, the ScatterEstimator currently needs attenuation correction factors,\n",
"# while we computed the attenuation factors above. So let's invert those.\n",
"acf_factors=attn_factors.get_uniform_copy()\n",
"# Unfortunately, the ScatterEstimator currently needs attenuation \"correction\" factors, which\n",
"# is what we need to muliply by to correct for attenuation, while we computed the attenuation\n",
"# factors above, which is how much attenuation is estimated.\n",
"# Fortunately, these are simply the inverse.\n",
"acf_factors = attn_factors.get_uniform_copy()\n",
Member Author:
Not sure if readers will be familiar with the ACF/AF nomenclature?

"acf_factors.fill(1/attn_factors.as_array())\n",
"# I could also have used the following (but it would take more time)\n",
"#asm_attn.normalise(acf_factors)\n",
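(On the ACF/AF nomenclature raised above: the attenuation factor (AF) and attenuation correction factor (ACF) of a line of response $L_i$ are simply reciprocals,

$$\mathrm{AF}_i = e^{-\int_{L_i} \mu(\boldsymbol{x})\,\mathrm{d}\ell} \le 1, \qquad \mathrm{ACF}_i = \frac{1}{\mathrm{AF}_i} \ge 1,$$

where $\mu$ is the linear attenuation coefficient: the AF is the fraction of photon pairs that survive attenuation along the line, and the ACF is the factor the measured data must be multiplied by to compensate.)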