TV-regularised reconstruction and TV-based denoising #596
base: main
Conversation
class TotalVariationRegularizedReconstruction(DirectReconstruction):
    r"""TV-regularized reconstruction.

    This algorithm solves the problem :math:`\min_x \frac{1}{2}\|Ax - y\|_2^2 + \|L\nabla x\|_1`
Maybe it would be good to distinguish between :math:`\lambda \|\nabla x\|_1` (for a scalar regularization parameter) and :math:`\|\Lambda \nabla x\|_1` for an entire lambda-map. Also, strictly speaking, we would need to introduce notation for different regularization parameters in different directions, but that could become too complicated and cumbersome here.
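The scalar-versus-map distinction raised here can be sketched in plain numpy (a hypothetical helper, not MRpro's API; `finite_differences` and `tv_penalty` are illustrative names) — the same code handles a scalar :math:`\lambda` and a voxel-wise lambda-map via broadcasting:

```python
import numpy as np

def finite_differences(x):
    # Forward differences along every axis, replicating the last sample
    # (Neumann boundary), stacked into one gradient array.
    return np.stack([np.diff(x, axis=ax, append=x.take([-1], axis=ax))
                     for ax in range(x.ndim)])

def tv_penalty(x, reg_weight):
    # reg_weight may be a scalar lambda or an array broadcastable to x
    # (a lambda-map); per-direction weights would need an extra leading axis.
    return np.sum(np.abs(reg_weight * finite_differences(x)))
```

A per-direction weighting, as mentioned in the comment, would amount to a `reg_weight` with a leading axis matching the stacked gradient directions.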
    :math:`\nabla` is the finite difference operator applied to :math:`x`.
    """

    n_iterations: int
Do we want to have n_iterations, or max_iterations together with a tolerance? Our PDHG supports the latter as well.
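A sketch of what the suggested interface could look like (the names `max_iterations` and `tolerance` are illustrative here, not MRpro's actual signature): iterate up to a hard cap, but stop early once the relative update falls below the tolerance.

```python
import numpy as np

def solve(step, x0, max_iterations=100, tolerance=1e-6):
    # Run the fixed-point iteration x <- step(x), stopping either after
    # max_iterations or once the relative change drops below tolerance.
    x = x0
    for k in range(1, max_iterations + 1):
        x_new = step(x)
        change = np.linalg.norm(x_new - x) / max(np.linalg.norm(x_new), 1e-30)
        x = x_new
        if change < tolerance:
            break
    return x, k
```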
    noise
        KNoise used for prewhitening. If None, no prewhitening is performed.
    dcf
        K-space sampling density compensation. If None, set up based on kdata. The dcf is only used to calculate a
Do we want to restrict ourselves to this or also allow the problem min_x 1/2*||W^(1/2)(Ax-y)||_2^2?
        kdata = prewhiten_kspace(kdata, self.noise)

        # Create the acquisition model A = F S if the CSM S is defined, otherwise A = F with the Fourier operator F
        acquisition_operator = self.fourier_op @ self.csm.as_operator() if self.csm is not None else self.fourier_op
potentially W^(1/2) F C?
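For illustration, the composition being discussed can be written in generic numpy (not MRpro's operator classes; `acquisition_operator` is a hypothetical stand-in, and a prewhitening factor :math:`W^{1/2}` as suggested above would simply be one more operator in the chain):

```python
import numpy as np

def acquisition_operator(x, csm=None):
    # A = F S: multiply by coil sensitivity maps S (if given), then apply
    # an orthonormal 2D Fourier transform F to get k-space data per coil.
    coil_images = csm * x if csm is not None else x[np.newaxis]
    return np.fft.fftn(coil_images, axes=(-2, -1), norm='ortho')
```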
    r"""TV denoising.

    This algorithm solves the problem :math:`\min_x \frac{1}{2}\|x - y\|_2^2 + \|L\nabla x\|_1`
    by using the PDHG algorithm. :math:`y` is the target image, :math:`L` is the strength of the regularization and
Maybe better: ":math:`y` is the given noisy image"
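A self-contained 1D sketch of this denoising problem solved with PDHG (illustrative only; MRpro's implementation works on N-D images with its own operator classes). Here the scalar `lam` plays the role of :math:`L`:

```python
import numpy as np

def tv_denoise_1d(y, lam, n_iterations=200):
    # PDHG for min_x 1/2 ||x - y||_2^2 + lam * ||D x||_1 with D the
    # 1D forward-difference operator.
    D = lambda v: np.diff(v)
    Dt = lambda p: np.concatenate(([-p[0]], -np.diff(p), [p[-1]]))  # adjoint of D
    tau = sigma = 0.25                  # tau * sigma * ||D||^2 <= 1 since ||D||^2 <= 4
    x = y.astype(float).copy()
    x_bar = x.copy()
    p = np.zeros(len(y) - 1)
    for _ in range(n_iterations):
        p = np.clip(p + sigma * D(x_bar), -lam, lam)     # dual prox: project onto [-lam, lam]
        x_new = (x - tau * Dt(p) + tau * y) / (1 + tau)  # primal prox of the data term
        x_bar = 2 * x_new - x                            # over-relaxation
        x = x_new
    return x
```

A constant input is a fixed point of these updates, and for an oscillating input the iterates flatten toward the TV-regularized solution.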
# %%
data_weight = 0.5
n_adam_iterations = 4
n_admm_iterations instead of n_adam_iterations
#
# by doing
#
# $x_{k+1} = argmin_x \lambda \| \nabla x \|_1 + \frac{\rho}{2}\|x - z_k + u_k\|_2^2$
\argmin
#
# using PDHG.
#
# Because we have 2D dynamic images we can apply the TV-regularization along x,y and time.
suggestion:
Because we have 2D dynamic images, we can apply the TV-regularization along x,y, and time.
# %% [markdown]
# #### TV-Regularized Reconstruction using ADMM
# In the above example we need to apply the acquisition operator during the PDHG iterations which is computationally
suggestion:
In the above example, PDHG repeatedly applies the acquisition operator and its adjoint during the iterations, which is computationally demanding and hence takes a long time. Another option is to use the Alternating Direction Method of Multipliers (ADMM) [S. Boyd et al, 2011], which solves the general problem
# $u_{k+1} = u_k + x_{k+1} - z_{k+1}$
#
# The first step is TV-based denoising of $x$, the second step is a regularized iterative SENSE update of $z$ and the
# final step updates the helper variable $u$.
updates the dual variable
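The three ADMM updates described here can be sketched as follows (hedged: `tv_prox` and `sense_update` are placeholders for the PDHG denoiser and the regularized iterative SENSE solver; names and signatures are illustrative, not MRpro's API):

```python
import numpy as np

def admm_tv_reconstruction(y, tv_prox, sense_update, rho, n_admm_iterations):
    # tv_prox(v, rho)         ~ argmin_x lam*||grad x||_1 + rho/2*||x - v||_2^2
    # sense_update(y, v, rho) ~ argmin_z 1/2*||A z - y||_2^2 + rho/2*||v - z||_2^2
    z = sense_update(y, np.zeros_like(y), 0.0)  # initial reconstruction (rho = 0)
    u = np.zeros_like(y)
    for _ in range(n_admm_iterations):
        x = tv_prox(z - u, rho)          # step 1: TV-based denoising
        z = sense_update(y, x + u, rho)  # step 2: regularized iterative SENSE
        u = u + x - z                    # step 3: dual variable update
    return x
```

For :math:`A` the identity and a no-op TV prox, :math:`x = y` is a fixed point of these updates, which is a quick sanity check of the plumbing.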
Requires #426