
Image Reconstruction in Dynamic Inverse Problems with Temporal Models

Andreas Hauptmann, Ozan Öktem, and Carola-Bibiane Schönlieb

Contents

Introduction
  Outline of Survey
Spatiotemporal Inverse Problems
  Reconstruction Without Explicit Temporal Models
  Reconstruction Using a Motion Model
  Reconstruction Using a Deformable Template
Motion Models Based on Partial Differential Equations
  Physical Motion Constraints
Deformable Templates Given by Diffeomorphisms
  Flow of Diffeomorphisms and Intensities
  Deformable Templates by Metamorphosis
  Spatiotemporal Reconstruction with LDDMM
Data-Driven Approaches
  Data-Driven Reconstruction Without Temporal Modelling
  Learning Deformation Operators
  Learning Motion Models
Outlook and Conclusions
References

A. Hauptmann
Research Unit of Mathematical Sciences, University of Oulu, Oulu, Finland
Department of Computer Science, University College London, London, UK
e-mail: andreas.hauptmann@oulu.fi
O. Öktem
Department of Information Technology, Division of Scientific Computing, Uppsala University,
Uppsala, Sweden
Department of Mathematics, KTH – Royal Institute of Technology, Stockholm, Sweden
e-mail: ozan@kth.se
C.-B. Schönlieb
Department of Applied Mathematics and Theoretical Physics, University of Cambridge,
Cambridge, UK
e-mail: cbs31@cam.ac.uk

© Springer Nature Switzerland AG 2021


K. Chen et al. (eds.), Handbook of Mathematical Models and Algorithms in Computer
Vision and Imaging, https://doi.org/10.1007/978-3-030-03009-4_83-1

Abstract

This paper surveys variational approaches for image reconstruction in dynamic inverse problems. Emphasis is on variational methods that rely on parametrized temporal models. These are encoded here as diffeomorphic deformations with time-dependent parameters or as motion-constrained reconstructions where the motion model is given by a differential equation. The survey also includes recent developments in integrating deep learning for solving these computationally demanding variational methods. Examples are given for 2D dynamic tomography, but methods apply to general inverse problems.

Keywords

Image registration · Indirect registration · Inverse problems · Regularization · Tomography · Image reconstruction · Deep learning

Introduction

Dynamic inverse problems in imaging refer to the case when the object being
imaged undergoes a temporal evolution during the data acquisition. The resulting
data in such an inverse problem is a time (or quasi-time) series that, due to limited
sampling speed, is typically highly undersampled. Failing to account for the dynamic
nature of the imaged object will lead to severe degradation in image quality, and
hence there is a strong need for advanced modeling of the involved dynamics by
incorporating temporal models in the reconstruction task.
The need for dynamic imaging arises, for instance, in various tomographic
imaging studies in medicine, such as imaging moving organs (respiratory and
cardiac motion) with computed tomography (CT) (Kwong et al. 2015), positron
emission tomography (PET), or magnetic resonance imaging (MRI) (Lustig et al.
2006), and in functional imaging studies by means of dynamic PET (Rahmim et al.
2019) or functional MRI (Glover 2011). In functional imaging studies, the dynamic
information is crucial for the diagnostic value, e.g., to assess the functionality of organs
or to track an injected tracer. Spatiotemporal imaging also arises in life sciences
(Mokso et al. 2014) where it is crucial to understand dynamics and interactions
of organisms. Lastly, applications in material sciences (De Schryver et al. 2018;
Ruhlandt et al. 2017) and process monitoring (Chen et al. 2018) rely on the
capabilities of dynamic image reconstruction.
Mathematically, solving dynamic inverse problems in imaging or spatiotemporal
image reconstruction aims to recover a time-dependent image from a measured time
series. Since the measured time series is typically highly undersampled in each
time instance, the reconstruction task is ill-posed, and additional prior knowledge is
needed to recover a meaningful spatiotemporal image. One such prior assumption
can be made on the type of dynamics in the studied object, which can regularize the
reconstruction task by penalizing unrealistic motion.
There are various approaches in the literature for solving dynamic inverse
problems. In this paper, we focus on variational models for this task which
occupy a relatively large space in this context in the literature. Here, we identify
two subgroups. The first consists of variational approaches that incorporate prior temporal
information in the regularizer, not through a physical motion model but as a smoothness
prior, e.g., as in Niemi et al. (2015) for slowly evolving images. The second consists of
variational approaches that incorporate prior temporal information in the model through
motion constraints, characterized either by an evolutionary PDE for the reconstruction or by
a registration approach with a time-dependent deformation operator that is applied
to a template.
The former, variational methods with a temporal smoothness prior, are applicable
to a wide range of dynamic inverse problems as outlined in Schmitt and Louis (2002)
and Schmitt et al. (2002). Indeed, the absence of an explicit motion constraint makes
these methods more generally applicable. Some imaging-related applications are
Feng et al. (2014), Lustig et al. (2006), and Steeden et al. (2018) for spatiotemporal
compressed sensing in dynamic MRI. Here, the temporal regularity is enforced by
a sparsifying transform (or total variation). Further examples are μCT imaging of
dynamic processes (Bubba et al. 2017; Niemi et al. 2015) and process monitoring
with electrical resistance tomography (Chen et al. 2018).
The latter, variational methods featuring explicit motion models, can be divided
into two categories. The first models the motion as an evolutionary PDE (Burger
et al. 2017, 2018; Dirks 2015; Frerking 2016) using optical flow (Horn and Schunck
1981) or a continuity equation (Burger et al. 2018; Lang et al. 2019a), either
as a constraint or in the form of a penalty term in the variational reconstruction
model. Some prominent applications of this approach are in dynamic photoacoustic
tomography (Lucka et al. 2018) and 3D computed tomography (Djurabekova et al.
2019), just to name a few. The second one parametrizes the dynamics in the form of
a time-dependent diffeomorphic deformation operator (Younes 2019). Examples for
such deformation models are LDDMM (Beg et al. 2005; Miller et al. 2006; Trouvé
and Younes 2015) and metamorphosis (Younes 2019, Chapter 13). Dynamic image
reconstruction is then modeled as an indirect registration task, as in Gris et al. (2020)
with metamorphosis or Chen et al. (2019) and Lang et al. (2019b) using LDDMM.
See also Yang et al. (2013) and Chen and Öktem (2018) for surveys on this topic.
Recently, deep neural network approaches have also entered the picture as a
means to approximate the solution to the computationally demanding variational
approaches discussed above. Examples for these are Schlemper et al. (2017),
Hauptmann et al. (2019), and Kofler et al. (2019) for dynamic image reconstruction
without incorporating physical motion models and Qin et al. (2018), Liu et al.
(2019), and Pouchol et al. (2019) for learned indirect registration approaches.

Outline of Survey

The survey focuses on variational methods for recovering a tomographic image that
undergoes temporal evolution.
Section “Spatiotemporal Inverse Problems” is an overview of various approaches
for reconstruction in such a setting. It starts with a mathematical formalization
of a spatiotemporal inverse problem, which is given as the task of solving a (time-dependent)
operator equation. This is followed by specifying various variational
approaches for reconstruction that differ according to how the temporal model is
specified. Section “Reconstruction Without Explicit Temporal Models” outlines a
setup of a variational approach for reconstruction in a setting when one lacks an
explicit temporal model resulting in (4). Such an approach is however not further
explored in this survey; instead, focus is on a setting where there is an explicit
temporal model and here the survey considers two variants.
In the first (section “Reconstruction Using a Motion Model”), the temporal model
is given as the solution to an operator equation with a time-dependent parameter
as in (7). The resulting variational model for reconstruction can be expressed as
in (13). Section “Motion Models Based on Partial Differential Equations” further
develops this formulation by considering partial differential equation (PDE)-based
formulations.
In the second (section “Reconstruction Using a Deformable Template”), the
temporal model is given by applying a parametrized deformation operator to a
template in which the parameter is time dependent. This results in a temporal
model of the form (15) that can be incorporated into a variational approach for
reconstruction as in (17). This is followed by an outline of two approaches when data
is time discretized. Section “Deformable Templates Given by Diffeomorphisms”
builds on these approaches by considering explicit diffeomorphic deformation
operators given by solving a flow equation.
As already stated, section “Motion Models Based on Partial Differential Equa-
tions” outlines how PDE-based motion models can be used for spatiotemporal
reconstruction through (13). Likewise, section “Deformable Templates Given by
Diffeomorphisms” outlines approaches based on (17) in which the deformation
operator is given by solving an ordinary differential equation (ODE).
Section “Data-Driven Approaches” reviews data-driven approaches that have
been developed for improving upon the computational feasibility of the variational
models in section “Deformable Templates Given by Diffeomorphisms” and “Motion
Models Based on Partial Differential Equations”. In particular, section “Data-Driven
Reconstruction Without Temporal Modelling” outlines data-driven methods that
can be viewed as building on section “Reconstruction Without Explicit Temporal
Models”. Similarly, one can see section “Learning Motion Models” as a data-driven
extension of sections “Reconstruction Using a Motion Model” and “Motion Models
Based on Partial Differential Equations” and section “Learning Deformation Oper-
ators” as a data-driven extension of the methods in sections “Reconstruction Using
a Deformable Template” and “Deformable Templates Given by Diffeomorphisms”.

The survey ends with an outlook and conclusions (section “Outlook and Conclu-
sions”).

Spatiotemporal Inverse Problems

The starting point is to mathematically formalize the notion of a spatiotemporal


inverse problem, which refers to the task of recovering a time-dependent image
from (time-dependent) noisy indirect observations (Schmitt and Louis 2002).
Image: The time-dependent image is formally represented by a function
f : [0, T ] × Ω → Rk where k is the number of image channels (k = 1 for
gray scale images) and Ω ⊂ Rd is the image domain.
We henceforth assume f (t, ·) ∈ X where X (reconstruction space) is some
vector space of Rk -valued functions on Ω ⊂ Rd that, unless otherwise stated, is
a Hilbert space under the L2 -inner product.

Data: Data is represented by a time-dependent function g : [0, T ] × M → Rl


where M is some manifold that is defined by the acquisition geometry and l
is the number of data channels. Likewise, we assume that g(t, ·) ∈ Y where
Y (data space) is some vector space of Rl -valued functions on M that, unless
otherwise stated, is a Hilbert space under the L2 -inner product. Actual measured
data represents a digitization of this function by sampling on [0, T ] × M.

Spatiotemporal inverse problem: This is the task of recovering a temporal image


t → f (t, ·) ∈ X from time series data t → g(t, ·) ∈ Y where
 
    g(t, ·) = A(t, f(t, ·)) + e(t, ·)   on M, for t ∈ [0, T].   (1)

Note here that A(t, ·) : X → Y is a (possibly time-dependent) forward operator.


It models how an image f (t, ·) at time t gives rise to data g(t, ·) at time t in
the absence of noise or measurement errors. The observation noise in data is
accounted for by e(t, ·) ∈ Y , which can be seen as a single random realization of
a Y -valued random variable that models measurement noise.

Remark 1. The formulation in (1) also covers cases when noise in data depends on
the signal strength, like Poisson noise. Simply assume e(t, ·) in (1) is a sample of
the random variable e(t, ·) := g(t, ·) − A(t, f(t, ·)), where g(t, ·) is the Y-valued
random variable generating data.
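
To make the data model in (1), and its time-discretized form (2) below, concrete, the following minimal numpy sketch simulates undersampled time-series data from a moving phantom. The masked 2D Fourier transform used as forward operator is an illustrative stand-in for a generic time-dependent Aj, not a model taken from the cited works.

```python
import numpy as np

rng = np.random.default_rng(0)
n, n_frames = 64, 10          # image size and number of time instances

# Moving phantom: a bright square translating one pixel per frame.
frames = np.zeros((n_frames, n, n))
for j in range(n_frames):
    frames[j, 20 + j:30 + j, 20:30] = 1.0

def forward(fj, mask):
    """Hypothetical forward operator A_j: subsampled 2D Fourier measurements."""
    return np.fft.fft2(fj)[mask]

data = []
for j in range(n_frames):
    # Each frame is heavily undersampled with its own random mask, mimicking an
    # acquisition geometry that changes over time (case (b) below).
    mask = rng.random((n, n)) < 0.1
    g = forward(frames[j], mask)
    g = g + 0.05 * (rng.standard_normal(g.shape) + 1j * rng.standard_normal(g.shape))
    data.append((mask, g))    # g_j = A_j(f_j) + e_j as in (2)
```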

Special cases of (1) arise depending on how the time dependency enters into
the problem. In particular, the following three components can depend on time
independently of each other:

(a) Forward operator: The forward model may depend intrinsically on time.
(b) Data acquisition geometry: The way the forward operator is sampled has a
specific time dependency.
(c) Image: The image to be recovered depends on time.

Next, an important special case is when data in (1) is observed at discrete time
instances 0 ≤ t0 < . . . < tn ≤ T ; see also Schmitt and Louis (2002). Then, (1)
reduces to the task of recovering images fj ∈ X from data gj ∈ Y where

    gj = Aj(fj) + ej   for j = 1, . . . , n.   (2)

In the above, we have made use of the following notation for j = 1, . . . , n:

    gj := g(tj, ·) ∈ Y,   fj := f(tj, ·) ∈ X,
    ej := e(tj, ·) ∈ Y,   Aj := A(tj, ·) : X → Y.   (3)

Reconstruction Without Explicit Temporal Models

The inverse problem in (1) is almost always ill-posed, so solving it requires


regularization regarding both the spatial and temporal variation of the image. A
variational approach for reconstructing the image trajectory t → f (t, ·) that does
not use any explicit temporal model reads as

    
    arg min_{t→f(t,·)∈X}  ∫_0^T [ L(A(t, f(t, ·)), g(t, ·)) + Jθ(t, f(t, ·)) ] dt.   (4)

Here, L : Y × Y → R is the data fidelity term (data-fit), which is ideally chosen as


an appropriate affine transform of the negative log-likelihood of data (Bertero et al.
2008). The term Jθ : X → R is a parametrized regularizer that accounts for a priori
knowledge about the image. It is common to separately regularize the spatial and
temporal components, e.g., by considering
     
    Jθ(t, f(t, ·)) := Sγ(f(t, ·)) + Tτ(∂t f(t, ·))   for θ = (γ, τ).

In the above, S γ : X → R is a spatial regularizer, and Tτ : X → R is a temporal


regularizer. The spatial regularizer is commonly of the form S γ := γ S where γ >
0 and S : X → R is some “energy” functional. There is a well-developed theory for
how to choose the latter in order to promote solutions of an inverse problem with
specific type of regularity, e.g., a suitable choice for H1 (Ω)-regularity is

    S(f) := ∫_Ω |∇f(x)|² dx.   (5)

On the other hand, if the image has edges that need to be preserved, then BV(Ω)-
regularity is more natural and a total variation (TV)-regularizer is a better choice
(Rudin et al. 1992). This regularizer is for f ∈ W 1,1 (Ω) expressible as


    S(f) := ∫_Ω |∇f(x)| dx.   (6)

Other choices may include higher order terms to the total variation functional, like
in total generalized variation; see Benning and Burger (2018) and Scherzer et al.
(2009) for a survey.
The choice of temporal regularizer is much less explored. This functional
accounts for a priori temporal regularity. Similarly to (5) one can here think of a
smoothness prior (Niemi et al. 2015) for slowly evolving images


    T(∂t f) := ∫_Ω |∂t f(x)|² dx,   (7)

or a total variation type of penalty (Feng et al. 2014) for changes that are small or
occur stepwise. The regularizer (7) acts pointwise in time,
and full temporal dependency is obtained by integrating over time in (4).
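
On a pixel grid, the combined regularizer from above can be evaluated with finite differences. The sketch below (unit grid spacing assumed) uses the smooth spatial term (5) and the temporal smoothness term (7); TV-type variants replace the squared norms by absolute values.

```python
import numpy as np

def spatial_h1(f):
    """Discrete version of S(f) = integral of |grad f|^2, cf. (5), for one frame."""
    gx, gy = np.gradient(f)
    return np.sum(gx**2 + gy**2)

def temporal_l2(frames):
    """Discrete version of T(d_t f) = integral of |d_t f|^2, cf. (7), summed over time."""
    return np.sum(np.diff(frames, axis=0)**2)

def regularizer(frames, gamma, tau):
    """J_theta for theta = (gamma, tau): spatial plus temporal smoothness, cf. (4)."""
    return gamma * sum(spatial_h1(f) for f in frames) + tau * temporal_l2(frames)
```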
Methods for solving (1) based on (4) can be used when there is no explicit
temporal model that connects images and data across time. Hence, such methods
are applicable to a wide range of dynamic inverse problems as outlined in
Schmitt and Louis (2002) and Schmitt et al. (2002). More specific imaging-related
applications are Feng et al. (2014), Lustig et al. (2006), and Steeden et al. (2018) for
spatiotemporal compressed sensing in dynamic MRI. Here, the temporal regularity
is enforced by a sparsifying transform (or total variation). Further examples are μCT
imaging of dynamic processes (Bubba et al. 2017; Niemi et al. 2015) and process
monitoring with electrical resistance tomography (Chen et al. 2018).

Remark 2. When data is time discretized, then one also has the option to consider
reconstructing images at each time step independently. An example of this is to
recover the image at tj by using a variational regularization method, i.e., as fj ≈ f̂j
where

    f̂j := arg min_{f∈X} { L(Aj(f), gj) + Sγj(f) }   for j = 1, . . . , n.   (8)
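
A minimal sketch of the frame-by-frame baseline (8), assuming a least-squares data fidelity, the H1 penalty (5), and the masked-Fourier forward operator from the earlier simulation sketch; these are illustrative choices, and the plain gradient descent used here is only meant to expose the structure of the problem.

```python
import numpy as np

def grad_step(f, g, mask, gamma, step):
    """One gradient-descent step for (8) with L = 0.5*||A f - g||_2^2 and S as in (5)."""
    n2 = f.size
    # Data term gradient: A^H (A f - g), where A is the masked 2D FFT
    # (the adjoint of the unnormalized FFT is n2 times the inverse FFT).
    r = np.zeros(f.shape, dtype=complex)
    r[mask] = np.fft.fft2(f)[mask] - g
    grad_data = n2 * np.real(np.fft.ifft2(r))
    # H1 penalty gradient: -2 * gamma * Laplacian(f), periodic finite differences.
    lap = (np.roll(f, 1, 0) + np.roll(f, -1, 0) +
           np.roll(f, 1, 1) + np.roll(f, -1, 1) - 4 * f)
    return f - step * (grad_data - 2 * gamma * lap)

def reconstruct_frame(g, mask, shape, gamma=0.1, step=1e-5, iters=200):
    f = np.zeros(shape)
    for _ in range(iters):
        f = grad_step(f, g, mask, gamma, step)
    return f

# Frame-by-frame baseline: each f_j recovered independently as in (8), e.g.
# recons = [reconstruct_frame(g, mask, (64, 64)) for mask, g in data]
```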

Our emphasis will henceforth be on methods for solving (1) that utilize more
explicit temporal models.

Reconstruction Using a Motion Model

The idea here is to assume that a solution t → f (t, ·) ∈ X to (1) has a time evolution
that can be modeled by a motion model. Restating this assumption mathematically,
we assume there is an operator Ψ : [0, T ] × X → X (motion model) such that
 
    Ψ(t, f(t, ·)) = 0 on Ω   whenever t → f(t, ·) solves (1).   (9)

Hence, (1) can be rephrased as the task of recovering the image trajectory t →
f (t, ·) ∈ X along with its motion model Ψ : [0, T ] × X → X from time series data
t → g(t, ·) ∈ Y where
 
    g(t, ·) = A(t, f(t, ·)) + e(t, ·)   on M
    s.t. Ψ(t, f(t, ·)) = 0   on Ω,   for t ∈ [0, T].   (10)

Parametrized Motion Models


An important special case is when the motion model depends only on time through
a time-dependent parameter, i.e., there is Ψθ : X → X for θ ∈ Θ such that
 
    Ψθt(f(t, ·)) = 0 on Ω   whenever t → f(t, ·) solves (1),   (11)

for some t → θt . Then, (1) can be rephrased as the task to recover t → f (t, ·) ∈ X
along with motion parameter t → θt ∈ Θ from time series data t → g(t, ·) ∈ Y
where
 
    g(t, ·) = A(t, f(t, ·)) + e(t, ·)   on M
    s.t. Ψθt(f(t, ·)) = 0   on Ω,   for t ∈ [0, T].   (12)

The assumption in (11) may act as a regularization since it introduces a model


for how images vary across time. In particular, the inverse problem in (12) is
challenging but still easier to handle than the one in (1). However, solving (12)
will still most likely require regularization. Approaches surveyed in section “Motion
Models Based on Partial Differential Equations” represent different ways for doing
this based on the setting where Ψθ : X → X is given as a differential operator
(involving differentiation in both temporal and spatial variables). The parameter
set Θ is then a vector space of vector fields θ : Ω → Rd with sufficient regularity, so
θt corresponds to a velocity field. With these assumptions, (11) is a differential
equation that constrains the temporal evolution of the solution to (1), and (12)
corresponds to reconstructing the image jointly with its motion model.

General Variational Formulation


It is quite natural to adopt a variational approach for solving (12), cf. Burger et al.
(2018). In fact, many of the state-of-the-art methods are of the form
    arg min_{t→f(t,·)∈X, t→θt∈Θ}  ∫_0^T [ L(A(t, f(t, ·)), g(t, ·)) + Tτ(t, θt) + Sγ(f(t, ·)) ] dt
    s.t. Ψθt(f(t, ·)) = 0   for t ∈ [0, T].
    (13)
Just as for (4), one here needs to choose Sγ : X → R (spatial regularizer) and
Tτ(t, ·) : Θ → R (temporal regularizer), whereas L : Y × Y → R is derived from
a statistical model for the noise in data.
In practice, the hard constrained formulation might be too restrictive, and we
rather aim to solve a penalized version, where the motion constraint is incorporated
as a regularizer; see section “Motion Models Based on Partial Differential Equa-
tions” for further details. Next, for data that is time discretized, the formulation in
(13) reduces to a series of reconstruction and registration problems that are solved
simultaneously. Practically, the optimization is usually performed in an alternating
way, where first a dynamic reconstruction f (t, ·) for t ∈ [0, T ] is obtained, followed
by an update of the motion parameters t → θt . This alternating minimization
procedure is then iterated until a convergence criterion is fulfilled (Burger et al.
2018). Interpreted in a Bayesian setting, this approach compares to smoothing
(Burger et al. 2017).

Reconstruction Using a Deformable Template

The idea here is that when solving (1), the temporal model for t → f (t, ·) ∈ X
is given by deforming a fixed (time-independent) template f0 ∈ X using a time-
dependent parametrization of a deformation operator.

Deformation Operators
To formalize the underlying assumption in reconstruction with a deformable
template, we assume there is a fixed family {W θ }θ∈Θ of mappings (deformation
operators)

Wθ : X → X for θ ∈ Θ. (14)

Next, we assume that

f (t, ·) = W θt (f0 ) on Ω whenever t → f (t, ·) solves (1), (15)

for some t → θt ∈ Θ and f0 ∈ X. Then, (1) can be rephrased as the inverse problem
of recovering f0 ∈ X and t → θt ∈ Θ from time series data g(t, ·) ∈ Y where

 
    g(t, ·) = A(t, Wθt(f0)) + e(t, ·)   on M, for t ∈ [0, T].   (16)

The assumption in (15) may act as a regularization since it introduces a model


for how images vary across time. In particular, the inverse problem in (16) is
challenging but still easier to handle than the one in (1). However, solving (16) will
still most likely require regularization. Variational approaches are suitable for this
purpose, but these typically involve optimization over the parameter set Θ so it is
desirable to ensure Θ has a vector space structure. Section “Deformable Templates
Given by Diffeomorphisms” surveys different approaches for solving (16) based on
the setting where the deformation operator is a diffeomorphic deformation.

Remark 3. Comparing assumption (15) with (9), we see that they are equivalent if
 
    Ψ(t, Wθt(f0)) = 0   holds on Ω for t ∈ [0, T].

Hence, it is sometimes possible to view a motion model as deforming a template


using a deformation operator with time-dependent parametrization. Likewise, a
deformation operator with a time-dependent deformation acting on a template gives
rise to a motion model.

General Variational Formulation


Following Chen et al. (2019), a variational approach for solving (16) can be
formulated as
     
    arg min_{f0∈X, t→θt∈Θ}  ∫_0^T [ L(A(t, Wθt(f0)), g(t, ·)) + Tτ(t, θt) + Sγ(Wθt(f0)) ] dt.   (17)
This is very similar to (4), with L : Y × Y → R denoting the data fidelity term and
the regularization term being a sum of a spatial and a temporal regularizer:

S γ : X → R and Tτ (t, ·) : Θ → R.

The choice of the spatial regularizer S γ is a well-explored topic as outlined in


section “Reconstruction Without Explicit Temporal Models”. In contrast, how to
choose an appropriate temporal regularizer Tτ is less explored and closely linked
to assumptions on t → θt , which governs the time evolution of the image; see, e.g.,
section “Spatiotemporal Reconstruction with LDDMM” for an example.

Time Discretized Data


There are different strategies for solving (16) when data is time discretized. They
differ depending on how the time discretized version is formulated and in particular
on how the initial template f0 is used for building up the images fj by means of a
deformable template model.
Independent trajectory: The time discretized version of (16) is formulated as the
task of recovering f0 ∈ X and θj ∈ Θ from data gj ∈ Y where
 
    gj = Aj(Wθj(f0)) + ej   for j = 1, . . . , n.   (18)

In the above, Wθj : X → X registers the initial template image f0 ∈ X against


a target image fj ∈ X that is indirectly observed through data gj ∈ Y . In
particular, the trajectory t → f (t, ·) is made up of images f (tj , ·) = fj :=
Wθj (f0 ) that are generated independently from each other by deforming the
initial template f0 .
One approach for solving (18) is to compute fj := W θj (f0 ) where

    (f0, θ1, . . . , θn) ∈ arg min_{f0∈X, θ1,...,θn∈Θ}  Σ_{j=1}^{n} [ L(Aj(Wθj(f0)), gj) + Tτ(θj) + Sγ(Wθj(f0)) ].   (19)

Note that the choice of Tτ : Θ → R may introduce a dependency between fj
and fk for j ≠ k even though fj and fk only depend on each other through the
template f0.
Single trajectory: Here the template f0 is only used once to generate the image at
t1 ; the sequence of images at t2 , . . . , tn that make up the trajectory t → f (t, ·)
are generated sequentially. The time discretized version of (16) now reduces to
the task of recovering f0 ∈ X and θj ∈ Θ from data gj ∈ Y where
 
    gj = Aj(Wθj(fj−1)) + ej   for j = 1, . . . , n.   (20)

In contrast to (18), W θj : X → X is used here to deform fj −1 ∈ X (image at


time step tj −1 ) to the target image fj ∈ X that is indirectly observed through
data gj ∈ Y . Note that one can rewrite (20) as
 
    gj = Aj((Wθj ◦ . . . ◦ Wθ1)(f0)) + ej   for j = 1, . . . , n.   (21)

One can attempt to solve (20) by the following intertwined scheme:


    f0 = arg min_{f∈X} { L(A1(f), g1) + J(f) }
    θj ∈ arg min_{θ∈Θ} { L(Aj(Wθ(fj−1)), gj) + Tτ(θ) + Sγ(Wθ(fj−1)) }    for j = 1, . . . , n.
    fj := Wθj(fj−1)
    (22)
Note that recursive time-stepping schemes of the above type can be related to
filtering approaches in a Bayesian setting (see, for instance, Hakkarainen et al.
(2019) for an application to dynamic X-ray tomography).
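
The single-trajectory scheme (22) has the following loop structure. The callables variational_recon, register_indirect, and deform are hypothetical placeholders for a regularized single-frame reconstruction, an indirect registration step, and the application of Wθ, respectively.

```python
def single_trajectory(data, variational_recon, register_indirect, deform):
    """Skeleton of the intertwined scheme (22); all three callables are placeholders.

    data              : list of (A_j, g_j) pairs for j = 1, ..., n
    variational_recon : (A_1, g_1) -> f_0, regularized single-frame reconstruction
    register_indirect : (A_j, g_j, f_prev) -> theta_j, indirect registration against data
    deform            : (theta_j, f_prev) -> W_theta_j(f_prev)
    """
    A1, g1 = data[0]
    f = variational_recon(A1, g1)                  # template f_0, first line of (22)
    thetas, frames = [], []
    for A_j, g_j in data:                          # j = 1, ..., n
        theta_j = register_indirect(A_j, g_j, f)   # second line of (22)
        f = deform(theta_j, f)                     # third line of (22)
        thetas.append(theta_j)
        frames.append(f)
    return frames, thetas
```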

Motion Models Based on Partial Differential Equations

In some applications, it is reasonable to assume that the underlying motion is


governed by a physical phenomenon that can be described by a suitable equation,
like a PDE. Such an equation can then be used to constrain the motion of the
reconstructed target image. Focus here is therefore on joint reconstruction and
motion estimation as formulated in (13). It has been shown that a joint approach
that simultaneously recovers the image sequence and the motion offers a significant
advantage over subsequently and separately applying both methods (Burger et al.
2018).

Physical Motion Constraints

A common model for motion is given by the transport equation



    ∂t f(t, x) + ∇ · (ν(t, x) f(t, x)) = 0,
    f(0, x) = f0(x),
    for x ∈ Ω and t ∈ [0, T].   (23)

Here, f(t, ·) : Ω → R is the spatiotemporal image at time t contained in X, and the
velocity field ν(t, ·) : Ω → Rd models the velocity with which points at x move at
time t. The motion model is then given by the underlying equation in (23), which in
turn yields the motion constraint

    Ψν(f(t, ·)) := ∂t f(t, ·) + ∇ · (ν(t, ·) f(t, ·)) = 0   on Ω ⊂ Rd.   (24)

This equation is generally referred to as the continuity equation, and it assumes mass
preservation. Hence, with this model, mass can only be continuously transformed,
and no mass can be created, destroyed, or teleported.
A more restrictive model can be directly obtained from (24) under the assumption
of incompressible flows or in our context brightness constancy. We give here an
alternative derivation, assuming a constant image intensity f (t, x) along a trajectory
t → x(t) with velocity ẋ(t) = ν(t, x(t)); thus, we obtain

    0 = df/dt = ∂f/∂t + Σ_{i=1}^{d} (∂f/∂x_i)(dx_i/dt) = ∂t f + ∇f · ν.   (25)

This equation is also called the optical flow constraint, and it is a popular approach
to model motion between consecutive images (Horn and Schunck 1981). In the
following, we will base the motion-constrained reconstruction as formulated in (13)
on the continuity equation (24), assuming either mass conservation or the stronger
assumption of brightness constancy in the form of the optical flow model. For both
models, the time-dependent parametrization of the motion model is by velocity
fields, i.e., the motion model is given as Ψθt(f(t, ·)) where θt := ν(t, ·) for some
sufficiently regular velocity field ν : [0, T ] × Ω → Rd (motion field). Henceforth,
we use the notation Ψν := Ψθt .
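
In discrete form, the two motion constraints become finite-difference residuals. The sketch below evaluates both the continuity-equation residual (24) and the optical flow residual (25) for an image sequence and a velocity field on a unit grid (forward differences in time, central differences in space).

```python
import numpy as np

def continuity_residual(frames, v, dt=1.0):
    """Residual of (24): d_t f + div(v f).

    frames : array (T, H, W);  v : array (T, 2, H, W), with v[:, 0] and v[:, 1]
    the two spatial components of the velocity field.
    """
    dfdt = (frames[1:] - frames[:-1]) / dt
    flux = v[:-1] * frames[:-1, None]                     # (v f) on the first T-1 frames
    div = np.gradient(flux[:, 0], axis=1) + np.gradient(flux[:, 1], axis=2)
    return dfdt + div

def optical_flow_residual(frames, v, dt=1.0):
    """Residual of (25): d_t f + grad(f) . v."""
    dfdt = (frames[1:] - frames[:-1]) / dt
    gx = np.gradient(frames[:-1], axis=1)
    gy = np.gradient(frames[:-1], axis=2)
    return dfdt + gx * v[:-1, 0] + gy * v[:-1, 1]
```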

Joint Motion Estimation and Reconstruction


A joint model for motion estimation and tomographic reconstruction can be
formulated, based on the motion-constrained model in (13) and following Burger
et al. (2018) and Dirks (2015), for p ∈ {1, 2} and q, r > 1 as

    arg min_{t→f(t,·)∈X, t→ν(t,·)∈V}  ∫_0^T [ (1/p)‖A(t, f(t, ·)) − g(t, ·)‖_p^p + α|f(t, ·)|_BV^q + β|ν(t, ·)|_BV^r ] dt,
    s.t. Ψν(f(t, ·)) = 0 on Ω ⊂ Rd.
    (26)

Here we use for both image sequence and vector field the respective total variation
as a regularizer, given by the semi-norm in the space of bounded variation.
Consequently, given fixed domain Ω ⊂ Rd , the spaces under consideration here are
X = BV(Ω, R) for the reconstructions and V = BV(Ω, Rd ) for the corresponding
vector field. Other models can be considered, such as an L2-regularizer for the mass
conservation case or other convex regularizers (see Burger et al. 2018; Dirks 2015 for
details). We furthermore assume the forward operator A(t, ·) : X → Y to be a
bounded linear operator to some Hilbert space Y . In particular, it can be time-
dependent (Burger et al. 2017; Frerking 2016).
The motion constraint in (24) is used to describe how image sequence and
vector fields are connected. From the perspective of tomographic reconstructions,
the motion constraint acts as an additional temporal regularizer along the motion
field ν. Instead of imposing the motion constraint exactly as in (26), we can also
relax it and add as a least-squares term to the functional itself, cf. Burger et al.
(2018).
In order to establish existence of minimizers of (26), we need to ensure appro-
priate weak-star compactness of sublevel sets and lower semicontinuity. We restrict
the following results to dimension d = 2. For the minimization,
we consider the space

   
    D := { (f, ν) ∈ L^{min{p,q}}([0, T]; X) × L^r([0, T]; V) : ‖ν‖_∞ ≤ c_v < ∞ and ‖∇ · ν‖_E ≤ c_d },   (27)

where E above denotes a Banach space continuously embedded into
L^m([0, T]; L^k(Ω, Rd)), k > p, and m > q* with q* being the Hölder conjugate
of p. We can now state an existence result for the joint model (26) that is proven in
Burger et al. (2018).

Theorem 1 (Existence of minimizers to (26)). Given a linear forward operator


A(t, ·) : X → Y , p ∈ {1, 2} and dimension d = 2, let 1 < q, r and

    J(f, ν) := ∫_0^T [ (1/p)‖A(t, f(t, ·)) − g(t, ·)‖_p^p + α|f(t, ·)|_BV^q + β|ν(t, ·)|_BV^r ] dt.

Furthermore, let A be such that it does not eliminate constants, i.e., A(t, 1) ≠ 0
for all t ∈ [0, 1]. Then, there exists a minimizer of J(f, ν) in the constraint set

    S := { (f, ν) ∈ D | Ψν(f) = 0 },   where D is given as in (27).

The proof for p = 2 follows from Dirks (2015) and Burger et al. (2018), and the
case for p = 1 follows similar arguments as outlined in Frerking (2016). Existence
for the unconstrained case is proved by incorporating the constraint as a penalty term
in the functional J as shown in Burger et al. (2018). We note here that the choice
q, r > 1 has to be made in the analysis in order to avoid dealing with measures in
time. In the computational use cases considered below, it is however reasonable to
set q = r = 1.

Implementation and Reconstruction


For computational reasons, as well as to allow slight deviations from the motion
model, it is advantageous to consider a penalized version instead of the constrained
formulation (26). Then the joint minimization problem for spatiotemporal recon-
structions can be written as (Burger et al. 2017, 2018)

    arg min_{t→f(t,·)∈X, t→ν(t,·)∈V}  ∫_0^T [ (1/p)‖A(t, f(t, ·)) − g(t, ·)‖_p^p + α|f(t, ·)|_BV + γ‖Ψν(f(t, ·))‖_1 + β|ν(t, ·)|_BV ] dt,   (28)

where convergence to the constrained model is given for γ → ∞. In practice, the


BV-semi-norm is replaced by the discrete isotropic total variation.
As the penalized formulation depends on the motion model Ψν (f ), the energy
to be minimized is nonlinear and therefore non-convex. Additionally, it is non-
differentiable due to the involved L1 -norms, and hence the computation of a solution
to (28) is numerically challenging. Thus, in practice, it is advised to compute
solutions using an intertwined scheme, which means that we split the joint model
into two alternating optimization problems, one for f and the other for ν:
    f^{k+1} = arg min_{t→f(t,·)∈X}  ∫_0^T [ (1/p)‖A(t, f) − g‖_p^p + α|f|_BV + γ‖Ψ_{ν^k}(f)‖_1 ] dt,   (29)

    ν^{k+1} = arg min_{t→ν(t,·)∈V}  ∫_0^T [ ‖Ψν(f^{k+1})‖_1 + (β/γ)|ν|_BV ] dt.   (30)

Most importantly, both subproblems are now linear and convex, but we note that
the solution of the alternating scheme might correspond to a local minimum of the
joint model. In practice, one would initialize f^0 = 0 and ν^0 = 0; the first
minimization problem for f^1 then corresponds to a classic total variation
regularized solution for each image time instance separately, followed by a motion
estimation. Reconstructions from Burger et al. (2017) using this alternating scheme
for experimental μCT data are shown in Fig. 1, and an illustration of the influence of
L^p-norms in the data fidelity is given in Fig. 2.
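
The overall structure of this alternating minimization is sketched below; solve_reconstruction_step and solve_motion_step are placeholders for convex solvers of the two subproblems (29) and (30), e.g., primal-dual schemes as discussed next.

```python
import numpy as np

def joint_recon(data, shape, solve_reconstruction_step, solve_motion_step,
                alpha, beta, gamma, n_outer=10):
    """Alternating minimization for the penalized joint model (28).

    solve_reconstruction_step(data, v, alpha, gamma) -> f   solves (29) for fixed v
    solve_motion_step(f, beta / gamma)               -> v   solves (30) for fixed f
    Both callables are placeholders for convex solvers.
    """
    f = np.zeros(shape)                       # f^0 = 0, shape = (T, H, W)
    v = np.zeros((shape[0], 2) + shape[1:])   # v^0 = 0, one 2D velocity field per frame
    for _ in range(n_outer):
        f = solve_reconstruction_step(data, v, alpha, gamma)   # update (29)
        v = solve_motion_step(f, beta / gamma)                 # update (30)
    return f, v
```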
One can use any optimization algorithm that supports non-differentiable terms
for computing solutions to each of the subproblems (29) and (30). In dimension
d = 2, one could simply use a primal-dual hybrid gradient scheme (Chambolle and
Pock 2011) as outlined in Burger et al. (2017) (see also Aviles-Rivero et al. 2018);
here, both applications use the optical flow constraint (25).

Fig. 1 Reconstructions from Burger et al. (2017) of experimental X-ray data using the approach
in (28) with an optical flow constraint. Top row shows the ground-truth spatiotemporal image, and
bottom row shows data and reconstruction for three sampling schemes
Fig. 2 Reconstruction results for the random sampling with both p = 1, 2 for the fidelity term
in (28) for time points 17 and 25. The left images show that L1 -norm clearly favors sparse
reconstructions with a resulting sparse motion field. In contrast, the L2 -norm shown in the right
favors smoother reconstructions and motion fields

In higher dimensions
where the computational burden of the forward operator becomes more prevalent,
it is advised to consider other schemes with fewer operator evaluations, and we
refer to Lucka et al. (2018) for an application to dynamic 3D photoacoustic
tomography as well as Djurabekova et al. (2019) for dynamic 3D computed
tomography.
To conclude this section, we mention that in other applications, it might be more
suitable to require mass conservation using the continuity equation instead (see, for
instance, Lang et al. 2019a).

Deformable Templates Given by Diffeomorphisms

The reconstruction methods described here aim to solve (16) using deformable
templates (section “Reconstruction Using a Deformable Template”).
Images are elements in the Hilbert space X := L2 (Ω, R) for some fixed bounded
domain Ω ⊂ Rd . The deformation operator is given by acting with diffeomorphisms
on images. Hence, let Diff(Ω) denote the group of diffeomorphisms (with compo-
sition as group law), and (φ, f0 ) → φ.f0 denotes the (group) action of Diff(Ω) on
X. In imaging, there are now two natural options:
Geometric group action: This group action simply moves image intensities with-
out changing their gray scale values, which correspond to shape deformation:

φ.f0 := f0 ◦ φ −1 for φ ∈ Diff(Ω) and f0 ∈ X. (31)



Mass-preserving group action: Image intensities are allowed to change, but one
preserves the total mass:

    φ.f0 := |Dφ⁻¹| (f0 ◦ φ⁻¹)   for φ ∈ Diff(Ω) and f0 ∈ X.   (32)

The second key component is to describe how the deformation operator is


parametrized, which here becomes a parametrization of the (sub)group of
diffeomorphisms that are of interest. Much of the theory is motivated by image
registration, and registation can in this setting be formulated as an optimization over
Θ, so the chosen parametrization is preferably an element in a vector space Θ.

Flow of Diffeomorphisms and Intensities

The starting point in the LDDMM framework for image registration is to


parametrize diffeomorphisms by a suitable Banach/Hilbert space of vector fields
Θ = V ⊂ C^1_0(Ω, Rd). Diffeomorphisms in this parametrized family GV are
obtained by solving a flow equation (33) that is parametrized by a vector field in
Θ = V.
To more precisely define GV , we consider solutions to the flow equation below
for a given velocity field ν : [0, T] × Ω → Rd:

    d/dt φ(t, x) = ν(t, φ(t, x)),
    φ(0, x) = x,
    for x ∈ Ω and t ∈ [0, T].   (33)

Next, let L1 ([0, T ], V ) denote the vector space of mappings ν : [0, T ] × Ω → Rd


(velocity fields) where ν(t, ·) ∈ V . If V is admissible, then (33) has diffeomorphic
solutions at any time 0 ≤ t ≤ 1 whenever ν ∈ L1 ([0, T ], V ) (Younes 2019,
Theorem 7.11 and Arguillere et al. 2015). Then, we can define φ^ν_{s,t} : Rd → Rd as

    φ^ν_{s,t} := φ(t, ·) ◦ φ(s, ·)⁻¹   for s, t ∈ [0, T] and φ(t, ·) solving (33).   (34)

This is a diffeomorphism for any 0 ≤ s, t ≤ 1, so GV defined below becomes a


subgroup of diffeomorphisms parametrized by V :

    GV := { φ : Rd → Rd : φ = φ^ν_{0,T} for some ν ∈ L^1([0, T], V) }.   (35)
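
For the geometric action (31), the deformed trajectory t → φ^ν_{0,t}.f0 can be approximated without forming φ explicitly by advecting the template with small semi-Lagrangian steps. The sketch below does this for a time-independent velocity field, an illustrative simplification of the flow (33).

```python
import numpy as np
from scipy.ndimage import map_coordinates

def advect(f0, v, n_steps=20, dt=1.0 / 20):
    """Approximate t -> phi_{0,t}.f0 under the geometric action (31) for the flow (33).

    f0 : template image (H, W);  v : velocity field (2, H, W), constant in time.
    Each step pulls back the current image along the velocity field (semi-Lagrangian).
    """
    H, W = f0.shape
    xx, yy = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")
    f = f0.copy()
    trajectory = [f0]
    for _ in range(n_steps):
        # f_{k+1}(x) = f_k(x - dt * v(x)), evaluated off-grid by linear interpolation.
        coords = np.stack([xx - dt * v[0], yy - dt * v[1]])
        f = map_coordinates(f, coords, order=1, mode="nearest")
        trajectory.append(f)
    return trajectory
```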

Remark 4. GV is actually a subgroup of Diff^{1,∞}_0(Ω) (Younes 2019, Theorem 7.16),
where Diff^{p,∞}_0(Ω) is the group of p-diffeomorphisms that tend to the identity at
infinity:

    Diff^{p,∞}_0(Ω) := { φ ∈ Diff^{p,∞}(Ω) : φ − Id ∈ C^p_0(Ω, Rd) }.

Next, if V is embedded in C^p_0(Ω, Rd), then GV is a subgroup of Diff^{p,∞}_0(Ω).

Metamorphosis (Younes 2019, Chapter 13) is an extension of LDDMM in the


sense that it considers a flow equation that jointly evolves shape and intensities:

    d/dt I^{ν,ζ}_t(x) = ζ(t, φ^ν_{0,t}(x)),
    I^{ν,ζ}_0(x) = f0(x),
    φ^ν_{0,t} ∈ GV given by (34),
    for x ∈ Ω and t ∈ [0, T].   (36)

One can show that (36) has a unique solution t → (φ^ν_{0,t}, I^{ν,ζ}_t) ∈ GV × X (Trouvé
and Younes 2005; Charon et al. 2018), so the above construction can be used for
deforming images.

Deformable Templates by Metamorphosis

The aim here is to solve (16) with time discretized data. Following Gris et al. (2020),
the idea is to adopt the independent trajectory approach outlined in section “Time
Discretized Data”, so the inverse problem can be reformulated as a sequence of
indirect registration problems (18). Hence, the task reduces to recovering and
matching a template f0 independently to data gj in the sense of joint reconstruction
and registration (indirect registration). One could here consider various approaches
for indirect registration (see Yang et al. 2013; Chen and Öktem 2018 for surveys),
and Gris et al. (2020) uses metamorphosis for this step.
The above considerations lead to the following variational formulation:

 

    (θ1, . . . , θn) ∈ arg min_{θ1,...,θn∈V×X}  Σ_{j=1}^{n} L(Aj(Wθj(f0)), gj) + λ‖ν‖_2^2 + τ‖ζ‖_2^2.   (37)
The template f0 ∈ X and data g1 , . . . , gn ∈ Y are related to each other as in
(2), and the deformation operator W θj : X → X, which is parametrized by θj :=
(ν(tj , ·), ζ (tj , ·)) ∈ V × X, is given by the metamorphosis framework as

    Wθj(f0) := φ^ν_{0,tj}.I^{ν,ζ}_{tj}   where (φ^ν_{0,t}, I^{ν,ζ}_t) ∈ GV × X solves (36).   (38)

The group action in (38) is usually the geometric one in (31).


The approach taken in Gris et al. (2020) is based on solving (37) by a scheme
that intertwines updates of the image with updates of the deformation parameter.
The latter involves solving an indirect registration problem, and a key part of Gris
et al. (2020) is to show that indirect registration by metamorphosis has a solution


(Gris et al. 2020, Proposition 4) (existence) that is continuous w.r.t. data (Gris et al.
2020, Proposition 5) (stability) and convergent (Gris et al. 2020, Proposition 6). As
such, the update of the deformation parameter by metamorphosis-based indirect
registration is a well-defined regularization method in the sense of Grasmair (2010).
Likewise, the updates of the image are by a variational method that defines a well-
defined regularization method, so both updates of the intertwined scheme for solving
(37) are by regularization methods.
Figure 3 shows results of the above method applied to (gated) 2D tomographic
data with a spatiotemporal target image. We see that (37) can be used for
spatiotemporal reconstruction even when (gated) data is highly undersampled and
incomplete. In particular, one can recover the evolution of the target regarding
both shape deformation and photometric changes. The latter manifests itself in the
appearance of the white disc.

Spatiotemporal Reconstruction with LDDMM

The aim here is to solve (16) with time continuous data by a variational formulation
of the type (17). Following Chen et al. (2019), W θt : X → X in (17) (deformation
operator) is given by the LDDMM framework, so it is parametrized by θt :=
ν(t, ·) ∈ V for some ν ∈ L2 ([0, T ], V ) as

    Wθt(f0) := φ^ν_{0,t}.f0   for f0 ∈ X and φ^ν_{0,t} ∈ GV as in (34).   (39)

The variant of (17) considered by Chen et al. (2019) is now


      
    arg min_{f0∈X, t→θt∈L^2([0,T],V)}  ∫_0^T [ L(A(t, Wθt(f0)), g(t, ·)) + τ ∫_0^t ‖θs‖_V^2 ds ] dt + Sγ(f0).   (40)
Note that evaluating Wθt (f0 ) requires solving the ODE in (34), so (40) is an ODE
constrained optimization problem.
The temporal regularizer Tτ (t, ·) : V → R in (17) is given by
    Tτ(t, θ) := τ ∫_0^t ‖θs‖_V^2 ds   for fixed τ > 0,

and Sγ : X → R is the spatial regularizer (typically of Tikhonov type). In


Fig. 4, we show results from Chen et al. (2019) on using (40) for spatiotemporal
reconstruction in tomography.
We conclude by pointing out that the model in (40) can also be stated as a PDE-constrained
optimal control problem, as shown in Chen et al. (2019, Theorem
3.5) (see also Lang et al. 2019b). If θt = ν(t, ·) ∈ V for some velocity field
ν ∈ L2 ([0, T ], V ), then (40) where the deformation operator in (39) is given by
the geometric group action in (31) is equivalent to


    
    min_{f0∈X, t→θt∈V}  ∫_0^T [ L(A(t, f(t, ·)), g(t, ·)) + τ ∫_0^t ‖θs‖_V^2 ds ] dt + Sγ(f0)
    s.t. ∂t f(t, ·) + ⟨∇f(t, ·), θt⟩ = 0,
         f(0, ·) = f0.

In a similar manner, if the group action is the mass-preserving one as in (32), then (40)
becomes

    min_{f0∈X, t→θt∈V}  ∫_0^T [ L(A(t, f(t, ·)), g(t, ·)) + τ ∫_0^t ‖θs‖_V^2 ds ] dt + Sγ(f0)
    s.t. ∂t f(t, ·) + ∇ · (f(t, ·) θt) = 0,
         f(0, ·) = f0.

This establishes the connection between ODE-based approaches discussed in this


section and PDE-based approaches that are discussed in section “Motion Models
Based on Partial Differential Equations”. As such, it illustrates how one can switch
between a reconstruction method based on deformable templates and one based on
a motion model (Remark 3).

Data-Driven Approaches

The variational approaches outlined in section “Reconstruction Without Explicit


Temporal Models” to “Reconstruction Using a Deformable Template” come with
two serious drawbacks that limit their applicability. First, they typically result in
complex non-convex optimization problems that are difficult to solve reasonably
fast in time-critical applications. Second, they rely on a handcrafted family of
parametrized temporal models that need to be computationally feasible yet are
expressive enough to represent relevant temporal evolution.
Data-driven models, and especially those based on deep learning, offer means
to address these drawbacks. Once trained, a deep learning model is typically very
fast to apply. Next, its large model capacity also allows for capturing complicated


Fig. 3 Spatiotemporal reconstruction using metamorphosis. Top row shows the target image we
seek to recover at 5 (out of 20) selected time points in [0, 1]. Second row shows corresponding
gated tomographic data. Third row shows the reconstruction of the target at these time points
obtained from (37). Fourth and fifth rows show the corresponding shape and photometric
trajectories. Bottom row shows reconstructions assuming a stationary target
Fig. 4 Spatiotemporal reconstruction using LDDMM from gated tomographic data of a heart
phantom obtained by solving (40). The heart phantom is a 120×120 pixel image with gray values in
[0, 1] that is taken from Grenander and Miller (2007). Data is gated 2D parallel beam tomography
where the i:th gate has 20 evenly distributed directions in [(i − 1)π/5, π + (i − 1)π/5]. Data (not
shown) also has additive Gaussian white noise corresponding to a noise level of about 14.9dB.
Bottom row compares outcome at an enlarged region of interest (ROI). The ground truth (bottom
leftmost image) is compared against LDDMM reconstruction (second image from left) and TV
reconstruction (third image from left). The latter is computed assuming a stationary spatiotemporal
target, and corresponding full image is also shown (bottom rightmost). It is clear that the cardiac
wall is better resolved using a spatiotemporal reconstruction method. This is essential in CT
imaging in coronary artery disease



Embedding a deep learning model into a spatiotemporal reconstruction method is
however far from straightforward.
Section “Data-Driven Reconstruction Without Temporal Modelling” outlines
how to do this in the context of the reconstruction method in section “Recon-
struction Without Explicit Temporal Models”. The situation is more complicated
for reconstruction methods that use explicit temporal models. These methods
rely on joint optimization of the image and the temporal model, so the latter
needs to be parametrized. Embedding a deep learning-based temporal model is
therefore only feasible if the said parametrization is preserved and most existing
deep learning approaches for temporal modelling of images do not fulfil this
requirement. Section “Learning Deformation Operators” surveys selected deep


learning models for deformations that can be embedded into reconstruction methods
that use a deformable template (section “Reconstruction Using a Deformable
Template”). Finally, section “Learning Motion Models” considers embedding deep
learning-based models into reconstruction methods that use motion models (sec-
tion “Reconstruction Using a Motion Model”).

Data-Driven Reconstruction Without Temporal Modelling

A data-driven approach for solving (1) starts by considering a family {R ϑ }ϑ∈X of


reconstruction operators R ϑ (t, ·) : Y → X. In deep learning, R ϑ is represented
by a deep neural network with network parameters ϑ. The learning amounts to
finding the reconstruction operator R ϑ (t, ·) : Y → X where ϑ ∈ X is learned from
(supervised) training data as

    ϑ ∈ arg min_ϑ L(ϑ)   where   L(ϑ) := Σ_{i=1}^{N} ∫_0^T ℓ_X(Rϑ(t, gi(t, ·)), fi(t, ·)) dt.   (41)
Here, ℓ_X : X × X → R quantifies goodness-of-fit of images, and t → gi(t, ·) ∈ Y
and t → fi(t, ·) ∈ X for i = 1, . . . , N represent noisy data and the corresponding true
spatiotemporal image, i.e.,

    t → (fi(t, ·), gi(t, ·)) ∈ X × Y satisfies (1) for i = 1, . . . , N.   (42)

A key component is to specify the appropriate (deep) neural network architecture


for R ϑ (t, ·) : Y → X. One option is to set R ϑ := Pϑ ◦ A † where A † (t, ·) : Y →
X is a (non-learned) reconstruction operator for solving (1) and Pϑ (t, ·) : X → X is
a data-driven post-processing operator (Hauptmann et al. 2019; Kofler et al. 2019).
Hence, the input to the data-driven part is a spatiotemporal image, and the output
is an “improved” spatiotemporal image. Such a model is trained against supervised
data consisting of pairs of spatiotemporal images, one representing ground truth and
the other the output from said reconstruction method. Alternatively, one can learn
updates in an unrolled iterative scheme that is derived from some fixed-point scheme
for solving (4) as in Schlemper et al. (2017). This includes a handcrafted forward
operator, which in Schlemper et al. (2017) is time independent (Fourier transform),
but its sampling in M depends on time. Such an approach needs supervised training
data of the form (42) for its training.
Common for both approaches is that the neural network architecture does not
make use of any explicit deformation/motion model. As such, they represent data-
driven variants of methods outlined in section “Reconstruction Without Explicit
Temporal Models”.
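
A minimal PyTorch sketch of the post-processing variant Rϑ := Pϑ ◦ A†: a small residual CNN maps an initial (non-learned) frame-by-frame reconstruction, with the time axis treated as input channels, to an improved one. The architecture is illustrative and not the one used in the cited works.

```python
import torch
import torch.nn as nn

class PostProcessor(nn.Module):
    """P_theta: maps an initial spatiotemporal reconstruction (T frames as channels)
    to an improved one; residual structure so the network learns a correction."""

    def __init__(self, n_frames, width=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(n_frames, width, 3, padding=1), nn.ReLU(),
            nn.Conv2d(width, width, 3, padding=1), nn.ReLU(),
            nn.Conv2d(width, n_frames, 3, padding=1),
        )

    def forward(self, x):                 # x: (batch, T, H, W)
        return x + self.net(x)

# Training against supervised pairs (initial reconstruction, ground truth), cf. (41):
# model = PostProcessor(n_frames=10)
# loss = torch.nn.functional.mse_loss(model(recon_batch), truth_batch)
```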

Learning Deformation Operators

The focus here is on using a deep learning model in a reconstruction method


that uses a deformable template (section “Reconstruction Using a Deformable
Template”). One possibility is to use deep learning to model the time evolution
t → θt of the deformation parameter, which is the approach (deep diffeomorphic
normalizing flow) taken in Salman et al. (2018). Another option is to use deep learning
in defining the parametrized deformation operator Wθt : X → X in (15). Our
emphasis is on the latter, which essentially amounts to considering deep learning
approaches for image registration.
There is a rich theory of variational approaches to image registration (see the
books Grenander and Miller 2007, Younes 2019 and surveys in Pennec et al.
2020 and Kushnarev et al. 2020). The common trait with these approaches is that
deformation models are parametrized. A variational problem is then formulated
to select the “best” deformation by regularizing the deformation itself to avoid
overfitting while ensuring adequate match between the template and target images.
Recently, many publications have also considered deep learning for image
registration (see Shen et al. 2017, Litjens et al. 2017, Fu et al. 2019, and Haskins
et al. 2020 for surveys). Most of these learn a deformation operator directly
from pairs of template and target images without accounting for any specific
parametrization, i.e., the learned deformation operator is not parametrized by a
deformation parameter.
A key aspect is that the trained deep neural network is parametrized explicitly
with a (deformation) parameter, and it does not require retraining when the (defor-
mation) parameter changes. Such a data-driven model can be used in reconstruction
with deformable templates as shown in Liu et al. (2019) and Pouchol et al. (2019)
for the case when data is time discretized. Both these approaches start out by stating
a variational model of the type (17), which is then solved using an intertwined
approach of the type (22). Here one considers diffeomorphic deformations as
defined by the LDDMM framework, i.e., deformation operators are parametrized as
in (47). A key part is the usage of deep learning-based deformation operators that are
of the same form, i.e., the trained deep neural network retains the parametrization in
(39). In the following, our emphasis is on deep learning models for registration that
adhere to a specific predefined parametrization. Stated more precisely, one seeks to
use a data-driven model for this deformation operator that belongs to a predefined
parametrized family {W θ }θ∈Θ .
One way to achieve the above is by learning a mapping Λϑ : X × X → Θ that
predicts the deformation parameter necessary for deforming a template to a target
as

θ := Λϑ (f0 , I ) ⇒ W θ (f0 ) ≈ I for f0 , I ∈ X.

Note here that ϑ ∈ X is the deep neural network parameter that is set during training.
It is not the same as the deformation parameter θ ∈ Θ, which parametrizes the
deformation operator Wθ : X → X and which is a control variable in the variational


approaches for reconstruction. In some sense, Λϑ can be seen as a generative model
for the deformation parameter.
The mapping Λϑ : X × X → Θ can be trained in an unsupervised setting given
access to sufficient amount of training data of the form

    (I^i, f_0^i) ∈ X × X   for i = 1, . . . , N   (43)

by computing ϑ ∈ X as


    ϑ ∈ arg min_ϑ L(ϑ)   where   L(ϑ) := Σ_{i=1}^{N} ℓ_X(W_{Λϑ(f_0^i, I^i)}(f_0^i), I^i).   (44)

Here, ℓ_X : X × X → R is a distance notion between images, e.g., the squared
L2-norm if X = L2(Ω). One can also add an additional regularization term to (44)
that measures registration accuracy in the image space X.
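
A VoxelMorph-style instance of the unsupervised training in (44) can be sketched in PyTorch as follows: a CNN Λϑ predicts a dense displacement field θ from a (template, target) pair, Wθ is realized by grid resampling, and the loss is the image mismatch plus a smoothness penalty on θ. The architecture and loss weights are illustrative choices.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DisplacementNet(nn.Module):
    """Lambda_theta: predicts a dense displacement field from (template, target)."""

    def __init__(self, width=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, width, 3, padding=1), nn.ReLU(),
            nn.Conv2d(width, width, 3, padding=1), nn.ReLU(),
            nn.Conv2d(width, 2, 3, padding=1),       # two displacement components
        )

    def forward(self, template, target):             # each: (batch, 1, H, W)
        return self.net(torch.cat([template, target], dim=1))

def warp(template, disp):
    """W_theta: resample the template along the predicted displacement field."""
    b, _, h, w = template.shape
    ys, xs = torch.meshgrid(torch.linspace(-1, 1, h), torch.linspace(-1, 1, w),
                            indexing="ij")
    base = torch.stack([xs, ys], dim=-1).unsqueeze(0).expand(b, -1, -1, -1)
    grid = base + disp.permute(0, 2, 3, 1)            # displacements in [-1, 1] units
    return F.grid_sample(template, grid, align_corners=True)

def unsupervised_loss(model, template, target, tau=0.1):
    """Image mismatch plus displacement smoothness, cf. the loss in (44)."""
    disp = model(template, target)
    warped = warp(template, disp)
    smoothness = (disp[..., 1:, :] - disp[..., :-1, :]).pow(2).mean() + \
                 (disp[..., 1:] - disp[..., :-1]).pow(2).mean()
    return F.mse_loss(warped, target) + tau * smoothness
```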

Remark 5. One can also train Λϑ : X × X → Θ in a supervised setting assuming
access to training data of the form

    (I^i, f_0^i, θ^i) ∈ X × X × Θ   where I^i ≈ W_{θ^i}(f_0^i) for i = 1, . . . , N.   (45)

The network parameter ϑ ∈ X is trained against the supervised data in (45) by


computing ϑ ∈ X as


    ϑ ∈ arg min_ϑ L(ϑ)   where   L(ϑ) := Σ_{i=1}^{N} ℓ_Θ(Λϑ(f_0^i, I^i), θ^i).   (46)

Here, ℓ_Θ : Θ × Θ → R is a distance notion between deformation parameters, so Θ
must have a metric space structure. Hence, the registration accuracy is measured in
the deformation parameter set Θ.

An example of this approach is Quicksilver (Yang et al. 2017), which considers


deformation operators {Wθ }θ given by the LDDMM framework. Then, θ := ν(1, ·)
for some velocity field ν : [0, 1] × Ω → Rd and

    Wθ(f0) := φ^ν_{0,1}.f0   with φ^ν_{0,1} ∈ GV as in (34),   (47)

and the group action is typically geometric (31) or mass-preserving (32). It is known
that the vector field θ ∈ Θ that registers a template to a target can be computed by
geodesic shooting (see Miller et al. 2006 and Younes 2019, Section 10.6.4). The
registration problem, which is to find θ , thus reduces to finding the initial momenta.
Quicksilver (Yang et al. 2017) trains a deep neural network in the supervised setting (as in (46)) to learn these initial momenta. The network architecture for Λ_ϑ : X × X → Θ is of convolutional neural network (CNN) type with an encoder and a decoder. The encoder acts as a feature extractor for both the template and target
consists of three independent convolutional networks that predict the momenta for
the three dimensions. To recover from prediction errors, correction networks with
the same architecture are used for predicting the prediction error. Training such a
deep neural network model with entire images is challenging, so Quicksilver only
uses patches of images as input. In this way, relatively few images and ground-truth
momenta result in a large amount of training data. A drawback is that the patches of the target, the template, and the deformation are extracted at the same spatial grid locations, so the deformed patch in the target is assumed to lie (predominantly) at the same location as the corresponding patch in the template image. This implicitly assumes that the deformation is relatively small.
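For concreteness, a heavily simplified patch-based predictor in the spirit of this description might look as follows in PyTorch (reusing the torch imports from the earlier sketch); the layer counts and channel sizes are illustrative and the correction networks are omitted, so this is not the actual Quicksilver architecture. Training would then follow the supervised loss (46), with reference momenta precomputed by numerical LDDMM registration.

class PatchMomentumNet(nn.Module):
    """Simplified Quicksilver-style predictor: image patches -> initial-momentum patch."""
    def __init__(self, channels=32):
        super().__init__()
        # shared encoder, applied separately to the template patch and the target patch
        self.encoder = nn.Sequential(
            nn.Conv3d(1, channels, 3, padding=1), nn.PReLU(),
            nn.Conv3d(channels, channels, 3, padding=1), nn.PReLU(),
        )
        # one decoder head per spatial dimension of the momentum field
        self.decoders = nn.ModuleList([
            nn.Sequential(
                nn.Conv3d(2 * channels, channels, 3, padding=1), nn.PReLU(),
                nn.Conv3d(channels, 1, 3, padding=1),
            )
            for _ in range(3)
        ])

    def forward(self, template_patch, target_patch):
        # concatenate the encoded template and target features, then predict the
        # three momentum components with independent decoder heads
        feats = torch.cat([self.encoder(template_patch), self.encoder(target_patch)], dim=1)
        return torch.cat([dec(feats) for dec in self.decoders], dim=1)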
Another similar approach is VoxelMorph (Balakrishnan et al. 2019), where training is performed in an unsupervised manner (as in (44)) with only pairs of template and target (morphed) images. The output is the displacement field θ ∈ Θ necessary to register a template against a target, e.g., using an LDDMM-based deformation operator. VoxelMorph uses a CNN architecture similar to U-Net for Λ_ϑ : X × X → Θ, consisting of encoder and decoder sections with skip connections. The unsupervised loss (44) can be complemented by an auxiliary loss that leverages anatomical segmentations at training time. The trained network can also provide the registered image, i.e., it offers a deep learning-based registration operator. A further development of VoxelMorph is FAIM (Kuang and Schmah 2018), which has fewer trainable parameters (i.e., the dimension of ϑ in FAIM is smaller than in VoxelMorph). The authors also claim that FAIM achieves higher registration accuracy than VoxelMorph, e.g., it produces deformations with many fewer "foldings," i.e., regions of non-invertibility where the deformation map folds over itself.
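A sketch of such an unsupervised registration loss, reusing the hypothetical warp operator and displacement-field predictor from the earlier sketch and adding a generic smoothness penalty on the displacement (the weight lambda_reg and the specific penalty are illustrative choices, not the exact VoxelMorph terms):

def smoothness(displacement):
    """Penalize spatial gradients of the displacement field (discourages foldings)."""
    dx = displacement[:, :, :, 1:] - displacement[:, :, :, :-1]
    dy = displacement[:, :, 1:, :] - displacement[:, :, :-1, :]
    return (dx ** 2).mean() + (dy ** 2).mean()

lambda_reg = 0.01  # illustrative regularization weight
for template, target in training_pairs:
    theta = model(template, target)                     # predicted displacement field
    loss = (F.mse_loss(warp(template, theta), target)   # image similarity term
            + lambda_reg * smoothness(theta))           # regularity of the displacement
    opt.zero_grad(); loss.backward(); opt.step()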
One may also learn the spatially adaptive regularizer that is used for defining the
deformation operator (Niethammer et al. 2019). See also Mussabayeva et al. (2019)
for a closely related approach where one learns the regularizer in the LDDMM
framework, which is the Riemannian metric for the group GV in (35).
The above approaches all avoid learning the entire deformation; instead, they
learn a deformation that belongs to a specific class of deformation models. This
makes it possible to embed the learned deformation model in a variational model
for image reconstruction.

Learning Motion Models

The methods mentioned here deal with using deep learning in reconstruction with a motion model (section "Reconstruction Using a Motion Model"). Many of the existing motion models are, however, already sufficient for capturing the desired motion, so the main motivation for introducing deep learning is to speed up these methods.

In particular, this means one still aims to solve the penalized variational formulation (28) with an explicit temporal model, such as the continuity equation (24). The network essentially learns to produce the motion field ν(t, ·) from the time series f(t, ·), and it can then be used to estimate the motion field instead of solving the corresponding subproblem (30) in the alternating minimization. For instance, one could use neural networks designed to compute optical flow (Dosovitskiy et al. 2015; Ilg et al. 2017).
Another possibility is to account for the explicit structure of the PDE by using networks that aim to find a PDE representation for given data (Long et al. 2019). Alternatively, one may build network architectures based on the discretization of the underlying equations, as motivated in Arridge and Hauptmann (2020). Finally, similar to the work on joint motion estimation and reconstruction, one can learn a motion map that is used in a learned reconstruction scheme (Qin et al. 2018).
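To sketch how a learned motion estimator could slot into the alternating minimization, assume a pretrained optical-flow-style network flow_net, a reconstruction update recon_step implementing the image subproblem of (28), current frame estimates frames, and measured data data; all of these names are hypothetical placeholders.

# Alternating scheme in which the motion subproblem (30) is replaced by a network call.
num_outer_iters = 10  # illustrative
for outer_iter in range(num_outer_iters):
    # Motion step: estimate nu(t_k, .) between consecutive reconstructed frames.
    velocities = [flow_net(frames[k], frames[k + 1]) for k in range(len(frames) - 1)]
    # Reconstruction step: update the frames with the motion model held fixed,
    # e.g., a few iterations of a variational solver penalizing the continuity equation.
    frames = recon_step(data, frames, velocities)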

Outlook and Conclusions

The variational approaches outlined in sections “Reconstruction Using a Motion


Model” and “Reconstruction Using a Deformable Template”, and then in more
detail in sections “Deformable Templates Given by Diffeomorphisms” and “Motion
Models Based on Partial Differential Equations”, rely on explicit parametrized
temporal models. These temporal models are given either by deformation operators
with time-dependent parameters (section “Reconstruction Using a Deformable
Template”) or through a motion model (section “Reconstruction Using a Motion
Model”). Powerful techniques from analysis and differential geometry can be used
to characterize regularizing properties of these reconstruction methods. They also
provide state-of-the-art results when applied to challenging tomographic data that
is highly noisy and/or incomplete. The methods are, however, difficult to use due to the computational burden and the sheer number of (regularization) parameters that need to be chosen.
Data-driven temporal modelling offers a way to address the computational
burden inherent in the variational approaches. Here, it is clear that deep learning
needs to be embedded in such a way that the resulting learned temporal model
is parametrized. VoxelMorph (Balakrishnan et al. 2019) and Quicksilver (Yang
et al. 2017) are examples of how this can be done in the context of diffeomorphic
deformation, and Liu et al. (2019) and Pouchol et al. (2019) show how such
learned models can be used in reconstruction. In the near future, we expect more
development along these lines. Finding appropriate training data, however, remains a key difficulty in data-driven approaches, since in most dynamic imaging scenarios there is no underlying ground-truth data available. Thus, one will most likely need to resort to simulations for training these models. Possibly, one could use reconstructions generated by variational approaches from experimental data as gold-standard references for a training procedure. In conclusion, there is a
great need for dynamic digital phantoms that include both natural image and motion
features that can serve as input for simulators.

A final challenge that applies to all reconstruction methods in dynamic inverse


problems is to formulate relevant validation and comparison protocols.

Note
1. The temporal model is defined by considering a time-dependent deformation parameter. The deep neural network representing the deformation operator also has parameters, but these are not the same as the deformation parameter. In particular, the network parameters are set during training. In contrast, the deformation parameter varies with time.

References
Arguillere, S., Trélat, E., Trouvé, A., Younes, L.: Shape deformation analysis from the optimal
control viewpoint. Journal de Mathématiques Pures et Appliqués 104(1), 139–178 (2015)
Arridge, S., Hauptmann, A.: Networks for nonlinear diffusion problems in imaging. J. Math. Imag.
Vis. 62(3), 471–487 (2020). https://doi.org/10.1007/s10851-019-00901-3
Aviles-Rivero, A.I., Williams, G., Graves, M.J., Schönlieb, C.B.: Compressed sensing plus motion
(CS+M): a new perspective for improving undersampled mr image reconstruction. ArXiv
preprint 1810.10828 (2018)
Balakrishnan, G., Zhao, A., Sabuncu, M.R., Guttag, J., Dalca, A.V.: VoxelMorph: a learning
framework for deformable medical image registration. IEEE Trans. Med. Imag. 38(8), 1788–
1800 (2019)
Beg, F.M., Miller, M.I., Trouvé, A., Younes, L.: Computing large deformation metric mappings via
geodesic flow of diffeomorphisms. Int. J. Comput. Vis. 61(2), 139–157 (2005)
Benning, M., Burger, M.: Modern regularization methods for inverse problems. Acta Numer. 27,
1–111 (2018)
Bertero, M., Lantéri, H., Zanni, L.: Iterative image reconstruction: a point of view. In: Censor,
Y., Jiang, M., Louis, A.K. (eds.) Interdisciplinary Workshop on Mathematical Methods in
Biomedical Imaging and Intensity-Modulated Radiation (IMRT), Pisa, pp. 37–63 (2008)
Bubba, T.A., März, M., Purisha, Z., Lassas, M., Siltanen, S.: Shearlet-based regularization in sparse
dynamic tomography. In: Wavelets and Sparsity XVII, vol. 10394, p. 103940Y. International
Society for Optics and Photonics, Bellinghams (2017)
Burger, M., Dirks, H., Frerking, L., Hauptmann, A., Helin, T., Siltanen, S.: A variational
reconstruction method for undersampled dynamic x-ray tomography based on physical motion
models. Inverse Probl. 33(12), 124008 (2017)
Burger, M., Dirks, H., Schönlieb, C.B.: A variational model for joint motion estimation and image
reconstruction. SIAM J. Imag. Sci. 11(1), 94–128 (2018)
Chambolle, A., Pock, T.: A first-order primal-dual algorithm for convex problems with applications
to imaging. J. Math. Imag. Vis. 40(1), 120–145 (2011)
Charon, N., Charlier, B., Trouvé, A.: Metamorphoses of functional shapes in Sobolev spaces.
Found. Comput. Math. 18(6), 1535–1596 (2018). https://doi.org/10.1007/s10208-018-9374-3
Chen, C., Öktem, O.: Indirect image registration with large diffeomorphic deformations. SIAM J.
Imag. Sci. 11(1), 575–617 (2018)
Chen, B., Abascal, J., Soleimani, M.: Extended joint sparsity reconstruction for spatial and
temporal ERT imaging. Sensors 18(11), 4014 (2018)
Chen, C., Gris, B., Öktem, O.: A new variational model for joint image reconstruction and motion
estimation in spatiotemporal imaging. SIAM J. Imag. Sci. 12(4), 1686–1719 (2019)
De Schryver, T., Dierick, M., Heyndrickx, M., Van Stappen, J., Boone, M.A., Van Hoorebeke, L.,
Boone, M.N.: Motion compensated micro-CT reconstruction for in-situ analysis of dynamic
processes. Sci. Rep. 8, 7655 (10pp) (2018)

Dirks, H.: Variational methods for joint motion estimation and image reconstruction. PhD thesis,
Institute for Computational and Applied Mathematics, University of Münster (2015)
Djurabekova, N., Goldberg, A., Hauptmann, A., Hawkes, D., Long, G., Lucka, F., Betcke,
M.: Application of proximal alternating linearized minimization (PALM) and inertial PALM
to dynamic 3D CT. In: 15th International Meeting on Fully Three-Dimensional Image
Reconstruction in Radiology and Nuclear Medicine, vol. 11072, p. 1107208. International
Society for Optics and Photonics, Bellingham (2019)
Dosovitskiy, A., Fischer, P., Ilg, E., Hausser, P., Hazirbas, C., Golkov, V., Van Der Smagt, P.,
Cremers, D., Brox, T.: Flownet: learning optical flow with convolutional networks. In: IEEE
International Conference on Computer Vision, pp. 2758–2766 (2015)
Feng, L., Grimm, R., Block, K.T., Chandarana, H., Kim, S., Xu, J., Axel, L., Sodickson, D.K.,
Otazo, R.: Golden-angle radial sparse parallel MRI: combination of compressed sensing,
parallel imaging, and golden-angle radial sampling for fast and flexible dynamic volumetric
MRI. Magn. Reson. Med. 72(3), 707–717 (2014)
Frerking, L.: Variational methods for direct and indirect tracking in dynamic imaging. PhD thesis,
Institute for Computational and Applied Mathematics, University of Münster (2016)
Fu, Y., Lei, Y., Wang, T., Curran, W.J., Liu, T., Yang, X.: Deep learning in medical image
registration: a review. ArXiv preprint 1912.12318 (2019)
Glover, G.H.: Overview of functional magnetic resonance imaging. Neurosurg. Clin. 22(2), 133–
139 (2011)
Grasmair, M.: Generalized Bregman distances and convergence rates for non-convex regularization
methods. Inverse Probl. 26(11), 115014 (2010)
Grenander, U., Miller, M.: Pattern Theory. From Representation to Inference. Oxford University
Press, Oxford (2007)
Gris, B., Chen, C., Öktem, O.: Image reconstruction through metamorphosis. Inverse Probl. 36(2),
025001 (27pp) (2020)
Hakkarainen, J., Purisha, Z., Solonen, A., Siltanen, S.: Undersampled dynamic x-ray tomography
with dimension reduction kalman filter. IEEE Trans. Comput. Imag. 5(3), 492–501 (2019).
https://doi.org/10.1109/TCI.2019.2896527
Haskins, G., Kruger, U., Yan, P.: Deep learning in medical image registration: a survey. Mach. Vis.
Appl. 31(8) (2020)
Hauptmann, A., Arridge, S., Lucka, F., Muthurangu, V., Steeden, S.A.: Real-time cardiovascular
mr with spatio-temporal artifact suppression using deep learning–proof of concept in congenital
heart disease. Magn. Reson. Med. 81(2), 1143–1156 (2019)
Horn, B.K.P., Schunck, B.G.: Determining optical flow. Artif. Intell. 17(1–3), 185–203 (1981)
Ilg, E., Mayer, N., Saikia, T., Keuper, M., Dosovitskiy, A., Brox, T.: Flownet 2.0: evolution of
optical flow estimation with deep networks. In: IEEE Conference on Computer Vision and
Pattern Recognition, pp. 2462–2470 (2017)
Kofler, A., Dewey, M., Schaeffter, T., Wald, C., Kolbitsch, C.: Spatio-temporal deep learning-based
undersampling artefact reduction for 2D radial cine MRI with limited training data. IEEE Trans.
Med. Imag. 39(3), 703–717 (2019). https://doi.org/10.1109/TMI.2019.2930318
Kuang, D., Schmah, T.: FAIM – a ConvNet method for unsupervised 3D medical image
registration. ArXiv preprint 1811.09243 (2018)
Kushnarev, S., Qiu, A., Younes, L. (eds.): Mathematics of Shapes and Applications. World
Scientific, Singapore (2020)
Kwong, Y., Mel, A.O., Wheeler, G., Troupis, J.M.: Four-dimensional computed tomography
(4DCT): a review of the current status and applications. J. Med. Imag. Radiat. Oncol. 59(5),
545–554 (2015)
Lang, L.F., Dutta, N., Scarpa, E., Sanson, B., Schönlieb, C.B., Étienne, J.: Joint motion estimation
and source identification using convective regularisation with an application to the analysis of
laser nanoablations. bioRxiv 686261 (2019a)
Lang, L.F., Neumayer, S., Öktem, O., Schönlieb, C.B.: Template-based image reconstruction from
sparse tomographic data. Appl. Math. Optim. (2019b). https://doi.org/10.1007/s00245-019-
09573-2

Litjens, G., Kooi, T., Bejnordi, B.E., Setio, A.A.A., Ciompi, F., Ghafoorian, M., van der Laak,
J.A.W.M., van Ginneken, B., Sánchez, C.I.: A survey on deep learning in medical image
analysis. Med. Image Anal. 42, 60–88 (2017)
Liu, J., Aviles-Rivero, A.I., Ji, H., Schönlieb, C.B.: Rethinking medical image reconstruction via
shape prior, going deeper and faster: deep joint indirect registration and reconstruction. To
appear in Medical Image Analysis. ArXiv preprint 1912.07648 (2019)
Long, Z., Lu, Y., Dong, B.: Pde-net 2.0: learning pdes from data with a numeric-symbolic hybrid
deep network. J. Comput. Phys. 399, 108925 (2019)
Lucka, F., Huynh, N., Betcke, M., Zhang, E., Beard, P., Cox, B., Arridge, S.: Enhancing
compressed sensing 4D photoacoustic tomography by simultaneous motion estimation. SIAM
J. Imag. Sci. 11(4), 2224–2253 (2018)
Lustig, M., Santos, J.M., Donoho, D.L., Pauly, J.M.: kt SPARSE: high frame rate dynamic MRI
exploiting spatio-temporal sparsity. In: 13th Annual Meeting of ISMRM, Seattle, vol. 2420
(2006)
Miller, M.I., Trouvé, A., Younes, L.: Geodesic shooting for computational anatomy. J. Math. Imag.
Vis. 24(2), 209–228 (2006)
Mokso, R., Schwyn, D.A., Walker, S.M., Doube, M., Wicklein, M., Müller, T., Stampanoni, M.,
Taylor, G.K., Krapp, H.G.: Four-dimensional in vivo x-ray microscopy with projection-guided
gating. Sci. Rep. 5, 8727 (6pp) (2014)
Mussabayeva, A., Pisov, M., Kurmukov, A., Kroshnin, A., Denisova, Y., Shen, L., Cong, S., Wang,
L., Gutman, B.: Diffeomorphic metric learning and template optimization for registration-based
predictive models. In: Zhu, D., Yan, J., Huang, H., Shen, L., Thompson, P.M., Westin, C.F.,
Pennec, X., Joshi, S., Nielsen, M., Fletcher, T., Durrleman, S., Sommer, S. (eds.) Multimodal
Brain Image Analysis and Mathematical Foundations of Computational Anatomy (MBIA
2019/MFCA 2019). Lecture Notes in Computer Science, vol. 11846, pp. 151–161. Springer
Nature Switzerland, Cham (2019)
Niemi, E., Lassas, M., Kallonen, A., Harhanen, L., Hämäläinen, K., Siltanen, S.: Dynamic multi-
source x-ray tomography using a spacetime level set method. J. Comput. Phys. 291, 218–237
(2015)
Niethammer, M., Kwitt, R., Vialard, F.X.: Metric learning for image registration. In: Computer
Vision and Pattern Recognition (CVPR 2019) (2019)
Pennec, X., Sommer, S., Fletcher, T. (eds.): Riemannian Geometric Statistics in Medical Image
Analysis. Academic Press, Cambridge (2020)
Pouchol, C., Verdier, O., Öktem, O.: Spatiotemporal PET reconstruction using ML-EM with
learned diffeomorphic deformation. In: Knoll, F., Maier, A., Rueckert, D., Ye, J.C. (eds.)
Machine Learning for Medical Image Reconstruction. Second International Workshop, MLMIR
2019, Held in Conjunction with MICCAI 2019. Lecture Notes in Computer Science, vol. 11905,
pp. 151–162. Springer (2019). Selected for oral presentation
Qin, C., Bai, W., Schlemper, J., Petersen, S.E., Piechnik, S.K., Neubauer, S., Rueckert, D.:
Joint learning of motion estimation and segmentation for cardiac mr image sequences. In:
International Conference on Medical Image Computing and Computer-Assisted Intervention,
pp. 472–480. Springer (2018)
Rahmim, A., Lodge, M.A., Karakatsanis, N.A., Panin, V.Y., Zhou, Y., McMillan, A., Cho, S.,
Zaidi, H., Casey, M.E., Wahl, R.L.: Dynamic whole-body PET imaging: principles, potentials
and applications. Eur. J. Nucl. Med. Mol. Imag. 46, 501–518 (2019)
Rudin, L., Osher, S., Fatemi, E.: Nonlinear total variation based noise removal algorithms. Phys.
D: Nonlinear Phenom. 60(1–4), 259–268 (1992)
Ruhlandt, A., Töpperwien, M., Krenkel, M., Mokso, R., Salditt, T.: Four dimensional material
movies: high speed phase-contrast tomography by backprojection along dynamically curved
paths. Sci. Rep. 7, 6487 (9pp) (2017)
Salman, H., Yadollahpour, P., Fletcher, T., Batmanghelich, K.: Deep diffeomorphic normalizing
flows. ArXiv preprint 1810.03256 (2018)
Scherzer, O., Grasmair, M., Grossauer, H., Haltmeier, M., Lenzen, F.: Variational Methods in
Imaging. Applied Mathematical Sciences, vol. 167. Springer, New York (2009)

Schlemper, J., Caballero, J., Hajnal, J.V., Price, A.N., Rueckert, D.: A deep cascade of convolu-
tional neural networks for dynamic mr image reconstruction. IEEE Trans. Med. Imag. 37(2),
491–503 (2017)
Schmitt, U., Louis, A.K.: Efficient algorithms for the regularization of dynamic inverse problems:
I. Theory. Inverse Probl. 18(3), 645 (2002)
Schmitt, U., Louis, A.K., Wolters, C., Vauhkonen, M.: Efficient algorithms for the regularization
of dynamic inverse problems: II. Applications. Inverse Probl. 18(3), 659 (2002)
Shen, D., Wu, G., Suk, H.I.: Deep learning in medical image analysis. Ann. Rev. Biomed. Eng.
19, 221–248 (2017)
Steeden, J.A., Kowalik, G.T., Tann, O., Hughes, M., Mortensen, K.H., Muthurangu, V.: Real-
time assessment of right and left ventricular volumes and function in children using high
spatiotemporal resolution spiral bssfp with compressed sensing. J. Cardiovasc. Magn. Reson.
20(1), 79 (2018)
Trouvé, A., Younes, L.: Metamorphoses through Lie group action. Found. Comput. Math. 5(2),
173–198 (2005)
Trouvé, A., Younes, L.: Shape spaces. In: Otmar, S. (ed.) Handbook of Mathematical Methods in
Imaging, pp. 1759–1817. Springer, New York (2015)
Yang, G., Hipwell, J.H., Hawkes, D.J., Arridge, S.R.: Numerical methods for coupled reconstruc-
tion and registration in digital breast tomosynthesis. Ann. Br. Mach. Vis. Assoc. 2013(9), 1–38
(2013)
Yang, X., Kwitt, R., Styner, M., Niethammer, M.: Quicksilver: fast predictive image registration–a
deep learning approach. NeuroImage 158, 378–396 (2017)
Younes, L.: Shapes and Diffeomorphisms. Applied Mathematical Sciences, vol. 171, 2nd edn.
Springer, Heidelberg (2019)
