Abstract
We introduce a novel regulariser based on the natural vector field operations gradient, divergence, curl and shear. For suitable choices of the weighting parameters contained in our model, it generalises well-known first- and second-order TV-type regularisation methods, including TV, ICTV and TGV\(^2\), and enables interpolation between them. To better understand the influence of each parameter, we characterise the nullspaces of the respective regularisation functionals. Analysing the continuous model, we conclude that penalising the divergence and the curl alone is not sufficient to achieve high-quality results; interestingly, it seems crucial that the penalty functional include at least one component of the shear or that suitable boundary conditions be imposed. We investigate which requirements on the choice of weighting parameters yield a rotationally invariant approach. To guarantee physically meaningful reconstructions, in the sense that conservation laws for vectorial differential operators remain valid, a careful discretisation is required, which we therefore discuss in detail.
Notes
Image denoising using the unified model in this work: https://github.com/JoanaGrah/VectorOperatorSparsity; image compression using the sparse vector fields model in [13]: https://github.com/JoanaGrah/SparseVectorFields.
References
Ambrosio, L., Fusco, N., Pallara, D.: Functions of Bounded Variation and Free Discontinuity Problems, vol. 254. Clarendon Press, Oxford (2000)
Attouch, H., Brezis, H.: Duality for the sum of convex functions in general Banach spaces. In: Barroso, J.A. (ed.) North-Holland Mathematical Library, vol. 34, pp. 125–133. Elsevier (1986)
Aubert, G., Kornprobst, P.: Mathematical Problems in Image Processing: Partial Differential Equations and the Calculus of Variations, vol. 147. Springer, Berlin (2006)
Benning, M., Brune, C., Burger, M., Müller, J.: Higher-order TV methods—enhancement via Bregman iteration. J. Sci. Comput. 54(2–3), 269–310 (2013)
Benning, M., Burger, M.: Ground states and singular vectors of convex variational regularization methods. Methods Appl. Anal. 20(4), 295–334 (2013)
Benning, M., Burger, M.: Modern regularization methods for inverse problems. Acta Numerica 27, 1–111 (2018)
Bergounioux, M.: Poincaré–Wirtinger inequalities in bounded variation function spaces. Control Cybern. 40, 921–929 (2011)
Braides, A.: Gamma-Convergence for Beginners, vol. 22. Clarendon Press, Oxford (2002)
Bredies, K.: Symmetric tensor fields of bounded deformation. Annali di Matematica Pura ed Applicata 192(5), 815–851 (2013)
Bredies, K., Holler, M.: Regularization of linear inverse problems with total generalized variation. J. Inverse Ill-posed Probl. 22(6), 871–913 (2014)
Bredies, K., Kunisch, K., Pock, T.: Total generalized variation. SIAM J. Imaging Sci. 3(3), 492–526 (2010)
Bredies, K., Valkonen, T.: Inverse problems with second-order total generalized variation constraints. Proc. SampTA 2011 (2011)
Brinkmann, E.M., Burger, M., Grah, J.: Regularization with sparse vector fields: from image compression to TV-type reconstruction. In: Aujol, J.-F., Nikolova, M., Papadakis, N. (eds.) Scale Space and Variational Methods in Computer Vision, pp. 191–202. Springer (2015)
Brinkmann, E.M., Burger, M., Rasch, J., Sutour, C.: Bias reduction in variational regularization. J. Math. Imaging Vis. 59(3), 1–33 (2017)
Burger, M., Osher, S.: A guide to the TV zoo. In: Burger, M., Osher, S. (eds.) Level Set and PDE Based Reconstruction Methods in Imaging, pp. 1–70. Springer (2013)
Chambolle, A., Lions, P.L.: Image recovery via total variation minimization and related problems. Numerische Mathematik 76(2), 167–188 (1997)
Chambolle, A., Pock, T.: A first-order primal-dual algorithm for convex problems with applications to imaging. J. Math. Imaging Vis. 40(1), 120–145 (2011)
Chan, T.F., Esedoglu, S., Park, F.: A fourth order dual method for staircase reduction in texture extraction and image restoration problems. In: 17th IEEE International Conference on Image Processing (ICIP), 2010, pp. 4137–4140. IEEE (2010)
Dal Maso, G.: An Introduction to \(\Gamma \)-Convergence. Springer, Berlin (2012)
Deledalle, C.A., Papadakis, N., Salmon, J.: On debiasing restoration algorithms: applications to total-variation and nonlocal-means. In: International Conference on Scale Space and Variational Methods in Computer Vision, pp. 129–141. Springer (2015)
Deledalle, C.A., Papadakis, N., Salmon, J., Vaiter, S.: CLEAR: Covariant least-square refitting with applications to image restoration. SIAM J. Imaging Sci. 10(1), 243–284 (2017)
Ekeland, I., Temam, R.: Convex Analysis and Variational Problems, vol. 28. SIAM, Philadelphia (1999)
Evans, L.C.: Partial Differential Equations. American Mathematical Society, Providence (1998)
Goldstein, T., Li, M., Yuan, X., Esser, E., Baraniuk, R.: Adaptive primal-dual hybrid gradient methods for saddle-point problems (2013). arXiv:1305.0546
Haber, E.: Computational Methods in Geophysical Electromagnetics. SIAM, Philadelphia (2014)
Hyman, J.M., Shashkov, M.: Adjoint operators for the natural discretizations of the divergence, gradient and curl on logically rectangular grids. Appl. Numer. Math. 25(4), 413–442 (1997)
Hyman, J.M., Shashkov, M.: Natural discretizations for the divergence, gradient, and curl on logically rectangular grids. Comput. Math. Appl. 33(4), 81–104 (1997)
Mainberger, M., Bruhn, A., Weickert, J., Forchhammer, S.: Edge-based compression of cartoon-like images with homogeneous diffusion. Pattern Recognit. 44(9), 1859–1873 (2011)
Mainberger, M., Weickert, J.: Edge-based image compression with homogeneous diffusion. In: Jiang, X., Petkov, N. (eds.) Computer Analysis of Images and Patterns, pp. 476–483. Springer (2009)
Osher, S., Burger, M., Goldfarb, D., Xu, J., Yin, W.: An iterative regularization method for total variation-based image restoration. Multiscale Model. Simul. 4(2), 460–489 (2005)
Raviart, P.A., Thomas, J.M.: A mixed finite element method for 2nd order elliptic problems. In: Galligani, I., Magenes, E. (eds.) Mathematical Aspects of Finite Element Methods, pp. 292–315. Springer (1977)
Rockafellar, R.T.: Convex Analysis. Princeton University Press, Princeton (1972)
Rudin, L.I., Osher, S., Fatemi, E.: Nonlinear total variation based noise removal algorithms. Physica D 60(1), 259–268 (1992)
Scherzer, O.: Denoising with higher order derivatives of bounded variation and an application to parameter estimation. Computing 60(1), 1–27 (1998)
Schnörr, C.: Segmentation of visual motion by minimizing convex non-quadratic functionals. In: 12th International Conference on Pattern Recognition, Jerusalem, Israel, pp. 661–663 (1994)
Yuan, J., Schnörr, C., Steidl, G.: Simultaneous higher-order optical flow estimation and decomposition. SIAM J. Sci. Comput. 29(6), 2283–2304 (2007)
Zhang, L., Wu, X., Buades, A., Li, X.: Color demosaicking by local directional interpolation and nonlocal adaptive thresholding. J. Electron. Imaging 20(2), 023016 (2011)
Acknowledgements
The authors thank Kristian Bredies, Martin Holler (both University of Graz) and Christoph Schnörr (University of Heidelberg) for useful discussions and links to literature.
This work has been supported by ERC via Grant EU FP 7 - ERC Consolidator Grant 615216 LifeInverse. JSG acknowledges support by The Alan Turing Institute under the EPSRC Grant EP/N510129/1 and by the NIHR Cambridge Biomedical Research Centre. The authors would like to thank the Isaac Newton Institute for Mathematical Sciences, Cambridge, for support and hospitality during the programme Variational Methods for Imaging and Vision, where work on this paper was undertaken, supported by EPSRC Grant no EP/K032208/1 and the Simons Foundation.
Appendices
Appendix A: Derivation of Nullspaces
In the following, we aim to characterise the set of all \(u \in L^2(\varOmega )\) for which \(R_{\varvec{\beta }}(u) = 0\) holds.
First, we consider the case \(\beta _2=0\) and \(\beta _3,\beta _4 > 0\). Following the line of argument for the derivation of the nullspaces in Sect. 3, it is clear that, in order to lie in the nullspace, \(u\) has to satisfy
Calculation of first- and second-order derivatives of u then yields the following identities for the gradient and the Hessian of u:
and
where
In particular, we observe:
which can only be true if \(\frac{\partial ^2}{\partial x_1^2} U_1(x_1)\) and \(\frac{\partial ^2}{\partial x_2^2} U_2(x_2)\) are equal and constant, i.e. \(\frac{\partial ^2}{\partial x_1^2} U_1(x_1) = \frac{\partial ^2}{\partial x_2^2} U_2(x_2) = c\).
Integrating \(\frac{\partial ^2}{\partial x_1^2} U_1\) and \(\frac{\partial ^2}{\partial x_2^2} U_2\) twice, using that the former depends only on \(x_1\) while the latter depends only on \(x_2\), yields:
and thus
Consequently the nullspace only consists of functions that are linear combinations of \(x_1^2 + x_2^2, x_1, x_2\) and 1.
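For illustration, written out with generic integration constants \(c_1,\dots ,c_4\) (our notation), a sketch of this computation reads
\[
U_1(x_1) = \tfrac{c}{2}\,x_1^2 + c_1 x_1 + c_2, \qquad U_2(x_2) = \tfrac{c}{2}\,x_2^2 + c_3 x_2 + c_4,
\]
and hence
\[
u(x) = U_1(x_1) + U_2(x_2) = \tfrac{c}{2}\bigl(x_1^2 + x_2^2\bigr) + c_1 x_1 + c_3 x_2 + c_2 + c_4,
\]
which is indeed a linear combination of \(x_1^2 + x_2^2\), \(x_1\), \(x_2\) and 1.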
We continue with the case \(\beta _3 = 0\) and \(\beta _2,\beta _4 > 0\). By the discussion of the nullspaces in Sect. 3, \(u\) has to be harmonic, i.e.
and moreover it has to be of the form \(u(x) = U_1(x_1) + U_2(x_2)\). Taking into account the calculations of the first- and second-order partial derivatives in the previous case, we easily see that the above equality is equivalent to
which can only be true if \(\frac{\partial ^2}{\partial x_1^2}U_1(x_1)\) and \(\frac{\partial ^2}{\partial x_2^2}U_2(x_2)\) are constant with constants summing to zero. On this basis, analogously to the previous case, we integrate \(\frac{\partial ^2}{\partial x_1^2} U_1\) and \(\frac{\partial ^2}{\partial x_2^2} U_2\) twice, using that the former depends only on \(x_1\) and the latter only on \(x_2\):
and hence
The nullspace thus only consists of functions that are linear combinations of \(x_1^2 - x_2^2, x_1, x_2\) and 1.
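Again for illustration, with generic integration constants \(c_1,\dots ,c_4\) (our notation) and \(\frac{\partial ^2}{\partial x_1^2}U_1 = c = -\frac{\partial ^2}{\partial x_2^2}U_2\), a sketch of the computation reads
\[
U_1(x_1) = \tfrac{c}{2}\,x_1^2 + c_1 x_1 + c_2, \qquad U_2(x_2) = -\tfrac{c}{2}\,x_2^2 + c_3 x_2 + c_4,
\]
so that
\[
u(x) = \tfrac{c}{2}\bigl(x_1^2 - x_2^2\bigr) + c_1 x_1 + c_3 x_2 + c_2 + c_4.
\]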
Finally, we study the case \(\beta _4 = 0\) and \(\beta _2, \beta _3 > 0\). Analogously to the previous case, we argue that, by the characterisation of the nullspaces in Sect. 3, \(u\) is of the form \(u(x) = U(x_1 + x_2) + V(x_1 - x_2)\) and again has to be harmonic, i.e.
Again, we reconsider the first- and second-order partial derivatives from the first case and obtain for all \(x_1,x_2\)
which implies that \(\frac{\partial ^2}{\partial x_1^2}U\) and \(\frac{\partial ^2}{\partial x_2^2}V\) are constant with constants summing to zero. Integrating \(\frac{\partial ^2}{\partial x_1^2}U\) and \(\frac{\partial ^2}{\partial x_2^2}V\) twice, using that the former depends only on \(x_1 + x_2\) and the latter only on \(x_1 - x_2\), we thus obtain:
and hence
As a result, the nullspace contains all functions that are linear combinations of \(x_1x_2,x_1,x_2\) and 1.
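As a final sketch, writing \(\frac{\partial ^2}{\partial x_1^2}U = c = -\frac{\partial ^2}{\partial x_2^2}V\) and using generic integration constants \(c_1,\dots ,c_4\) (our notation),
\[
U(x_1 + x_2) = \tfrac{c}{2}(x_1 + x_2)^2 + c_1 (x_1 + x_2) + c_2, \qquad V(x_1 - x_2) = -\tfrac{c}{2}(x_1 - x_2)^2 + c_3 (x_1 - x_2) + c_4,
\]
so that
\[
u(x) = 2c\,x_1 x_2 + (c_1 + c_3)\,x_1 + (c_1 - c_3)\,x_2 + c_2 + c_4.
\]
The three characterisations can also be verified symbolically. The following Python snippet is a minimal sketch; it assumes the operator conventions \(\text {div}\, w = \partial _1 w_1 + \partial _2 w_2\), \(\text {curl}\, w = \partial _1 w_2 - \partial _2 w_1\), \(\text {sh}_1 w = \partial _1 w_1 - \partial _2 w_2\) and \(\text {sh}_2 w = \partial _2 w_1 + \partial _1 w_2\), applied to \(w = \nabla u\), which may differ in sign or ordering from the conventions of the main text.

```python
# Symbolic sanity check of the nullspace representatives derived above.
# Note: the definitions of div, curl, sh1, sh2 below are assumptions and
# may differ in sign/ordering from the conventions used in the paper.
import sympy as sp

x1, x2 = sp.symbols('x1 x2')

def vector_ops(u):
    """Apply div, curl, sh1, sh2 to w = grad(u)."""
    w1, w2 = sp.diff(u, x1), sp.diff(u, x2)      # w = grad(u)
    div  = sp.diff(w1, x1) + sp.diff(w2, x2)     # Laplacian of u
    curl = sp.diff(w2, x1) - sp.diff(w1, x2)     # vanishes for gradient fields
    sh1  = sp.diff(w1, x1) - sp.diff(w2, x2)
    sh2  = sp.diff(w1, x2) + sp.diff(w2, x1)
    return tuple(sp.simplify(e) for e in (div, curl, sh1, sh2))

# Representative nullspace elements of the three cases discussed above.
cases = {
    'beta2 = 0 (div unpenalised)':  x1**2 + x2**2,  # shears vanish, div nonzero
    'beta3 = 0 (sh1 unpenalised)':  x1**2 - x2**2,  # div and sh2 vanish, sh1 nonzero
    'beta4 = 0 (sh2 unpenalised)':  x1*x2,          # div and sh1 vanish, sh2 nonzero
}

for name, u in cases.items():
    print(name, '->', vector_ops(u))
# Expected output:
# beta2 = 0 (div unpenalised) -> (4, 0, 0, 0)
# beta3 = 0 (sh1 unpenalised) -> (0, 0, 4, 0)
# beta4 = 0 (sh2 unpenalised) -> (0, 0, 0, 2)
```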
Appendix B: Proof of Theorem 4
Theorem 4
Let \(\beta _i \ge 0\) for \(i = 1,\dots ,4\) and let \(\beta _3 = \beta _4\). Then the regulariser \(R_{\varvec{\beta }}(u)\) is rotationally invariant, i.e. for an orthonormal rotation matrix \(\varvec{Q} \in \mathbb {R}^{2\times 2}\) with \(\varvec{Q}^{\top }\varvec{Q} = \varvec{Q}\varvec{Q}^{\top } = \varvec{I}\) and \(\det (\varvec{Q}) = 1\)
and for \(u \in BV(\varOmega )\) it holds that \(\check{u} \in BV(\varOmega )\), where \(\check{u}=u\circ \varvec{Q}\), i.e. \(\check{u}(x) = u(\varvec{Q}x)\) for a.e. \(x \in \varOmega \), and \(R_{\varvec{\beta }}(\check{u}) = R_{\varvec{\beta }}(u)\).
Proof
In order to prove the assertion, we consider \(\check{u}=u \circ \varvec{Q}\) and show that we obtain \(R_{\varvec{\beta }}(\check{u}) = R_{\varvec{\beta }}(u)\), where as before
Inserting \(\check{u}\) into the first term of the regulariser, we observe that we obtain equality with the first term of \(R_{\varvec{\beta }}(u)\) by choosing \(\check{w}=\varvec{Q}^{\top }w \circ \varvec{Q}\), i.e.
since
Thus, if we can show that for \(\check{w}=\varvec{Q}^{\top }w \circ \varvec{Q}\) we also obtain equality of the second term of the regulariser with the second term of \(R_{\varvec{\beta }}(u)\), we have proven the assertion. To this end, we set \(v = \varvec{Q}^{\top }w\) and compute
In addition we need the Jacobian matrix \(\nabla v\) of v, where
We can hence obtain the Jacobian matrix \(\nabla \check{w}\) of \(\check{w}\) by computing \(\nabla \check{w} = \varvec{Q}^{\top }\nabla v\) yielding
Based on the Jacobian \(\nabla \check{w}\), we can calculate the curl, the divergence and the two components of the shear for \(\check{w}\):
Next, we consider \(\vert \text {diag}({\varvec{\beta }})\nabla _N \check{w}\vert \), where for the sake of readability, we define
Then we obtain:
We conclude the proof by setting \(\beta _3 = \beta _4\), which yields equality of \(\vert \text {diag}({\varvec{\beta }})\nabla _N \check{w}\vert \) and \(\vert \text {diag}({\varvec{\beta }})\nabla _N w\vert \), and this in turn implies \(R_{\varvec{\beta }}(\check{u})=R_{\varvec{\beta }}(u)\).
\(\square \)
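The role of the condition \(\beta _3 = \beta _4\) can be illustrated by parameterising the rotation. The following sketch assumes \(\varvec{Q} = \begin{pmatrix} \cos \theta & -\sin \theta \\ \sin \theta & \cos \theta \end{pmatrix}\), \(\check{w}(x) = \varvec{Q}^{\top } w(\varvec{Q}x)\) and the operator conventions stated in Appendix A, so that \(\nabla \check{w}(x) = \varvec{Q}^{\top }(\nabla w)(\varvec{Q}x)\,\varvec{Q}\). Since the divergence is the trace and the curl corresponds to the skew-symmetric part of the Jacobian, both are invariant under this conjugation,
\[
\text {div}\,\check{w} = (\text {div}\, w)\circ \varvec{Q}, \qquad \text {curl}\,\check{w} = (\text {curl}\, w)\circ \varvec{Q},
\]
whereas the shear pair, corresponding to the symmetric trace-free part, is rotated by the angle \(2\theta \) (up to the sign convention chosen for \(\text {sh}_2\)),
\[
\begin{pmatrix} \text {sh}_1\check{w} \\ \text {sh}_2\check{w} \end{pmatrix}
= \begin{pmatrix} \cos 2\theta & \sin 2\theta \\ -\sin 2\theta & \cos 2\theta \end{pmatrix}
\begin{pmatrix} (\text {sh}_1 w)\circ \varvec{Q} \\ (\text {sh}_2 w)\circ \varvec{Q} \end{pmatrix}.
\]
Consequently, \((\text {sh}_1\check{w})^2 + (\text {sh}_2\check{w})^2 = \bigl((\text {sh}_1 w)^2 + (\text {sh}_2 w)^2\bigr)\circ \varvec{Q}\): the Euclidean norm of the shear pair is preserved precisely when it is weighted isotropically, i.e. \(\beta _3 = \beta _4\), while the individual shear components are not rotationally invariant on their own.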
Appendix C: Alternative Visualisations of Parts of Figs. 1, 4 and 5
