
Unified Models for Second-Order TV-Type Regularisation in Imaging: A New Perspective Based on Vector Operators


Abstract

We introduce a novel regulariser based on the natural vector field operations gradient, divergence, curl and shear. For suitable choices of the weighting parameters contained in our model, it generalises well-known first- and second-order TV-type regularisation methods, including TV, ICTV and TGV\(^2\), and enables interpolation between them. To better understand the influence of each parameter, we characterise the nullspaces of the respective regularisation functionals. Analysing the continuous model, we conclude that combining penalisation of the divergence and the curl is not sufficient to achieve high-quality results; interestingly, it seems crucial that the penalty functional includes at least one component of the shear, or that suitable boundary conditions are imposed. We investigate which requirements on the choice of the weighting parameters yield a rotationally invariant approach. To guarantee physically meaningful reconstructions, in the sense that conservation laws for the vectorial differential operators remain valid, we need a careful discretisation, which we therefore discuss in detail.


Notes

  1. http://r0k.us/graphics/kodak/.

  2. Image denoising using the unified model in this work: https://github.com/JoanaGrah/VectorOperatorSparsity; image compression using the sparse vector fields model in [13]: https://github.com/JoanaGrah/SparseVectorFields.

References

  1. Ambrosio, L., Fusco, N., Pallara, D.: Functions of Bounded Variation and Free Discontinuity Problems, vol. 254. Clarendon Press, Oxford (2000)

  2. Attouch, H., Brezis, H.: Duality for the sum of convex functions in general Banach spaces. In: Barroso, J.A. (ed.) North-Holland Mathematical Library, vol. 34, pp. 125–133. Elsevier (1986)

  3. Aubert, G., Kornprobst, P.: Mathematical Problems in Image Processing: Partial Differential Equations and the Calculus of Variations, vol. 147. Springer, Berlin (2006)

  4. Benning, M., Brune, C., Burger, M., Müller, J.: Higher-order TV methods—enhancement via Bregman iteration. J. Sci. Comput. 54(2–3), 269–310 (2013)

  5. Benning, M., Burger, M.: Ground states and singular vectors of convex variational regularization methods. Methods Appl. Anal. 20(4), 295–334 (2013)

  6. Benning, M., Burger, M.: Modern regularization methods for inverse problems. Acta Numerica 27, 1–111 (2018)

  7. Bergounioux, M.: Poincaré–Wirtinger inequalities in bounded variation function spaces. Control Cybern. 40, 921–929 (2011)

  8. Braides, A.: Gamma-Convergence for Beginners, vol. 22. Clarendon Press, Oxford (2002)

  9. Bredies, K.: Symmetric tensor fields of bounded deformation. Annali di Matematica Pura ed Applicata 192(5), 815–851 (2013)

  10. Bredies, K., Holler, M.: Regularization of linear inverse problems with total generalized variation. J. Inverse Ill-posed Probl. 22(6), 871–913 (2014)

  11. Bredies, K., Kunisch, K., Pock, T.: Total generalized variation. SIAM J. Imaging Sci. 3(3), 492–526 (2010)

  12. Bredies, K., Valkonen, T.: Inverse problems with second-order total generalized variation constraints. In: Proceedings of SampTA (2011)

  13. Brinkmann, E.M., Burger, M., Grah, J.: Regularization with sparse vector fields: from image compression to TV-type reconstruction. In: Aujol, J.-F., Nikolova, M., Papadakis, N. (eds.) Scale Space and Variational Methods in Computer Vision, pp. 191–202. Springer (2015)

  14. Brinkmann, E.M., Burger, M., Rasch, J., Sutour, C.: Bias reduction in variational regularization. J. Math. Imaging Vis. 59(3), 1–33 (2017)

  15. Burger, M., Osher, S.: A guide to the TV zoo. In: Burger, M., Osher, S. (eds.) Level Set and PDE Based Reconstruction Methods in Imaging, pp. 1–70. Springer (2013)

  16. Chambolle, A., Lions, P.L.: Image recovery via total variation minimization and related problems. Numerische Mathematik 76(2), 167–188 (1997)

  17. Chambolle, A., Pock, T.: A first-order primal-dual algorithm for convex problems with applications to imaging. J. Math. Imaging Vis. 40(1), 120–145 (2011)

  18. Chan, T.F., Esedoglu, S., Park, F.: A fourth order dual method for staircase reduction in texture extraction and image restoration problems. In: 17th IEEE International Conference on Image Processing (ICIP), pp. 4137–4140. IEEE (2010)

  19. Dal Maso, G.: An Introduction to \(\Gamma \)-Convergence. Springer, Berlin (2012)

  20. Deledalle, C.A., Papadakis, N., Salmon, J.: On debiasing restoration algorithms: applications to total-variation and nonlocal-means. In: International Conference on Scale Space and Variational Methods in Computer Vision, pp. 129–141. Springer (2015)

  21. Deledalle, C.A., Papadakis, N., Salmon, J., Vaiter, S.: CLEAR: Covariant least-square refitting with applications to image restoration. SIAM J. Imaging Sci. 10(1), 243–284 (2017)

  22. Ekeland, I., Temam, R.: Convex Analysis and Variational Problems, vol. 28. SIAM, Philadelphia (1999)

  23. Evans, L.C.: Partial Differential Equations. American Mathematical Society, Providence (1998)

  24. Goldstein, T., Li, M., Yuan, X., Esser, E., Baraniuk, R.: Adaptive primal-dual hybrid gradient methods for saddle-point problems (2013). arXiv:1305.0546

  25. Haber, E.: Computational Methods in Geophysical Electromagnetics. SIAM, Philadelphia (2014)

  26. Hyman, J.M., Shashkov, M.: Adjoint operators for the natural discretizations of the divergence, gradient and curl on logically rectangular grids. Appl. Numer. Math. 25(4), 413–442 (1997)

  27. Hyman, J.M., Shashkov, M.: Natural discretizations for the divergence, gradient, and curl on logically rectangular grids. Comput. Math. Appl. 33(4), 81–104 (1997)

  28. Mainberger, M., Bruhn, A., Weickert, J., Forchhammer, S.: Edge-based compression of cartoon-like images with homogeneous diffusion. Pattern Recognit. 44(9), 1859–1873 (2011)

  29. Mainberger, M., Weickert, J.: Edge-based image compression with homogeneous diffusion. In: Jiang, X., Petkov, N. (eds.) Computer Analysis of Images and Patterns, pp. 476–483. Springer (2009)

  30. Osher, S., Burger, M., Goldfarb, D., Xu, J., Yin, W.: An iterative regularization method for total variation-based image restoration. Multiscale Model. Simul. 4(2), 460–489 (2005)

  31. Raviart, P.A., Thomas, J.M.: A mixed finite element method for 2-nd order elliptic problems. In: Galligani, I., Magenes, E. (eds.) Mathematical Aspects of Finite Element Methods, pp. 292–315. Springer (1977)

  32. Rockafellar, R.T.: Convex Analysis. Princeton University Press, Princeton (1972)

  33. Rudin, L.I., Osher, S., Fatemi, E.: Nonlinear total variation based noise removal algorithms. Physica D 60(1), 259–268 (1992)

  34. Scherzer, O.: Denoising with higher order derivatives of bounded variation and an application to parameter estimation. Computing 60(1), 1–27 (1998)

  35. Schnörr, C.: Segmentation of visual motion by minimizing convex non-quadratic functionals. In: 12th International Conference on Pattern Recognition, Jerusalem, Israel, pp. 661–663 (1994)

  36. Yuan, J., Schnörr, C., Steidl, G.: Simultaneous higher-order optical flow estimation and decomposition. SIAM J. Sci. Comput. 29(6), 2283–2304 (2007)

  37. Zhang, L., Wu, X., Buades, A., Li, X.: Color demosaicking by local directional interpolation and nonlocal adaptive thresholding. J. Electron. Imaging 20(2), 023016 (2011)


Acknowledgements

The authors thank Kristian Bredies, Martin Holler (both University of Graz) and Christoph Schnörr (University of Heidelberg) for useful discussions and links to literature.

Author information

Corresponding author

Correspondence to Eva-Maria Brinkmann.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This work has been supported by ERC via Grant EU FP 7 - ERC Consolidator Grant 615216 LifeInverse. JSG acknowledges support by The Alan Turing Institute under the EPSRC Grant EP/N510129/1 and by the NIHR Cambridge Biomedical Research Centre. The authors would like to thank the Isaac Newton Institute for Mathematical Sciences, Cambridge, for support and hospitality during the programme Variational Methods for Imaging and Vision, where work on this paper was undertaken, supported by EPSRC Grant no EP/K032208/1 and the Simons Foundation.

Appendices

Appendix A: Derivation of Nullspaces

In the following, we aim at characterising the set of all \(u \in L^2(\varOmega )\) for which \(R_{\varvec{\beta }}(u) = 0\) holds.

First, we consider the case \(\beta _2=0\) and \(\beta _3,\beta _4 > 0\). Following the line of argument in the derivation of the nullspaces in Sect. 3, it is clear that, in order to be in the nullspace, u has to satisfy

$$\begin{aligned} u(x) = U(x_1 + x_2) + V(x_1 - x_2) = U_1(x_1) + U_2(x_2). \end{aligned}$$

Calculation of first- and second-order derivatives of u then yields the following identities for the gradient and the Hessian of u:

$$\begin{aligned} \nabla u (x)&= \begin{pmatrix} \frac{\partial }{\partial x_1}U(x_1 + x_2) + \frac{\partial }{\partial x_1}V(x_1 - x_2)\\ \frac{\partial }{\partial x_2}U(x_1 + x_2) - \frac{\partial }{\partial x_2}V(x_1 - x_2) \end{pmatrix}\\&= \begin{pmatrix} \frac{\partial }{\partial x_1}U_1(x_1) + \frac{\partial }{\partial x_1}U_2(x_2)\\ \frac{\partial }{\partial x_2}U_1(x_1) + \frac{\partial }{\partial x_2}U_2(x_2) \end{pmatrix}\\&= \begin{pmatrix} \frac{\partial }{\partial x_1}U_1(x_1)\\ \frac{\partial }{\partial x_2}U_2(x_2) \end{pmatrix} \end{aligned}$$

and

$$\begin{aligned} Hu = \begin{pmatrix} (Hu)_{11} &{} (Hu)_{12}\\ (Hu)_{21} &{} (Hu)_{22} \end{pmatrix}, \end{aligned}$$

where

$$\begin{aligned} (Hu)_{11}(x)&= \frac{\partial ^2}{\partial x_1^2}U(x_1 + x_2) + \frac{\partial ^2}{\partial x_1^2}V(x_1 - x_2)\\&= \frac{\partial ^2}{\partial x_1^2}U_1(x_1)\\ (Hu)_{12}(x)&= \frac{\partial ^2}{\partial x_1\partial x_2}U(x_1 + x_2)- \frac{\partial ^2}{\partial x_1\partial x_2}V(x_1 - x_2)\\&= \frac{\partial ^2}{\partial x_1 \partial x_2}U_1(x_1) + \frac{\partial ^2}{\partial x_1 \partial x_2}U_2(x_2) = 0\\ (Hu)_{21}(x)&= \frac{\partial ^2}{\partial x_1\partial x_2}U(x_1 + x_2)- \frac{\partial ^2}{\partial x_1\partial x_2}V(x_1 - x_2)\\&= \frac{\partial ^2}{\partial x_1 \partial x_2}U_1(x_1) + \frac{\partial ^2}{\partial x_1 \partial x_2}U_2(x_2) = 0\\ (Hu)_{22}(x)&= \frac{\partial ^2}{\partial x_2^2}U(x_1 + x_2) + \frac{\partial ^2}{\partial x_2^2}V(x_1 - x_2)\\&= \frac{\partial ^2}{\partial x_2^2}U_2(x_2). \end{aligned}$$

In particular, we observe:

$$\begin{aligned} \frac{\partial ^2}{\partial x_1^2} U_1(x_1) = \frac{\partial ^2}{\partial x_2^2} U_2(x_2) \quad \text { for all } x_1, x_2, \end{aligned}$$

which can only be true if \(\frac{\partial ^2}{\partial x_1^2} U_1(x_1)\) and \(\frac{\partial ^2}{\partial x_2^2} U_2(x_2)\) are equal and constant: since the left-hand side depends only on \(x_1\) and the right-hand side only on \(x_2\), both must equal a common constant, i.e. \(\frac{\partial ^2}{\partial x_1^2} U_1(x_1) = \frac{\partial ^2}{\partial x_2^2} U_2(x_2) = c\).

Integrating \(\frac{\partial ^2}{\partial x_1^2} U_1\) and \(\frac{\partial ^2}{\partial x_2^2} U_2\) twice, using that the former depends only on \(x_1\) while the latter depends only on \(x_2\), yields:

$$\begin{aligned} \frac{\partial }{\partial x_1} U_1(x_1) = \int c \, \mathrm{d}x_1 = cx_1 + d_1,\\ \frac{\partial }{\partial x_2} U_2(x_2) = \int c \, \mathrm{d}x_2 = cx_2 + e_1 \end{aligned}$$

and thus

$$\begin{aligned}&U_1(x_1) = \int cx_1 + d_1 \, \mathrm{d}x_1 = \frac{c}{2}x_1^2 + d_1x_1 + d_0\\&U_2(x_2) = \int cx_2 + e_1 \, \mathrm{d}x_2 = \frac{c}{2}x_2^2 + e_1x_2 + e_0\\&\Longrightarrow u = \frac{c}{2}(x_1^2 + x_2^2) + d_1 x_1 + e_1 x_2 + (d_0 + e_0). \end{aligned}$$

Consequently, the nullspace only consists of functions that are linear combinations of \(x_1^2 + x_2^2\), \(x_1\), \(x_2\) and 1. Note that such functions indeed admit both representations required above; for instance, \(x_1^2 + x_2^2 = \frac{1}{2}(x_1+x_2)^2 + \frac{1}{2}(x_1-x_2)^2\).
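To make this characterisation concrete, the following SymPy snippet (an illustrative check, not part of the original paper) applies the four vector operators used throughout this work to \(w = \nabla u\) for a generic nullspace element and confirms that curl, \({{\mathrm{\text {sh}_1}}}\) and \({{\mathrm{\text {sh}_2}}}\) vanish, while only the unpenalised divergence survives:

```python
# Illustrative sanity check (not from the paper): vector operators applied
# to w = grad(u) for a generic element of the beta_2 = 0 nullspace.
import sympy as sp

x1, x2, c, d1, e1, d0 = sp.symbols('x1 x2 c d1 e1 d0')

def vector_ops(u):
    # w = grad(u); curl, div, sh1, sh2 as defined in the paper
    w1, w2 = sp.diff(u, x1), sp.diff(u, x2)
    curl = sp.diff(w2, x1) - sp.diff(w1, x2)
    div = sp.diff(w1, x1) + sp.diff(w2, x2)
    sh1 = sp.diff(w2, x2) - sp.diff(w1, x1)
    sh2 = sp.diff(w1, x2) + sp.diff(w2, x1)
    return tuple(sp.simplify(op) for op in (curl, div, sh1, sh2))

u = c/2*(x1**2 + x2**2) + d1*x1 + e1*x2 + d0
print(vector_ops(u))  # -> (0, 2*c, 0, 0): only div(grad u) is nonzero
```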

We continue with the case \(\beta _3 = 0\) and \(\beta _2,\beta _4 > 0\). By the discussion of the nullspaces in Sect. 3, u has to be harmonic, i.e.

$$\begin{aligned} \frac{\partial ^2}{\partial x_1^2}u(x) + \frac{\partial ^2}{\partial x_2^2}u(x) = 0, \end{aligned}$$

and moreover it has to be of the form \(u(x) = U_1(x_1) + U_2(x_2)\). Taking into account the calculations of the first- and second-order partial derivatives in the previous case, we easily see that the above equality is equivalent to

$$\begin{aligned} \frac{\partial ^2}{\partial x_1^2}U_1(x_1) + \frac{\partial ^2}{\partial x_2^2}U_2(x_2) = 0 \quad \text { for all } x_1,x_2, \end{aligned}$$

which can only be true if \(\frac{\partial ^2}{\partial x_1^2}U_1(x_1)\) and \(\frac{\partial ^2}{\partial x_2^2}U_2(x_2)\) are constant with the two constants summing to zero, say \(c\) and \(-c\). Analogously to the previous case, we integrate \(\frac{\partial ^2}{\partial x_1^2} U_1\) and \(\frac{\partial ^2}{\partial x_2^2} U_2\) twice, using that the former depends only on \(x_1\) and the latter only on \(x_2\):

$$\begin{aligned} \frac{\partial }{\partial x_1} U_1(x_1)&= \int c \, \mathrm{d}x_1 = cx_1 + d_1,\\ \frac{\partial }{\partial x_2} U_2(x_2)&= \int -c \, \mathrm{d}x_2 = -cx_2 + e_1 \end{aligned}$$

and hence

$$\begin{aligned}&U_1(x_1) = \int cx_1 + d_1 \, \mathrm{d}x_1 = \frac{c}{2}x_1^2 + d_1x_1 + d_0\\&U_2(x_2) = \int -cx_2 + e_1 \, \mathrm{d}x_2 = -\frac{c}{2}x_2^2 + e_1x_2 + e_0\\&\Longrightarrow u = \frac{c}{2}(x_1^2 - x_2^2) + d_1 x_1 + e_1 x_2 + (d_0 + e_0). \end{aligned}$$

The nullspace thus only consists of functions that are linear combinations of \(x_1^2 - x_2^2, x_1, x_2\) and 1.
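The analogous check for this case (again only an illustration, not from the paper) confirms that curl, divergence and the penalised second shear component of \(\nabla u\) vanish, whereas the unpenalised \({{\mathrm{\text {sh}_1}}}\) remains free:

```python
# Illustrative check for the beta_3 = 0 case: only sh1(grad u) is nonzero.
import sympy as sp

x1, x2, c, d1, e1, d0 = sp.symbols('x1 x2 c d1 e1 d0')
u = c/2*(x1**2 - x2**2) + d1*x1 + e1*x2 + d0
w1, w2 = sp.diff(u, x1), sp.diff(u, x2)
print(sp.diff(w2, x1) - sp.diff(w1, x2),   # curl -> 0
      sp.diff(w1, x1) + sp.diff(w2, x2),   # div  -> 0
      sp.diff(w2, x2) - sp.diff(w1, x1),   # sh1  -> -2*c (unpenalised)
      sp.diff(w1, x2) + sp.diff(w2, x1))   # sh2  -> 0
```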

Finally, we study the case \(\beta _4 = 0\) and \(\beta _2, \beta _3 > 0\). Analogously to the previous case, we argue that by the characterisation of the nullspaces in Sect. 3, u is of the form \(u(x) = U(x_1 + x_2) + V(x_1 - x_2)\) and again has to be harmonic, i.e.

$$\begin{aligned} \frac{\partial ^2}{\partial x_1^2}u(x) + \frac{\partial ^2}{\partial x_2^2}u(x) = 0. \end{aligned}$$

Again, we reconsider the first- and second-order partial derivatives from the first case and obtain for all \(x_1,x_2\)

$$\begin{aligned}&2 \left( \frac{\partial ^2}{\partial x_1^2}U(x_1 + x_2) + \frac{\partial ^2}{\partial x_2^2}V(x_1 - x_2)\right) = 0 \end{aligned}$$

which implies that \(\frac{\partial ^2}{\partial x_1^2}U\) and \(\frac{\partial ^2}{\partial x_2^2}V\) are constant with the two constants summing to zero, say \(c\) and \(-c\). By twofold integration of \(\frac{\partial ^2}{\partial x_1^2}U\) and \(\frac{\partial ^2}{\partial x_2^2}V\), now with respect to the variables \(x_1 + x_2\) and \(x_1 - x_2\), respectively, we thus obtain:

$$\begin{aligned}&\frac{\partial }{\partial x_1} U(x_1 + x_2) = \int c \, d(x_1 + x_2) = c (x_1 + x_2) + d_1,\\&\frac{\partial }{\partial x_2} V(x_1 - x_2) = \int -c \, d(x_1 - x_2) = -c(x_1 - x_2) + e_1 \end{aligned}$$

and hence

$$\begin{aligned}&U(x_1 + x_2) = \int c(x_1 + x_2) + d_1 \,d(x_1 + x_2)\\&\qquad \qquad \quad = \frac{c}{2}(x_1 + x_2)^2 + d_1(x_1 + x_2) + d_0\\&V(x_1 - x_2) = \int -c(x_1 - x_2) + e_1 \,d(x_1 - x_2)\\&\qquad \qquad \quad = -\frac{c}{2}(x_1 - x_2)^2 + e_1(x_1 - x_2) + e_0\\&\Longrightarrow u = 2c x_1 x_2 + (d_1 + e_1) x_1 + (d_1 - e_1) x_2 + (d_0 + e_0). \end{aligned}$$

As a result, the nullspace contains all functions that are linear combinations of \(x_1x_2,x_1,x_2\) and 1.
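Once more, a short illustrative SymPy check (not part of the paper) confirms that on this nullspace only the unpenalised second shear component of \(\nabla u\) is nonzero:

```python
# Illustrative check for the beta_4 = 0 case: only sh2(grad u) is nonzero.
import sympy as sp

x1, x2, c, d1, e1, d0 = sp.symbols('x1 x2 c d1 e1 d0')
u = 2*c*x1*x2 + d1*x1 + e1*x2 + d0
w1, w2 = sp.diff(u, x1), sp.diff(u, x2)
print(sp.diff(w2, x1) - sp.diff(w1, x2),   # curl -> 0
      sp.diff(w1, x1) + sp.diff(w2, x2),   # div  -> 0
      sp.diff(w2, x2) - sp.diff(w1, x1),   # sh1  -> 0
      sp.diff(w1, x2) + sp.diff(w2, x1))   # sh2  -> 4*c (unpenalised)
```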

Appendix B: Proof of Theorem 4

Theorem 4

Let \(\beta _i \ge 0\) for \(i = 1,\dots ,4\) and let \(\beta _3 = \beta _4\). Then the regulariser \(R_{\varvec{\beta }}(u)\) is rotationally invariant, i.e. for any rotation matrix \(\varvec{Q} \in \mathbb {R}^{2\times 2}\) with

$$\begin{aligned} \varvec{Q}(\theta ) = \begin{pmatrix} \cos (\theta ) &{} -\sin (\theta )\\ \sin (\theta ) &{} \cos (\theta ) \end{pmatrix} \quad \text { for } \theta \in \left[ 0,2\pi \right) \end{aligned}$$

and for \(u \in BV(\varOmega )\) it holds that \(\check{u} \in BV(\varOmega )\), where \(\check{u}=u\circ \varvec{Q}\), i.e. \(\check{u}(x) = u(\varvec{Q}x)\) for a.e. \(x \in \varOmega \), and

$$\begin{aligned} R_{\varvec{\beta }}(\check{u}) = R_{\varvec{\beta }}(u). \end{aligned}$$

Proof

In order to prove the assertion, we consider \(\check{u}=u \circ \varvec{Q}\) and show that we obtain \(R_{\varvec{\beta }}(\check{u}) = R_{\varvec{\beta }}(u)\), where as before

$$\begin{aligned} \begin{aligned}&R_{\varvec{\beta }}(u)\\&\; = \inf _{w \in \mathcal {M}(\varOmega ,\mathbb {R}^2)} \Vert \nabla u - w \Vert _{\mathcal {M}(\varOmega ,\mathbb {R}^2)}\\&\qquad \qquad \quad + \Vert \text {diag}({\varvec{\beta }})\nabla _N w\Vert _{\mathcal {M}(\varOmega ,\mathbb {R}^4)}. \end{aligned} \end{aligned}$$

Inserting \(\check{u}\) into the first term of the regulariser, we recover the first term of \(R_{\varvec{\beta }}(u)\) by choosing \(\check{w}=\varvec{Q}^{\top }w \circ \varvec{Q}\), understood in the sense of measures as

$$\begin{aligned} \int _\varOmega \varphi (x)~\mathrm{d}\check{w} = \int _\varOmega \varvec{Q} \varphi (\varvec{Q}^{\top } x) ~\mathrm{d}w, \qquad \forall \varphi \in C_0(\varOmega ;\mathbb {R}^2), \end{aligned}$$

since

$$\begin{aligned}&\Vert \nabla \check{u} - \check{w} \Vert _{\mathcal {M}(\varOmega ,\mathbb {R}^2)}\\&\; = \Vert \varvec{Q}^{\top }\nabla u\circ \varvec{Q} - \varvec{Q}^{\top }w\circ \varvec{Q}\Vert _{\mathcal {M}(\varOmega ,\mathbb {R}^2)}\\&\; = \Vert \varvec{Q}^{\top }\left( \nabla u \circ \varvec{Q} - w \circ \varvec{Q}\right) \Vert _{\mathcal {M}(\varOmega ,\mathbb {R}^2)}\\&\; = \Vert \nabla u - w \Vert _{\mathcal {M}(\varOmega ,\mathbb {R}^2)}, \end{aligned}$$

where the last equality holds since \(\vert \varvec{Q}^{\top }z \vert = \vert z \vert \) for all \(z \in \mathbb {R}^2\) and the substitution \(x \mapsto \varvec{Q}x\) has unit Jacobian determinant.

Thus, if we can show that for \(\check{w}=\varvec{Q}^{\top }w \circ \varvec{Q}\) the second term of the regulariser also equals the second term of \(R_{\varvec{\beta }}(u)\), we have proven the assertion. To this end, we set \(v = \varvec{Q}^{\top }w\) and compute

$$\begin{aligned} v = \begin{pmatrix} \cos (\theta )w_1 + \sin (\theta )w_2\\ - \sin (\theta )w_1 + \cos (\theta )w_2 \end{pmatrix}. \end{aligned}$$

In addition we need the Jacobian matrix \(\nabla v\) of v, where

$$\begin{aligned} (\nabla v)_{11}&= \cos (\theta )\frac{\partial w_1}{\partial x_1} + \sin (\theta )\frac{\partial w_2}{\partial x_1},\\ (\nabla v)_{12}&= \cos (\theta )\frac{\partial w_1}{\partial x_2} + \sin (\theta )\frac{\partial w_2}{\partial x_2},\\ (\nabla v)_{21}&= -\sin (\theta )\frac{\partial w_1}{\partial x_1} + \cos (\theta ) \frac{\partial w_2}{\partial x_1},\\ (\nabla v)_{22}&= -\sin (\theta )\frac{\partial w_1}{\partial x_2} + \cos (\theta ) \frac{\partial w_2}{\partial x_2}. \end{aligned}$$

We can hence obtain the Jacobian matrix \(\nabla \check{w}\) of \(\check{w} = v \circ \varvec{Q}\) via the chain rule, \(\nabla \check{w} = \left( (\nabla v) \circ \varvec{Q}\right) \varvec{Q}\), yielding

$$\begin{aligned}&(\nabla \check{w})_{11}\\&\; = \cos ^2(\theta ) \frac{\partial w_1}{\partial x_1} + \cos (\theta )\sin (\theta ) \frac{\partial w_2}{\partial x_1}\\&\qquad + \cos (\theta )\sin (\theta ) \frac{\partial w_1}{\partial x_2} + \sin ^2(\theta ) \frac{\partial w_2}{\partial x_2},\\&(\nabla \check{w})_{12}\\&\; = -\cos (\theta )\sin (\theta ) \frac{\partial w_1}{\partial x_1} - \sin ^2(\theta ) \frac{\partial w_2}{\partial x_1}\\&\qquad + \cos ^2(\theta ) \frac{\partial w_1}{\partial x_2} + \cos (\theta )\sin (\theta ) \frac{\partial w_2}{\partial x_2},\\&(\nabla \check{w})_{21}\\&\; = \cos ^2(\theta ) \frac{\partial w_2}{\partial x_1} - \cos (\theta )\sin (\theta ) \frac{\partial w_1}{\partial x_1}\\&\qquad + \cos (\theta )\sin (\theta ) \frac{\partial w_2}{\partial x_2} - \sin ^2(\theta ) \frac{\partial w_1}{\partial x_2},\\&(\nabla \check{w})_{22}\\&\; = -\cos (\theta )\sin (\theta ) \frac{\partial w_2}{\partial x_1} + \sin ^2(\theta ) \frac{\partial w_1}{\partial x_1}\\&\qquad + \cos ^2(\theta ) \frac{\partial w_2}{\partial x_2} - \cos (\theta )\sin (\theta )\frac{\partial w_1}{\partial x_2}. \end{aligned}$$

Based on the Jacobian \(\nabla \check{w}\), we can calculate the curl, the divergence and the two components of the shear for \(\check{w}\):

$$\begin{aligned} {{\mathrm{\text {curl}}}}(\check{w})&= (\nabla \check{w})_{21} - (\nabla \check{w})_{12}\\&= (\cos ^2(\theta ) + \sin ^2(\theta )) \left( \frac{\partial w_2}{\partial x_1} - \frac{\partial w_1}{\partial x_2}\right) \\&= {{\mathrm{\text {curl}}}}(w),\\ {{\mathrm{{div}}}}(\check{w})&= (\nabla \check{w})_{11} + (\nabla \check{w})_{22}\\&= (\cos ^2(\theta ) + \sin ^2(\theta )) \left( \frac{\partial w_1}{\partial x_1} + \frac{\partial w_2}{\partial x_2}\right) \\&= {{\mathrm{{div}}}}(w),\\ {{\mathrm{\text {sh}_1}}}(\check{w})&= (\nabla \check{w})_{22} - (\nabla \check{w})_{11}\\&= (\cos ^2(\theta ) - \sin ^2(\theta )) \left( \frac{\partial w_2}{\partial x_2} - \frac{\partial w_1}{\partial x_1}\right) \\&\qquad -2\cos (\theta )\sin (\theta ) \left( \frac{\partial w_1}{\partial x_2} + \frac{\partial w_2}{\partial x_1} \right) \\&= (\cos ^2(\theta ) - \sin ^2(\theta )) {{\mathrm{\text {sh}_1}}}(w)\\&\qquad - 2\cos (\theta )\sin (\theta ) {{\mathrm{\text {sh}_2}}}(w), \end{aligned}$$
$$\begin{aligned} {{\mathrm{\text {sh}_2}}}(\check{w})&= (\nabla \check{w})_{12} + (\nabla \check{w})_{21}\\&= (\cos ^2(\theta ) - \sin ^2(\theta )) \left( \frac{\partial w_1}{\partial x_2} + \frac{\partial w_2}{\partial x_1}\right) \\&\qquad +2\cos (\theta )\sin (\theta ) \left( \frac{\partial w_2}{\partial x_2} - \frac{\partial w_1}{\partial x_1} \right) \\&= (\cos ^2(\theta ) - \sin ^2(\theta )) {{\mathrm{\text {sh}_2}}}(w)\\&\qquad + 2\cos (\theta )\sin (\theta ) {{\mathrm{\text {sh}_1}}}(w). \end{aligned}$$
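These transformation rules can be verified symbolically. The sketch below is an illustration only; it treats the Jacobian entries of w as free symbols (implicitly evaluated at \(\varvec{Q}x\)) and confirms that curl and divergence are invariant, while the two shear components mix through a rotation by the double angle \(2\theta \):

```python
# Illustrative verification of the transformation rules; the Jacobian
# entries w_ij = dw_i/dx_j are treated as independent symbols.
import sympy as sp

th = sp.symbols('theta', real=True)
w11, w12, w21, w22 = sp.symbols('w11 w12 w21 w22')
dw = sp.Matrix([[w11, w12], [w21, w22]])
Q = sp.Matrix([[sp.cos(th), -sp.sin(th)], [sp.sin(th), sp.cos(th)]])

# Jacobian of w_check(x) = Q^T w(Qx): the chain rule gives Q^T (grad w) Q
J = (Q.T * dw * Q).expand()

curl = lambda A: A[1, 0] - A[0, 1]
div = lambda A: A[0, 0] + A[1, 1]
sh1 = lambda A: A[1, 1] - A[0, 0]
sh2 = lambda A: A[0, 1] + A[1, 0]

checks = [curl(J) - curl(dw),                                      # invariant
          div(J) - div(dw),                                        # invariant
          sh1(J) - (sp.cos(2*th)*sh1(dw) - sp.sin(2*th)*sh2(dw)),  # 2*theta
          sh2(J) - (sp.cos(2*th)*sh2(dw) + sp.sin(2*th)*sh1(dw))]
print([sp.simplify(sp.expand_trig(e)) for e in checks])  # -> [0, 0, 0, 0]
```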

Next, we consider \(\vert \text {diag}({\varvec{\beta }})\nabla _N \check{w}\vert \), where, for the sake of readability, we define

$$\begin{aligned} a :=&(\cos ^2(\theta ) - \sin ^2(\theta ))\\ b :=&\cos (\theta )\sin (\theta ). \end{aligned}$$

Then we obtain:

$$\begin{aligned}&\vert \text {diag}({\varvec{\beta }})\nabla _N \check{w} \vert \\&\; = \beta _1 ({{\mathrm{\text {curl}}}}(\check{w}))^2 + \beta _2 ({{\mathrm{{div}}}}(\check{w}))^2\\&\qquad + \beta _3 ({{\mathrm{\text {sh}_1}}}(\check{w}))^2 + \beta _4 ({{\mathrm{\text {sh}_2}}}(\check{w}))^2\\&\; = \beta _1 ({{\mathrm{\text {curl}}}}(w))^2 + \beta _2 ({{\mathrm{{div}}}}(w))^2\\&\qquad + \beta _3 a^2 ({{\mathrm{\text {sh}_1}}}(w))^2 - \beta _3 4 ab\, {{\mathrm{\text {sh}_1}}}(w){{\mathrm{\text {sh}_2}}}(w)\\&\qquad + \beta _3 4 b^2 ({{\mathrm{\text {sh}_2}}}(w))^2\\&\qquad + \beta _4 a^2 ({{\mathrm{\text {sh}_2}}}(w))^2 + \beta _4 4 ab\,{{\mathrm{\text {sh}_1}}}(w){{\mathrm{\text {sh}_2}}}(w)\\&\qquad + \beta _4 4 b^2 ({{\mathrm{\text {sh}_1}}}(w))^2 \end{aligned}$$

We conclude the proof by using the assumption \(\beta _3 = \beta _4\): the mixed terms cancel, yielding the equality of \(\vert \text {diag}({\varvec{\beta }})\nabla _N \check{w}\vert \) and \(\vert \text {diag}({\varvec{\beta }})\nabla _N w\vert \), which in turn implies \(R_{\varvec{\beta }}(\check{u})=R_{\varvec{\beta }}(u)\):

$$\begin{aligned}&\vert \text {diag}({\varvec{\beta }})\nabla _N \check{w} \vert \\&\; = \beta _1 ({{\mathrm{\text {curl}}}}(w))^2 + \beta _2 ({{\mathrm{{div}}}}(w))^2\\&\qquad + \beta _3 a^2 ({{\mathrm{\text {sh}_1}}}(w))^2 + \beta _3 4 b^2 ({{\mathrm{\text {sh}_1}}}(w))^2\\&\qquad + \beta _4 a^2 ({{\mathrm{\text {sh}_2}}}(w))^2 + \beta _4 4 b^2 ({{\mathrm{\text {sh}_2}}}(w))^2\\&\; = \beta _1 ({{\mathrm{\text {curl}}}}(w))^2 + \beta _2 ({{\mathrm{{div}}}}(w))^2\\&\qquad + \beta _3 (\cos ^2(\theta ) + \sin ^2(\theta ))^2 ({{\mathrm{\text {sh}_1}}}(w))^2\\&\qquad + \beta _4 (\cos ^2(\theta ) + \sin ^2(\theta ))^2 ({{\mathrm{\text {sh}_2}}}(w))^2\\&\; = \beta _1 ({{\mathrm{\text {curl}}}}(w))^2 + \beta _2 ({{\mathrm{{div}}}}(w))^2\\&\qquad + \beta _3 ({{\mathrm{\text {sh}_1}}}(w))^2 + \beta _4 ({{\mathrm{\text {sh}_2}}}(w))^2\\&\; = \vert \text {diag}({\varvec{\beta }})\nabla _N w \vert . \end{aligned}$$

\(\square \)
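The two identities that close the proof, namely \(a^2 + 4b^2 = (\cos ^2(\theta ) + \sin ^2(\theta ))^2 = 1\) and the cancellation of the mixed \({{\mathrm{\text {sh}_1}}}{{\mathrm{\text {sh}_2}}}\) terms for \(\beta _3 = \beta _4\), can be confirmed in a few lines of SymPy (again purely illustrative):

```python
# Illustrative confirmation of the final step of the proof of Theorem 4.
import sympy as sp

th, b3, s1, s2 = sp.symbols('theta beta3 sh1 sh2', real=True)
a = sp.cos(th)**2 - sp.sin(th)**2
b = sp.cos(th)*sp.sin(th)

print(sp.simplify(a**2 + 4*b**2))  # -> 1

# With beta_3 = beta_4 =: b3 the rotated shear penalty equals the original:
rotated = b3*(a*s1 - 2*b*s2)**2 + b3*(a*s2 + 2*b*s1)**2
print(sp.simplify(rotated - (b3*s1**2 + b3*s2**2)))  # -> 0
```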

Appendix C: Alternative Visualisations of Parts of Figs. 1, 4 and 5


About this article

Cite this article

Brinkmann, EM., Burger, M. & Grah, J.S. Unified Models for Second-Order TV-Type Regularisation in Imaging: A New Perspective Based on Vector Operators. J Math Imaging Vis 61, 571–601 (2019). https://doi.org/10.1007/s10851-018-0861-6

