
Spectral Theory

The idea of these notes is to complement Section 1.3 of the notes regarding the Fourier Transform. The material presented here was taken from the book of Jeffrey Rauch, Partial Differential Equations, Graduate Texts in Mathematics, Springer-Verlag (see [1]).

1. Distributions
Distribution theory arises in several contexts. One is the treatment of impulsive forces. Newton's second law states that the rate of change of momentum is equal to the force applied, dp/dt = F. Consider an intense force which acts over a very short interval of time t_0 < t < t_0 + ∆t. An example is the force applied by the strike of a hammer. The impulse, I, is defined as I := ∫ F(t) dt, thus

p(t_0 + ∆t) = p(t_0) + I.

In the limit, as ∆t tends to zero, one arrives at an idealized force which acts instantaneously to produce a jump I in the momentum p. Formally, the force law satisfies

(1.1) F(t) = 0 for t ≠ t_0 and ∫ F(t) dt = I.

This idealized force is denoted I δ_{t_0}, and δ_{t_0} is called Dirac's delta function, though no function can satisfy (1.1). The idealized equation of motion is dp/dt = I δ_{t_0}. The solution satisfies p(t_0+) − p(t_0−) = I. Such idealizations have proven useful in a variety of problems of mechanics and electricity. The mathematical framework was developed by Laurent Schwartz in the 1940s.
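As a simple numerical illustration (a sketch only, not taken from [1]; the impulse I = 2, the time t_0 = 1 and the box-shaped force below are arbitrary choices), one can integrate dp/dt = F for a force of total impulse I supported on an interval of length ∆t and watch the momentum develop a jump of size I at t_0 as ∆t shrinks:

```python
import numpy as np

I, t0 = 2.0, 1.0                      # impulse and the instant at which it acts (arbitrary)
t = np.linspace(0.0, 3.0, 3001)

def momentum(dt):
    """Integrate dp/dt = F by the trapezoid rule for a box force of total impulse I."""
    F = np.where((t >= t0) & (t < t0 + dt), I / dt, 0.0)
    return np.concatenate(([0.0], np.cumsum(0.5 * (F[1:] + F[:-1]) * np.diff(t))))

for dt in (0.5, 0.1, 0.01):
    p = momentum(dt)
    # p stays 0 up to t0 and levels off near I = 2 after t0 + dt;
    # the transition sharpens into a jump as dt → 0
    print(dt, p[t < t0][-1], p[-1])
```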
We introduce some notation next. Let Ω ⊂ Rⁿ be an open subset. The set C_0^∞(Ω) of all infinitely differentiable functions with compact support in Ω will be denoted by D(Ω), and the set C^∞(Ω) of all infinitely differentiable functions on Ω will be denoted by E(Ω). These sets of functions are referred to as test functions.

Definition 1.1. A distribution on an open set Ω ⊂ Rⁿ is a linear map l : D(Ω) → C which is continuous in the sense that if {ϕ_n} ⊂ D(Ω) satisfies
(i) there is a compact K ⊂ Ω such that supp(ϕ_n) ⊂ K for all n, and
(ii) there is a ϕ ∈ D(Ω) such that for all α ∈ Nⁿ, ∂^α ϕ_n converges uniformly to ∂^α ϕ,
then l(ϕ_n) → l(ϕ). The set of all distributions on Ω is denoted by D′(Ω).
When ϕ_n, ϕ satisfy (i) and (ii) we say that ϕ_n converges to ϕ in D(Ω).

The action of a distribution l ∈ D′(Ω) on a test function ϕ ∈ D(Ω) is usually denoted ⟨l, ϕ⟩. The set D′(Ω) is a complex vector space.

Example 1.2. If f ∈ L¹_loc(Ω), then there is a natural distribution l_f defined by

⟨l_f, ϕ⟩ = ∫_Ω f(x) ϕ(x) dx.

In this sense, distributions are generalizations of functions and are sometimes called generalized functions. Two locally integrable functions define the same distribution if and only if the functions are equal almost everywhere. We say that a distribution l is a locally integrable function, and write l ∈ L¹_loc(Ω), if l = l_f for some f ∈ L¹_loc(Ω). Similarly, we say that l is continuous (resp. C^∞(Ω)) if l = l_f for some f ∈ C(Ω) (resp. C^∞(Ω)).

Example 1.3. If x_0 ∈ Ω, then ⟨l, ϕ⟩ ≡ ϕ(x_0) is a distribution, denoted δ_{x_0} and called the Dirac delta at x_0. When x_0 is not mentioned it is assumed to be the origin. More generally, ⟨l, ϕ⟩ ≡ ∂^α ϕ(x_0) is a distribution.

The following proposition characterizes distributions.

Proposition 1.4. A linear map l : D(Ω) → C belongs to D′(Ω) if and only if for every compact subset K ⊂ Ω there are an integer N(K, l) and a constant c ∈ R such that for all ϕ ∈ D(Ω) with support in K

(1.2) |⟨l, ϕ⟩| ≤ c ‖ϕ‖_{C^N},   where ‖ϕ‖_{C^N} = Σ_{|α|≤N} max |∂^α ϕ|.

Proof. If (1.2) holds for every compact K ⊂ Ω, then it is clear that l ∈ D′(Ω).
Suppose now that (1.2) does not hold for some compact K. For each integer n, choose ϕ_n ∈ D(Ω) with support in K such that

(1.3) |⟨l, ϕ_n⟩| > 1 and ‖ϕ_n‖_{C^n} < 1/n.

Then the ϕ_n satisfy (i) and (ii) with ϕ = 0, but ⟨l, ϕ_n⟩ does not converge to zero, thus l is not a distribution. □

Definition 1.5. A sequence of distributions l_n ∈ D′(Ω) converges to l ∈ D′(Ω) if and only if for every test function ϕ ∈ D(Ω), l_n(ϕ) → l(ϕ). This convergence is denoted l_n ⇀ l or l_n → l in D′.

Example 1.6. If j ∈ D(Rⁿ) with ∫ j(x) dx = 1, let j_ε(x) = ε^{−n} j(x/ε). Then j_ε → δ_0 in D′(Rⁿ) as ε → 0.
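This convergence is easy to test numerically; the sketch below (the particular bump j and the test function ϕ are arbitrary choices, not taken from [1]) approximates ⟨j_ε, ϕ⟩ = ∫ j_ε(x) ϕ(x) dx for shrinking ε and watches it approach ϕ(0):

```python
import numpy as np

x = np.linspace(-1.0, 1.0, 200001)
dx = x[1] - x[0]

def bump(y):
    """The standard compactly supported bump exp(-1/(1 - y^2)) on |y| < 1 (unnormalized)."""
    return np.where(np.abs(y) < 1.0, np.exp(-1.0 / np.maximum(1.0 - y**2, 1e-12)), 0.0)

j_mass = np.sum(bump(x)) * dx          # ∫ j(y) dy, used to normalize j below

phi = np.cos(3.0 * x)                  # a smooth test function near the origin; ϕ(0) = 1
for eps in (0.5, 0.1, 0.02):
    j_eps = bump(x / eps) / (eps * j_mass)   # j_ε(x) = ε^{-1} j(x/ε) in dimension n = 1
    print(eps, np.sum(j_eps * phi) * dx)     # → ϕ(0) = 1 as ε → 0
```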

1.1. Operations with distributions. The great utility of distributions lies in the fact that the standard operations of calculus extend to D′(Ω). For instance, one can differentiate distributions. This is quite important in the study of differential equations.
The recipe for defining operations on distributions is basically the same:
pass the operator onto the test function.

Example 1.7. We recall the translation operator (τ_y f)(x) = f(x − y), y ∈ Rⁿ. Let l ∈ D′(Rⁿ); the translate of l by the vector y, τ_y l, is defined as follows. If l were equal to the function f, then

⟨τ_y l, ϕ⟩ = ∫ f(x − y) ϕ(x) dx = ∫ f(z) ϕ(z + y) dz = ⟨l, τ_{−y} ϕ⟩,   ϕ ∈ D(Rⁿ).

This motivates the definition ⟨τ_y l, ϕ⟩ = ⟨l, τ_{−y} ϕ⟩. It is easy to check that τ_y l defined in this way is a distribution and that the definition agrees with τ_y f when l = l_f.

Example 1.8. To differentiate a distribution l on Rⁿ, we form the difference quotients which should converge to ∂l/∂x_j. Let e_j be the vector with j-th coordinate equal to 1 and all other coordinates 0. The difference quotients are given by

(1.4) ⟨(τ_{−he_j} l − l)/h, ϕ⟩ ≡ ⟨l, (τ_{he_j} ϕ − ϕ)/h⟩,   ϕ ∈ D(Rⁿ).

The test functions on the right converge to −∂ϕ/∂x_j, so the continuity of l implies that the right-hand side of (1.4) converges to ⟨l, −∂ϕ/∂x_j⟩. This suggests the definition

(1.5) ⟨∂l/∂x_j, ϕ⟩ ≡ ⟨l, −∂ϕ/∂x_j⟩.

This defines a distribution and, if f ∈ C¹(Ω) and l = l_f, the derivatives of l are equal to the distributions l_{∂f/∂x_j}. Thus the operator ∂/∂x_j on D′ is an extension of ∂/∂x_j on D.
Let us apply the above procedure to find the derivative, in the sense of distributions, of the Heaviside function H(x) = χ_{[0,∞)}(x) on R. The difference quotient

(τ_{−h} H − H)/h = h^{−1} χ_{[−h,0)}

converges to δ in the sense of distributions; indeed, for ϕ ∈ D(R),

⟨h^{−1} χ_{[−h,0)}, ϕ⟩ = h^{−1} ∫_{−h}^{0} ϕ(x) dx → ϕ(0) = ⟨δ, ϕ⟩ as h → 0+.

Thus dH/dx = δ. Observe that the difference quotient converges to zero almost everywhere. Since H is not constant, zero cannot be the correct derivative. The pointwise limit gives the wrong answer and the distributional derivative gives the right answer.
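Definition (1.5) can also be checked numerically for l = l_H; the short sketch below (the rapidly decaying function ϕ is an arbitrary stand-in for a test function) confirms that −⟨H, ∂ϕ/∂x⟩ ≈ ϕ(0):

```python
import numpy as np

x = np.linspace(-5.0, 5.0, 100001)
dx = x[1] - x[0]
phi = (1.0 + x) * np.exp(-x**2)        # rapidly decaying stand-in for a test function
dphi = np.gradient(phi, dx)            # numerical ϕ'
H = (x >= 0.0).astype(float)           # Heaviside function

pairing = -np.sum(H * dphi) * dx       # ⟨dH/dx, ϕ⟩ := ⟨H, −ϕ'⟩, as in (1.5)
print(pairing, phi[len(x) // 2])       # both ≈ ϕ(0) = 1
```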

The operations on distributions discussed so far are particular cases of a general algorithm.

Proposition 1.9 (P. D. Lax). Suppose that L is a linear map from D(Ω1) to D(Ω2) which is sequentially continuous in the sense that ϕ_n → ϕ implies L(ϕ_n) → L(ϕ). Suppose, in addition, that there is an operator L′, sequentially continuous from D(Ω2) to D(Ω1), which is the transpose of L in the sense that

⟨L(ϕ), ψ⟩ = ⟨ϕ, L′(ψ)⟩ for all ϕ ∈ D(Ω1), ψ ∈ D(Ω2).

Then the operator L extends to a sequentially continuous map of D′(Ω1) to D′(Ω2) given by

(1.6) ⟨L(l), ψ⟩ = ⟨l, L′(ψ)⟩ for all l ∈ D′(Ω1) and ψ ∈ D(Ω2).

Proof. The sequential continuity of L′ shows that L(l) defined in (1.6) is a distribution. If l = l_ϕ for some ϕ ∈ D(Ω1), then

(1.7) ⟨L(l), ψ⟩ ≡ ⟨l, L′(ψ)⟩ = ∫_{Ω1} ϕ(x) L′(ψ)(x) dx = ∫_{Ω2} L(ϕ)(x) ψ(x) dx,

the last equality from the hypothesis that L′ is the transpose of L. Thus L(l) is the distribution associated to L(ϕ), which proves that the map defined by (1.6) extends L.
Finally, if l_n ⇀ l in D′(Ω1), it follows immediately from (1.6) that L(l_n) ⇀ L(l), proving the sequential continuity of L. □

Remark 1.10. The proof of the uniqueness of this extension can be seen in [1], Appendix, Proposition 8.

Example 1.11. If a(x) ∈ C^∞(Ω) (≡ E(Ω)), then the map L(ϕ) ≡ aϕ is equal to its own transpose. That is,

⟨L(ϕ), ψ⟩ = ∫ (a(x)ϕ(x)) ψ(x) dx = ∫ ϕ(x) (a(x)ψ(x)) dx = ⟨ϕ, L(ψ)⟩.

Thus for l ∈ D′(Ω), al is a well-defined distribution given by ⟨al, ϕ⟩ ≡ ⟨l, aϕ⟩.


Example 1.12. If Ω2 = y + Ω1 and L = τ_y is translation by y, then L′ = τ_{−y} is sequentially continuous. Therefore for l ∈ D′(Ω1) the translates of l are well defined by ⟨τ_y l, ϕ⟩ ≡ ⟨l, τ_{−y} ϕ⟩. The reflection operator ϕ̃(x) = ϕ(−x) is its own transpose, thus l̃ is a well-defined distribution on the reflection of Ω.
Example 1.13. Let L = ∂^α, where ∂^α ≡ ∂^{α_1}_{x_1} · · · ∂^{α_n}_{x_n}, α ∈ Nⁿ and |α| = α_1 + · · · + α_n. Integration by parts gives L′ = (−1)^{|α|} ∂^α, which is sequentially continuous on D. Thus the derivatives of distributions are defined by

⟨∂^α l, ϕ⟩ ≡ ⟨l, (−1)^{|α|} ∂^α ϕ⟩.
5

Once we have defined multiplication and differentiation we can derive a product rule:

⟨∂(al), ψ⟩ ≡ ⟨al, −∂ψ⟩ = ⟨l, −a ∂ψ⟩ = ⟨l, −∂(aψ)⟩ + ⟨l, (∂a)ψ⟩ = ⟨a ∂l + (∂a) l, ψ⟩,

that is, ∂(al) = a ∂l + (∂a) l. Proceeding inductively we obtain the usual Leibniz rule for ∂^α(al).
If P(x, D) = Σ_α a_α(x) ∂^α is a linear partial differential operator with coefficients in E(Ω), then P maps D′(Ω) to itself with ⟨P l, ϕ⟩ ≡ ⟨l, P′ϕ⟩, where the transpose of P is given by

P′ψ = Σ_α (−1)^{|α|} ∂^α(a_α ψ).
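The transpose formula can be checked symbolically in simple cases. The SymPy sketch below (the coefficients a_0, a_1, a_2 and the rapidly decaying functions ϕ, ψ are arbitrary choices, made so that all boundary terms in the integrations by parts vanish; none of this is from [1]) verifies ∫ (Pϕ) ψ dx = ∫ ϕ (P′ψ) dx for a one-dimensional second-order operator:

```python
import sympy as sp

x = sp.symbols('x', real=True)
phi = sp.exp(-x**2)                    # stands in for a test function
psi = (1 + x) * sp.exp(-x**2)          # another rapidly decaying function
a0, a1, a2 = 1, x, x**2                # smooth coefficients of P = a0 + a1 d/dx + a2 d^2/dx^2

P_phi = a0*phi + a1*sp.diff(phi, x) + a2*sp.diff(phi, x, 2)
# P'psi = a0*psi - (a1*psi)' + (a2*psi)'', i.e. the sum of (-1)^{|alpha|} d^alpha (a_alpha psi)
Pt_psi = a0*psi - sp.diff(a1*psi, x) + sp.diff(a2*psi, x, 2)

lhs = sp.integrate(P_phi * psi, (x, -sp.oo, sp.oo))
rhs = sp.integrate(phi * Pt_psi, (x, -sp.oo, sp.oo))
print(sp.simplify(lhs - rhs))          # prints 0: the two pairings coincide
```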

1.2. Convolution. Suppose that Ω = Rⁿ and ϕ ∈ D(Rⁿ). Let L be the operator L(ψ) = ϕ ∗ ψ. The Leibniz rule for differentiating under the integral sign implies that L maps D(Rⁿ) continuously to itself. The Fubini theorem shows that the transpose of L is convolution with ϕ̃. Thus ϕ ∗ l makes sense for any l ∈ D′(Rⁿ) and is given by

⟨ϕ ∗ l, ψ⟩ ≡ ⟨l, ϕ̃ ∗ ψ⟩.
Example 1.14. We compute ϕ ∗ δ:

⟨ϕ ∗ δ, ψ⟩ ≡ ⟨δ, ϕ̃ ∗ ψ⟩ = (ϕ̃ ∗ ψ)(0) = ∫ ϕ(y) ψ(y) dy = ⟨ϕ, ψ⟩.

Therefore ϕ ∗ δ = ϕ.
It is not difficult to show that for l ∈ D′(Rⁿ),

∂^α(ϕ ∗ l) = ϕ ∗ ∂^α l = (∂^α ϕ) ∗ l.
We end this section with the following result, whose proof can be seen in [1], Appendix, Proposition 3.

Proposition 1.15. If l ∈ D′(Rⁿ) and ϕ ∈ D(Rⁿ), then l ∗ ϕ is equal to the C^∞ function whose value at x is ⟨l, τ_x(ϕ̃)⟩.
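For instance, for l = l_H (the Heaviside function) the formula of Proposition 1.15 gives (H ∗ ϕ)(x) = ⟨H, τ_x(ϕ̃)⟩ = ∫_{−∞}^{x} ϕ(z) dz, a C^∞ function even though H itself jumps. A small numerical sketch comparing the two expressions (the Gaussian below is a rapidly decaying stand-in for a compactly supported ϕ, and the grid is arbitrary):

```python
import numpy as np

y = np.linspace(-10.0, 10.0, 20001)
dy = y[1] - y[0]
phi = np.exp(-y**2)                    # stands in for a test function
H = (y >= 0.0).astype(float)

for x in (-1.0, 0.0, 2.0):
    conv = np.sum(H * np.exp(-(x - y)**2)) * dy   # ⟨H, τ_x(ϕ̃)⟩ = ∫ H(y) ϕ(x − y) dy
    antider = np.sum(phi[y <= x]) * dy            # ∫_{-∞}^{x} ϕ(z) dz
    print(x, conv, antider)                       # the two columns agree
```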

1.3. Tempered distributions. Recall that S(Rⁿ) denotes the Schwartz space, the space of C^∞ functions all of whose derivatives decay rapidly at infinity, that is,

S(Rⁿ) = {ϕ ∈ C^∞(Rⁿ) : |||ϕ|||_{α,β} ≡ ‖x^α ∂^β ϕ‖_{L^∞(Rⁿ)} < ∞ for all α, β ∈ (Z+)ⁿ}.

Definition 1.16. A tempered distribution is a continuous linear functional on S(Rⁿ). The set of all tempered distributions is denoted by S′(Rⁿ).

Proposition 1.17. A linear map T : S(Rⁿ) → C is continuous if and only if there exist N ∈ N and c ∈ R such that for all ϕ ∈ S(Rⁿ)

(1.8) |⟨T, ϕ⟩| ≤ c Σ_{|α|≤N, |β|≤N} ‖x^β ∂^α ϕ‖_{L^∞(Rⁿ)}.

Corollary 1.18. A distribution T ∈ D′(Rⁿ) extends uniquely to an element of S′(Rⁿ) if and only if there exist N ∈ N and c ∈ R such that (1.8) holds for all ϕ ∈ D(Rⁿ).
In particular, we have

D(Rⁿ) ⊂ E′(Rⁿ) ⊂ S′(Rⁿ) ⊂ D′(Rⁿ).
Example 1.19. If f is a Lebesgue measurable function on Rⁿ such that (1 + |x|²)^{−M} f ∈ L¹(Rⁿ) for some M, then the distribution defined by f is tempered since

|⟨f, ϕ⟩| = |⟨(1 + |x|²)^{−M} f, (1 + |x|²)^{M} ϕ⟩| ≤ ‖(1 + |x|²)^{−M} f‖_{L¹} ‖(1 + |x|²)^{M} ϕ‖_{L^∞} ≤ c_{f,M} |||ϕ|||_{2M,0}.

Example 1.20. If f ∈ L^p(Rⁿ), 1 ≤ p ≤ ∞, then f ∈ S′(Rⁿ), since such functions satisfy the condition of Example 1.19: choose M so large that (1 + |x|²)^{−M} ∈ L^q(Rⁿ), q the conjugate exponent of p, and then use Hölder's inequality.

Definition 1.21. A sequence T_n ∈ S′(Rⁿ) converges to T ∈ S′(Rⁿ) if and only if for all ϕ ∈ S(Rⁿ)

⟨T_n, ϕ⟩ → ⟨T, ϕ⟩ as n → ∞.

We write T_n → T in S′.
In what follows we are mainly interested in extending to S′ the basic linear operators of analysis, for instance ∂^α and F.
Given a continuous linear operator L : S → S, the transpose L′ maps S′ → S′. For T ∈ S′(Rⁿ), L′T ∈ S′ is defined by

(1.9) ⟨L′T, ϕ⟩ ≡ ⟨T, Lϕ⟩ for all ϕ ∈ S.


The next proposition shows when identity (1.9) can be used to extend L to S′.

Proposition 1.22. Suppose that L : S(Rⁿ) → S(Rⁿ) is a continuous linear map and that the restriction of the transpose operator to S, L′|_S, is a continuous map of S to itself. Then L has a unique sequentially continuous extension to a linear map L : S′(Rⁿ) → S′(Rⁿ) defined by

⟨LT, ϕ⟩ ≡ ⟨T, L′ϕ⟩ for all T ∈ S′, ϕ ∈ S.

Proof. See Proposition 4, page 77, in [1]. □
Remark 1.23. This proposition identifies when passing the operator to the test function yields a good extension.
For a general L one will not even have L′ϕ ∈ S for ϕ ∈ S, so the hypothesis on L′ is quite restrictive. However, translation, dilation, multiplication by a suitable function M (see Exercise 1.28 below), differentiation ∂^α and the Fourier transform F are all covered by this proposition.

For T ∈ S′ and ϕ ∈ S, we have

⟨(∂^α)′ T, ϕ⟩ ≡ ⟨T, ∂^α ϕ⟩.

If T ∈ S, the right-hand side is equal to

∫ T(x) ∂_x^α ϕ(x) dx = ∫ (−∂_x)^α T(x) ϕ(x) dx = ⟨(−∂_x)^α T, ϕ⟩.

Thus, for such T, (∂^α)′ T = (−∂)^α T.
Similarly, for T ∈ S′ and ϕ ∈ S,

⟨F′T, ϕ⟩ ≡ ⟨T, Fϕ⟩.

For T ∈ S, the duality identity

⟨Fϕ, ψ⟩ = ⟨ϕ, Fψ⟩ for all ϕ, ψ ∈ S,

shows that this is equal to ⟨FT, ϕ⟩, whence F′|_S = F.
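The duality identity itself is easily tested numerically. The sketch below assumes the symmetric normalization Fϕ(ξ) = (2π)^{−1/2} ∫ ϕ(x) e^{−ixξ} dx (chosen to match the constant (2π)^{−n/2} that appears in the proof of Theorem 1.25 below); the grid and the two Schwartz functions are arbitrary choices:

```python
import numpy as np

x = np.linspace(-12.0, 12.0, 2001)
dx = x[1] - x[0]

def fourier(f, xi):
    """F f(ξ) = (2π)^{-1/2} ∫ f(x) e^{-i x ξ} dx, approximated by a Riemann sum."""
    return np.array([(f * np.exp(-1j * x * s)).sum() * dx for s in xi]) / np.sqrt(2.0 * np.pi)

phi = np.exp(-x**2)
psi = np.exp(-(x - 1.0)**2)

lhs = (fourier(phi, x) * psi).sum() * dx          # ⟨Fϕ, ψ⟩
rhs = (phi * fourier(psi, x)).sum() * dx          # ⟨ϕ, Fψ⟩
print(lhs, rhs)                                   # equal up to grid and round-off error
```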
1.4. Applications. First consider the solvability of the equation

(1.10) (1 − ∆)u = f.

For u, f in S′ this is equivalent to

(1 + |ξ|²) û = Ff,

hence

(1.11) û = (1 + |ξ|²)^{−1} Ff.

Proposition 1.24. For any f ∈ S′(Rⁿ) there exists exactly one solution u ∈ S′(Rⁿ) of (1.10), given by formula (1.11). In particular, if f ∈ S, then u ∈ S. If f ∈ L², then D^α u ∈ L²(Rⁿ) for all |α| ≤ 2.
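Formula (1.11) can be illustrated in one dimension with a discrete Fourier transform standing in for F (a sketch only: the FFT works on a large periodic box rather than on R, and the right-hand side f, the box size and the resolution are arbitrary choices):

```python
import numpy as np

N, L = 4096, 40.0
x = np.linspace(-L/2, L/2, N, endpoint=False)
dx = x[1] - x[0]
xi = 2.0 * np.pi * np.fft.fftfreq(N, d=dx)        # discrete frequency variable

f = np.exp(-x**2) * np.cos(2.0 * x)               # a rapidly decaying right-hand side
u = np.real(np.fft.ifft(np.fft.fft(f) / (1.0 + xi**2)))   # û = (1 + ξ²)^{-1} f̂

# check (1 - d²/dx²) u ≈ f with a second-order finite-difference Laplacian
residual = u - (np.roll(u, -1) - 2.0*u + np.roll(u, 1)) / dx**2 - f
print(np.max(np.abs(residual)))                   # small: limited by the O(dx²) stencil error
```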
The second application is a Liouville-type theorem.

Theorem 1.25 (Generalized Liouville Theorem). Suppose that P(D) is a constant coefficient partial differential operator such that P(ξ) ≠ 0 for ξ ≠ 0. If u ∈ S′(Rⁿ) satisfies P(D)u = 0, then u is a polynomial in x.

Proof. Taking the Fourier transform of the equation we obtain

F(P(D)u) = P(ξ) û = 0.

Since P(ξ) ≠ 0 for ξ ≠ 0, it follows that supp û ⊂ {0}.
Thus û has to be a finite linear combination of derivatives of the delta function,

û = Σ_α c_α D^α δ.

Applying the inverse Fourier transform we get

u = Σ_α c_α F^{−1}(D^α δ) = Σ_α c_α (−x)^α F^{−1}δ = (2π)^{−n/2} Σ_α c_α (−x)^α,

a polynomial in x. □
Corollary 1.26. The only bounded harmonic (resp. holomorphic) functions
on Rn (resp. C) are the constants.

Next we consider the wave equation,

(1.12) ∂²u/∂t²(x, t) = ∂²u/∂x²(x, t).

Both sides of the equation make sense if u is a distribution. If the equality holds in the sense of distributions we say that u is a weak solution. Recall that u is said to be a classical solution if u ∈ C²(R²) and the identity is satisfied pointwise.
Consider a traveling wave u(x, t) = f(x − t), f ∈ L¹_loc(R). It is clear that u ∈ L¹_loc(R²) and so it defines a distribution. Is it a weak solution?
Using the definition of differentiation of distributions we find that

⟨∂²u/∂t², ϕ⟩ = ⟨u, ∂²ϕ/∂t²⟩ = ∬ f(x − t) ∂²ϕ/∂t²(x, t) dx dt,
⟨∂²u/∂x², ϕ⟩ = ⟨u, ∂²ϕ/∂x²⟩ = ∬ f(x − t) ∂²ϕ/∂x²(x, t) dx dt.

Hence

⟨∂²u/∂t² − ∂²u/∂x², ϕ⟩ = ∬ f(x − t) (∂²ϕ/∂t² − ∂²ϕ/∂x²)(x, t) dx dt.

We would like to show that this is zero. To do so, we make the change of variables y = x − t, z = x + t, so that dx dt = ½ dy dz and

∂²/∂t² − ∂²/∂x² = −4 ∂²/(∂y ∂z).

Thus,

∬ f(x − t) (∂²ϕ/∂t² − ∂²ϕ/∂x²)(x, t) dx dt = −2 ∬ f(y) (∂²ϕ/∂y∂z)(y, z) dz dy.

We claim that the integration in z yields zero. Indeed, we observe that

∫_a^b (∂²ϕ/∂y∂z)(y, z) dz = (∂ϕ/∂y)(y, b) − (∂ϕ/∂y)(y, a),

and this vanishes as soon as (a, b) contains the support of ϕ(y, ·), since ϕ and ∂ϕ/∂y vanish outside a bounded set. Therefore u(x, t) = f(x − t) is a weak solution of (1.12).
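This can be checked numerically for a genuinely non-smooth profile, say f = H (a crude grid-quadrature sketch; the rapidly decaying Gaussian below stands in for a compactly supported test function):

```python
import numpy as np

s = np.linspace(-8.0, 8.0, 801)
ds = s[1] - s[0]
X, T = np.meshgrid(s, s, indexing='ij')

phi = np.exp(-(X**2 + T**2))                      # smooth, rapidly decaying
phi_tt = np.gradient(np.gradient(phi, ds, axis=1), ds, axis=1)
phi_xx = np.gradient(np.gradient(phi, ds, axis=0), ds, axis=0)

u = (X - T >= 0.0).astype(float)                  # f(x − t) with f = Heaviside
signed = np.sum(u * (phi_tt - phi_xx)) * ds**2    # ⟨u, ϕ_tt − ϕ_xx⟩
size = np.sum(np.abs(u * (phi_tt - phi_xx))) * ds**2
print(signed, size)                               # the first number is tiny compared with the second
```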
Next we investigate whether log(x² + y²) is a weak solution of the Laplace equation

(1.13) ∆u(x, y) = (∂²/∂x² + ∂²/∂y²) u(x, y) = 0.

We have to check whether

⟨(∂²/∂x² + ∂²/∂y²) u, ϕ⟩ = ⟨u, (∂²/∂x² + ∂²/∂y²) ϕ⟩ = 0 for all ϕ ∈ D(R²).

Employing polar coordinates (r, θ) we have

∂²/∂x² + ∂²/∂y² = ∂²/∂r² + (1/r) ∂/∂r + (1/r²) ∂²/∂θ² and dx dy = r dr dθ.

Then for u(x, y) = log(x² + y²) we would like to know whether

∫_0^{2π} ∫_0^∞ log r² (∂²/∂r² + (1/r) ∂/∂r + (1/r²) ∂²/∂θ²) ϕ(r, θ) r dr dθ = 0

holds. To avoid the singularity of u at the origin we integrate in r over (ε, ∞) and then let ε tend to 0.
We first note that

(1.14) ∫_0^{2π} log r² (1/r²) (∂²ϕ/∂θ²)(r, θ) r dθ = (1/r) log r² [ (∂ϕ/∂θ)(r, θ) ]_{θ=0}^{θ=2π} = 0

since ∂ϕ/∂θ is 2π-periodic in θ. Therefore this term always vanishes.
On the other hand we have

(1.15) ∫_ε^∞ log r² (∂ϕ/∂r)(r, θ) dr = − ∫_ε^∞ (∂/∂r)(log r²) ϕ(r, θ) dr − log(ε²) ϕ(ε, θ),

and

(1.16) ∫_ε^∞ r log r² (∂²ϕ/∂r²)(r, θ) dr = − ∫_ε^∞ (∂/∂r)(r log r²) (∂ϕ/∂r)(r, θ) dr − ε log(ε²) (∂ϕ/∂r)(ε, θ)
    = ∫_ε^∞ (∂²/∂r²)(r log r²) ϕ(r, θ) dr − ε log(ε²) (∂ϕ/∂r)(ε, θ) + (∂/∂r)(r log r²)|_{r=ε} ϕ(ε, θ).

Now (∂/∂r)(r log r²) = log r² + 2, (∂²/∂r²)(r log r²) = 2/r and (∂/∂r)(log r²) = 2/r.
Gathering together the information in (1.14), (1.15) and (1.16) we obtain

∫_0^{2π} ∫_ε^∞ log r² (∂²/∂r² + (1/r) ∂/∂r + (1/r²) ∂²/∂θ²) ϕ(r, θ) r dr dθ
    = ∫_0^{2π} ∫_ε^∞ (2/r − 2/r) ϕ(r, θ) dr dθ
      + ∫_0^{2π} (− log ε² + log ε² + 2) ϕ(ε, θ) dθ
      + ∫_0^{2π} (− ε log ε²) (∂ϕ/∂r)(ε, θ) dθ.

Thus

(1.17) ⟨∆u, ϕ⟩ = lim_{ε→0} [ ∫_0^{2π} 2 ϕ(ε, θ) dθ − ∫_0^{2π} ε log ε² (∂ϕ/∂r)(ε, θ) dθ ].

Since ϕ is continuous, ϕ(ε, θ) → ϕ(0) as ε → 0, and so the first term in (1.17) approaches 4π⟨δ, ϕ⟩.
In the second term in (1.17), (∂ϕ/∂r)(ε, θ) remains bounded while ε log ε² → 0 as ε → 0. Hence

∆ log(x² + y²) = 4πδ.

Therefore log(x² + y²) is not a weak solution of ∆u = 0.
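A quadrature sketch of the identity just obtained, ⟨∆ log(x² + y²), ϕ⟩ = 4π ϕ(0, 0) (the Gaussian below stands in for a test function, ∆ϕ is computed by hand, and the cell-centred grid is chosen so that the logarithmic singularity is never evaluated at the origin; none of the numerical choices come from [1]):

```python
import numpy as np

n, L = 1200, 16.0
ds = L / n
s = -L/2 + ds * (np.arange(n) + 0.5)              # cell centres: the origin is excluded
X, Y = np.meshgrid(s, s, indexing='ij')
R2 = X**2 + Y**2

phi0 = 1.0                                        # ϕ(0, 0) for ϕ = exp(-(x² + y²))
lap_phi = (4.0 * R2 - 4.0) * np.exp(-R2)          # ∆ϕ, computed by hand

print(np.sum(np.log(R2) * lap_phi) * ds**2,       # ⟨log(x² + y²), ∆ϕ⟩
      4.0 * np.pi * phi0)                         # ≈ 12.566; the two values agree closely
```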
The previous computation shows that E(x, y) = (1/4π) log(x² + y²) satisfies ∆E = δ, i.e. E is a fundamental solution of the Laplacian in R². This allows us to solve the Poisson equation

(1.18) ∆u = f

for suitable f, for instance f ∈ D(R²), by setting u = E ∗ f: indeed ∆(E ∗ f) = (∆E) ∗ f = δ ∗ f = f.

A final remark.
Remark 1.27. It is clear that S′(Rⁿ) ⊂ D′(Rⁿ). What is not true is that every distribution in D′(Rⁿ) is a tempered distribution. For example, the function f(x) = e^{x²} on R defines the distribution

⟨f, ϕ⟩ = ∫_{−∞}^{∞} e^{x²} ϕ(x) dx,   ϕ ∈ D(R).

Observe that e^{−x²/2} ∈ S(R), while formally

⟨f, e^{−x²/2}⟩ = ∫_{−∞}^{∞} e^{x²} e^{−x²/2} dx = ∫_{−∞}^{∞} e^{x²/2} dx = +∞,

so the pairing does not extend continuously to S(R) and f does not define a tempered distribution.

Exercise 1.28. Prove:
(i) If M ∈ C^∞(Rⁿ) and for every α ∈ (Z+)ⁿ there exist N, c such that

|∂^α M| ≤ c (1 + |x|)^N,

then the map f ↦ M f is a continuous linear transformation of S(Rⁿ) into itself.
(ii) If, in addition, there exist γ, c > 0 such that

|M(x)| ≥ c (1 + |x|)^{−γ},

then the mapping is one-to-one and onto with continuous inverse.
Exercise 1.29. Verify that if f satisfies

∫_{|x|≤A} |f(x)| dx ≤ c A^N as A → ∞

for some constants c and N, then

∫_{Rⁿ} |f(x) ϕ(x)| dx < ∞ for all ϕ ∈ S(Rⁿ).

Therefore

∫_{Rⁿ} f(x) ϕ(x) dx

defines a tempered distribution.

References
[1] J. Rauch, Partial Differential Equations, Graduate Texts in Mathematics, 128.
Springer-Verlag, New York, 1991. x+263 pp.
