
JORDAN PRODUCT AND ANALYTIC CORE PRESERVERS

arXiv:2211.14116v1 [math.FA] 25 Nov 2022
S. Elouazzani
Department of Mathematics, Labo LIABM,
Faculty of Sciences, 60000 Oujda, Morocco
elouazzani.soufiane@ump.ac.ma
M. Elhodaibi
Department of Mathematics, Labo LIABM,
Faculty of Sciences, 60000 Oujda, Morocco
m.elhodaibi@ump.ac.ma

Abstract. Let B(X) be the algebra of all bounded linear operators on an
infinite-dimensional complex Banach space X. For an operator T ∈ B(X),
K(T ) denotes as usual the analytic core of T . We determine the form of
surjective maps φ on B(X) satisfying

K(φ(T )φ(S) + φ(S)φ(T )) = K(T S + ST )


for all T, S ∈ B(X).
2020 Mathematics Subject Classification. 47B49, 47A10, 47A11
Keywords— Analytic core; Inner local spectral radius; Jordan product; Preserver.

1 Introduction
Throughout this paper, B(X) denotes the algebra of all bounded linear operators
on an infinite-dimensional complex Banach space X. Recall that the local resolvent
set ρT (x) of an operator T ∈ B(X) at a point x ∈ X is the set of all λ in C for
which there exist an open neighborhood Uλ of λ in C and an X-valued analytic
function f : Uλ → X such that (T − µI)f (µ) = x for all µ ∈ Uλ . As is known, the
local spectrum σT (x) of T at x is the complement in C of ρT (x) and is a compact
subset of the spectrum σ(T ) of T , possibly empty. An operator
T ∈ B(X) is said to have the single-valued extension property (SVEP) if, for
every open subset U of C, the equation (T − λI)f (λ) = 0 for all λ ∈ U has
no nontrivial analytic solution f on U . If T has the SVEP, then σT (x) ≠ ∅ for
every nonzero vector x ∈ X. For a positive scalar r, we denote by D(0, r) and
D̄(0, r), respectively, the open and the closed disc centered at the origin with radius
r. For every closed subset F of C, the glocal spectral subspace of an operator T ∈ B(X)
is defined by XT (F ) := {x ∈ X : there exists an analytic function f : C \ F −→
X such that (T − λ)f (λ) = x for all λ ∈ C \ F }. The local spectral radius of T at
x ∈ X is defined by
rT (x) = lim sup_{n→∞} ‖T^n x‖^{1/n},

which coincides with rT (x) = inf{r ≥ 0 : x ∈ XT (D̄(0, r))}, and also with the maximum
modulus of σT (x) when T has the SVEP; see for instance [19]. Recall that for an
operator T ∈ B(X), the analytic core K(T ) of T is the set of all x ∈ X for which
there exist δ > 0 and a sequence (xn ) ⊂ X such that x0 = x, T xn+1 = xn and
‖xn ‖ ≤ δ^n ‖x‖ for all n ≥ 0; for more information, see [3, 19]. Recall also that the
inner local spectral radius iT (x) of T at x ∈ X is defined by iT (x) := sup{r ≥ 0 :
x ∈ XT (C\D(0, r))}; see [20].
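As a simple illustration of the definition of the analytic core, suppose that T ∈ B(X) is
invertible. For every x ∈ X, the sequence xn := T^{-n} x satisfies
x0 = x, T xn+1 = xn and ‖xn ‖ ≤ ‖T^{-1}‖^n ‖x‖ for all n ≥ 0,
so the choice δ = ‖T^{-1}‖ shows that x ∈ K(T ). Hence K(T ) = X whenever T is
invertible, while K(T ) = {0} whenever T is quasi-nilpotent (see Lemma 1 below).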
The study of local spectra preserver problems was initiated by A. Bourhim and T.
Ransford [12], who characterized additive maps on B(X) that preserve the local
spectrum of every operator T ∈ B(X) at each vector x ∈ X. Since then, maps
preserving the local spectrum and the local spectral radius have been studied by many
authors; see [4, 7, 9, 13, 14]. In [16], M.E. El Kettani and H. Benbouziane showed
that a surjective additive map φ on B(X) satisfies
iT (x) = 0 if and only if iφ(T ) (x) = 0
for all x ∈ X and T ∈ B(X) if and only if there exists a nonzero scalar c ∈ C such
that φ(T ) = cT for all T ∈ B(X). M. Elhodaibi and A. Jaatit [15] extended a
result of T. Jari [17] to all maps φ on B(X) (not necessarily surjective): φ satisfies
iφ(T )−φ(S) (x) = 0 ⇐⇒ iT −S (x) = 0
for every x ∈ X and T, S ∈ B(X) if and only if there exists a nonzero scalar
c ∈ C such that φ(T ) = cT + φ(0) for all T ∈ B(X). Concerning inner local spectral
radius preservers of generalized products, A. Achchi [2] showed that if a surjective
map φ on B(X) satisfies
iφ(T1 )∗...∗φ(Tk ) (x) = 0 ⇐⇒ iT1 ∗...∗Tk (x) = 0
for all x ∈ X and all T1 , ..., Tk ∈ B(X), then there exists a map γ : B(X) −→
C\{0} such that φ(T ) = γ(T )T . Let us mention other papers that characterize
nonlinear maps preserving certain spectral and local spectral quantities of product
and Jordan product of operators or matrices; see for instance [5, 6, 10, 11]. In the
present paper, we characterize surjective maps φ (not necessarily linear) on B(X)
satisfying
K(T S + ST ) = K(φ(T )φ(S) + φ(S)φ(T ))
for all T, S ∈ B(X). This immediately gives the characterization of surjective maps
φ : B(X) −→ B(X) satisfying

iT S+ST (x) = 0 if and only if iφ(T )φ(S)+φ(S)φ(T ) (x) = 0

for all x ∈ X and T, S ∈ B(X).

2 Preliminaries and Notations


For every vector x ∈ X and every linear functional f ∈ X ∗ , where X ∗ denotes the
topological dual space of X, we denote by x ⊗ f the operator of rank at most one
defined by (x ⊗ f )(z) = f (z)x for all z ∈ X. Recall that x ⊗ f is nilpotent if and only if
f (x) = 0, and it is idempotent if and only if f (x) = 1. Let F1 (X) denote the set of
all operators of rank at most one on X. For a subset A of X, the subspace spanned
by A is denoted by span(A). Observe that if f (x) = 0 then K(x ⊗ f ) = {0}, and
otherwise K(x ⊗ f ) = span{x}. Let dim(Y ) denote the dimension of a subspace Y of
X, and, for every operator T ∈ B(X), let N (T ) denote the kernel of T and R(T ) its
range.
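The observation on rank one operators can be verified directly from the definition of
the analytic core. If f (x) = 0, then (x ⊗ f )2 = f (x) x ⊗ f = 0, so any y ∈ K(x ⊗ f )
satisfies y = (x ⊗ f )y1 = (x ⊗ f )2 y2 = 0 for a suitable sequence (yn ), whence
K(x ⊗ f ) = {0}. If f (x) ≠ 0, then (x ⊗ f )x = f (x)x, so the sequence xn := f (x)^{-n} x
(with δ = 1/|f (x)|) shows that span{x} ⊂ K(x ⊗ f ), while K(x ⊗ f ) ⊂ R(x ⊗ f ) =
span{x} by assertion (i) of Lemma 1 below.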

In the next lemma, we give some basic properties of K(T ) and iT (x); see for
instance [3, 19, 20].

Lemma 1. Let T ∈ B(X), then the following statements hold.

(i) K(T ) ⊂ R(T ).

(ii) K(λT ) = K(T ) for every nonzero scalar λ ∈ C.

(iii) If M is a closed subspace of X and T M = M then M ⊂ K(T ).

(iv) If T is quasi-nilpotent then K(T ) = {0}.

(v) N (T − λ) ⊂ K(T ) for every nonzero scalar λ ∈ C.

(vi) K(T ) = {x ∈ X : 0 ∉ σT (x)} for all T ∈ B(X).

(vii) iT (x) = 0 if and only if 0 ∈ σT (x).

The next lemma and its proof are quoted from [18].

Lemma 2. Let T, S ∈ B(X). Assume that for every x ∈ X the vector T x belongs
to the linear span of x and Sx. Then T = λI + µS for some λ, µ ∈ C.
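In particular, taking S = 0 in Lemma 2, the hypothesis reads T x ∈ span{x} for every
x ∈ X and the conclusion becomes T = λI; that is, an operator which leaves every
one-dimensional subspace of X invariant is a scalar multiple of the identity.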

Proof. See Lemma 2.4 in [18].

The following lemma was established in [2] for the generalized product of operators;
here we use different techniques to characterize rank one operators in terms of the
dimension of the analytic core of the Jordan product of operators.

Lemma 3. For a nonzero operator A ∈ B(X), the following statements are equiv-
alent.

(i) A is a rank one operator.

(ii) dim(K(T A + AT )) ≤ 2 for all T ∈ B(X).

Proof. For A ∈ B(X), consider the operator R = AT + T A, where T ∈ B(X) is
an arbitrary operator. The implication (i) =⇒ (ii) is clear: writing A = u ⊗ f with
u ∈ X and f ∈ X ∗ both nonzero, we have R = T u ⊗ f + u ⊗ (f ◦ T ), so that
K(R) ⊂ R(R) ⊂ span{u, T u} by Lemma 1, and hence dim(K(R)) ≤ 2 for all T ∈ B(X). For
(ii) =⇒ (i), suppose that Rank(A) ≥ 2, and let us show that there exists T ∈ B(X)
such that dim(K(R)) ≥ 3. We will distinguish two cases.

Case 1. If Rank(A) ≥ 3.
Let y1 , y2 , y3 ∈ X be linearly independent vectors such that y1 = Ax1 , y2 = Ax2
and y3 = Ax3 where x1 , x2 , x3 ∈ X are obviously linearly independent. We will
distinguish four cases.

(1) If span{x1 , x2 , x3 } = span{y1 , y2 , y3 }.


Let T ∈ B(X) be an operator satisfying T y1 = x1 , T y2 = x2 and T y3 = x3 .
As AT = T A = I on span{x1 , x2 , x3 }, then Rx1 = 2x1 , Rx2 = 2x2
and Rx3 = 2x3 . Hence span{x1 , x2 , x3 } ⊂ N (R − 2I) ⊂ K(R), and so
dim(K(R)) ≥ 3.

(2) If dim(span{x1 , x2 , x3 , y1 , y2 , y3 }) = 4.
Assume for example that span{x1 , x2 , x3 , y1 , y2 , y3 } = span{x1 , y1 , y2 , y3 }. Consider
an operator T ∈ B(X) satisfying T y1 = x1 , T y2 = x2 , T y3 = x3 and
T x1 = 0. For x2 = αx1 + βy1 + γy2 + δy3 and x3 = α′ x1 + β ′ y1 + γ ′ y2 + δ ′ y3 ,
where α, β, γ, δ, α′ , β ′ , γ ′ , δ ′ ∈ C, we get that Rx2 = 2x2 − αx1 and Rx3 =
2x3 − α′ x1 . Since Rx1 = x1 , it follows that RM = M with M = span{x1 , x2 , x3 }.
Hence M ⊂ K(R), and therefore dim(K(R)) ≥ 3.

(3) If dim(span{x1 , x2 , x3 , y1 , y2 , y3 }) = 5.
Assume for example that span{x1 , x2 , x3 , y1 , y2 , y3 } = span{x1 , x2 , y1 , y2 , y3 }. Take
an operator T ∈ B(X) satisfying T y1 = x1 , T y2 = x2 , T y3 = x3 and
T x1 = T x2 = 0. For x3 = αx1 + βx2 + y where y ∈ span{y1 , y2 , y3 } and
α, β ∈ C, we get that Rx1 = x1 , Rx2 = x2 and Rx3 = 2x3 − αx1 − βx2 . It
follows that RM = M with M = span{x1 , x2 , x3 }. Then M ⊂ K(R), thus
dim(K(R)) ≥ 3.

(4) If dim(span{x1 , x2 , x3 , y1 , y2 , y3 }) = 6.
Let T ∈ B(X) be an operator satisfying T y1 = x1 , T y2 = x2 , T y3 = x3 and
T x1 = T x2 = T x3 = 0. Since Rx1 = x1 , Rx2 = x2 and Rx3 = x3 , it follows
that span{x1 , x2 , x3 } ⊂ N (R − I) ⊂ K(R), and so dim(K(R)) ≥ 3.

Case 2. If Rank(A) = 2.
Let y1 and y2 be two linearly independent vectors such that y1 = Ax1 , y2 = Ax2 and
Ay1 = ay1 + by2 where a, b ∈ C.
If x1 , y1 , y2 ∈ X are linearly dependent, we can choose u ∈ N (A) such that x1 +
u, y1 , y2 ∈ X are linearly independent. Now, if x1 + u, x2 , y1 ∈ X are linearly
dependent then, since x1 + u and y1 are linearly independent and N (A) is an infinite-
dimensional subspace of X, we can take v ∈ N (A) such that x1 + u, x2 +
v, y1 ∈ X are linearly independent. Thus, without loss of generality, we may assume
that (x1 , y1 , y2 ) and (x1 , x2 , y1 ) are linearly independent. Let us distinguish two
cases.

(1) If x2 ∈ span{x1 , y1 , y2 }.
Let x2 = αx1 + βy1 + γy2 where α, β, γ ∈ C. For an operator T ∈ B(X)
satisfying T y1 = x1 , T y2 = x2 and T x1 = 0, we get that

Rx1 = x1 ,  Rx2 = 2x2 − αx1 ,  Ry1 = y1 + ax1 + bx2 .

This yields that RM = M with M = span{x1 , x2 , y1 }. Hence M ⊂ K(R),
thus dim(K(R)) ≥ 3.

(2) If (x1 , x2 , y1 , y2 ) are linearly independent.


Consider an operator T ∈ B(X) satisfying T y1 = x1 , T y2 = x2 and T x1 =
T x2 = 0. Hence
Rx1 = x1 ,  Rx2 = x2 ,  Ry1 = y1 + ax1 + bx2 .

It follows that RM = M with M = span{x1 , x2 , y1 }. Thus M ⊂ K(R), and
so dim(K(R)) ≥ 3. This establishes the lemma.

For the Jordan product of operators, we give a characterization for two operators
to be linearly dependent.

Lemma 4. For A and B two operators in B(X), the following statements are
equivalent.

(i) K(AT + T A) = K(BT + T B) for every T ∈ B(X).

(ii) K(AF + F A) = K(BF + F B) for every F ∈ F1 (X).

(iii) B = λA for some nonzero scalar λ ∈ C.

Proof. For A, B ∈ B(X), consider the operators R = AF + F A and
S = BF + F B, where F ∈ F1 (X) is an arbitrary operator. The implications (i) =⇒
(ii) and (iii) =⇒ (i) are straightforward. We only need to show the implication (ii)
=⇒ (iii). Let us distinguish two cases.

Case 1. (x, Ax, Bx) are linearly independent for some nonzero vector x ∈ X.
Let f be a linear functional in X ∗ such that f (x) = f (Ax) = 0 and f (Bx) = 1.
For F = x ⊗ f , we obtain that
Rx = 0,  RAx = f (A2 x)x
and
Sx = x,  SBx = f (B 2 x)x + Bx.

This yields that R3 = 0, thus K(R) = {0}, but x ∈ K(S), which is a contradiction.
Case 2. (x, Ax, Bx) are linearly dependent for all x ∈ X.
Let x be a vector in X. If x and Ax are linearly independent, then we get that
Bx ∈ span{x, Ax}. If not, then Ax = µx where µ ∈ C. Suppose that Bx and x
are linearly independent. It follows that there exists a linear functional f ∈ X ∗
such that f (x) = 0 and f (Bx) = 1. For F = x ⊗ f we obtain that R2 = 0
and Sx = x. Thus x ∈ K(S) = K(R) = {0}. This contradiction shows that
Bx ∈ span{x} ⊂ span{x, Ax}. We conclude that Bx ∈ span{x, Ax} for all
x ∈ X. Lemma 2 implies that B = λA + αI where α, λ ∈ C. It is clear that we
have
A = 0 if and only if B = 0
and also
A ∈ CI if and only if B ∈ CI.
Now, if A ∉ CI, assume that α ≠ 0. For that we distinguish two cases.

(1) If (x, Ax, A2 x) are linearly independent for some x ∈ X.


Take F = x ⊗ f such that f (x) = 1 and f (Ax) = f (A2 x) = 0. Hence R3 = 0,
thus K(R) = {0}. On the other hand, we have K(S) = span{λAx + 2αx},
which is a contradiction.

(2) If (x, Ax, A2 x) are linearly dependent for all x ∈ X.


It is obvious that we have A2 x ∈ span{x, Ax} for all x ∈ X. It follows by
Lemma 2 that A2 = aA + bI where a, b ∈ C. Without loss of generality, we
may assume that λ = 1. Since A ∉ CI, there exists a nonzero vector
x ∈ X such that Ax and x are linearly independent. Consider an operator
F = x ⊗ f such that f (x) ≠ 0 and f (Ax)2 = f (x)f (A2 x). Then we obtain
Rx = f (Ax)x + f (x)Ax,  RAx = f (A2 x)x + f (Ax)Ax
and
Sx = f (Bx)x + f (x)Bx,  SAx = f (BAx)x + f (Ax)Bx.

Hence

Rx = f (Ax)x + f (x)Ax,  RAx = (f (Ax)/f (x))Rx
and
Sx = f (Ax)x + f (x)Ax + 2αf (x)x,  SAx = (f (A2 x) + αf (Ax))x + f (Ax)(Ax + αx).

Thus
Sx = Rx + 2αf (x)x,
SAx = f (A2 x)x + f (Ax)Ax + 2αf (Ax)x,
and so
Sx = Rx + 2αf (x)x,
SAx = (f (Ax)/f (x))Sx.

• If f (Ax) = 0, we have R3 = 0 and K(R) = {0}. Since Sx = f (x)Ax +
2αf (x)x and SAx = 0, then S 2 x = 2αf (x)Sx. This yields that Sx ∈
K(S) = K(R) = {0}, a contradiction.

• Now, if f (Ax) ≠ 0, we have K(R) = span{Rx} and K(S) = span{Sx}.
Hence there exists a nonzero scalar δ ∈ C such that Sx = δRx.
This implies that 2αf (x)x = (δ − 1)Rx. As x and Ax are linearly
independent, then δ = 1, contradiction.

Finally, we get that α = 0 and the proof is complete.

3 Main Result
We start this section with the following theorem that gives a characterization of
maps preserving the analytic core of the Jordan product of operators.
Theorem 1. Let φ : B(X) −→ B(X) be a surjective map. Then the following
assertions are equivalent:
(i) For every T, S ∈ B(X), we have

K(φ(T )φ(S) + φ(S)φ(T )) = K(T S + ST )

(ii) There is a map γ : B(X) −→ C\{0} such that φ(T ) = γ(T )T for all T ∈
B(X).
Proof. Clearly, we have (ii) =⇒ (i): if φ(T ) = γ(T )T for all T ∈ B(X), then
φ(T )φ(S) + φ(S)φ(T ) = γ(T )γ(S)(T S + ST ), and the analytic core is unchanged
under multiplication by a nonzero scalar by Lemma 1 (ii). It remains to show that
(i) =⇒ (ii). We divide the proof into several steps.

Step 1. For any R ∈ B(X), φ(R) = 0 if and only if R = 0.

Let us show that φ(0) = 0. We have

K(φ(T )φ(0) + φ(0)φ(T )) = K(T 0 + 0T ) = {0} = K(φ(T )0 + 0φ(T )).

As φ is surjective, Lemma 4 implies that φ(0) = 0.


Now, assume that φ(R) = 0 for an operator R ∈ B(X). Hence

K(RT + T R) = K(φ(R)φ(T ) + φ(T )φ(R)) = {0} = K(0T + T 0).

From Lemma 4, we infer that R = 0.

Step 2. For any operator F ∈ B(X), φ(F ) ∈ F1 (X) if and only if F ∈ F1 (X).

Let F ∈ B(X) be such that φ(F ) is a rank one operator. Step 1 implies that F ≠ 0.
By Lemma 3, we obtain that

dim(K(Sφ(F ) + φ(F )S)) ≤ 2

for all S ∈ B(X). As φ is surjective,

dim(K(T F + F T )) = dim(K(φ(T )φ(F ) + φ(F )φ(T ))) ≤ 2

for all T ∈ B(X). It follows from Lemma 3 that F is a rank one operator. In the
same way, one obtains that if F ∈ F1 (X) then φ(F ) ∈ F1 (X).

Step 3. There is a nonzero scalar λF ∈ C such that φ(F ) = λF F for all
F ∈ F1 (X). For that we discuss two cases, according to whether F is nilpotent or not.

(1) There is a nonzero scalar λP such that φ(P ) = λP P , for every rank one
idempotent operator P ∈ F1 (X).
Let x ∈ X and f ∈ X ∗ such that f (x) = 1. Step 2 implies that there
exist a nonzero vector y ∈ X and a linear functional g ∈ X ∗ such that
φ(x ⊗ f ) = y ⊗ g. Since x ⊗ f is a rank one idempotent operator, then

span{x} = K(x ⊗ f )
= K(x ⊗ f x ⊗ f + x ⊗ f x ⊗ f )
= K(y ⊗ g y ⊗ g + y ⊗ g y ⊗ g)
= K(2g(y)y ⊗ g).

This yields that g(y) ≠ 0 and span{y} = K(2g(y)y ⊗ g) = span{x}. Therefore
y = αx for a nonzero scalar α ∈ C. Without loss of generality, we may
and shall assume that α = 1; then we get that φ(x ⊗ f ) = x ⊗ gx,f for a certain
linear functional gx,f ∈ X ∗ . We claim that f and gx,f are linearly depen-
dent. If not, take a nonzero vector z ∈ X such that x and z are linearly
independent in X, with f (z) = 1 and gx,f (z) = 0. Just as before, one shows
that there exists a linear functional gz,f ∈ X ∗ such that φ(z ⊗ f ) = z ⊗ gz,f .
Observe that

(x ⊗ f z ⊗ f + z ⊗ f x ⊗ f )(x + z) = 2(x + z).

Then
(x + z) ∈ K(x ⊗ f z ⊗ f + z ⊗ f x ⊗ f ).

On the other hand, we have
{0} = K(gz,f (x)z ⊗ gx,f )
= K(x ⊗ gx,f z ⊗ gz,f + z ⊗ gz,f x ⊗ gx,f )
= K(φ(x ⊗ f )φ(z ⊗ f ) + φ(z ⊗ f )φ(x ⊗ f ))
= K(x ⊗ f z ⊗ f + z ⊗ f x ⊗ f ),
which is a contradiction. Hence gx,f and f are linearly dependent, thus
φ(x ⊗ f ) = αx ⊗ f for some nonzero scalar α ∈ C.
(2) There is a nonzero scalar λN such that φ(N ) = λN N , for every rank one
nilpotent operator N ∈ F1 (X).
Let x ∈ X and f ∈ X ∗ be nonzero such that f (x) = 0. Set T = x ⊗ f ; then φ(T ) = y ⊗ g
with g(y) = 0. Assume that x and y are linearly independent, and let
z ∈ X and h ∈ X ∗ be such that f (z) = h(x) = 1, h(z) ≠ 0 and h(y) = 0. It
follows that
{0} = K(g(z)y ⊗ h)
= K(y ⊗ gz ⊗ h + z ⊗ hy ⊗ g)
= K(φ(x ⊗ f )φ(z ⊗ h) + φ(z ⊗ h)φ(x ⊗ f ))
= K(x ⊗ f z ⊗ h + z ⊗ hx ⊗ f ).
Observe that
(x ⊗ f z ⊗ h + z ⊗ hx ⊗ f )x = x,
hence
span{x} ⊂ K(x ⊗ f z ⊗ h + z ⊗ hx ⊗ f ).
This contradiction shows that x and y are linearly dependent. As before,
without loss of generality, we may assume that φ(x ⊗ f ) = x ⊗ g for a certain
linear functional g ∈ X ∗ .
Suppose that f and g are linearly independent, and let z ∈ X and h ∈ X ∗
such that f (z) = 1, g(z) = 0, h(x) = 1 and h(z) ≠ 0. Since
(x ⊗ f z ⊗ h + z ⊗ hx ⊗ f )x = x,
then
span{x} ⊂ K(x ⊗ f z ⊗ h + z ⊗ hx ⊗ f ).
On the other hand, we get that
{0} = K(z ⊗ g)
= K(x ⊗ gz ⊗ h + z ⊗ hx ⊗ g)
= K(φ(x ⊗ f )φ(z ⊗ h) + φ(z ⊗ h)φ(x ⊗ f ))
= K(x ⊗ f z ⊗ h + z ⊗ hx ⊗ f ).

This contradiction shows again that f and g are linearly dependent.
Thus φ(x ⊗ f ) = βx ⊗ f for some nonzero scalar β ∈ C.
Now, if f (x) ∈ C \ {0, 1}, the same conclusion is obtained as in (1) of Step 3.
Finally, there is a nonzero scalar λF ∈ C such that φ(F ) = λF F for every rank
one operator F ∈ F1 (X).

Step 4. There exists a map γ : B(X) −→ C\{0} such that φ(T ) = γ(T )T for all
T ∈ B(X).

For every F ∈ F1 (X) and every T ∈ B(X), we have


K(T F + F T ) = K(φ(T )φ(F ) + φ(F )φ(T )).
By using Step 3, we get that
K(φ(T )φ(F ) + φ(F )φ(T )) = K(λF (φ(T )F + F φ(T )))
= K(φ(T )F + F φ(T )).
Then K(T F + F T ) = K(φ(T )F + F φ(T )) for every operator F ∈ F1 (X). Lemma
4 implies that φ(T ) and T are linearly dependent. Hence, there exists a map
γ : B(X) −→ C\{0} such that φ(T ) = γ(T )T for all T ∈ B(X). The proof is then
complete.
As a consequence of Theorem 1, we get the following corollary.
Corollary 1. Let φ : B(X) −→ B(X) be a surjective map. Then the following
assertions are equivalent.
(i) iφ(T )φ(S)+φ(S)φ(T ) (x) = 0 ⇐⇒ iT S+ST (x) = 0 for every T, S ∈ B(X) and
x ∈ X.
(ii) There exists a map γ : B(X) −→ C\{0} such that φ(T ) = γ(T )T for all
T ∈ B(X).
Proof. The implication (ii) =⇒ (i) is straightforward.
(i) =⇒ (ii) By assertions (vi) and (vii) of Lemma 1, for every A ∈ B(X) and every
x ∈ X we have iA (x) = 0 if and only if 0 ∈ σA (x), that is, if and only if x ∉ K(A).
Hence assertion (i) is equivalent to K(φ(T )φ(S) + φ(S)φ(T )) = K(T S + ST ) for all
T, S ∈ B(X), and Theorem 1 gives the desired form of φ.

References
[1] Z. Abdelali, A. Achchi and R. Marzouki, Maps preserving the local
spectrum of skew-product of operators, Linear Algebra and its Applications
485 (2015), 58-71.

[2] A. Achchi, Maps preserving the inner local spectral radius zero of generalized
product of operators, Rendiconti del Circolo Matematico di Palermo Series 2
68 (2018), 355-362.

[3] P. Aiena, Abstract Fredholm theory, Fredholm and Local Spectral Theory
with Applications to Multipliers (2007), 239-308.

[4] A. Bourhim, Surjective linear maps preserving local spectra, Linear Algebra
and its Applications 432 (2010), 383-393.

[5] A. Bourhim and M. Mabrouk, Maps preserving the local spectrum of Jor-
dan product of matrices, Linear Algebra and its Applications 484 (2015),
379-395.

[6] A. Bourhim and M. Mabrouk, Maps preserving the local spectrum of Jor-
dan product, Studia Mathematica 234 (2016), 97-120.

[7] A. Bourhim and J. Mashreghi, Local spectral radius preservers, Integral
Equations and Operator Theory 76 (2013), 95-104.

[8] A. Bourhim and J. Mashreghi, Maps preserving the local spectrum of
triple product of operators, Linear and Multilinear Algebra 63 (2014), 765-773.

[9] A. Bourhim and J. Mashreghi, A survey on preservers of spectra and local
spectra, Invariant Subspaces of the Shift Operator, Contemporary Mathematics
(2015), 45-98.

[10] A. Bourhim and J. Mashreghi, Maps preserving the local spectrum of
product of operators, Glasgow Mathematical Journal 57 (2014), 709-718.

[11] A. Bourhim and V. G. Miller, Linear maps on Mn(C) preserving the
local spectral radius, Studia Mathematica 188 (2008), 67-75.

[12] A. Bourhim and T. Ransford, Additive maps preserving local spectrum,
Integral Equations and Operator Theory 55 (2005), 377-385.

[13] C. Costara, Continuous maps preserving local spectra of matrices, Linear
Algebra and its Applications 492 (2016), 1-8.

[14] C. Costara, Local spectrum linear preservers at non-fixed vectors, Linear
Algebra and its Applications 457 (2014), 154-161.

[15] M. Elhodaibi and A. Jaatit, Inner local spectral radius preservers, Ren-
diconti del Circolo Matematico di Palermo Series 2 67 (2017), 215-225.

[16] M. E.-C. E. Kettani and H. Benbouziane, Additive maps preserving op-
erators of inner local spectral radius zero, Rendiconti del Circolo Matematico
di Palermo 63 (2014), 311-316.

[17] T. Jari, Nonlinear maps preserving the inner local spectral radius, Rendiconti
del Circolo Matematico di Palermo 64 (2015), 67-76.

[18] C.-K. Li, P. Šemrl, and N.-S. Sze, Maps preserving the nilpotency of
products of operators, Linear Algebra and its Applications 424 (2007), 222-
239.

[19] K. B. Laursen and M. Neumann, An Introduction to Local Spectral Theory,
Oxford University Press (2000).

[20] T. Miller, V. Miller and M. Neumann, Local spectral properties of
weighted shifts, Journal of Operator Theory 51 (2004), 71-88.
