A Study On Simple and Semi-Simple Artinian Rings
Table of Contents
Abstract iii
Acknowledgements iv
1 Introduction 1
  1.1 Preliminaries 2
2 R-modules 8
  2.1 R-module homomorphism 10
  2.2 Submodule 13
  2.3 Simple 14
5 Wedderburn-Artin Theorems 42
  5.1 Semi-simple Artinian Rings 42
  5.2 Simple Artinian Rings 47
6 Implications 53
Bibliography 61
Chapter 1
Introduction
and idempotence, which prove to be of vast help. Once we have the necessary theorems,
we discover the structure of semi-simple Artinian rings. In a turn of events, it becomes
apparent that the structure of semi-simple Artinian rings depends on the structure of
simple Artinian rings. Accordingly, we embark on our last mission: to find the structure
of simple Artinian rings.
After everything is said and done, we will explore concrete examples of how
some semi-simple Artinian rings break down. Maschke’s Theorem reveals how one can
easily create semi-simple Artinian rings. Once we get our hands on a semi-simple Artinian
ring, we will decompose it into a direct sum of simple Artinian rings.
This work is inspired primarily by Herstein's Noncommutative Rings, [Her]. Formal definitions are taken mostly from Hungerford's Algebra, [Hun], as a strong foundation for Ring Theory. Nevertheless, this paper is written so that no knowledge beyond the bare bones of abstract algebra is necessary to understand this work. Although we shall consider every theorem in the noncommutative, nonunital setting, this paper has been written with examples that are welcoming to those of us who are used to having elements commute and multiplicative identities!
1.1 Preliminaries
Before we begin, we should become accustomed to a few results from Ring Theory. We assume the reader has some previous knowledge of both Ring Theory and Group Theory. Still, we will quickly review a few theorems and lemmas that will frequently be called upon throughout this paper. Refer to [Hun] if a more in-depth review of Group Theory and Ring Theory is needed.
A common theme throughout many of the proofs in this paper will be dealing
with the sets A + B and AB, which we define as follows:
Lemma 1.1.1. If A, B are (left, right, two-sided) ideals of R, then both A + B and AB
are (left, right, two-sided) ideals of R.
Proof. We prove the case for left ideals A, B of R. First, since 0 ∈ A and 0 ∈ B, then 0 + 0 = 0 ∈ A + B and 0 · 0 = 0 ∈ AB. Second, let x ∈ A + B with x = a + b for some a ∈ A and b ∈ B, and let y ∈ AB with y = Σ_{i,j} a_i b_j for some a_i ∈ A and b_j ∈ B. Since a, a_i ∈ A for all i and b ∈ B, where A and B are left ideals, then we know −a, −a_i ∈ A and −b ∈ B for all i. Hence, (−a) + (−b) = −(a + b) = −x ∈ A + B and Σ_{i,j} (−a_i) b_j = Σ_{i,j} −(a_i b_j) = −(Σ_{i,j} a_i b_j) = −y ∈ AB. Third, suppose x, x′ ∈ A + B with x = a + b and x′ = a′ + b′ for some a, a′ ∈ A and b, b′ ∈ B, and suppose y, y′ ∈ AB with y = Σ_{i,j} a_i b_j and y′ = Σ_{i,j} a′_i b_j for some a_i, a′_i ∈ A and b_j ∈ B. Then, x + x′ = (a + a′) + (b + b′) ∈ A + B and y + y′ = Σ_{i,j} (a_i + a′_i) b_j ∈ AB. Finally, let r ∈ R, x ∈ A + B with x = a + b for some a ∈ A and b ∈ B, and y ∈ AB with y = Σ_{i,j} a_i b_j for some a_i ∈ A and some b_j ∈ B. Then, rx = r(a + b) = ra + rb ∈ A + B and ry = r(Σ_{i,j} a_i b_j) = Σ_{i,j} (r a_i) b_j ∈ AB on account of A, B being left ideals of R. Hence, both A + B and AB are left ideals of R.
Naturally, we can define sums and products of ideals with more than two summands using the appropriate analogues and still have the previous result hold. Specifically, A_1 + · · · + A_n = {a_1 + · · · + a_n | a_i ∈ A_i, i = 1, . . . , n} and A_1 · · · A_n = {Σ_j a_{1j} · · · a_{nj} | a_{ij} ∈ A_i, i = 1, . . . , n} will be ideals of R provided A_1, . . . , A_n are ideals of R. Consequently, if A is an ideal of R, then the set A^n = {Σ a_1 · · · a_n | a_i ∈ A} (the n-fold product A · · · A) is an ideal of R.
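As a quick computational aside of our own (not part of the text), these sets can be checked concretely for the familiar ideals A = 4Z and B = 6Z of the commutative ring Z, where A + B = 2Z (since gcd(4, 6) = 2) and AB = 24Z (each product 4j · 6k = 24jk, and AB consists of sums of such products). The snippet below verifies the containments on a truncated window of integers.

```python
# Sanity check with A = 4Z, B = 6Z inside Z, restricted to a finite window.
N = 60
A = {4 * k for k in range(-N, N + 1)}
B = {6 * k for k in range(-N, N + 1)}

a_plus_b = {a + b for a in A for b in B}   # elements of A + B
products = {a * b for a in A for b in B}   # the products generating AB

assert all(x % 2 == 0 for x in a_plus_b)   # A + B ⊆ 2Z
assert 2 in a_plus_b                       # 2 = 4·2 + 6·(−1), so A + B = 2Z
assert all(x % 24 == 0 for x in products)  # each a·b lies in 24Z, hence AB ⊆ 24Z
assert 24 in products
```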
Having established some facts about ideals, we now turn our focus to some facts about ring homomorphisms.
The following lemma is a useful fact to have. Not only will it help prove the
next theorem, but it will also come in handy for future theorems.
Proof. Let f : R → S be a ring homomorphism and J be any ideal of S. Consider the set f⁻¹(J) = {r ∈ R | f(r) ∈ J}. Since f(0_R) = 0_S and 0_S ∈ J, then 0_R ∈ f⁻¹(J). Moreover, if a ∈ f⁻¹(J), then f(−a) = −f(a) ∈ J, so −a ∈ f⁻¹(J). Also, if a, b ∈ f⁻¹(J), then f(a + b) = f(a) + f(b) ∈ J, since f(a) and f(b) belong to J, an additive group. Thus a + b ∈ f⁻¹(J). Next, let r ∈ R and i ∈ f⁻¹(J), and note that f(ri) = f(r)f(i) and f(ir) = f(i)f(r). Since f(r) ∈ S and f(i) ∈ J, where J is a two-sided ideal, both f(ri) and f(ir) belong to J. Thus, for any r ∈ R and any i ∈ f⁻¹(J), both ri and ir belong to f⁻¹(J). Hence, f⁻¹(J) is an ideal of R. Lastly, let x ∈ Ker f; then f(x) = 0_S ∈ J. In other words, x ∈ f⁻¹(J), meaning Ker f ⊂ f⁻¹(J).
Proof. Let I be an ideal of R and let J/I be any ideal of R/I, where J = {x ∈ R | x + I ∈ J/I}. Recall that the map π : R → R/I given by r ↦ r + I is a surjective homomorphism of rings with kernel I. By the previous lemma, π⁻¹(J/I) is an ideal of R and I ⊂ π⁻¹(J/I). If x ∈ J, then π(x) ∈ J/I, meaning that x ∈ π⁻¹(J/I). Hence, J ⊂ π⁻¹(J/I). Now, if x ∈ π⁻¹(J/I), then we know that π(x) ∈ J/I. Yet, if x + I ∈ J/I, then x ∈ J. Hence, we have π⁻¹(J/I) ⊂ J. Therefore, π⁻¹(J/I) = J, meaning J is an ideal of R that contains I.
To show uniqueness, we will show that π(π⁻¹(J/I)) = J/I and π⁻¹(π(J)) = J. Let x ∈ π(π⁻¹(J/I)); then x = π(r), where r ∈ π⁻¹(J/I). Since r ∈ π⁻¹(J/I), then π(r) ∈ J/I. Since x = π(r), then x ∈ J/I. Hence, π(π⁻¹(J/I)) ⊂ J/I. Now, suppose x ∈ J/I. Then, x = j + I for some j ∈ J. Since π(j) = x ∈ J/I, then j ∈ π⁻¹(J/I). Lastly, since x = π(j) and j ∈ π⁻¹(J/I), then x ∈ π(π⁻¹(J/I)). Thus, J/I ⊂ π(π⁻¹(J/I)), meaning π(π⁻¹(J/I)) = J/I. Next, suppose x ∈ J. Then, π(x) ∈ π(J). However, if π(x) ∈ π(J), then x ∈ π⁻¹(π(J)). So, J ⊂ π⁻¹(π(J)). On the other hand, suppose x ∈ π⁻¹(π(J)). If so, then π(x) ∈ π(J). Therefore, π(x) = π(j) for some j ∈ J. Consequently, since π is a ring homomorphism, π(x) − π(j) = π(x − j) = 0, meaning x − j ∈ Ker π = I. Since I ⊂ J, we have x − j ∈ J. If x − j = j′ for some j′ ∈ J, then x = j + j′ ∈ J, since J is an ideal of R. Hence, x ∈ J, meaning π⁻¹(π(J)) ⊂ J and thus π⁻¹(π(J)) = J.
Now, while we assume the reader is familiar with fields, we will define what is meant by a division ring. Since we will not assume commutativity in any proof, division rings will be abundant in this paper. One way to visualize a division ring is simply to think of a field without commutativity.
The next part of our preliminaries concerns the ring of n × n matrices over a division ring. In particular, we will see that the ring of endomorphisms of a vector space will be of great importance. As one can guess, we will see the relation between the ring of endomorphisms of an n-dimensional vector space V over a division ring D and the ring of n × n matrices over D. Nevertheless, for the purposes of brevity, we first define what we mean by the ring of n × n matrices over a division ring.
Definition 1.1.3. We will use Mat_n(D) to denote the ring of n × n matrices over a division ring D.
Finally, we will use Zorn’s lemma and the axiom of choice for some key theorems.
Zorn’s lemma will aid us in proving a familiar theorem about maximal ideals in a ring
with a new twist. As for the axiom of choice, having it at our disposal will grant us an
alternative definition for an Artinian ring, which will be used throughout the second half
of the paper. So, it is highly recommended that the reader become familiar with these
two statements.
Axiom of Choice. Let C be a collection of nonempty sets. Then we can choose a member
from each set in that collection.
Zorn’s Lemma. Every non-empty partially ordered set in which every chain has an
upper bound contains at least one maximal element.
Chapter 2
R-modules
In order to identify the structure of simple and semi-simple rings, the idea of
R-modules is fundamental. Regardless, R-modules on their own are a rich subject to
study. We shall cover the basics of modules before going deeper into Ring Theory topics.
Without further ado, let us observe the most important definition of this paper. An R-module is an additive abelian group A together with a map R × A → A, written (r, a) ↦ ra, such that for all r, s ∈ R and a, b ∈ A:
(i) r(a + b) = ra + rb
(ii) (r + s)a = ra + sa
(iii) (rs)a = r(sa)
(iv) 1_R a = a, ∀a ∈ A, when R has an identity 1_R (in which case A is called unitary).
When R is a division ring and A is unitary, A is exactly a vector space, so the assumptions we have been making thus far are well-founded. On the other hand, with the assumption that R is merely a ring and A some arbitrary additive abelian group, many of the linear algebra theorems are put on hold.
From the definition, we can contemplate the fact that the product of a ring element with a module element is another module element. That is, once a ring element makes contact with a module element, we are talking about an element of the module thereafter. Let us establish some basic properties of these module elements.
Now, if A is an R-module, 0A is the additive identity of A, and 0R is the additive
identity of R, then ∀r ∈ R, ∀a ∈ A:
r0A = 0A and 0R a = 0A
Proof. Since A is an R-module, we know that if r ∈ R, then r0_A ∈ A. Let b = r0_A; then b = r0_A = r(0_A + 0_A) = r0_A + r0_A = b + b. Since b ∈ A, −b ∈ A. Hence, 0_A = b + (−b) = b + b + (−b) = b, so b = 0_A, or r0_A = 0_A as desired. Similarly, if a ∈ A, then 0_R a ∈ A. Let c = 0_R a; then c = 0_R a = (0_R + 0_R)a = 0_R a + 0_R a = c + c, so 0_A = c + (−c) = c + c + (−c) = c = 0_R a.
Another notion that we will need to establish before we move on is the handling of additive inverses with respect to R-modules. That is, establishing the fact that ∀r ∈ R, ∀a ∈ A:
(−r)a = −(ra) = r(−a)
Proof. All that is necessary is to show that both (−r)a and r(−a) are additive inverses of ra. Indeed, ra + (−r)a = (r + (−r))a = 0_R a = 0_A and ra + r(−a) = r(a + (−a)) = r0_A = 0_A. Since A is abelian, adding the elements on the left is the same as adding them on the right. Since additive inverses are unique, (−r)a = −(ra) = r(−a).
There are many examples showing that the idea of R-modules is not too foreign to the rest of ring theory. We will observe examples that show how naturally R-modules can arise. Here are some examples of R-modules.
Example 2.0.3. If R is a ring, then R[x] (the ring of polynomials in x with coefficients in R) is an R-module, with scalar multiplication given by multiplying each coefficient of a polynomial by the scalar.
Example 2.0.5. Let I be a left ideal of R. The additive quotient group R/I is not quite
a ring. Still, with the action of R on R/I as r(s + I) = rs + I, R/I is an R-module. This
action is well-defined, for if a + I, b + I ∈ R/I and a + I = b + I, then for any r ∈ R we
have that a − b ∈ I, meaning r(a − b) = ra − rb ∈ I, and thus ra + I = rb + I.
Example 2.0.6. If R is a ring, then Mat_n(R) is an R-module with the scalar multiplication r(a_ij) = (r a_ij) for any (a_ij) ∈ Mat_n(R).
Example 2.1.2. If R is a ring, then the mapping φ : R[x] → R[x] with φ(f(x)) = x f(x) (left multiplication by x) for all f ∈ R[x] is an R-module homomorphism, but not a ring homomorphism.
Proposition 2.1.1. If A and B are R-modules, then Hom_R(A, B) (the set of all R-module homomorphisms A → B) is an abelian group under the addition given on a ∈ A by (f + g)(a) = f(a) + g(a). The additive identity is the zero map.
Proof. Hom_R(A, B) ≠ ∅, since B being an R-module implies B is an abelian group with additive identity 0. Consider the function f : A → B with f(a) = 0, ∀a ∈ A. This f is a homomorphism, since f(a + b) = 0 = 0 + 0 = f(a) + f(b), ∀a, b ∈ A. Also, f(ra) = 0 = r0 = rf(a); hence f belongs to Hom_R(A, B). For closure, note that if f, g ∈ Hom_R(A, B), then f + g maps A into B, since (f + g)(a) = f(a) + g(a) ∈ B for all a ∈ A. Moreover, (f + g)(a + b) = f(a + b) + g(a + b) = f(a) + g(a) + f(b) + g(b) = (f + g)(a) + (f + g)(b) and (f + g)(ra) = f(ra) + g(ra) = rf(a) + rg(a) = r(f(a) + g(a)) = r((f + g)(a)), so f + g ∈ Hom_R(A, B). Next, if f ∈ Hom_R(A, B), consider g : A → B with g(a) = −f(a), ∀a ∈ A. Since (f + g)(a) = f(a) + g(a) = f(a) + (−f(a)) = 0, ∀a ∈ A, then f + g is the zero map, meaning g is the additive inverse of f. Still, we must show g ∈ Hom_R(A, B). Clearly, g(a + b) = −(f(a + b)) = −(f(a) + f(b)) = (−f(a)) + (−f(b)) = g(a) + g(b), so g is a homomorphism. Similarly, g(ra) = −(f(ra)) = −(rf(a)) = r(−f(a)) = rg(a), so g ∈ Hom_R(A, B). Since we are viewing Hom_R(A, B) as a group under pointwise addition, associativity is inherited from that of B.
Proposition 2.1.2. HomR (A, A) is a ring with identity, where multiplication is compo-
sition of functions.
Proof. If A is an R-module, then we have already established that Hom_R(A, A) is a group under addition. To make sense of Hom_R(A, A) as a ring, we define the product of two functions in Hom_R(A, A) to be their composition. Hence, to prove that Hom_R(A, A) is closed under multiplication, we will show that if f, g ∈ Hom_R(A, A), then f ◦ g ∈ Hom_R(A, A). Let f, g ∈ Hom_R(A, A) and let a, b ∈ A. Then, (f ◦ g)(a + b) = f(g(a + b)) = f(g(a) + g(b)) = f(g(a)) + f(g(b)) = (f ◦ g)(a) + (f ◦ g)(b). Hence, f ◦ g is a homomorphism. Now, let f, g ∈ Hom_R(A, A), a ∈ A, and r ∈ R. To show f ◦ g is linear over R, and thus an R-module homomorphism, consider (f ◦ g)(ra): (f ◦ g)(ra) = f(g(ra)) = f(rg(a)) = rf(g(a)) = r(f ◦ g)(a). Hence, Hom_R(A, A) is closed under multiplication; composition is associative, and distributivity follows from additivity of the maps, e.g. (f ◦ (g + h))(a) = f(g(a) + h(a)) = f(g(a)) + f(h(a)) = (f ◦ g + f ◦ h)(a). Thus Hom_R(A, A) is a ring. It has the map i : A → A, with i(a) = a, ∀a ∈ A, as the multiplicative identity: f(i(a)) = f(a) = i(f(a)), ∀a ∈ A, so f ◦ i = f = i ◦ f.
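A tiny concrete instance of our own (not from the text) makes the proposition tangible: for the Z-module A = Z/6Z, every additive endomorphism is "multiplication by k" for some residue k, so End_Z(A) has six elements. The snippet checks closure under pointwise addition and composition, with the identity map as the multiplicative unit.

```python
# Endomorphisms of the Z-module Z/6Z, each stored as its value table.
n = 6
def make(k):
    return tuple((k * a) % n for a in range(n))   # the map a ↦ k·a

endos = {make(k) for k in range(n)}               # all six endomorphisms

def add(f, g):                                    # pointwise addition
    return tuple((f[a] + g[a]) % n for a in range(n))

def compose(f, g):                                # (f ∘ g)(a) = f(g(a))
    return tuple(f[g[a]] for a in range(n))

for f in endos:
    for g in endos:
        assert add(f, g) in endos        # closed under f + g
        assert compose(f, g) in endos    # closed under f ∘ g

identity = make(1)
assert all(compose(f, identity) == f == compose(identity, f) for f in endos)
```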
Recall, HomR (A, A) is denoted as EndR (A). As we have shown, EndR (A) is
a ring with composition of functions as multiplication. One thing that has been hiding
in the background is that the elements of HomR (A, B) act on the elements of A. Even
more interesting than that is how EndR (A) has the necessary power over the elements
of A to satisfy the R-module axioms. In particular, End_R(A) is a unitary ring, whereas Hom_R(A, B) is just an additive group. Moreover, the “product” of an element of
EndR (A) with an element of A necessarily belongs to A, unlike HomR (A, B). Additively,
while HomR (A, B) would satisfy parts (i) and (ii) from the definition of R-modules,
EndR (A) undoubtedly satisfies (i)-(iv). In light of this entire paragraph, we can claim
the following proposition.
Proposition 2.1.3. A is a left EndR (A)-module with f a = f (a), for any f ∈ EndR (A)
and a ∈ A.
As we have seen, a vector space V over a division ring D is a special case in the topic of modules. Moreover, End_D(V) is certainly a ring, as we have just shown. Of course, End_D(V) is still the set of D-linear transformations from V to itself. That being said, we will recall the connection between the set of D-linear transformations from V to itself and the ring of n × n matrices over D. While we know these are isomorphic when D is a field, we will observe the slight change when we restrict ourselves to division rings.
Definition 2.1.2. Let R be a ring. Then, the opposite ring of R, denoted R^op, is the ring with the same elements as R, the same addition as R, and multiplication “·” given by a · b = ba.
Proof. Let V be an n-dimensional vector space over the division ring D with basis B = {v_1, . . . , v_n}. Define Φ : End_D(V) → Mat_n(D^op) by Φ(f) = (a_ij), where f(v_d) = Σ_c a_cd v_c. To prove Φ is injective, suppose Φ(f) = Φ(g). Then, since every entry of the two matrices agrees, we have that f(v_i) = g(v_i) for i ∈ {1, . . . , n}. Since f and g agree on the basis vectors, it must be that f = g. Now, for surjectivity, suppose A ∈ Mat_n(D^op). Let u_1, . . . , u_n be the column vectors of A in V with respect to B, and define h : V → V to be the D-linear mapping that sends v_i to u_i for all i. Since h is a D-linear transformation from V to itself, we have that h ∈ End_D(V). Since h ∈ End_D(V) and Φ(h) = A, Φ is surjective. Now, suppose f, g ∈ End_D(V) and consider Φ(f + g). Since (f + g)(v_d) = f(v_d) + g(v_d) = Σ_c a_cd v_c + Σ_c b_cd v_c = Σ_c (a_cd + b_cd) v_c, we can see that the ij-th entry of Φ(f + g) is precisely a_ij + b_ij. Hence, Φ(f + g) = (a_ij + b_ij) = (a_ij) + (b_ij) = Φ(f) + Φ(g). Now, for multiplication, we need to show that Φ(f ◦ g) = Φ(f)Φ(g), where ◦ is composition of functions (the multiplication of End_D(V)). Relative to the basis B, say f(v_k) = Σ_i a_ik v_i and g(v_j) = Σ_k b_kj v_k, for i, j, k ∈ {1, . . . , n}. If Φ(f) = A and Φ(g) = B, we can say that the a_ik are the entries of A and the b_kj are the entries of B. Moreover, since A, B ∈ Mat_n(D^op), the ij-th entry of AB is Σ_l (a_il · b_lj), where · is the operation of D^op as described in Definition 2.1.2. On that note, we can see that (f ◦ g)(v_j) = f(g(v_j)) = f(Σ_k b_kj v_k) = Σ_k b_kj f(v_k) = Σ_k b_kj (Σ_i a_ik v_i) = Σ_i (Σ_k b_kj a_ik) v_i = Σ_i (Σ_k (a_ik · b_kj)) v_i. In other words, Σ_k (a_ik · b_kj) is the ij-th entry of the matrix corresponding to Φ(f ◦ g). As we stated before, Σ_k (a_ik · b_kj) is the ij-th entry of the matrix AB. Hence, we have that Φ(f ◦ g) = AB = Φ(f)Φ(g). Thus, Φ is an isomorphism of rings between End_D(V) and Mat_n(D^op).
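The appearance of D^op can be checked numerically. Below is a sketch of our own (not part of the thesis): we model the quaternions, a genuinely noncommutative division ring, by 2 × 2 complex matrices, and verify for n = 2 that the matrix of f ◦ g (read off from its action on basis vectors, using the left-module convention above) equals the product of Φ(f) and Φ(g) computed in D^op, but not the product computed in D itself.

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
qi = np.array([[1j, 0], [0, -1j]])
qj = np.array([[0, 1], [-1, 0]], dtype=complex)
qk = qi @ qj

def quat(a, b, c, d):
    """The quaternion a + bi + cj + dk as a 2x2 complex matrix."""
    return a * I2 + b * qi + c * qj + d * qk

# Matrices of f and g: A[i][k] is the quaternion a_ik with f(v_k) = Σ_i a_ik v_i.
A = [[quat(1, 2, 0, 0), quat(0, 1, 1, 0)],
     [quat(3, 0, 0, 1), quat(1, 1, 1, 1)]]
B = [[quat(0, 0, 2, 0), quat(1, 0, 0, 3)],
     [quat(2, 1, 0, 0), quat(0, 0, 1, 1)]]

def apply(M, coords):
    """Action on coordinates: (Mv)_i = Σ_k d_k a_ik (left D-vector space)."""
    return [sum(coords[k] @ M[i][k] for k in range(2)) for i in range(2)]

def mat_mult(X, Y, mult):
    """2x2 matrix product over D using a chosen multiplication on D."""
    return [[sum(mult(X[i][k], Y[k][j]) for k in range(2)) for j in range(2)]
            for i in range(2)]

# Matrix of f∘g, read off by applying g and then f to each basis vector.
basis = [[I2, 0 * I2], [0 * I2, I2]]
comp = [[apply(A, apply(B, basis[j]))[i] for j in range(2)] for i in range(2)]

op_prod = mat_mult(A, B, lambda x, y: y @ x)     # product in Mat_2(D^op)
plain_prod = mat_mult(A, B, lambda x, y: x @ y)  # product in Mat_2(D)

assert all(np.allclose(comp[i][j], op_prod[i][j])
           for i in range(2) for j in range(2))  # Φ(f∘g) = Φ(f)Φ(g) in D^op
assert not np.allclose(comp[0][0], plain_prod[0][0])  # D-product disagrees
```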
2.2 Submodule
Example 2.2.2. Recall, Q is an additive subgroup of R. Since both are abelian groups,
they are Z-modules. Hence, Q is a submodule of R over Z.
Example 2.2.3. Consider the ring Mat_2(Q). Of course, Mat_2(R) is a Mat_2(Q)-module. Consider the subgroup S = {(a_ij) ∈ Mat_2(R) | a_ij = 0 if i ≠ j} of Mat_2(R). Even though S is a subgroup of Mat_2(R), S is not a submodule of Mat_2(R) under Mat_2(Q).
2.3 Simple
The word “simple” will be the most recurring word in this paper. In order to be on the same page, we will define “simple” in the context of both modules and rings. As we will see, these are not necessarily the same idea.
Suppose you have a ring R that is simple. Since R² ≠ (0) and R is itself an R-module, the definitions of simple module and simple ring agree thus far. However, a ring with no nontrivial two-sided ideals could still have nontrivial left (or right) ideals, so R could have nontrivial proper submodules.
Example 2.3.1. If D is a division ring, then D has no nontrivial, proper left, right, or
two-sided ideals. Recall that if I is a nonzero ideal of D, and if a 6= 0 is an element of I,
then there exists b ∈ D so that ba = 1D ∈ I, meaning D = D1D ⊂ I. Hence, we can see
that D is both a simple ring and a simple D-module.
Proof. Let D be a division ring and suppose I is an ideal of Mat_n(D) such that I ≠ (0); we will show I = Mat_n(D). Let A = (a_ij) be a nonzero matrix of I. Since A is nonzero, without loss of generality suppose the entry a_mk ≠ 0. Let E_xy ∈ Mat_n(D) be the matrix whose (x, y) entry is 1 and whose other entries are zero. It is then easy to check that E_lm A E_kl = a_mk E_ll for l = 1, 2, . . . , n. Moreover, since A ∈ I, an ideal, then a_mk E_ll ∈ I for l = 1, 2, . . . , n. Furthermore, if we let E ∈ Mat_n(D) be the identity matrix, then we can see that Σ_{l=1}^{n} a_mk E_ll = a_mk E belongs to I. Of course, ((a_mk)⁻¹ E)(a_mk E) = E, so a_mk E is a unit of Mat_n(D) that belongs to I. Hence, I = Mat_n(D).
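The matrix-unit identity in this proof is easy to confirm numerically. The check below is our own (over the reals for convenience; the identity is entry-wise, so the division ring does not matter here): E_lm · A · E_kl picks out the entry a_mk and places it at position (l, l), and summing over l recovers a_mk times the identity matrix.

```python
import numpy as np

n = 4
rng = np.random.default_rng(0)
A = rng.integers(-9, 10, size=(n, n)).astype(float)  # a "nonzero matrix of I"

def E(x, y):
    """Matrix unit: 1 in entry (x, y), zeros elsewhere."""
    M = np.zeros((n, n))
    M[x, y] = 1.0
    return M

m, k = 2, 1  # position of the (assumed nonzero) entry a_mk
for l in range(n):
    # E_lm · A · E_kl = a_mk · E_ll
    assert np.array_equal(E(l, m) @ A @ E(k, l), A[m, k] * E(l, l))

# Summing over l gives a_mk · E, a scalar multiple of the identity.
S = sum(E(l, m) @ A @ E(k, l) for l in range(n))
assert np.array_equal(S, A[m, k] * np.eye(n))
```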
Example 2.3.2. Let D be a division ring and let R = Mat_n(D) with n > 1. For each k with 1 ≤ k ≤ n, I_k = {(a_ij) ∈ R | a_ij = 0 for j ≠ k} is a simple left R-module (a quick proof of this is given in Example 4.3.4). Note that while R is a simple ring, the existence of nontrivial left and right ideals shows R is not a simple R-module.
Definition 2.3.2. A left ideal I of a ring R is said to be a minimal left ideal if I ≠ (0) and, for every left ideal J such that (0) ⊂ J ⊂ I, either J = (0) or J = I.
Example 2.3.3. A left ideal I of R such that RI 6= (0) is a simple left R-module if and
only if I is a minimal left ideal.
Proposition 2.3.2. Every simple left R-module is cyclic; in fact, if A is simple, then
A = Ra, ∀a ∈ A, (a 6= 0).
Proof. Let A be a simple left R-module and let a ∈ A, a 6= 0. First, we will show that Ra
is a subgroup, and then a submodule, of A. Ra = {ra | r ∈ R} is a subset of A since a ∈ A
and A is an R-module implies that ∀r ∈ R, ra ∈ A. First, 0R ∈ R, so 0R a = 0A ∈ Ra,
so Ra is non-empty. Second, if ra, sa ∈ Ra, then ra + sa = (r + s)a ∈ Ra, meaning
there is closure in Ra. Third, if ra ∈ Ra, then r ∈ R, so −r ∈ R, so (−r)a ∈ Ra,
meaning −ra ∈ Ra. Since Ra has identity, closure, and inverses, then Ra is a subgroup
of A. For submodule, we must now show that ∀r ∈ R and x ∈ Ra, rx ∈ Ra. Of course, if
x ∈ Ra, then x = sa, for some s ∈ R. So, rx = r(sa) = (rs)a ∈ Ra, meaning Ra is a
submodule. Since Ra is a submodule of the simple R-module A, then by Definition 2.3.1
either Ra = A or Ra = (0). Now, consider B = {b ∈ A | Rb = (0)}. Clearly 0A ∈ B since
The following theorem is the most powerful result on R-module homomorphisms and simple R-modules: Schur's lemma. While it has many forms throughout abstract algebra, we shall focus on the module-theoretic version.
Chapter 3
Now that we have familiarized ourselves with the basics of R-modules, we will
finally lean more towards ring theory. While we covered the definition of a simple ring, we
have yet to mention the definition of a semi-simple ring. The reason for withholding the
definition so far is that the definition for a semi-simple ring depends on what is known as
the Jacobson radical of a ring. Consequently, we needed to dig into R-modules before tackling the Jacobson radical. Regardless, there are still a few notions we
must cover before jumping into the Jacobson radical.
Definition 3.1.1. A left ideal I of a ring R is left regular if there exists a ∈ R such that
r − ra ∈ I for every r ∈ R. Similarly, a right ideal J of R is right regular if there exists
b ∈ R such that r − br ∈ J for every r ∈ R.
While the definition for regular ideals seems wild, it is not as uncommon as one might think. We shall contemplate some examples of familiar ideals that are regular.
Example 3.1.1. If R is a ring with unity, then every (left, right, two-sided) ideal of R
is regular. That is, for any ideal I of R, r − r(1R ) = 0 ∈ I and r − (1R )r = 0 ∈ I for any
r in R.
Example 3.1.2. Consider the nonunital ring 2Z and its ideal 10Z. Since 2Z is commutative, right regular and left regular coincide. Now, consider −4 ∈ 2Z: if n ∈ 2Z,
then n = 2k for some integer k. Thus, n − n(−4) = n + 4n = 5n = 5(2k) = 10k ∈ 10Z
for any n in 2Z. So, 10Z is a regular ideal of 2Z.
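This arithmetic can be double-checked by brute force; the snippet below (our own aside) runs the computation of Example 3.1.2 over a window of 2Z.

```python
# With a = -4, the element n - na = 5n lies in 10Z for every even n,
# so 10Z is a regular ideal of the nonunital ring 2Z.
a = -4
residues = {((2 * k) - (2 * k) * a) % 10 for k in range(-100, 101)}
assert residues == {0}
```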
Lemma 3.1.1. If ρ is a proper regular left ideal of R, then there exists a maximal regular left ideal ρ′ of R so that ρ ⊂ ρ′.
Proof. Let ρ be a proper, regular left ideal of R and let a ∈ R be such that r − ra ∈ ρ for all r ∈ R. Note that a ∉ ρ, or else ra ∈ ρ, meaning r ∈ ρ, ∀r ∈ R, i.e., ρ = R. Let M be the set of all proper left ideals of R which contain ρ. Note once more that if ρ′ ∈ M, then a ∉ ρ′. Otherwise, since r − ra ∈ ρ and ρ ⊂ ρ′, both r − ra and ra are in ρ′ for all r ∈ R, so r ∈ ρ′, ∀r ∈ R, contradicting that ρ′ is a proper left ideal of R. Now, we will apply Zorn's lemma. Since ρ ⊂ ρ, then ρ ∈ M, meaning M is nonempty. Moreover, M is a partially ordered set under set containment. Lastly, suppose ρ ⊂ ρ_1 ⊂ ρ_2 ⊂ ρ_3 ⊂ . . . is a chain of ideals from M. Let U be the union of all the ideals in the chain. Clearly, U is an upper bound for the chain with respect to set containment. Hence, all that is left to show is that U belongs to M; that is, U is a proper left ideal of R that contains ρ. In fact, we know ρ ⊂ U since U is the union of sets that contain ρ. Now, since ρ ⊂ U and ρ is a left ideal of R, 0 ∈ U, so U is nonempty. Next, let u ∈ U. Since u ∈ U, ∃ρ_u in the chain of ideals so that u ∈ ρ_u. Since ρ_u is a left ideal, it is certainly an additive subgroup of R; thus −u ∈ ρ_u. Since ρ_u ⊂ U, −u ∈ U, meaning elements of U have additive inverses. Now, let x, y ∈ U. As explained before, x ∈ ρ_x and y ∈ ρ_y for some ρ_x, ρ_y in the chain of ideals. Since ρ_x and ρ_y are both in the chain,
Now that we have made ourselves familiar with regular ideals, let us digress to
our previous discussion of creating simple modules. If R is a ring and I is a left ideal of
R, then R/I will be without nontrivial submodules provided I is a maximal left ideal of
R. However, to make sure R will not annihilate R/I, having I as a regular ideal will do
the trick. Surprisingly, we will discover that any simple left R-module is isomorphic to a
quotient group R/I, where I is a maximal left ideal which is also regular.
Example 3.2.1. Consider Z_6 with the standard Z-module structure for abelian groups. What is A(Z_6)? Note that even though 3 · 2 = 0 and 3 · 4 = 0, 3 does not annihilate all of Z_6. Hence, 3 ∉ A(Z_6). Upon close examination, we can see A(Z_6) = 6Z. In general, A(Z_n) = nZ.
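The annihilator in this example is small enough to compute exhaustively; the snippet below (our own check) confirms A(Z_6) = 6Z on a window of integers.

```python
# A(Z_6) = {r in Z : r·a ≡ 0 (mod 6) for every a in Z_6} should be 6Z.
window = range(-36, 37)
ann = [r for r in window if all((r * a) % 6 == 0 for a in range(6))]
assert ann == [r for r in window if r % 6 == 0]
```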
Note that in the previous example, the annihilator of the module was an ideal of the ring. This is not an accident, as we shall see in the next lemma. In fact, the definition of an annihilator is strong enough to make ideals from mere subsets of a module.
Proof. We first show that if B ⊂ A, then A(B) is a left ideal. To begin, 0_R ∈ A(B), so A(B) is nonempty. Second, suppose r, s ∈ A(B). Let b ∈ B and consider (r + s)b = rb + sb = 0 + 0 = 0. Since (r + s)b = 0 for all b ∈ B, then r + s ∈ A(B). Third, if r ∈ A(B), then (−r)b = −(rb) = −0 = 0, meaning −r ∈ A(B). Thus, A(B) is a subgroup of R. Now, for subring, let r, s ∈ A(B) and let b ∈ B. Since (rs)b = r(sb) = r(0) = 0 for all b ∈ B, then rs ∈ A(B). Now that we know A(B) is a subring of R, we show that A(B) is a left ideal. Let r ∈ R and i ∈ A(B). To show ri ∈ A(B), let b ∈ B. Then, (ri)b = r(ib) = r(0) = 0, ∀b ∈ B. Hence, ri ∈ A(B), making A(B) a left ideal.
Now, suppose that this time B is a submodule of A. Since submodules are subsets, we know that A(B) is a left ideal. To show it is a two-sided ideal, we will show it is also a right ideal. Let r ∈ R, i ∈ A(B), and b ∈ B. To show ir ∈ A(B), we must show that (ir)b = 0, ∀b ∈ B. We know (ir)b = i(rb); however, we do not know whether rb = 0, since r is an arbitrary element of R and b an arbitrary element of B. The difference from before is that we now know B is a submodule of A, which means we definitely know rb ∈ B. Since i ∈ A(B) and rb ∈ B, we know i(rb) = 0, which means (ir)b = 0, meaning A(B) is a two-sided ideal.
Clearly, the zero element is not the only element of a ring that can annihilate
its module elements. However, if zero is the only element that ubiquitously annihilates
every module element, we call the module faithful. In the case where the module also
happens to be simple, we say the ring is primitive.
Definition 3.2.2. A (left) module M is faithful if its (left) annihilator A(M ) is (0). A
ring R is (left) primitive if there exists a simple faithful (left) R-module.
Example 3.2.3. Continuing with the above example, since R is also an R-module where only zero annihilates all of R, then R is a faithful R-module. Moreover, since submodules of R translate to left ideals of R in this case, we can see that R is a simple R-module. Therefore, R is a primitive ring.
The reason why fields were used as examples foreshadows an interesting fact
about commutative primitive rings. We will take a closer look at primitive rings right
before tackling the last Wedderburn-Artin theorem. For the moment, let us observe one
more fact concerning annihilators.
Proof. Let M be a left R-module. Recall from Lemma 3.2.1 that, since M is a submodule of itself, A(M) is an ideal of R. So, R/A(M) is a ring. Define, for m ∈ M and r + A(M) ∈ R/A(M), the action (r + A(M))m = rm. Let us show this action is well-defined: suppose a + A(M) = b + A(M). Then, a − b ∈ A(M). Therefore, (a − b)m = 0 for all m ∈ M. So, am − bm = 0 implies am = bm for all m ∈ M. In short, (a + A(M))m = (b + A(M))m for all m ∈ M, so the action of R/A(M) on M is well-defined. Now, recall that M is an R-module, so verifying the module axioms is almost trivial. First, let r + A(M) ∈ R/A(M) and let m, n ∈ M. Then, (r + A(M))(n + m) = r(n + m) = rn + rm = (r + A(M))n + (r + A(M))m. Second, let r + A(M), s + A(M) ∈ R/A(M) and let m ∈ M; then [(r + A(M)) + (s + A(M))]m = (r + s + A(M))m = (r + s)m = rm + sm = (r + A(M))m + (s + A(M))m. Third, let r + A(M), s + A(M) ∈ R/A(M) and let m ∈ M; then [(r + A(M))(s + A(M))]m = (rs + A(M))m = (rs)m = r(sm) = (r + A(M))[(s + A(M))m]. Thus, M is an R/A(M)-module. Now, to show M is faithful, suppose (r + A(M))m = 0 for all m ∈ M. Then, rm = 0 for all m ∈ M. This implies that r ∈ A(M); hence, if (r + A(M))m = 0 for all m ∈ M, then r + A(M) = 0 + A(M), the zero element.
Example 3.2.4. Recall that Z6 is a Z-module. Notice that the set C = {0, 2, 4} is a
simple submodule of Z6 with A(C) = 3Z. By Lemma 3.2.2, C is a faithful Z/3Z-module.
Hence, the ring Z/3Z is primitive.
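Example 3.2.4 is small enough to verify exhaustively; the check below is our own. It confirms that C = {0, 2, 4} inside Z_6 has annihilator 3Z in Z, and that among the residues of Z/3Z only zero kills all of C, so the induced action is faithful.

```python
C = {0, 2, 4}
window = range(-30, 31)

# A(C) inside Z should be exactly the multiples of 3.
ann = [r for r in window if all((r * c) % 6 == 0 for c in C)]
assert ann == [r for r in window if r % 3 == 0]

# Faithfulness of the Z/3Z-action: only the residue 0 annihilates every c in C.
killers = [r for r in range(3) if all((r * c) % 6 == 0 for c in C)]
assert killers == [0]
```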
Given how a ring R could in fact have elements that turn all the module elements
into the zero element, it would be nice to have a way to categorize a ring by how volatile
it might be to its modules. While the annihilator of an R-module M can tell us which
elements annihilate the elements of M , this set is only specific to the relevant R-module.
Moreover, considering all of the modules of a ring sounds pretty daunting, so we would
like to focus on the simple modules of a ring first. On that note, we introduce the Jacobson radical.
Definition 3.2.3. The Jacobson radical of R, written as J(R), is the set of all elements
of R which annihilate all the simple R-modules. If R has no simple R-modules, then J(R) = R and we call R a radical ring. On the other hand, if J(R) = (0), then we say R is semi-simple.
While a left ideal I of a ring R absorbs ring elements from the left, this new
set consists of elements of R that absorb ring elements into I from the right. Naturally,
if I is two-sided, then I ⊂ (I : R). In any case, we will see this set is of great importance
in moving our focus away from the modules themselves.
Proposition 3.2.1. If M = R/ρ for some maximal regular left ideal ρ, then A(M ) =
(ρ : R), and (ρ : R) is the largest two-sided ideal of R inside ρ.
Proof. Suppose M = R/ρ for some maximal regular left ideal ρ. Let x ∈ A(M). Then xM = (0); meaning, x(r + ρ) = ρ for all r ∈ R. Therefore, xr ∈ ρ, ∀r ∈ R. Hence, xR ⊂ ρ and thus x ∈ (ρ : R). Now, let x ∈ (ρ : R). Then, since xR ⊂ ρ, xr ∈ ρ, ∀r ∈ R. Then, ρ = xr + ρ = x(r + ρ), ∀r ∈ R. So x(R/ρ) = (0), implying x ∈ A(M). Therefore, A(M) = (ρ : R). Moreover, since ρ is regular, ∃a ∈ R so that r − ra ∈ ρ for all r ∈ R. Let x ∈ (ρ : R). Since x ∈ R, we have x − xa ∈ ρ. However, since x ∈ (ρ : R) and a ∈ R, then xa ∈ ρ. If xa ∈ ρ and x − xa ∈ ρ, then x ∈ ρ. Hence, (ρ : R) ⊂ ρ. To show that (ρ : R) is the largest two-sided ideal of R inside ρ, suppose J is a two-sided ideal of R with (ρ : R) ⊂ J ⊂ ρ. If j ∈ J, then j ∈ R. Now, j ∈ J, a two-sided ideal, and J ⊂ ρ imply jr ∈ ρ for all r ∈ R; hence, by definition, j ∈ (ρ : R). Therefore, J ⊂ (ρ : R), and so J = (ρ : R).
Hence, for any ring R, J(R) = ⋂(ρ : R), where ρ runs over all regular maximal left ideals of R. Now, all that is left to do is to keep track of (ρ : R) for each maximal regular left ideal ρ of R. While this sounds better than looking at every simple module of a ring, it still feels unsatisfactory. We can still do better, as we will see in the next section.
3.3 Quasi-regular
If the idea of regular ideals seemed bizarre, then we are in for a treat with the introduction of quasi-regular ideals. While the definition of a regular ideal says nothing about the elements of the ideal themselves, a quasi-regular ideal is an ideal all of whose elements are quasi-regular.
Example 3.3.2. In Z, the only quasi-regular elements are 0 and −2, each being its own
quasi-inverse. (Again, observe the integer solutions to y = −x/(x + 1) for x, y ∈ Z.)
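This example can be checked by brute force; the sketch below (plain Python, with a function name of my own choosing) searches a window of integers for solutions of b + a + ba = 0.

```python
# An integer a is quasi-regular when some integer b satisfies
# b + a + b*a = 0, i.e. b*(1 + a) = -a.
def quasi_inverse(a):
    """Return b with b + a + b*a == 0, or None if no integer b exists."""
    if a == -1:                  # b*0 = 1 has no integer solution
        return None
    if -a % (1 + a) != 0:        # b must be exactly -a / (1 + a)
        return None
    return -a // (1 + a)

quasi_regular = [a for a in range(-100, 101) if quasi_inverse(a) is not None]
print(quasi_regular)             # [-2, 0]
```

As the example states, each quasi-regular integer is its own quasi-inverse: `quasi_inverse(0) == 0` and `quasi_inverse(-2) == -2`.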
Lemma 3.3.1. If I is a left-quasi-regular left ideal of a ring R, then I ⊂ J(R).
Proof. Let I be a left-quasi-regular left ideal of R. Suppose then that I ⊄ J(R). If so,
then for some simple R-module M , we must have IM ≠ (0). In turn, it must mean that
there is at least one element m ∈ M so that Im ≠ (0). Consequently, it turns out that
Im is a submodule of M . Since 0 ∈ I, 0m = 0 ∈ Im, so Im is nonempty. Now, if a ∈ Im,
then a = im for some i ∈ I. Since i ∈ I, then −i ∈ I, meaning (−i)m = −im = −a ∈ Im.
Now, if a, b ∈ Im, say a = im and b = jm, then a + b = im + jm = (i + j)m ∈ Im.
Now that we know Im is a subgroup of M , let us confirm the module part. Let r ∈ R
and let b ∈ Im with b = jm, j ∈ I. Consider rb = r(jm) = (rj)m: since r ∈ R
and j ∈ I, I a left ideal of R, then rj ∈ I. Hence, rb ∈ Im for all r ∈ R, b ∈ Im,
so Im is a submodule of M . However, since Im is a nonzero submodule of M , then
Im = M . Well, −m ∈ M implies −m ∈ Im, so ∃t ∈ I so that −m = tm. Moreover, t is
a member of a left-quasi-regular ideal, so ∃s ∈ R so that s + t + st = 0. It follows that
0 = 0m = (s + t + st)m = sm + tm + stm = sm + (−m) + s(−m) = sm − sm − m = −m.
Since −m = 0, then m = 0, which would make Im = (0), a contradiction. Therefore, it
must be that I ⊂ J(R).
As Lemma 3.3.1 shows, from now on we can instantly declare that any quasi-
regular ideal is contained in the Jacobson radical. This will prove to be immensely useful.
One application we can observe is that if a ring is semi-simple, then the only quasi-regular
ideal is the trivial ideal. Still, the most crucial implication of Lemma 3.3.1 is the following
theorem.
Theorem 3.3.1. J(R) = ∩ρ, where ρ runs over all maximal regular left ideals of R.
Proof. We know that J(R) = ∩(ρ : R) and that (ρ : R) ⊂ ρ; therefore J(R) ⊂ ∩ρ, where
ρ runs over all the maximal regular left ideals. Now, for the converse, suppose x ∈ ∩ρ.
Consider the set S = {y +yx | y ∈ R}. We claim R = S. Otherwise, suppose S is a proper
subset of R. As it happens, the set S is a left ideal of R. Since 0 ∈ R, then 0+0x = 0 ∈ S.
Next, assume a ∈ S, then ∃c ∈ R so that a = c + cx. Since c ∈ R, then −c ∈ R, meaning
(−c) + (−c)x ∈ S. Of course, (−c) + (−c)x = −c − cx = −(c + cx) = −a, so −a ∈ S.
Moreover, if a and b belong in S, say a = c + cx and b = d + dx for some c, d ∈ R, then
a + b = (c + cx) + (d + dx) = (c + d) + (c + d)x ∈ S and ab = a(d + dx) = (ad) + (ad)x ∈ S
(Note this last part shows both closure under multiplication and the general idea of how
S is a left ideal). Therefore, S is a left ideal of R. Furthermore, since r − r(−x) ∈ S for
all r ∈ R, then S is regular. By Lemma 3.1.1, since S is a proper regular left ideal of R,
S is contained in a maximal regular left ideal ρ0 . Now, since ∩ρ is the intersection of all
the maximal regular left ideals of R and ρ0 is a maximal regular left ideal of R, then we
see that x ∈ ρ0 . However, since y + yx ∈ ρ0 (S ⊂ ρ0 ) and yx ∈ ρ0 (x is in the left ideal
ρ0 ) for all y ∈ R, then R ⊂ ρ0 . This contradicts ρ0 being a maximal ideal. Therefore, S
is not a proper subset but instead all of R. Now, since R = {y + yx | y ∈ R} and −x ∈ R,
∃w ∈ R so that −x = w + wx, or in other words w + x + wx = 0. In conclusion, if x ∈ ∩ρ,
then x is left-quasi-regular. This makes ∩ρ a left-quasi-regular left ideal of R, and thus
∩ρ ⊂ J(R) by Lemma 3.3.1.
Example 3.3.3. Since Z has unity, every ideal of Z is regular. Now, recall that every
maximal ideal of Z is of the form pZ, where p is a prime number. Moreover, if n and m
are integers, then nZ ∩ mZ = dZ, where d is the least common multiple of n and m. The
point being, the intersection of all maximal ideals of Z is principal and generated by the
only common multiple across all prime numbers: 0. Hence, J(Z) = (0).
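The same computation goes through in Zn , where J(Zn ) is the intersection of the maximal ideals pZn over the primes p dividing n. A minimal sketch (the function name is mine):

```python
from functools import reduce

def jacobson_radical_zn(n):
    """J(Z_n): intersect the maximal ideals p*Z_n over the primes p dividing n."""
    primes = [p for p in range(2, n + 1)
              if n % p == 0 and all(p % q != 0 for q in range(2, p))]
    ideals = [{x for x in range(n) if x % p == 0} for p in primes]
    return reduce(set.intersection, ideals, set(range(n)))

print(sorted(jacobson_radical_zn(12)))   # [0, 6]: J(Z_12) = 6Z_12
print(sorted(jacobson_radical_zn(10)))   # [0]: Z_10 is semi-simple
```

Note that J(Z_n) comes out as the multiples of the product of the distinct primes dividing n, so Zn is semi-simple exactly when n is square-free.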
Let us backtrack a bit. Notice that in the proof, since we showed both that
J(R) = ∩ρ and that ∩ρ is a left-quasi-regular ideal of R, then J(R) is a left-quasi-
regular ideal of R. Moreover, by Lemma 3.3.1, J(R) is the biggest (contains every other)
left-quasi-regular ideal of R. Hence, we have the next theorem.
Theorem 3.3.2. J(R) is a left-quasi-regular ideal of R and contains all the left-quasi-
regular left ideals of R. Moreover, J(R) is the unique maximal left-quasi-regular left ideal
of R.
Now, suppose there are elements a, b and c in a ring R where b is the left-
quasi-inverse of a and c is the right-quasi-inverse of a. So, we know b + a + ba = 0 and
a + c + ac = 0. Then, we have (b + a + ba)c = bc + ac + bac = 0 and b(a + c + ac) =
ba + bc + bac = 0. In other words, we get bc + ac + bac = ba + bc + bac. Subtracting bc and
bac from both sides, we get ba = ac. Again, since b + a + ba = 0 = a + c + ac, subtracting
the elements that are equal we get b = c. Therefore, if an element has a left and a right
quasi-inverse, the two quasi-inverses are the same.
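The argument above uses no commutativity, so it can be sanity-checked by brute force in the noncommutative ring Mat2 (Z2 ); the sketch below (all names mine) finds every left and right quasi-inverse and confirms they coincide, and are unique, whenever both exist.

```python
from itertools import product

def mat_mul(A, B):
    # 2x2 matrix product with entries reduced mod 2
    return tuple(tuple(sum(A[i][t] * B[t][j] for t in range(2)) % 2
                       for j in range(2)) for i in range(2))

def mat_add(*Ms):
    # entrywise sum mod 2 of any number of 2x2 matrices
    return tuple(tuple(sum(M[i][j] for M in Ms) % 2 for j in range(2))
                 for i in range(2))

ZERO = ((0, 0), (0, 0))
ALL = [((a, b), (c, d)) for a, b, c, d in product(range(2), repeat=4)]

for A in ALL:
    lefts  = [B for B in ALL if mat_add(B, A, mat_mul(B, A)) == ZERO]
    rights = [C for C in ALL if mat_add(A, C, mat_mul(A, C)) == ZERO]
    if lefts and rights:
        # a left and a right quasi-inverse must agree, hence each is unique
        assert lefts == rights and len(lefts) == 1
```

The loop is exhaustive over the sixteen matrices, so it silently verifies the two-sided uniqueness claim for every element of this ring.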
Theorem 3.3.3. For any ring R, the quotient ring R/J(R) is semi-simple.
Proof. Consider R/J(R) and let ρ be a maximal regular left ideal of R. We know that
J(R) ⊂ ρ by Theorem 3.3.1. Now, we will prove that ρ/J(R) is maximal in R/J(R).
Suppose ρ/J(R) ⊂ I/J(R) ⊂ R/J(R) and ρ/J(R) ≠ I/J(R). Then, consequently ρ ⊂
I ⊂ R, where ρ ≠ I. However, since ρ is maximal in R, it implies I = R. Hence,
I/J(R) = R/J(R), meaning ρ/J(R) is maximal in R/J(R). Moreover, since ρ is regular
in R, ρ/J(R) will also be regular in R/J(R). Say a ∈ R so that r − ra ∈ ρ,
∀r ∈ R, then (r + J(R)) − (r + J(R))(a + J(R)) = (r + J(R)) − (ra + J(R)) = (r −
ra) + J(R) ∈ ρ/J(R) ∀r ∈ R. In short, the image of all of the maximal regular left
ideals of R will remain maximal regular left ideals in R/J(R). However, this does not
imply that all of the maximal regular left ideals of R/J(R) come from R. On that
note, if J(R) = ∩ρ, then (∩ρ)/J(R) = J(R)/J(R) = (0) in R/J(R). One thing
that we can say is that (∩ρ)/J(R) = ∩(ρ/J(R)). Let x + J(R) ∈ (∩ρ)/J(R), then
x ∈ ∩ρ, so x + J(R) ∈ ρ/J(R) for all ρ, and thus x + J(R) ∈ ∩(ρ/J(R)). On the
other hand, let x + J(R) ∈ ∩(ρ/J(R)), then x + J(R) ∈ ρ/J(R) for all ρ. Moreover,
if x + J(R) ∈ ρ/J(R) for each ρ, then x ∈ ρ for each ρ: say x + J(R) = r + J(R)
for some r ∈ ρ, then x − r ∈ J(R) ⊂ ρ; since both x − r and r belong to ρ, then
x ∈ ρ. Therefore, x + J(R) ∈ ρ/J(R) for all ρ implies x is in all ρ, and thus x ∈ ∩ρ,
meaning x + J(R) ∈ (∩ρ)/J(R). Recalling that J(R/J(R)) is the intersection of all
maximal regular left ideals of R/J(R), we can show why R/J(R) is a semi-simple ring:
J(R/J(R)) ⊂ ∩(ρ/J(R)) = (∩ρ)/J(R) = (0).
Let us talk about the relation between the two-sided ideals of a ring and the
ring’s Jacobson radical. Let A be an ideal of a ring R. Since an ideal of R is in particular
a subring of R, A is itself a ring. If that is the case, can
we relate the Jacobson radical of A with the Jacobson radical of R? The answer turns
out to be quite satisfying, as we see in the following theorem and corollary.
Theorem 3.3.4. If A is a two-sided ideal of a ring R, then J(A) = A ∩ J(R).
Proof. Let A be an ideal of R. Now, let a ∈ A ∩ J(R). So, a ∈ J(R). Since J(R) is
left-quasi-regular, then ∃b ∈ R so that b + a + ba = 0. Solving for b, we get b = −a − ba.
Since also a ∈ A, an ideal, and b ∈ R, then both ba and −a belong to A, meaning
b = −a − ba ∈ A. Explicitly, this shows that A ∩ J(R) is a left-quasi-regular ideal of A,
therefore A ∩ J(R) ⊂ J(A) by Lemma 3.3.1.
Now, suppose ρ is a maximal regular left ideal of R. Let ρA = ρ ∩ A. If A ⊂ ρ,
then ρA = A, meaning J(A) ⊂ ρA . In other words, J(A) is a subset of ρA for all maximal
regular left ideals ρ of R that contain A. Then, let us examine all of the maximal regular
left ideals of R that do not contain A. If A ⊄ ρ, then the maximality of ρ forces A + ρ = R.
That is, since both A and ρ are left ideals of R, then A + ρ is a left ideal of R. Moreover,
since A ⊄ ρ, then ∃a ∈ A so that a ∉ ρ. Since a ∈ A and 0 ∈ ρ, then a = a + 0 ∈ A + ρ.
Therefore, ρ is a proper subset of A + ρ. As a result, since ρ is a maximal left ideal of R
and A + ρ a bigger left ideal than ρ in R, then A + ρ = R. Now, we can apply the Second
Isomorphism Theorem:
R/ρ = (A + ρ)/ρ ≅ A/(A ∩ ρ) = A/ρA
Now, since R/ρ is simple, then A/ρA is simple, making ρA a maximal left ideal
of A. Moreover, since ρ is regular, ∃b ∈ R so that x − xb ∈ ρ for all x in R. Yet, since
b ∈ R = A+ρ, b = a+r for some a ∈ A, r ∈ ρ. Then, x−xb = x−x(a+r) = x−xa−xr ∈ ρ.
Of course, since r ∈ ρ, xr ∈ ρ, which implies that really x−xa ∈ ρ for all x ∈ R. Therefore,
if we let instead x ∈ A, then we know x − xa ∈ ρ and also x − xa ∈ A (A is an ideal,
so both x and xa belong in A), meaning ∃a ∈ A so that x − xa ∈ ρA for all x ∈ A; so
ρA is a regular left ideal in A specifically. In fact, ρA is a maximal regular left ideal of
(specifically) A. Therefore, whether A ⊂ ρ or A ⊄ ρ, J(A) ⊂ ρA for all maximal regular
left ideals ρ of R. Hence, J(A) ⊂ ∩ρA = (∩ρ) ∩ A = J(R) ∩ A. So, A ∩ J(R) ⊂ J(A)
and J(A) ⊂ A ∩ J(R), so A ∩ J(R) = J(A).
Chapter 4
Now that we have covered semi-simple rings, let us move on to Artinian rings.
We will see that an Artinian ring is a ring in which every descending chain of ideals
becomes stationary; that is, there is an ending point to every descending chain of
proper ideals. Notice there are two key components of a descending chain of ideals in an
Artinian ring: finiteness and stability. Finiteness in the respect that, in a chain of ideals,
there is eventually an end to the difference in size of the ideals and a final destination
is reached. Stability in the sense that once a certain point is reached, the ideals remain
the same thereafter. As we shall see, nilpotency and idempotency respectively exemplify
these characteristics and are closely related to Artinian rings.
In this chapter, we will see what nilpotency, idempotency, and Artinian rings
have to say about the Jacobson radical. First, we will explore what nilpotency has to do
with the Jacobson radical. Next, we will start assuming our ring to be Artinian and see
what implications arise. Finally, idempotency will prove to be ever present in Artinian
rings. Once we explore these three facets of Artinian rings, we will make our way to the
Wedderburn-Artin theorems.
4.1 Nilpotence
Definition 4.1.1. (i) An element a of a ring R is nilpotent if a^m = 0 for some m ∈ N;
a (right, left, two-sided) ideal I is nilpotent if I^m = (0) for some m ∈ N.
(ii) A (right, left, two-sided) ideal I is nil if all elements of I are nilpotent.
Out of all the topics outside basic ring theory, nilpotency is the least strange.
While studying rings such as Zn and Matn (D), nilpotent elements have been present.
Example 4.1.2. For any ring R and any positive integer n, any matrix in Matn (R) that
is either strictly upper triangular or strictly lower triangular is nilpotent.
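Strictness is what matters here: each multiplication by a strictly upper triangular N pushes the nonzero entries one diagonal further from the main one, so N^n = 0. A quick check for n = 3, with plain Python lists standing in for Mat3 (Z):

```python
def mat_mul(A, B):
    # naive n x n integer matrix product
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

N = [[0, 1, 2],
     [0, 0, 3],
     [0, 0, 0]]                       # strictly upper triangular, n = 3
P = [[int(i == j) for j in range(3)] for i in range(3)]   # identity
for _ in range(3):
    P = mat_mul(P, N)                 # P becomes N, then N^2, then N^3
print(P)                              # [[0, 0, 0], [0, 0, 0], [0, 0, 0]]
```

By contrast, a merely (non-strictly) triangular matrix such as the identity is not nilpotent, which is why the strictness hypothesis is needed.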
Lemma 4.1.1. Every nilpotent element of a ring R is quasi-regular.
Proof. Say a ∈ R with a^m = 0, and let b = −a + a^2 − a^3 + · · · + (−1)^{m−1} a^{m−1}. Then:
b + a + ba = (−a + a^2 − · · · + (−1)^{m−1} a^{m−1}) + a + (−a^2 + a^3 − · · · + (−1)^{m−1} a^m)
= (−1)^{m−1} a^m
= 0
Similarly:
a + b + ab = a + (−a + a^2 − · · · + (−1)^{m−1} a^{m−1}) + (−a^2 + a^3 − · · · + (−1)^{m−1} a^m)
= (−1)^{m−1} a^m
= 0
Lemma 4.1.1 illustrates the importance of nilpotency. The lemma implies that
every nil right or left ideal of a ring R is left or right quasi-regular, thus contained in J(R).
On the other hand, if R is semi-simple, then there are no nonzero nil ideals. Similarly, if
R is a nil ideal of R, then it is a radical ring.
Example 4.1.3. Recall that in Zpk , the set P = {pm | m ∈ Z} contains only nilpotent
elements. Additionally, P is an ideal of Zpk , so P is a nil ideal. Therefore, P itself as a
ring is a radical ring (that is, J(P ) = P ).
Lemma 4.1.2. Every nilpotent ideal of a ring R is a nil ideal.
Proof. Let I be a nilpotent ideal of a ring R and let m ∈ N so that I^m = (0). So, if a ∈ I,
then a^m ∈ I^m . Hence, a^m ∈ (0), meaning a^m = 0. Thus, every element of I is nilpotent,
making I a nil ideal.
Since all nil ideals are contained in the Jacobson radical of a ring, then all
nilpotent ideals are contained in the radical. Thus, if a ring R is semi-simple, then we
know there are no nonzero nilpotent (and no nil) ideals. Again, if R is a nilpotent ideal
of R, then R is a radical ring.
Example 4.1.4. Let n be a positive integer and consider Zn . Suppose n = p_1^{k_1} · · · p_r^{k_r},
where the p_i are prime numbers and k_i ≥ 1 for all i. Let m = p_1 · · · p_r ; then m is nilpotent
in Zn . Particularly, if k = max{k_i | 1 ≤ i ≤ r}, then m^k = p_1^k · · · p_r^k = nq for some q ∈ Z.
On that note, the ideal M = {ms | s ∈ Z} is nilpotent since M^k = (0). Again, as a ring,
M is a radical ring.
4.2 Artinian Rings
Finally, we can discuss Artinian rings. Artinian rings generalize finite rings as
well as rings that are finite-dimensional vector spaces over division rings. In addition,
Artinian rings are seen as the duals to Noetherian rings in respect to chains of ideals.
Regardless, Artinian rings are those in which any non-empty set of ideals has a minimal
element under set containment. We shall show that an Artinian ring is equivalently a ring
in which any descending chain of ideals terminates. Afterwards, we shall briefly explore
facts about Artinian rings.
Definition 4.2.1. A ring is said to be (left, right) Artinian if any non-empty set of (left,
right) ideals has a minimal element.
From now on, we shall say Artinian for left Artinian, as we still have many left
ideals to deal with. There is a famous, equivalent definition for Artinian rings that we
will use frequently. Theorem 4.2.1 illustrates what an Artinian ring means in terms of
chains of ideals. In various cases, we will deal with descending chains of ideals. Therefore,
Theorem 4.2.1 will be of great use. Also, note that the Axiom of Choice is necessary for
the converse of the theorem.
Theorem 4.2.1. A ring R is Artinian if and only if any descending chain of left ideals
of R becomes stationary.
Artinian rings possess multiple helpful properties. For this paper, we shall recall
a handful of facts that will prove to be invaluable.
Lemma 4.2.1. If R is an Artinian ring and I ≠ (0) is a left ideal of R, then I contains
a minimal left ideal of R.
Proof. Let R be an Artinian ring and let I ≠ (0) be a left ideal of R. Let M be the set
of all nontrivial left ideals of R that are contained in I. Of course, I ⊂ I and I ≠ (0), so M is not
empty. Since M is a set of left ideals of R and R is Artinian, then there exists a left ideal
µ in M that is minimal in M. To prove that µ is a minimal ideal of R, suppose there
exists a left ideal J ≠ (0) of R such that (0) ⊂ J ⊂ µ. If so, then J ⊂ µ ⊂ I, and J being
a nontrivial left ideal of R inside I places J in M; the minimality of µ in M then forces
J = µ. Hence, µ is a minimal left ideal of R contained in I.
Example 4.2.1. Any division ring is Artinian. Recall that every (left, right, two-sided)
ideal is either the trivial ideal or the entire ring.
left ideal of R and I ⊂ f −1 (Ji ) for all i. On that note, we can see that f (I) ⊂ Ji for all i.
Let x ∈ f (I) with x = f (a), for some a ∈ I. Since I ⊂ f −1 (Ji ) for all i, then a ∈ f −1 (Ji )
for all i. Therefore, f (a) ∈ Ji , meaning x ∈ Ji . Lastly, in order to claim f (I) is the
minimal element in the descending chain, we must be sure that f (I) is an ideal of f (R).
Of course, I being a subring of R makes f (I) a subring of f (R). Now, let f (r) ∈ f (R)
and f (i) ∈ f (I), then f (r)f (i) = f (ri) ∈ f (I) since ri ∈ I. Hence, f (I) is a left ideal of
f (R), making f (I) the minimal element in the chain J1 ⊃ J2 ⊃ J3 ⊃ · · · .
Recall that in the previous section, we discovered that all nilpotent ideals of a
ring R are contained in the Jacobson radical J(R). Mainly, we noted that all nilpotent
ideals are nil ideals and all nil ideals are quasi-regular ideals. In turn, not only did J(R)
contain all of the quasi-regular ideals of R, but J(R) itself was quasi-regular. Similarly,
since we know that J(R) contains all nil and nilpotent ideals, can we say that J(R) is
either of them?
While we have seen that every nilpotent ideal is a nil ideal, the converse is not
always true. However, if a nil ideal is an ideal of an Artinian ring, then Theorem 4.2.3
shows why such an ideal must also be nilpotent.
Theorem 4.2.3. If R is an Artinian ring, then every nil one-sided ideal of R is nilpotent.
Proof. Let I be a nil one-sided ideal. Since every element of I is nilpotent, then by
Lemma 4.1.1 every element of I is quasi-regular. Hence, I ⊂ J(R). Now, since R is
Artinian, J(R) is nilpotent; let m ∈ N such that (J(R))^m = (0). Hence, all there is
left to do is prove I^m ⊂ (J(R))^m . Let x ∈ I^m with x = a_1 a_2 . . . a_m , where a_i ∈ I for
i = 1, 2, . . . , m. Since I ⊂ J(R), then we can say x = a_1 a_2 . . . a_m , with a_i ∈ J(R) for
i = 1, 2, . . . , m. Hence, x ∈ (J(R))^m . So, I^m ⊂ (J(R))^m = (0), so I is nilpotent.
4.3 Idempotency
Idempotent elements are those which remain unchanged when raised to a higher
power. In fact, it suffices that the square of an element be the element itself for the
element to be idempotent. In any case, since idempotent elements are ever
present in Artinian rings, we shall prove some valuable theorems involving idempotent
elements. But first, let us look at the definition, and some examples, of idempotent
elements.
Example 4.3.2. Let D be a division ring and consider Matn (D) for n > 1. Let Exy be
the n × n matrix with entry 1 in position (x, y) and all other entries equal to zero (1 ≤ x, y ≤ n). Then,
not only are all Eii idempotent, but so is E11 + Enn . For a concrete example, in Mat2 (Z),
any matrix such that a11 = a12 = k and a21 = a22 = 1 − k for k ∈ Z is idempotent.
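The last claim is easy to verify directly: squaring [[k, k], [1 − k, 1 − k]] reproduces it for every integer k. A quick sketch:

```python
def mat_mul(A, B):
    # 2x2 integer matrix product
    return [[sum(A[i][t] * B[t][j] for t in range(2)) for j in range(2)]
            for i in range(2)]

for k in range(-5, 6):
    E = [[k, k], [1 - k, 1 - k]]
    assert mat_mul(E, E) == E         # E^2 = E for every integer k tried
```

The range of k here is arbitrary; the identity E² = E holds for all integers, as a two-line expansion of the product confirms.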
Lemma 4.3.1. Let R be a ring having no nonzero nilpotent ideals. Suppose that µ ≠ (0)
is a minimal left ideal of R. Then µ = Re for some idempotent e ∈ R.
Proof. Suppose R is a ring having no nonzero nilpotent ideals and suppose that µ ≠ (0)
is a minimal left ideal of R. Since R has no nonzero nilpotent ideals and µ ≠ (0), then we
know µ^2 ≠ (0). So, it is safe to say that there exists x ∈ µ so that µx ≠ (0). Moreover,
µx is a left ideal of R. First, since µ is a left ideal, 0 ∈ µ, therefore 0x = 0 ∈ µx, so µx is
non-empty. Second, suppose a ∈ µx with a = mx, where m ∈ µ, then −m ∈ µ and thus
(−m)x = −mx = −a ∈ µx, so µx has inverses. Third, suppose a, b ∈ µx with a = mx,
b = nx (m, n ∈ µ), then a + b = mx + nx = (m + n)x ∈ µx (m + n ∈ µ). Moreover,
ab = (mx)(nx) = ((mx)n)x ∈ µx since n ∈ µ (a left ideal) implies (mx)n ∈ µ (this last
part shows also why µx is a left ideal of R). Additionally, since x ∈ µ, then mx ∈ µ for
all m ∈ µ, meaning µx ⊂ µ. Since µx ⊂ µ and µx ≠ (0), then µ being a minimal ideal of R
implies µx = µ. Therefore, since x ∈ µ, there exists an element e ∈ µ so that x = ex. On
that note, by multiplying by e on the left, we get ex = e^2 x; in other words, (e^2 − e)x = 0.
Consider the set C = {c ∈ µ | cx = 0}; C is a left ideal of R. First, 0 ∈ µ and 0x = 0,
so 0 ∈ C. Second, let c ∈ C. Then, cx = 0 implies (−c)x = −cx = −0 = 0. Of course,
c ∈ C implies c ∈ µ, so −c ∈ µ; Hence, C has inverses. Third, let c, d ∈ C, then c + d ∈ µ
and (c + d)x = cx + dx = 0 + 0 = 0, so c + d ∈ C. Lastly, let r ∈ R and c ∈ C, then
c ∈ µ implies rc ∈ µ and (rc)x = r(cx) = r0 = 0, so C is a left ideal of R. By definition,
C ⊂ µ. Since µ is a minimal left ideal, either C = µ or C = (0). However, if C = µ,
then it would imply µx = (0). Of course, µx ≠ (0), so C ≠ µ. Thus, C = (0). Since
e^2 − e ∈ C, then e^2 − e = 0, so e^2 = e. Also, since µx ≠ (0), we can be sure that x ≠ 0.
Consequently, since ex = x ≠ 0, we can be sure that e ≠ 0. We conclude that e is an
idempotent of R. Now, consider the left principal ideal Re. If re ∈ Re, since e ∈ µ, then
re ∈ µ; so, Re ⊂ µ. Again, µ being a minimal ideal of R implies Re = (0) or Re = µ.
However, e = e^2 ∈ Re and e ≠ 0, so Re ≠ (0). Therefore, Re = µ.
There are a few things to note about the previous lemma. First, recall that
a semi-simple ring is a ring that has no nonzero nilpotent ideals. Hence, we can apply
Lemma 4.3.1 to any semi-simple ring. Furthermore, since any nonzero left ideal of an
Artinian ring contains a minimal left ideal, we shall take full advantage of this lemma
for the structure of semi-simple Artinian rings. As for rings that are Artinian but not
semi-simple, there is a theorem that guarantees the existence of idempotent elements on
nonzero left ideals. First, however, we must prove the following peculiar lemma.
Lemma 4.3.2. Let R be a ring and suppose that for some a ∈ R, a^2 − a is nilpotent.
Then either a is nilpotent or, for some polynomial q(x) with integer coefficients, e = aq(a)
is a nonzero idempotent.
Proof. Let R be a ring and suppose that for some a ∈ R and k ∈ N, (a^2 − a)^k = 0. Expanding
(a^2 − a)^k = a^{2k} + · · · + (−1)^k a^k = 0 and solving for a^k , we get a^k = a^{k+1} p(a), where
p(x) has integer coefficients. Of course, a^{k+1} = a^k a, so a^k = a^{k+1} p(a) = a^k ap(a) =
(a^{k+1} p(a))ap(a) = a^{k+2} p(a)^2 . Therefore, repeating this process, eventually we get that
a^k = a^{2k} p(a)^k . Now, either a^k = 0 or a^k ≠ 0. However, if a^k = 0, then by definition
a is nilpotent. So, suppose a^k ≠ 0; then e = a^k p(a)^k ≠ 0 (indeed, a^k = a^k e, so e = 0
would force a^k = 0) and e^2 = a^{2k} p(a)^{2k} = (a^{2k} p(a)^k )p(a)^k = a^k p(a)^k = e.
Say q(a) = a^{k−1} p(a)^k , then e = aq(a) is the nonzero idempotent
we were looking for.
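The construction in this proof can be followed numerically. In Z8 every element a has a^2 − a even, hence nilpotent, so the lemma applies to each a; the sketch below (function name mine) builds p(a) from the binomial expansion of (a^2 − a)^k and returns e = a^k p(a)^k.

```python
from math import comb

def idempotent_from(a, n, k):
    """Assuming (a^2 - a)^k ≡ 0 in Z_n, return e = a^k * p(a)^k, where
    p(a) = sum_{i=1..k} (-1)^(i+1) C(k,i) a^(i-1) satisfies a^k = a^(k+1) p(a)."""
    assert pow(a * a - a, k, n) == 0
    p = sum((-1) ** (i + 1) * comb(k, i) * pow(a, i - 1, n)
            for i in range(1, k + 1)) % n
    return (pow(a, k, n) * pow(p, k, n)) % n

e = idempotent_from(3, 8, 3)          # a = 3 is not nilpotent mod 8
print(e, (e * e) % 8)                 # 1 1  -- a nonzero idempotent
print(idempotent_from(2, 8, 3))       # 0    -- a = 2 is nilpotent, so e collapses
```

The two printed cases are exactly the lemma's dichotomy: a non-nilpotent a yields a nonzero idempotent aq(a), while a nilpotent a gives a^k = 0.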
The last result of this chapter will be used to find minimal ideals. The more
we progress, the more evident it becomes that minimal ideals are integral to the study
of semi-simple Artinian rings. Still, the following theorem will be used mainly in our
conclusion chapter and will allow us to test and find minimal ideals by using idempotent
elements.
Theorem 4.3.2. Let R be a ring having no nonzero nilpotent ideals and suppose that e
is an idempotent element in R. Then, Re is a minimal left ideal of R if and only if eRe
is a division ring.
Proof. Let R be a ring having no nonzero nilpotent ideals and suppose Re is a minimal
left ideal of R with e being idempotent. First, we can see that eRe is a ring with unity
e: e0e = 0 ∈ eRe; if eae ∈ eRe, then e(−a)e = −eae ∈ eRe; and
if eae, ebe ∈ eRe, then eae + ebe = e(ae + be) = e((a + b)e) = e(a + b)e ∈ eRe and
(eae)(ebe) = eae^2 be = e(aeb)e ∈ eRe. To show e is the multiplicative identity of eRe,
surely eee = e ∈ eRe and for any eae ∈ eRe, we have e(eae) = e^2 ae = eae = eae^2 =
(eae)e. Lastly, to prove eRe is a division ring, consider a nonzero element eae ∈ eRe.
Obviously, Reae is a principal left ideal of R. Now, note that Reae ⊂ Re: if reae ∈ Reae,
then since rea ∈ R, we get (rea)e ∈ Re. Moreover, since e ∈ R, then e(eae) = eae ∈ Reae,
meaning Reae ≠ (0). Since Reae is a nonzero left ideal inside the minimal left ideal Re,
then Re = Reae. Since e = ee ∈ Re, there exists b ∈ R so that e = beae. Upon
further calculation, we see that e^2 = ebeae, meaning e = (ebe)(eae). Therefore, eae has
a multiplicative inverse, which means eRe is a division ring.
Now, for the converse, suppose R is a ring that has no nonzero nilpotent ideals
and e an idempotent of R with eRe being a division ring. To prove Re is a minimal
left ideal, suppose µ ⊂ Re is a left ideal of R with µ ≠ (0). If eµ = (0), then we can see that µ^2 = µµ ⊂
(Re)µ = R(eµ) = (0), making µ a nonzero nilpotent left ideal, a contradiction. Hence,
eµ ≠ (0), so there exists m ∈ µ with em ≠ 0. Since µ ⊂ Re, em is a nonzero element of
eRe, and eRe being a division ring provides ese ∈ eRe with (ese)(em) = e. As em ∈ µ
and µ is a left ideal, e = (ese)(em) ∈ µ, and thus Re ⊂ µ. Therefore, µ = Re, making Re
a minimal left ideal of R.
Corollary. Let R be a ring having no nonzero nilpotent ideals and let e ∈ R be idempotent.
Then Re is a minimal left ideal of R if and only if eR is a minimal right ideal of R.
Proof. Since the right analogue of Theorem 4.3.2 holds, we can say that eR is a minimal
right ideal if and only if eRe is a division ring. Thus, Re being a minimal ideal of R is
logically equivalent to eR being a minimal ideal of R.
Example 4.3.3. Since Z10 has unity, every ideal of Z10 is regular. The intersection of the
only two maximal ideals of Z10 , ⟨2⟩ and ⟨5⟩, is (0), so Z10 is semi-simple. Therefore,
Z10 has no nonzero nilpotent ideals. Of course, since there are a finite number of ideals,
Z10 is Artinian. Recall that 6 ∈ Z10 is idempotent. We can see that 6Z10 = {0, 2, 4, 6, 8}
is a principal ideal of Z10 and is minimal since |6Z10 | = 5, a prime number. Therefore,
6Z10 6 is a division ring. In fact, since Z10 is commutative, 6Z10 6 = 6Z10 is a field with 6
as its unity.
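These claims are small enough to verify exhaustively; a minimal sketch:

```python
n, e = 10, 6
assert (e * e) % n == e                         # 6 is idempotent in Z_10
ideal = sorted({(e * r) % n for r in range(n)})
print(ideal)                                    # [0, 2, 4, 6, 8]
for a in ideal:
    assert (e * a) % n == a                     # 6 acts as the unity of 6Z_10
    if a != 0:
        # every nonzero element is invertible relative to that unity
        assert any((a * b) % n == e for b in ideal)
```

For instance, 2 · 8 = 16 ≡ 6 (mod 10), so 2 and 8 are mutual inverses in the field 6Z10 .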
Example 4.3.4. Let D be a division ring and let R = Matn (D) with n > 1. For each
k with 1 ≤ k ≤ n, let Ik = {(aij ) ∈ R | aij = 0 for j ≠ k}. Recall that the matrix Ekk is
idempotent in R. Moreover, Ik = R(Ekk ) for all k. Since Ik^2 ≠ (0) for any k, we have that
Ik is a minimal ideal of R if and only if (Ekk )R(Ekk ) is a division ring. As it turns out,
(Ekk )R(Ekk ) = {cEkk | c ∈ D} is a division ring; Ekk is the unity of (Ekk )R(Ekk ), and if
cEkk ≠ 0, then c ≠ 0, so there exists d ∈ D such that cd = 1; then dEkk ∈ (Ekk )R(Ekk )
and (cEkk )(dEkk ) = Ekk . Hence, Ik is a minimal ideal for all k.
Chapter 5
Wedderburn-Artin Theorems
Finally, we have arrived at the main purpose of this paper. The first Wedderburn-
Artin theorem illustrates the structure of semi-simple Artinian rings. As a result, all
theorems leading to it will be about semi-simple Artinian rings. The second Wedderburn-
Artin theorem demonstrates the structure of simple Artinian rings. As for the theorems
leading to it, primitivity will play an important role.
First, we will establish the fact that every left ideal of a semi-simple Artinian
ring is principal and generated by an idempotent element. This is an improvement from
Theorem 4.3.1. In addition, the two corollaries that follow Theorem 5.1.1 will detail
important information about two-sided ideals of semi-simple Artinian rings.
Theorem 5.1.1. Let R be a semi-simple Artinian ring and let I ≠ (0) be a left ideal of R.
Then, I = Re for some idempotent e ∈ R.
Proof. Let R be a semi-simple Artinian ring and I ≠ (0) be a left ideal of R. Since R is
semi-simple, I is not nilpotent. Moreover, since I is a nonzero, non-nilpotent left ideal of
R, by Theorem 4.3.1, there exists an idempotent e in I. Let A(e) = {x ∈ I | xe = 0}; we
will show A(e) is a left ideal of R. Of course, 0 ∈ I and 0e = 0, so 0 ∈ A(e). Next, say
a ∈ A(e), then −a ∈ I and (−a)e = −ae = −0 = 0, so −a ∈ A(e). Also, if a, b ∈ A(e),
then a + b ∈ I and (a + b)e = ae + be = 0 + 0 = 0, so a + b ∈ A(e). Finally, suppose r ∈ R
and a ∈ A(e), then ra ∈ I and (ra)e = r(ae) = r0 = 0, so ra ∈ A(e). Now, consider the
set M = {A(e) | e^2 = e ≠ 0 and e ∈ I}, the set of all A(e) for idempotent elements e from
I. Since we know there is at least one idempotent element in I, then M is a non-empty
set of left ideals of R. Since R is Artinian, M has a minimal element A(e0 ).
Suppose A(e0 ) ≠ (0). As a nonzero, non-nilpotent left ideal of the semi-simple
Artinian ring R, A(e0 ) must contain an idempotent e1 . By the definition of A(e0 ), e1 ∈ I
and e1 e0 = 0. Consider the element e∗ = e0 + e1 − e0 e1 . Not only is e∗ ∈ I, but e∗ is
idempotent:
(e∗)^2 = e0 + e0 e1 − e0 e1 + e1 e0 + e1 − e1 e0 e1 − e0 e1 e0 − e0 e1^2 + e0 e1 e0 e1 = e0 + e1 − e0 e1 = e∗,
using e0^2 = e0 , e1^2 = e1 , and e1 e0 = 0.
Corollary 5.1.2. Every semi-simple Artinian ring R has a unit element.
Proof. A ring R is also an ideal of R. Therefore, as we have seen from the previous
corollary, there is a multiplicative identity element in the ideal R for all of the elements
of R. In other words, R has unity.
In order to tackle the First Wedderburn-Artin theorem, we will use the notion
of a Peirce decomposition. As we have shown, semi-simple Artinian rings always contain
a two-sided unit element. Suppose we have an idempotent element e not equal to 1. Then
for any element x of a ring R, we can say x = 1x = 1x + xe − xe = xe + x(1 − e); in other
words, R = Re + R(1 − e). As it will turn out, the sum of Re and R(1 − e) is a
direct sum; that is, Re ∩ R(1 − e) = (0). This is a Peirce decomposition. Let us observe
the statement and the proof before I give everything away in this paragraph.
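Before the formal statement, here is the decomposition made concrete in R = Z6 with the idempotent e = 3 (a minimal sketch):

```python
n, e = 6, 3
assert (e * e) % n == e                          # e = 3 is idempotent, e != 1
Re = {(r * e) % n for r in range(n)}             # the ideal R e
Rf = {(r * (1 - e)) % n for r in range(n)}       # the ideal R (1 - e)
assert Re == {0, 3} and Rf == {0, 2, 4}
assert Re & Rf == {0}                            # the sum is direct
assert {(a + b) % n for a in Re for b in Rf} == set(range(n))   # R = Re ⊕ R(1-e)
```

Here 1 − e ≡ 4 (mod 6) is the complementary idempotent, and the two pieces are (isomorphic copies of) Z2 and Z3 .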
R/R(1 − e) = (A + R(1 − e))/R(1 − e) ≅ A/(A ∩ R(1 − e)) = A/(0) ≅ A
an ideal of A), and thus a subgroup of R, we need only show that ABA absorbs elements
of R both from the left and from the right. Let r ∈ R and Σ_{i,j,k} a_i b_j a_k ∈ ABA; then
r(Σ_{i,j,k} a_i b_j a_k ) = Σ_{i,j,k} (ra_i )b_j a_k ∈ ABA and (Σ_{i,j,k} a_i b_j a_k )r = Σ_{i,j,k} a_i b_j (a_k r) ∈ ABA,
since both ra_i and a_k r belong to the ideal A for all i, k.
Now, since A is also a semi-simple Artinian ring by Corollary 5.1.2, there is
1_A ∈ A guaranteeing ABA ≠ (0). Hence, by the minimality of A in R, ABA = A. Of
course, A = ABA ⊂ B ⊂ A implies B = A; therefore A is a simple ring.
Recall from the Peirce decomposition, R = A ⊕ T0 , where T0 is an ideal of R. Of
course, T0 an ideal of R implies T0 is also a semi-simple Artinian ring by Lemma 5.1.1.
Therefore, since T0 contains a minimal ideal by Lemma 4.2.1, repeat the process and pick
a minimal ideal A1 of R lying in T0 . As we have seen, A1 is a simple Artinian ring and by
Peirce decomposition T0 = A1 ⊕ T1 . Hence, we get R = A ⊕ T0 = A ⊕ A1 ⊕ T1 . Continuing
we get A = A0 , A1 , · · · , Ak , . . ., distinct, simple ideals of R, each of which is Artinian,
and such that the sums A0 + · · · + Ak are all direct. The sum is direct since if i ≠ j and
Ai ∩ Aj ≠ (0), then Ai ∩ Aj is an ideal smaller than both Ai and Aj , contradicting their
minimality. Lastly, we claim that for some n ∈ N, R = A0 ⊕ · · · ⊕ An . That is, eventually
we will reach a Tn−1 that is minimal, making Tn−1 = An and finishing the process. If
not, and this process goes on forever, define R0 = A0 ⊕ A1 ⊕ · · · , R1 = A1 ⊕ A2 ⊕ · · · ,
Rm = Am ⊕ Am+1 ⊕ · · · . Then, R0 ⊋ R1 ⊋ · · · ⊋ Rm ⊋ · · · is a descending chain of
proper ideals of R that will not stop, contradicting R being Artinian. Thus, it must be that
R = A0 ⊕ · · · ⊕ An for some n in N.
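For a commutative illustration of the finished decomposition, Z30 splits into three simple (in fact, field) summands cut out by the orthogonal idempotents 15, 10, and 6; a small sketch:

```python
n, es = 30, [15, 10, 6]
assert sum(es) % n == 1                          # the idempotents sum to the unity
for i, e in enumerate(es):
    assert (e * e) % n == e                      # each is idempotent
    for f in es[i + 1:]:
        assert (e * f) % n == 0                  # and they are pairwise orthogonal
ideals = [sorted({(e * r) % n for r in range(n)}) for e in es]
print([len(I) for I in ideals])                  # [2, 3, 5]: copies of Z_2, Z_3, Z_5
```

The three ideals 15Z30 , 10Z30 , and 6Z30 intersect trivially and sum to all of Z30 , realizing Z30 = Z2 ⊕ Z3 ⊕ Z5 .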
Corollary. If a semi-simple Artinian ring R decomposes as R = A1 ⊕ · · · ⊕ Ak with each
Ai simple, then every minimal ideal of R is one of the Ai .
Proof. Recall that if A is both an ideal of R and simple, then A is a minimal ideal of
R. That is, if µ ≠ (0) is an ideal of R and µ ⊂ A, then by simplicity, µ = A. Now, let
R be a semi-simple Artinian ring and R = A1 ⊕ · · · ⊕ Ak , where the Ai are simple. Let
B ≠ (0) be a minimal ideal of R. Since R is a semi-simple Artinian ring, then we know
there exists 1 ∈ R. Hence, we know RB ≠ (0). Note that RB = A1 B ⊕ · · · ⊕ Ak B ≠ (0),
so it must be that for some i between 1 and k, Ai B ≠ (0). Of course, both Ai and B being
ideals of R makes Ai B an ideal of R. Note that since Ai B ⊂ Ai , by the minimality of
Ai and Ai B ≠ (0), then Ai B = Ai . Similarly, Ai B ⊂ B implies, by the minimality of B,
that Ai B = B. Thus, B = Ai for some i.
With everything we have covered up to this point, finding the structure of simple
Artinian rings will be swift. For simple Artinian rings, primitivity will make a comeback.
Consequently, we shall revisit R-modules; simple, faithful R-modules to be precise. The
one topic that will be new in this chapter is density. While there is a connection between
this use of density and density in the topological sense, density here refers to the ring of
endomorphisms over a vector space. Once we reach Jacobson’s density theorem, we will
be ready to prove the second Wedderburn-Artin theorem.
We shall now start using an alternative definition for primitive rings. Theo-
rem 5.2.1 will give us a definition that only deals in ring theory. Since we have managed
to put the Jacobson radical in terms of maximal regular ideals, this new definition con-
nects semi-simplicity with primitivity.
Theorem 5.2.1. A ring R is primitive if and only if there exists a maximal regular left
ideal ρ of R such that (ρ : R) = (0).
Proof. Suppose R is a primitive ring. Then, by definition, there exists a simple, faithful
R-module M . By Theorem 3.1.1, M ≅ R/ρ for some maximal regular left ideal ρ. Since
A(M ) = (ρ : R) by Proposition 3.2.1 and M is faithful, then (ρ : R) = (0). Hence, there
exists a maximal regular left ideal ρ ⊂ R such that (ρ : R) = (0).
Now, suppose there exists a maximal regular left ideal ρ ⊂ R such that (ρ : R) =
(0). By Theorem 3.1.1, R/ρ is a simple R-module. Of course, since A(R/ρ) = (ρ : R) = (0),
then R/ρ is faithful. Thus, R is primitive.
Proposition 5.2.1. If a ring R is both simple and semi-simple, then R is primitive.
Proof. Suppose a ring R is both simple and semi-simple. Consider the ideals (ρ : R) over
all of the maximal regular left ideals ρ of R. By the simplicity of R, either (ρ : R) = (0)
or (ρ : R) = R for each ρ. If (ρ : R) = R for every ρ, then ∩(ρ : R) = J(R) = R,
contradicting R being semi-simple. Thus, for some maximal regular left ideal ρ0 , (ρ0 :
R) = (0). Hence, R is primitive.
Proposition. Every simple ring with unity is primitive.
Proof. Let R be a simple ring and say 1 ∈ R. Therefore, the trivial ideal (0) is regular.
By Lemma 3.1.1, there exists a maximal regular left ideal ρ in R that contains (0).
Therefore, R/ρ is a simple R-module. Moreover, on account of R/ρ being simple, by
definition R(R/ρ) ≠ (0); thus, A(R/ρ) ≠ R. Since R is simple and A(R/ρ) is an ideal,
then A(R/ρ) = (0). Hence, R/ρ is a simple, faithful R-module. Hence, R is primitive.
Definition 5.2.1. Let M be a (left) vector space over a division ring ∆. A ring R is said
to be dense on M if for every n ∈ N and v1 , · · · , vn in M which are linearly independent
over ∆ and arbitrary elements w1 , · · · , wn in M , there is an element r ∈ R such that
rvi = wi for i = 1, 2, · · · , n.
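Density is concrete for R = Mat2 (Q) acting on M = Q^2 with ∆ = Q: given independent v1 , v2 and arbitrary w1 , w2 , the matrix r = W V^{-1} (with the vi as the columns of V and the wi as the columns of W) does the job. A sketch under those assumptions (helper names are mine):

```python
from fractions import Fraction as F

def density_transform(v1, v2, w1, w2):
    """Return a 2x2 matrix r with r v1 = w1 and r v2 = w2 (vi independent)."""
    det = v1[0] * v2[1] - v1[1] * v2[0]
    assert det != 0                              # v1, v2 linearly independent
    inv = [[v2[1] / det, -v2[0] / det],          # V^{-1} for V = [v1 | v2]
           [-v1[1] / det, v1[0] / det]]
    W = [[w1[0], w2[0]], [w1[1], w2[1]]]         # W = [w1 | w2]
    return [[sum(W[i][t] * inv[t][j] for t in range(2)) for j in range(2)]
            for i in range(2)]

v1, v2 = [F(1), F(0)], [F(1), F(1)]
w1, w2 = [F(2), F(3)], [F(0), F(5)]
r = density_transform(v1, v2, w1, w2)
assert [sum(r[i][t] * v1[t] for t in range(2)) for i in range(2)] == w1
assert [sum(r[i][t] * v2[t] for t in range(2)) for i in range(2)] == w2
```

The point of the definition is that for a dense ring such an r exists for every finite independent family, not merely for a basis as in this two-dimensional illustration.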
Why are we talking about rings and vector spaces? Recall that whenever we
have a simple R-module M , then by Schur’s Lemma the set of R-module endomorphisms,
denoted as EndR (M ), is a division ring. Consequently, M is thus not just a mere
EndR (M )-module, but a vector space over EndR (M ). Hence, we can see an example
where the elements of a ring R act on a vector space over a division ring. It is this
specific relation between ring, module, and division ring that Jacobson’s density theorem
explores.
Jacobson’s density theorem connects density and primitive rings. Additionally,
the second Wedderburn-Artin theorem depends heavily on Jacobson’s density theorem.
As a sneak peek, know that it is possible to prove that a simple Artinian ring is semi-
simple, and thus by Proposition 5.2.1 it is primitive.
Jacobson Density Theorem. Let R be a primitive ring and let M be a simple, faithful
R-module. If ∆ = EndR (M ), then R is a dense ring of linear transformations of M over
∆.
Proof. Let R be a primitive ring and let M be a simple, faithful R-module. As we have
seen, M is a left vector space over ∆ = EndR(M). We will prove by induction on the
dimension of V over ∆ that if V ⊂ M is of finite dimension over ∆ and m ∈ M, m ∉ V,
then there exists an r ∈ R such that rV = (0) but rm ≠ 0. Suppose that dim∆(V) = 0;
then V = (0), so m ∉ V means m ≠ 0. Since M is simple, the set of elements of M
annihilated by all of R is a submodule different from M, so there exists r ∈ R such that
rm ≠ 0. Of course, V = (0) implies rV = (0). Therefore, the statement is true for the
base case dim∆(V) = 0.
Now, suppose V = V0 + ∆w, where dim∆(V0) = dim∆(V) − 1 and where w ∉ V0.
By the induction hypothesis, if A(V0) = {x ∈ R | xV0 = (0)}, then for every y ∈ M with
y ∉ V0 there is an r ∈ A(V0) such that ry ≠ 0; notice this shows that if y ∉ V0, then
A(V0)y ≠ (0). In particular, since w ∉ V0, then A(V0)w ≠ (0).
Recall that, V0 being a subset of M, A(V0) is a left ideal of R. Therefore, we can
show A(V0)w is a submodule of M. Of course, for any a ∈ A(V0) ⊂ R, aw ∈ M, so
A(V0)w ⊂ M. First, 0 ∈ A(V0) implies 0w = 0 ∈ A(V0)w. Second, if aw ∈ A(V0)w
with a ∈ A(V0), then −a ∈ A(V0) and (−a)w = −(aw) ∈ A(V0)w. Third, if aw, bw ∈
A(V0)w with a, b ∈ A(V0), then aw + bw = (a + b)w ∈ A(V0)w since a + b ∈ A(V0).
Lastly, if r ∈ R and aw ∈ A(V0)w with a ∈ A(V0), then A(V0) being a left ideal of R
implies ra ∈ A(V0), meaning r(aw) = (ra)w ∈ A(V0)w; thus A(V0)w is an R-submodule
of M. Since A(V0)w is a nonzero submodule of the simple module M, we conclude
A(V0)w = M.
Suppose there is an m ∈ M, m ∉ V, such that whenever rV = (0) then rm = 0;
we will show this cannot happen. Let m ∈ M, m ∉ V, have the property that whenever
rV = (0) then rm = 0. Since M = A(V0)w, consider the mapping τ : M → M given by
τ(aw) = am for all a ∈ A(V0). First, we will show τ is well-defined. Say a1w = a2w
for some a1, a2 ∈ A(V0); then (a1 − a2)w = 0. Note that if v ∈ V with v = v0 + cw,
for some v0 ∈ V0 and c ∈ ∆, then a1 − a2 ∈ A(V0) ⊂ R and c ∈ EndR(M) imply
(a1 − a2)v = (a1 − a2)v0 + c((a1 − a2)w) = 0.
Proof. Let R be a simple Artinian ring. Recall that R being simple implies either J(R) =
(0) or J(R) = R. Since R is Artinian, J(R) is nilpotent. However, by definition, R being
a simple ring implies R² ≠ (0), and hence R² = R (R² is an ideal of R). Therefore, R
is not nilpotent as an ideal. Consequently, J(R) cannot be R (if J(R) = R, then R
would be nilpotent), so J(R) = (0). Since R has been shown to be both simple and
semi-simple, by Proposition 5.2.1 R is primitive. Let M be a simple, faithful R-module;
then M is a left vector space over the division ring ∆ = EndR(M).
We will show M is finite-dimensional over ∆ by contradiction. So, suppose
instead that {v1, . . . , vn, . . .} is an infinite set of linearly independent vectors over ∆. Let
Bm = {v1, . . . , vm}; then, since Bm ⊂ M for all m ∈ N, each A(Bm) is a left ideal of R.
Moreover, by Jacobson's density theorem, A(Bi+1) ⊊ A(Bi) for every i ∈ N. Of course,
if x ∈ A(Bi+1), then xvk = 0 for k = 1, 2, . . . , i + 1, meaning x ∈ A(Bi). Moreover,
since vi+1 ∉ span(Bi), by density there exists r ∈ R so that r(span(Bi)) = (0) but
rvi+1 ≠ 0; in other words, r ∈ A(Bi) but r ∉ A(Bi+1). Therefore, A(B1) ⊋ A(B2) ⊋ · · · ⊋
A(Bm) ⊋ · · · is a descending chain of left ideals of R that never terminates. This
is a contradiction to R being Artinian. Hence, M is finite-dimensional over ∆.
Now that we know M is finite-dimensional over ∆, say dim∆(M) = n, we can
prove R is isomorphic to End∆(M) and thus isomorphic to Matn(∆op). Let {v1, . . . , vn}
be a basis for M and define Φ : R → End∆(M) by Φ(r) = Φr, where Φr(x) = rx
for all x ∈ M. First, we will show Φ is injective. Suppose Φa = Φb; then
ax = bx for all x ∈ M. Hence, (a − b)x = 0 for all x ∈ M, meaning a − b ∈ A(M). However,
M is a faithful R-module, so a − b ∈ (0), hence a = b. Next, since R acts densely on M,
we can show Φ is surjective. Let f ∈ End∆(M); recall that a function f ∈ End∆(M) is
determined by the outputs of the basis elements under f. Since f(vi) ∈ M for all i and R is
dense on M, there exists r ∈ R so that rvi = f(vi) for all i. Since Φr(vi) = f(vi) for all
i, Φr and f agree on the basis elements. Thus Φr = f. Hence, for any
f ∈ End∆(M), there exists r ∈ R so that Φ(r) = f, so Φ is surjective. Now, suppose
a, b ∈ R and x ∈ M; then Φa+b(x) = (a + b)x = ax + bx = Φa(x) + Φb(x), so Φ(a + b) =
Φ(a) + Φ(b), and Φab(x) = (ab)x = a(bx) = Φa(Φb(x)), so Φ(ab) = Φ(a)Φ(b) (composition
of functions is the multiplication of End∆(M)). Hence, Φ is an isomorphism between R
and End∆(M), so R ≅ End∆(M). Since Matn(∆op) ≅ End∆(M) by Theorem 2.1.1, then
Matn(∆op) ≅ R. Of course, since ∆ is a division ring, so is ∆op. Letting D = ∆op, we
conclude R ≅ Matn(D).
Now, for the converse, we will show that for any division ring D, Matn(D) is a
simple Artinian ring. Of course, by Theorem 4.2.2 we saw that for any division ring D,
Matn(D) is an Artinian ring. Moreover, by Proposition 2.3.1, Matn(D) is a simple ring.
Hence, Matn(D) is a simple Artinian ring.
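The simplicity half of this converse can be made tangible in a small case. For any nonzero A in Mat2(Q), the matrix-unit identities E_ij A E_ki = a_jk E_ii let one rebuild the identity matrix as a sum of products X A Y, so the two-sided ideal generated by A is the whole ring. The Python sketch below (illustrative only; E, matmul, and identity_from are our own helper names) carries out that computation:

```python
from fractions import Fraction as F

def matmul(A, B):
    """Product of two 2x2 matrices given as nested lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def E(i, j):
    """Matrix unit E_ij: 1 in position (i, j), zeros elsewhere."""
    m = [[F(0), F(0)], [F(0), F(0)]]
    m[i][j] = F(1)
    return m

def identity_from(A):
    """Given a nonzero A in Mat_2(Q), exhibit the identity matrix as a sum
    of products inside the two-sided ideal (A), using E_ij A E_ki = a_jk E_ii."""
    # locate a nonzero entry a_jk of A
    j, k = next((p, q) for p in range(2) for q in range(2) if A[p][q] != 0)
    a = A[j][k]
    total = [[F(0), F(0)], [F(0), F(0)]]
    for i in range(2):
        term = matmul(matmul(E(i, j), A), E(k, i))  # equals a_jk * E_ii
        for r in range(2):
            for c in range(2):
                total[r][c] += term[r][c] / a
    return total  # the identity, no matter which nonzero A we started from
```

Since any nonzero two-sided ideal of Mat2(Q) contains some nonzero A, and hence the identity, the only ideals are (0) and the whole ring; the same matrix-unit trick works over any division ring D and any n.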
Chapter 6
Implications
It is time we put our theorems to the test. Since the Wedderburn-Artin theorems
talk about semi-simple Artinian rings, we shall look at some of those. Mainly, we will
observe the impact of these theorems on some finite rings and finite-dimensional group
algebras over fields. These two types of rings exemplify the notion of Artinian rings.
First, we will take a look at Zn. Since Zn is finite for all n ∈ N, it is certainly
Artinian. That is, Zn has a finite number of elements, thus a finite number of possible
subsets. If the number of subsets is finite, so must be the number of ideals. Hence, any chain
of ideals will always stabilize. Now, while the rings Zn are all Artinian, not all are semi-simple.
Before we dive into theorems about Zn, let us first observe one Zn that is semi-simple.
After we observe this example, we can then see what drives the theorems we shall explore
soon after.
Example 6.0.1. Consider the Artinian ring Z30. Since Z30 is commutative and unitary,
all left ideals are two-sided and all ideals are regular. Hence, the Jacobson radical of
Z30, J(Z30), is the intersection of all the maximal ideals of Z30. The maximal ideals of
Z30 are ⟨2⟩, ⟨3⟩, and ⟨5⟩, hence J(Z30) = ⟨2⟩ ∩ ⟨3⟩ ∩ ⟨5⟩ = (0), so Z30 is semi-simple.
Now, to find minimal ideals, we first find an idempotent element of Z30 and construct a
principal ideal with it. To check that it is a minimal ideal, we test it using Theorem 4.3.2.
Luckily, finding an idempotent element in Z30 proves to not be difficult, as 6 fits the
bill. Now, let us observe 6Z30 = ⟨6⟩ = {0, 6, 12, 18, 24}. As we can see, ⟨6⟩ is not only
minimal, but a field! Now, as we did in the first Wedderburn-Artin theorem, we have Z30 =
6Z30 ⊕ (1 − 6)Z30. Note that 1 − 6 = 25 in Z30, so we have Z30 = 6Z30 ⊕ 25Z30. As it becomes
evident, 25Z30 = ⟨25⟩ = {0, 5, 10, 15, 20, 25} is not a minimal ideal. Regardless, what
we must do now is find a minimal ideal of Z30 inside ⟨25⟩. Note that 10 is also idempotent
in Z30, and ⟨10⟩ is a minimal ideal of Z30. Hence, if R1 = ⟨25⟩, and since 25 is the unity
of R1, then we have that R1 = 10R1 ⊕ (25 − 10)R1. Note that if I is an ideal of a ring R
and e ∈ I is an idempotent element of R, then eI = eR: since I ⊂ R, then eI ⊂ eR; on
the other hand, since e ∈ I, eR ⊂ I, hence e(eR) ⊂ eI, and since e(eR) = e²R = eR,
we have eR ⊂ eI. Hence, we can simply say that ⟨25⟩ = ⟨10⟩ ⊕ ⟨25 − 10⟩ with respect
to Z30, not R1. Finally, (25 − 10)R1 = ⟨15⟩ is also a minimal ideal, so we stop the Peirce
decomposition of Z30 here. Therefore, we have Z30 = ⟨6⟩ ⊕ ⟨10⟩ ⊕ ⟨15⟩, the direct sum of
simple (the minimal ideals are fields) Artinian rings.
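The arithmetic behind this decomposition is easy to verify directly: 6, 10, and 15 are idempotent mod 30, pairwise orthogonal, and sum to the unity, and the three ideals they generate have prime orders 5, 3, 2. A quick Python sanity check (not part of the proof):

```python
n = 30
gens = [6, 10, 15]  # generators of the minimal ideals found above

# each generator is idempotent in Z_30
for e in gens:
    assert (e * e) % n == e

# the idempotents are pairwise orthogonal: cross products vanish mod 30
for x in gens:
    for y in gens:
        if x != y:
            assert (x * y) % n == 0

# they sum to the unity: 6 + 10 + 15 = 31 = 1 in Z_30,
# which is what makes <6> + <10> + <15> a direct sum equal to all of Z_30
assert sum(gens) % n == 1

# the three ideals have prime orders 5, 3, 2, hence are minimal
orders = sorted(len({(e * r) % n for r in range(n)}) for e in gens)
assert orders == [2, 3, 5]
```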
So, why did Z30 work so nicely? Well, we knew Z30 was Artinian, so that was
half the battle. If we also had that it was semi-simple, our theorems should do the rest
of the work. Why was Z30 semi-simple?
Lemma 6.0.1. Let n, m ∈ N with m | n. Then Zn/⟨m⟩ ≅ Zm.

Proof. Let n, m ∈ N and say m | n, where n = mq, q ∈ N. Consider the map φ : Zn → Zm
sending the residue of a modulo n to the residue of a modulo m. First, we will show φ is
well-defined. Suppose r, s ∈ Z with r ≡ s (mod n), meaning r − s = nk for some k ∈ Z.
Since n = mq, then r − s = (mq)k = m(qk). Thus, r ≡ s (mod m), so r and s determine
the same element of Zm. Now, suppose a, b ∈ Zn. Since reduction modulo m respects
sums and products of representatives, φ(a + b) = φ(a) + φ(b) and φ(ab) = φ(a)φ(b).
Moreover, φ is surjective: given y ∈ Zm, the residue of y in Zn maps to y. Hence, φ is a
ring homomorphism from Zn onto Zm. Lastly, Ker φ = {a ∈ Zn | φ(a) = 0} =
{a ∈ Zn | m divides a} = ⟨m⟩. Therefore, by the First Isomorphism Theorem, Zn/⟨m⟩ ≅ Zm.
Theorem 6.0.1. If n = p1^k1 · · · pr^kr, with the pi all distinct primes, then J(Zn) = ⟨p1 · · · pr⟩.

Proof. Let n = p1^k1 · · · pr^kr, where the pi are all distinct primes. Of course, Zn is a unitary
commutative ring, so J(Zn) is the intersection of all maximal ideals of Zn by Theorem 3.3.1.
With Lemma 6.0.1, we can quickly prove that ⟨pi⟩ is maximal in Zn for all i:
Zn/⟨pi⟩ ≅ Zpi, so Zn/⟨pi⟩ being a field implies ⟨pi⟩ is a maximal ideal. Moreover, every
ideal of Zn is ⟨d⟩ for some divisor d of n, and ⟨d⟩ is maximal precisely when d is prime;
hence the ⟨pi⟩ are all of the maximal ideals, and J(Zn) = ⟨p1⟩ ∩ · · · ∩ ⟨pr⟩. Now, if
x ∈ ⟨p1 · · · pr⟩, then x = (p1 · · · pr)q for some q ∈ Zn. Therefore, x ∈ ⟨pi⟩ for every i,
thus x ∈ ⟨p1⟩ ∩ · · · ∩ ⟨pr⟩. Likewise, if x ∈ ⟨p1⟩ ∩ · · · ∩ ⟨pr⟩, then x ∈ ⟨pi⟩, and so pi
divides x, for all i. Since the pi are distinct primes, p1 · · · pr divides x; say x = (p1 · · · pr)t
for some t ∈ Z. Hence, x = (p1 · · · pr)t ∈ ⟨p1 · · · pr⟩, so ⟨p1 · · · pr⟩ = ⟨p1⟩ ∩ · · · ∩ ⟨pr⟩ = J(Zn),
as desired.
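The theorem can be sanity-checked by brute force for small n: compute each maximal ideal ⟨p⟩ as a set of residues, intersect them, and compare with ⟨p1 · · · pr⟩. A Python sketch (the function names are our own):

```python
def prime_divisors(n):
    """Distinct prime divisors of n, by trial division."""
    ps, d = [], 2
    while d * d <= n:
        if n % d == 0:
            ps.append(d)
            while n % d == 0:
                n //= d
        d += 1
    if n > 1:
        ps.append(n)
    return ps

def principal(g, n):
    """The principal ideal <g> of Z_n, as a set of residues."""
    return {(g * r) % n for r in range(n)}

def jacobson_radical(n):
    """J(Z_n), computed as the intersection of the maximal ideals <p>
    over the distinct prime divisors p of n."""
    rad = set(range(n))
    for p in prime_divisors(n):
        rad &= principal(p, n)
    return rad
```

For instance, `jacobson_radical(30)` comes out as {0}, confirming Z30 is semi-simple, while `jacobson_radical(12)` is {0, 6} = ⟨2 · 3⟩: the semi-simple Zn are exactly those with n squarefree.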
Hence, we have pinpointed which Zn are semi-simple. Now that we know which
Zn to pick, let us step back and analyze our example on Z30 to generalize how to
decompose a semi-simple Artinian Zn. First, we know from Lemma 5.1.2 that a semi-
simple Zn will split into a direct sum of all its minimal ideals. Particularly, if n = p1 · · · pr,
then the principal ideals ⟨n/pi⟩ will be all the minimal ideals of Zn (they are all of prime
order). With such knowledge beforehand, the Peirce decomposition is no longer necessary
to find the structure of Zn.
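Since every ideal of Zn is ⟨d⟩ for some divisor d of n, the claim about the minimal ideals can be checked exhaustively for small squarefree n. A brute-force Python sketch (our own helper names, not notation from the text):

```python
def ideal(g, n):
    """The principal ideal <g> of Z_n, as a frozenset of residues."""
    return frozenset((g * r) % n for r in range(n))

def minimal_ideals(n):
    """All minimal nonzero ideals of Z_n, found by brute force over the
    divisors of n (every ideal of Z_n is <d> for some d | n)."""
    ideals = {ideal(d, n) for d in range(1, n + 1) if n % d == 0}
    nonzero = [I for I in ideals if I != {0}]
    # an ideal is minimal iff no other nonzero ideal sits properly inside it
    return {I for I in nonzero if not any(J < I for J in nonzero)}
```

For n = 30 this recovers exactly ⟨6⟩ = ⟨30/5⟩, ⟨10⟩ = ⟨30/3⟩, and ⟨15⟩ = ⟨30/2⟩, matching the Peirce decomposition worked out in Example 6.0.1.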
Let us move on to group algebras. While Zn shows how one might approach
finite semi-simple rings in general, group algebras will give us insight into how to deal
with vector spaces over fields. This time around, looking at the order of ideals will be
rendered useless as a tactic to spot minimal ideals. Instead, ideas from linear algebra
will prove to have a better effect. The reason why we shall explore group algebras as
opposed to arbitrary vector spaces is the result of Maschke's Theorem, which guarantees
semi-simplicity. Of course, let us define what a group algebra is and prove a lemma before
jumping right into the theorem.
Definition 6.0.1. Let F be a field and let G be a finite group of order n. We define
the group algebra of G over F, denoted by F(G), as the set { Σi αi gi | αi ∈ F, gi ∈ G } of
formal sums, where the group elements are considered to be linearly independent over F,
addition is defined naturally, and multiplication is defined by the distributive laws
together with (αi gi)(βj gj) = (αi βj)(gi gj).
Proof. Let N ∈ EndD (V ) be (non-zero) nilpotent. We will prove the lemma by induction
on the dimension of V over D.
Example 6.0.2. Let ⟨α⟩ be the cyclic group of order 3 generated by α. Consider the
group algebra R(⟨α⟩) = {c1 e + c2 α + c3 α² | ci ∈ R}. By Maschke's Theorem, we know
R(⟨α⟩) is a semi-simple (and Artinian) ring. Note that the element (e + α + α²)/3 is a
nonzero idempotent element of R(⟨α⟩). In fact, if we set ζ = (e + α + α²)/3, then
ζ(c1 e + c2 α + c3 α²) = c1 ζe + c2 ζα + c3 ζα² = c1 ζ + c2 ζ + c3 ζ = (c1 + c2 + c3)ζ.
In other words, the product of any element of R(⟨α⟩) and ζ results in a scalar multiple of ζ.
On that note, we can observe how [R(⟨α⟩)]ζ is a field with unity ζ (ζ ∈ R(⟨α⟩) implies
ζζ = ζ² = ζ ∈ [R(⟨α⟩)]ζ). By Theorem 4.3.2, we have that [R(⟨α⟩)]ζ is a minimal
ideal. Hence, R(⟨α⟩) = [R(⟨α⟩)]ζ ⊕ [R(⟨α⟩)](e − ζ). Sadly, proving [R(⟨α⟩)](e − ζ) is a
minimal ideal (and a field) directly is no easy task, so we will use a different strategy.
Recall, R(⟨α⟩) is still a vector space over R, so we shall pick the basis B = {ζ, α − e, α² − α};
let u1 = ζ, u2 = α − e, and u3 = α² − α. Since B is a basis for R(⟨α⟩), we can say
R(⟨α⟩) = SpanR{u1} ⊕ SpanR{u2, u3}. From our previous discussion of [R(⟨α⟩)]ζ, we
can see that [R(⟨α⟩)]ζ = SpanR{u1}. Hence,
it must be that SpanR{u2, u3} = [R(⟨α⟩)](e − ζ). Correspondingly, we will show that
SpanR{u2, u3} is a minimal ideal by showing that it does not contain a smaller nonzero
ideal. Of course, we will do so by contradiction. Suppose there exists an ideal I of R(⟨α⟩)
such that (0) ⊂ I ⊂ SpanR{u2, u3} and I ≠ SpanR{u2, u3}. We can quickly show I is a
vector subspace of R(⟨α⟩). By definition, I being an ideal implies it is closed under addition
and contains zero (the zero vector). Now, to show I is closed under scalar multiplication,
let c ∈ R and v ∈ I; then cv = cev = (ce)v. Since ce ∈ R(⟨α⟩), v ∈ I, and I is an ideal of
R(⟨α⟩), we have that (ce)v ∈ I; hence, cv ∈ I. Now, since I is a subspace of R(⟨α⟩) and
a proper subset of SpanR{u2, u3}, I must be either (0) or a 1-dimensional subspace of
SpanR{u2, u3}. If I is a 1-dimensional subspace of SpanR{u2, u3}, say with basis
{v} (v ≠ 0), it must be that for any x ∈ R(⟨α⟩), xv = λv for some λ ∈ R. Of course, if
x ∈ R(⟨α⟩), then x is a formal sum of powers of α. Hence, we need only check whether there
exists a scalar λ ∈ R such that αv = λv. Since u2, u3 form a basis for
SpanR{u2, u3}, with αu2 = u3 and αu3 = −u2 − u3, we can say the linear transformation on
SpanR{u2, u3} induced by α is
Tα = [ 0  −1
       1  −1 ].
Hence, we must see if there exists a scalar λ ∈ R such that Tα v = λv. Upon
a simple calculation of eigenvalues, we see that the characteristic polynomial of Tα is
λ² + λ + 1, which is irreducible over R. Since no such λ exists in R, I cannot be 1-
dimensional. Hence, I = (0), which means SpanR{u2, u3} is a minimal ideal of R(⟨α⟩).
Therefore, R(⟨α⟩) = [R(⟨α⟩)]ζ ⊕ [R(⟨α⟩)](e − ζ) is the full Peirce decomposition of R(⟨α⟩).
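The eigenvalue computation is short enough to verify mechanically: the trace and determinant of Tα give the characteristic polynomial λ² + λ + 1, whose discriminant is negative, so Tα has no real eigenvalue; and Tα³ = I, as one expects from α having order 3. In Python (a check, not a proof; T and matmul are our own names):

```python
T = [[0, -1],
     [1, -1]]   # T_alpha in the basis {u2, u3}

def matmul(A, B):
    """Product of two 2x2 matrices given as nested lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

trace = T[0][0] + T[1][1]                      # -1
det = T[0][0] * T[1][1] - T[0][1] * T[1][0]    # 1
# characteristic polynomial: x^2 - trace*x + det = x^2 + x + 1
disc = trace * trace - 4 * det                 # -3 < 0: no real eigenvalue

# alpha has order 3, so T^3 is the identity
T3 = matmul(T, matmul(T, T))
```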
Lastly, to show what these two simple rings are isomorphic to, consider the ring
of endomorphisms of each. Say R1 = [R(⟨α⟩)]ζ and R2 = [R(⟨α⟩)](e − ζ).
Recall that R1, R2 being fields implies that they are primitive rings by Proposition 5.2.3.
As such, they must have simple, faithful modules, which can be taken to be R1 and
R2 themselves, respectively. On that note, D1 = EndR1(R1) and D2 = EndR2(R2)
are division rings by Schur's Lemma. Lastly, Wedderburn's second theorem shows that
R1 ≅ EndD1(R1) and R2 ≅ EndD2(R2). Now, we need only consider what EndD1(R1)
and EndD2(R2) are. For R1, it is clear to see that R ≅ R1 ≅ EndD1(R1). As for
EndD2(R2), since we need D2-module homomorphisms on R2, we need only check all of
the linear transformations of R2 that commute with α, or rather with Tα. Hence, consider
the 2 × 2 matrices such that

[ a  b ] [ 0  −1 ]   [ 0  −1 ] [ a  b ]
[ c  d ] [ 1  −1 ] = [ 1  −1 ] [ c  d ]
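One way to finish this computation (sketched here as an aside; the enumeration bound and helper names are our own) is to note that equating the two products forces c = −b and d = a + b, so the matrices commuting with Tα are exactly the span of I and Tα, a two-dimensional commutative algebra over R; since λ² + λ + 1 has no real root, every nonzero element of this span is invertible, so the commutant is a field of dimension 2 over R, a copy of the complex numbers. A brute-force Python check over small integer matrices:

```python
T = [[0, -1], [1, -1]]   # T_alpha

def matmul(A, B):
    """Product of two 2x2 matrices given as nested lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def commutes(M):
    return matmul(M, T) == matmul(T, M)

# enumerate all integer matrices with entries in [-2, 2]; those commuting
# with T_alpha should be exactly the span of I and T, i.e. [[a, b], [-b, a+b]]
rng = range(-2, 3)
found = sorted((a, b, c, d)
               for a in rng for b in rng for c in rng for d in rng
               if commutes([[a, b], [c, d]]))
expected = sorted((a, b, -b, a + b)
                  for a in rng for b in rng if a + b in rng)
```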
In both examples, our rings turned out to be direct sums of fields. Such is the
consequence of dealing with commutative rings. In general, if we assume our ring to be
commutative on top of being semi-simple Artinian, then the ring will decompose into a
direct sum of fields. That is, if a ring R is semi-simple Artinian, then the first Wedderburn-
Artin theorem states that it will be a finite direct sum of simple Artinian rings. Then,
Lemma 5.1.2 states that these simple rings are all of the minimal (two-sided) ideals of R.
Next, we observe Corollary 5.1.1, which states that every minimal ideal of R is of the form
Rei for some idempotent element ei of R. Lastly, we note Theorem 4.3.2 and recall that
in the commutative case Rei = ei Rei, and ei Rei is a division ring; a commutative division
ring is a field. Hence, the minimal ideals Rei would all be fields, meaning R is a direct sum
of fields. Our group algebra example also came out this way since F(G) is commutative if
and only if G is commutative (F, being a field, always is). Hence, one could try an example with a group G
that is not commutative and obtain simple summands that are not fields. Sadly, this is the
end of the road. We have proved the Wedderburn-Artin theorems and have given quick
examples of the theorems in action. While we could explore the results of the theorems
in more depth, that sounds like a paper for another day. Quite honestly, this paper could
have ended with Chapter 5, but I figured a quick display of our theorems would be of
great use. Now, taking the examples we have covered thus far, the reader can explore
semi-simple rings on their own if they wish to. Good luck!
Bibliography
[C] Curtis, C. W., and Reiner, I., Representation Theory of Finite Groups and Associative
Algebras, Wiley, New York, 1988.
[Her] Herstein, I. N., Noncommutative Rings, Carus Mathematical Monographs, No. 15,
Mathematical Association of America, 1968.
[Hun] Hungerford, T., Algebra, Graduate Texts in Mathematics, Vol. 73, Springer-
Verlag, New York, 1974.
[J] Jacobson, N., Basic Algebra II, W. H. Freeman and Company, San Francisco,
1980.