
Matrix (mathematics)

In mathematics, a matrix (pl.: matrices) is a rectangular array or table of numbers, symbols, or expressions, with elements or entries arranged in rows and columns, which is used to represent a mathematical object or property of such an object.

For example, a matrix with two rows and three columns is often referred to as a "two-by-three matrix", a "2 × 3 matrix", or a matrix of dimension 2 × 3.

[Figure: an m × n matrix: the m rows are horizontal and the n columns are vertical. Each element of a matrix is often denoted by a variable with two subscripts; for example, a2,1 represents the element at the second row and first column of the matrix.]

Matrices are commonly used in linear algebra, where they represent linear maps. In geometry, matrices are widely used for specifying and representing geometric transformations (for example rotations) and coordinate changes.
In numerical analysis, many computational problems are solved by reducing
them to a matrix computation, and this often involves computing with
matrices of huge dimensions. Matrices are used in most areas of mathematics and scientific fields, either directly, or
through their use in geometry and numerical analysis.

Square matrices, matrices with the same number of rows and columns, play a major role in matrix theory. Square
matrices of a given dimension form a noncommutative ring, which is one of the most common examples of a
noncommutative ring.[1] The determinant of a square matrix is a number associated with the matrix, which is
fundamental for the study of a square matrix; for example, a square matrix is invertible if and only if it has a nonzero
determinant, and the eigenvalues of a square matrix are the roots of its characteristic polynomial, which is itself defined via a determinant.

Matrix theory is the branch of mathematics that focuses on the study of matrices. It was initially a sub-branch of
linear algebra, but soon grew to include subjects related to graph theory, algebra, combinatorics and statistics.

Definition
A matrix is a rectangular array of numbers (or other mathematical objects), called the "entries" of the matrix.
Matrices are subject to standard operations such as addition and multiplication.[2] Most commonly, a matrix over a field F is a rectangular array of elements of F.[3][4] A real matrix and a complex matrix are matrices whose entries are respectively real numbers or complex numbers. More general types of entries are discussed below.

The numbers, symbols, or expressions in the matrix are called its entries or its elements. The horizontal and vertical
lines of entries in a matrix are respectively called rows and columns.[5]

Size
The size of a matrix is defined by the number of rows and columns it contains. There is no limit to the number of
rows and columns that a matrix (in the usual sense) can have as long as they are positive integers. A matrix with m rows and n columns is called an m × n matrix,[5] or m-by-n matrix,[6] where m and n are called its dimensions.[7] For example, a matrix with two rows and three columns, such as the one described in the introduction, is a 2 × 3 matrix.
Matrices with a single row are called row matrices or row vectors, and those with a single column are called column
matrices or column vectors. A matrix with the same number of rows and columns is called a square matrix.[8] A
matrix with an infinite number of rows or columns (or both) is called an infinite matrix. In some contexts, such as
computer algebra programs, it is useful to consider a matrix with no rows or no columns, called an empty matrix.[9]

Overview of matrix sizes

Name            Size    Description
Row matrix      1 × n   A matrix with one row, sometimes used to represent a vector
Column matrix   n × 1   A matrix with one column, sometimes used to represent a vector
Square matrix   n × n   A matrix with the same number of rows and columns, sometimes used to
                        represent a linear transformation from a vector space to itself, such as
                        reflection, rotation, or shearing

Notation
The specifics of symbolic matrix notation vary widely, with some prevailing trends. Matrices are commonly written in square brackets or parentheses,[10] so that an m × n matrix A is represented as the rectangular array of its entries a1,1, ..., am,n enclosed in brackets. This may be abbreviated by writing only a single generic term, possibly along with indices, as in A = (ai,j) or A = [ai,j], or, in the case that the size must be emphasized, (ai,j)1≤i≤m, 1≤j≤n.

Matrices are usually symbolized using upper-case letters (such as A in the examples above),[11] while the corresponding lower-case letters, with two subscript indices (e.g., a11 or a1,1), represent the entries.[12] In addition to using upper-case letters to symbolize matrices, many authors use a special typographical style, commonly boldface Roman (non-italic), to further distinguish matrices from other mathematical objects. An alternative notation involves the use of a double-underline with the variable name, with or without boldface style.[13]

The entry in the ith row and jth column of a matrix A is sometimes referred to as the i,j or (i, j) entry of the matrix, and is commonly denoted by ai,j or aij.[14] Alternative notations for that entry are A[i, j] and Ai,j.

Sometimes, the entries of a matrix can be defined by a formula such as ai,j = f(i, j). For example, each of the entries of a matrix may be determined by the formula ai,j = i − j. In this case, the matrix itself is sometimes defined by that formula, within square brackets or double parentheses, as in A = [i − j] or A = ((i − j)). If the matrix size is m × n, the above-mentioned formula is valid for any i = 1, ..., m and any j = 1, ..., n. This can be specified separately or indicated using m × n as a subscript; for instance, a 3 × 4 matrix given by this formula can be defined as A = [i − j](i = 1, 2, 3; j = 1, ..., 4) or A = [i − j]3×4.
Some programming languages utilize doubly subscripted arrays (or arrays of arrays) to represent an m-by-n matrix.
Some programming languages start the numbering of array indexes at zero, in which case the entries of an m-by-n matrix are indexed by 0 ≤ i ≤ m − 1 and 0 ≤ j ≤ n − 1.[15] This article follows the more common convention in mathematical writing where enumeration starts from 1.
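For illustration, the following minimal Python sketch (it assumes the NumPy library, and the specific entries are arbitrary examples chosen here) stores a 2-by-3 matrix with zero-based indexing, so that the entry written a2,1 in mathematical notation is addressed as index [1, 0]:

    # A 2-by-3 matrix as a list of rows (Python indexes from 0).
    A = [[1, 9, -13],
         [20, 5, -6]]
    print(A[1][0])        # the entry in row 2, column 1 of the mathematical convention -> 20

    # The same matrix as a NumPy array; shape gives the size (m, n).
    import numpy as np
    A = np.array(A)
    print(A.shape)        # (2, 3)
    print(A[1, 0])        # 20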

The set of all m-by-n real matrices is often denoted M(m, n) or Mm×n(ℝ). The set of all m-by-n matrices over another field F, or over a ring R, is similarly denoted M(m, n, R) or Mm×n(R). If m = n, such as in the case of square matrices, one does not repeat the dimension: one writes M(n, R) or Mn(R).[16] Often, Mat is used in place of M.[17]

Basic operations
Several basic operations can be applied to matrices. Some, such as transposition and taking a submatrix, do not depend on the nature of the entries. Others, such as matrix addition, scalar multiplication, matrix multiplication, and row operations, involve operations on matrix entries and therefore require that the entries are numbers or belong to a field or a ring.[18]

In this section, it is supposed that matrix entries belong to a fixed ring, which is typically a field of numbers.

Addition, scalar multiplication, subtraction and transposition

Addition

The sum A + B of two m×n matrices A and B is calculated entrywise:[19] (A + B)i,j = Ai,j + Bi,j, where 1 ≤ i ≤ m and 1 ≤ j ≤ n.

Scalar multiplication

The product cA of a number c (also called a scalar in this context) and a matrix A is computed by multiplying every entry of A by c:[20] (cA)i,j = c · Ai,j.

This operation is called scalar multiplication, but its result is not named "scalar product" to avoid confusion, since "scalar product" is often used as a synonym for "inner product".[21]

Subtraction

The subtraction of two m×n matrices is defined by composing matrix addition with scalar multiplication by −1: A − B = A + (−1) · B.

Transposition

The transpose of an m×n matrix A is the n×m matrix AT (also denoted Atr or tA) formed by turning rows into columns and vice versa: (AT)i,j = Aj,i.

Familiar properties of numbers extend to these operations on matrices: for example, addition is commutative, that is,
the matrix sum does not depend on the order of the summands: A + B = B + A.[22] The transpose is compatible
with addition and scalar multiplication, as expressed by (cA)T = c(AT) and (A + B)T = AT + BT. Finally,
(AT)T = A.
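As an illustrative check (a minimal sketch assuming NumPy; the two matrices are arbitrary examples), the entrywise operations and the transposition rules stated above can be verified numerically:

    import numpy as np

    A = np.array([[1, 3, 1],
                  [1, 0, 0]])
    B = np.array([[0, 0, 5],
                  [7, 5, 0]])

    print(A + B)     # entrywise sum
    print(2 * A)     # scalar multiplication
    print(A - B)     # subtraction, i.e. A + (-1)*B
    print(A.T)       # transpose: a 3-by-2 matrix

    # Properties stated in the text:
    assert np.array_equal(A + B, B + A)
    assert np.array_equal((A + B).T, A.T + B.T)
    assert np.array_equal((2 * A).T, 2 * (A.T))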

Matrix multiplication
Multiplication of two matrices is defined if and only if the number of
columns of the left matrix is the same as the number of rows of the
right matrix. If A is an m×n matrix and B is an n×p matrix, then their
matrix product AB is the m×p matrix whose entries are given by the
dot product of the corresponding row of A and the corresponding
column of B:[23]

(AB)i,j = ai,1 b1,j + ai,2 b2,j + ... + ai,n bn,j,

where 1 ≤ i ≤ m and 1 ≤ j ≤ p.[24] For example, if the entries of a row of A are 2, 3, 4 and the entries of the corresponding column of B are 1000, 100, 10, then the corresponding entry of the product AB is calculated as (2 × 1000) + (3 × 100) + (4 × 10) = 2340.

[Figure: schematic depiction of the matrix product AB of two matrices A and B.]

Matrix multiplication satisfies the rules (AB)C = A(BC) (associativity), and (A + B)C = AC + BC as well as
C(A + B) = CA + CB (left and right distributivity), whenever the size of the matrices is such that the various
products are defined.[25] The product AB may be defined without BA being defined, namely if A and B are m×n and
n×k matrices, respectively, and m ≠ k. Even if both products are defined, they generally need not be equal; that is, in general AB ≠ BA.[26]

In other words, matrix multiplication is not commutative, in marked contrast to (rational, real, or complex) numbers, whose product is independent of the order of the factors.[23] An example of two matrices not commuting with each other is the pair A with rows (1, 2) and (3, 4) and B with rows (0, 1) and (0, 0): the product AB has rows (0, 1) and (0, 3), whereas BA has rows (3, 4) and (0, 0).

Besides the ordinary matrix multiplication just described, other less frequently used operations on matrices that can
be considered forms of multiplication also exist, such as the Hadamard product and the Kronecker product.[27] They
arise in solving matrix equations such as the Sylvester equation.
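The following sketch, added for illustration and assuming NumPy (the third matrix C is an arbitrary extra example), contrasts the ordinary matrix product with the Hadamard and Kronecker products and confirms that the example matrices above do not commute:

    import numpy as np

    A = np.array([[1, 2],
                  [3, 4]])
    B = np.array([[0, 1],
                  [0, 0]])

    print(A @ B)          # ordinary matrix product: [[0, 1], [0, 3]]
    print(B @ A)          # reversed order gives a different result: [[3, 4], [0, 0]]
    print(A * B)          # Hadamard (entrywise) product
    print(np.kron(A, B))  # Kronecker product, a 4-by-4 matrix

    # Associativity holds whenever the sizes match:
    C = np.array([[2, 0],
                  [1, 1]])
    assert np.array_equal((A @ B) @ C, A @ (B @ C))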

Row operations
There are three types of row operations:[28][29]

1. row addition, that is, adding a row to another;
2. row multiplication, that is, multiplying all entries of a row by a non-zero constant;
3. row switching, that is, interchanging two rows of a matrix.
These operations are used in several ways, including solving linear equations and finding matrix inverses with Gauss
elimination and Gauss–Jordan elimination, respectively.[30]
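As a minimal sketch of these operations (an illustration assuming NumPy, with an arbitrarily chosen system), the three row operations can be applied directly to an augmented matrix in the spirit of Gauss-Jordan elimination:

    import numpy as np

    # Augmented matrix [A | b] for the system x + 2y = 5, 3x + 4y = 6.
    M = np.array([[1., 2., 5.],
                  [3., 4., 6.]])

    M[1] = M[1] - 3 * M[0]     # row addition: add a multiple of row 1 to row 2
    M[1] = M[1] / M[1, 1]      # row multiplication: scale row 2 by a nonzero constant
    M[0] = M[0] - 2 * M[1]     # eliminate above the pivot (Gauss-Jordan step)
    # row switching would be: M[[0, 1]] = M[[1, 0]]
    print(M)                   # [[1, 0, -4], [0, 1, 4.5]] -> x = -4, y = 4.5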

Submatrix
A submatrix of a matrix is a matrix obtained by deleting any collection of rows and/or columns.[31][32][33] For
example, from a 3-by-4 matrix, one can construct a 2-by-3 submatrix by removing row 3 and column 2.

The minors and cofactors of a matrix are found by computing the determinant of certain submatrices.[33][34]

A principal submatrix is a square submatrix obtained by removing certain rows and columns. The definition
varies from author to author. According to some authors, a principal submatrix is a submatrix in which the set of row
indices that remain is the same as the set of column indices that remain.[35][36] Other authors define a principal
submatrix as one in which the first k rows and columns, for some number k, are the ones that remain;[37] this type of
submatrix has also been called a leading principal submatrix.[38]

Linear equations
Matrices can be used to compactly write and work with multiple linear equations, that is, systems of linear equations.
For example, if A is an m×n matrix, x designates a column vector (that is, n×1-matrix) of n variables x1, x2, ..., xn,
and b is an m×1-column vector, then the matrix equation Ax = b is equivalent to the system of linear equations[39]

a1,1x1 + a1,2x2 + ... + a1,nxn = b1
...
am,1x1 + am,2x2 + ... + am,nxn = bm

Using matrices, this can be solved more compactly than would be possible by writing out all the equations separately.
If n = m and the equations are independent, then this can be done by writing[40] x = A−1b,
where A−1 is the inverse matrix of A. If A has no inverse, solutions—if any—can be found using its generalized
inverse.
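In numerical practice the system Ax = b is solved directly rather than by forming A−1; the following minimal sketch (an illustration assuming NumPy, with arbitrary example data) does this and falls back to a generalized (Moore-Penrose) inverse when A is singular:

    import numpy as np

    A = np.array([[2., 1.],
                  [1., 3.]])
    b = np.array([3., 5.])

    x = np.linalg.solve(A, b)        # preferred over computing inv(A) @ b
    print(x)                         # [0.8, 1.4]
    assert np.allclose(A @ x, b)

    S = np.array([[1., 2.],
                  [2., 4.]])         # singular: no inverse exists
    x_ls = np.linalg.pinv(S) @ b     # least-squares solution via the Moore-Penrose inverse
    print(x_ls)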

Linear transformations
Matrices and matrix multiplication reveal their essential features when related to linear transformations, also
known as linear maps. A real m-by-n matrix A gives rise to a linear transformation mapping each vector
x in ⁠ ⁠ to the (matrix) product Ax, which is a vector in ⁠ ⁠ Conversely, each linear transformation
arises from a unique m-by-n matrix A: explicitly, the (i, j)-entry of A is the ith coordinate of f (ej), where
ej = (0, ..., 0, 1, 0, ..., 0) is the unit vector with 1 in the jth position and 0 elsewhere. The matrix A is said to
represent the linear map f, and A is called the transformation matrix of f.[41]

For example, the 2×2 matrix A with rows (a, c) and (b, d) can be viewed as the transform of the unit square into a parallelogram with vertices at (0, 0), (a, b), (a + c, b + d), and (c, d). The parallelogram pictured at the right is obtained by multiplying A with each of the column vectors (0, 0), (1, 0), (1, 1), and (0, 1) in turn. These vectors define the vertices of the unit square.[42] The following table shows several 2×2 real matrices with the associated linear maps of ℝ2. The blue original is mapped to the green grid and shapes. The origin (0, 0) is marked with a black point.

[Figure: the vectors represented by a 2-by-2 matrix correspond to the sides of a unit square transformed into a parallelogram.]

[Table: examples of 2×2 linear maps: a horizontal shear[43] with m = 1.25, a reflection through the vertical axis, a squeeze mapping with r = 3/2, a scaling by a factor of 3/2, and a rotation by π/6 = 30°.]
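For illustration, the effect of such 2 × 2 maps can be reproduced numerically by multiplying the corners of the unit square by the matrix; the following sketch (assuming NumPy, with the shear and rotation parameters taken from the examples above) does so:

    import numpy as np

    # Corners of the unit square as columns: (0,0), (1,0), (1,1), (0,1).
    square = np.array([[0., 1., 1., 0.],
                       [0., 0., 1., 1.]])

    theta = np.pi / 6                                      # rotation by 30 degrees
    rotation = np.array([[np.cos(theta), -np.sin(theta)],
                         [np.sin(theta),  np.cos(theta)]])
    shear = np.array([[1., 1.25],                          # horizontal shear with m = 1.25
                      [0., 1.  ]])

    print(rotation @ square)   # rotated corners
    print(shear @ square)      # sheared corners: a parallelogram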

Under the 1-to-1 correspondence between matrices and linear maps, matrix multiplication corresponds to composition of maps:[44] if a k-by-m matrix B represents another linear map g : ℝm → ℝk, then the composition g ∘ f is represented by BA, since[45] (g ∘ f)(x) = g(f(x)) = g(Ax) = B(Ax) = (BA)x. The last equality follows from the above-mentioned associativity of matrix multiplication.

The rank of a matrix A is the maximum number of linearly independent row vectors of the matrix, which is the same
as the maximum number of linearly independent column vectors.[46] Equivalently it is the dimension of the image of
the linear map represented by A.[47] The rank–nullity theorem states that the dimension of the kernel of a matrix
plus the rank equals the number of columns of the matrix.[48]

Square matrix
A square matrix is a matrix with the same number of rows and columns. An n-by-n matrix is known as a square
matrix of order n. Any two square matrices of the same order can be added and multiplied. The entries aii form the
main diagonal of a square matrix. They lie on the imaginary line running from the top left corner to the bottom right
corner of the matrix.[49]

Main types
Diagonal and triangular matrix

If all entries of A below the main diagonal are zero, A is called an upper triangular matrix. Similarly, if all entries of A above the main diagonal are zero, A is called a lower triangular matrix.[50] If all entries outside the main diagonal are zero, A is called a diagonal matrix.[51]

[Table: examples with n = 3 of a diagonal matrix, a lower triangular matrix, and an upper triangular matrix.]

Identity matrix

The identity matrix In of size n is the n-by-n matrix in which all the elements on the main diagonal are equal to 1 and all other elements are equal to 0,[52] for example, I3 is the 3-by-3 matrix with rows (1, 0, 0), (0, 1, 0), and (0, 0, 1).

It is a square matrix of order n, and also a special kind of diagonal matrix. It is called an identity matrix because
multiplication with it leaves a matrix unchanged:[52] AIn = ImA = A for any m-by-n matrix A.

A scalar multiple of an identity matrix is called a scalar matrix.[53]

Symmetric or skew-symmetric matrix


A square matrix A that is equal to its transpose, that is, A = AT, is a symmetric matrix. If instead, A is equal to the
negative of its transpose, that is, A = −AT, then A is a skew-symmetric matrix. In complex matrices, symmetry is
often replaced by the concept of Hermitian matrices, which satisfies A∗ = A, where the star or asterisk denotes the
conjugate transpose of the matrix, that is, the transpose of the complex conjugate of A.[54]

By the spectral theorem, real symmetric matrices and complex Hermitian matrices have an eigenbasis; that is, every
vector is expressible as a linear combination of eigenvectors. In both cases, all eigenvalues are real.[55] This theorem
can be generalized to infinite-dimensional situations related to matrices with infinitely many rows and columns.[56]

Invertible matrix and its inverse


A square matrix A is called invertible or non-singular if there exists a matrix B such that[57][58] AB = BA = In, where In is the n×n identity matrix with 1 for each entry on the main diagonal and 0 elsewhere. If B exists, it is
unique and is called the inverse matrix of A, denoted A−1.[59]

There are many algorithms for testing whether a square matrix is invertible and, if it is, computing its inverse. One of the oldest, which is still in common use, is Gaussian elimination.[60]
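A minimal numerical check (an added illustration assuming NumPy, with an arbitrary invertible matrix) confirms the defining property of the inverse:

    import numpy as np

    A = np.array([[2., 1.],
                  [5., 3.]])
    B = np.linalg.inv(A)

    I = np.eye(2)
    assert np.allclose(A @ B, I)
    assert np.allclose(B @ A, I)
    print(B)    # [[ 3., -1.], [-5.,  2.]]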

Definite matrix
A symmetric real matrix A is called positive-definite if the associated quadratic form f(x) = xTAx has a positive value for every nonzero vector x in ℝn. If f(x) yields only negative values then A is negative-definite; if f produces both negative and positive values then A is indefinite.[61] If the quadratic form f yields only non-negative values (positive or zero), the symmetric matrix is called positive-semidefinite (or if only non-positive values, then negative-semidefinite); hence the matrix is indefinite precisely when it is neither positive-semidefinite nor negative-semidefinite.[62]

A symmetric matrix is positive-definite if and only if all its eigenvalues are positive, that is, the matrix is positive-semidefinite and it is invertible.[63] The table at the right shows two possibilities for 2-by-2 matrices. The eigenvalues of a diagonal matrix are simply the entries along the diagonal,[64] and so in these examples, the eigenvalues can be read directly from the matrices themselves. The first matrix has two eigenvalues that are both positive, while the second has one that is positive and another that is negative.

[Table: a positive definite matrix, for which the points with xTAx = 1 form an ellipse, and an indefinite matrix, for which the points with xTAx = 1 form a hyperbola.]

Allowing as input two different vectors instead yields the bilinear form associated to A:[65] BA(x, y) = xTAy.

In the case of complex matrices, the same terminology and results apply, with symmetric matrix, quadratic form,
bilinear form, and transpose xT replaced respectively by Hermitian matrix, Hermitian form, sesquilinear form, and
conjugate transpose xH.[66]
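The eigenvalue criterion above yields a simple numerical test; the following sketch (an illustration assuming NumPy, with the classification function and the example matrices chosen here only for convenience) classifies small symmetric matrices by the signs of their eigenvalues:

    import numpy as np

    def classify(A):
        """Classify a symmetric matrix by the signs of its eigenvalues."""
        eig = np.linalg.eigvalsh(A)          # eigenvalues of a symmetric matrix, ascending
        if np.all(eig > 0):
            return "positive-definite"
        if np.all(eig < 0):
            return "negative-definite"
        if np.all(eig >= 0):
            return "positive-semidefinite"
        if np.all(eig <= 0):
            return "negative-semidefinite"
        return "indefinite"

    print(classify(np.array([[2., 0.], [0., 3.]])))    # positive-definite
    print(classify(np.array([[1., 0.], [0., -1.]])))   # indefinite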

Orthogonal matrix
An orthogonal matrix is a square matrix with real entries whose columns and rows are orthogonal unit vectors (that
is, orthonormal vectors).[67] Equivalently, a matrix A is orthogonal if its transpose is equal to its inverse: AT = A−1, which entails ATA = AAT = In, where In is the identity matrix of size n.[68]

An orthogonal matrix A is necessarily invertible (with inverse A−1 = AT), unitary (A−1 = A*), and normal
(A*A = AA*). The determinant of any orthogonal matrix is either +1 or −1. A special orthogonal matrix is an
orthogonal matrix with determinant +1. As a linear transformation, every orthogonal matrix with determinant +1 is
a pure rotation without reflection, i.e., the transformation preserves the orientation of the transformed structure,
while every orthogonal matrix with determinant −1 reverses the orientation, i.e., is a composition of a pure reflection
and a (possibly null) rotation. The identity matrices have determinant 1 and are pure rotations by an angle zero.[69]

The complex analog of an orthogonal matrix is a unitary matrix.[70]
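As an added illustration (assuming NumPy; the rotation angle and reflection are arbitrary examples), the defining identities of orthogonal matrices can be verified numerically:

    import numpy as np

    theta = np.pi / 6
    Q = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])    # rotation: special orthogonal

    assert np.allclose(Q.T @ Q, np.eye(2))
    assert np.allclose(np.linalg.inv(Q), Q.T)
    print(np.linalg.det(Q))       # +1.0 (a pure rotation)

    R = np.array([[-1., 0.],
                  [ 0., 1.]])     # reflection through the vertical axis
    assert np.allclose(R.T @ R, np.eye(2))
    print(np.linalg.det(R))       # -1.0 (orientation-reversing)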

Main operations

Trace
The trace, tr(A), of a square matrix A is the sum of its diagonal entries. While matrix multiplication is not commutative as mentioned above, the trace of the product of two matrices is independent of the order of the factors:[71] tr(AB) = tr(BA). This is immediate from the definition of matrix multiplication:[72]

tr(AB) = Σi Σj ai,j bj,i = tr(BA).

It follows that the trace of the product of more than two matrices is independent of cyclic permutations of the matrices; however, this does not in general apply for arbitrary permutations. For example, tr(ABC) ≠ tr(BAC), in general.[73] Also, the trace of a matrix is equal to that of its transpose,[74] that is, tr(A) = tr(AT).
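These trace identities are easy to check numerically; the following sketch (an illustration assuming NumPy, with randomly generated integer matrices) verifies them:

    import numpy as np

    rng = np.random.default_rng(0)
    A, B, C = (rng.integers(-5, 5, size=(3, 3)) for _ in range(3))

    assert np.trace(A @ B) == np.trace(B @ A)            # order of two factors
    assert np.trace(A @ B @ C) == np.trace(B @ C @ A)    # cyclic permutation
    print(np.trace(A @ B @ C), np.trace(B @ A @ C))      # generally different values
    assert np.trace(A) == np.trace(A.T)                  # trace equals trace of transpose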

Determinant
The determinant of a square matrix A (denoted det(A) or |A|) is a number encoding certain properties of the matrix. A matrix is invertible if and only if its determinant is nonzero.[75] Its absolute value equals the area (in ℝ2) or volume (in ℝ3) of the image of the unit square (or cube), while its sign corresponds to the orientation of the corresponding linear map: the determinant is positive if and only if the orientation is preserved.[76]

[Figure: a linear transformation on ℝ2 given by the indicated matrix. The determinant of this matrix is −1, as the area of the green parallelogram at the right is 1, but the map reverses the orientation, since it turns the counterclockwise orientation of the vectors to a clockwise one.]

The determinant of a 2-by-2 matrix with rows (a, b) and (c, d) is given by[77] ad − bc. The determinant of a 3-by-3 matrix involves 6 terms (rule of Sarrus). The more lengthy Leibniz formula generalizes these two formulae to all dimensions.[78]

The determinant of a product of square matrices equals the product of their determinants: det(AB) = det(A) det(B), or using alternate notation:[79] |AB| = |A| · |B|.

Adding a multiple of any row to another row, or a multiple of any column to another column, does not change the
determinant. Interchanging two rows or two columns affects the determinant by multiplying it by −1.[80] Using these
operations, any matrix can be transformed to a lower (or upper) triangular matrix, and for such matrices, the
determinant equals the product of the entries on the main diagonal; this provides a method to calculate the
determinant of any matrix. Finally, the Laplace expansion expresses the determinant in terms of minors, that is,
determinants of smaller matrices.[81] This expansion can be used for a recursive definition of determinants (taking as
starting case the determinant of a 1-by-1 matrix, which is its unique entry, or even the determinant of a 0-by-0
matrix, which is 1), that can be seen to be equivalent to the Leibniz formula. Determinants can be used to solve linear
systems using Cramer's rule, where the division of the determinants of two related square matrices equates to the
value of each of the system's variables.[82]
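For illustration, the determinant rules above can be verified with NumPy on arbitrarily chosen matrices (a minimal sketch, not a general proof):

    import numpy as np

    A = np.array([[1., 2.],
                  [3., 4.]])
    B = np.array([[2., 0.],
                  [1., 1.]])

    # 2-by-2 formula: ad - bc
    assert np.isclose(np.linalg.det(A), 1*4 - 2*3)

    # Multiplicativity
    assert np.isclose(np.linalg.det(A @ B), np.linalg.det(A) * np.linalg.det(B))

    # Swapping two rows multiplies the determinant by -1
    A_swapped = A[[1, 0], :]
    assert np.isclose(np.linalg.det(A_swapped), -np.linalg.det(A))

    # Adding a multiple of one row to another leaves it unchanged
    A2 = A.copy(); A2[1] += 5 * A2[0]
    assert np.isclose(np.linalg.det(A2), np.linalg.det(A))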

Eigenvalues and eigenvectors


A number λ and a nonzero vector v satisfying Av = λv are called an eigenvalue and an eigenvector of A, respectively.[83][84] The number λ is an eigenvalue of an n×n-matrix A if and only if A − λIn is not invertible, which is equivalent to[85] det(A − λIn) = 0. The polynomial pA in an indeterminate X given by evaluation of the determinant det(X In − A) is called the characteristic polynomial of A.
It is a monic polynomial of degree n. Therefore the polynomial equation pA(λ) = 0 has at most n different solutions,
that is, eigenvalues of the matrix.[86] They may be complex even if the entries of A are real. According to the Cayley–
Hamilton theorem, pA(A) = 0, that is, the result of substituting the matrix itself into its characteristic polynomial
yields the zero matrix.[87]
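A short numerical illustration (assuming NumPy; the matrix is an arbitrary symmetric example) checks the eigenvalue equation and the Cayley-Hamilton theorem for a 2 × 2 matrix, whose characteristic polynomial is X2 − tr(A)X + det(A):

    import numpy as np

    A = np.array([[2., 1.],
                  [1., 2.]])

    eigenvalues, eigenvectors = np.linalg.eig(A)
    for lam, v in zip(eigenvalues, eigenvectors.T):      # eigenvectors are the columns
        assert np.allclose(A @ v, lam * v)

    # Characteristic polynomial evaluated at A itself: X^2 - tr(A) X + det(A)
    p_A = A @ A - np.trace(A) * A + np.linalg.det(A) * np.eye(2)
    assert np.allclose(p_A, np.zeros((2, 2)))            # Cayley-Hamilton: p_A(A) = 0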
Computational aspects
Matrix calculations can often be performed with different techniques. Many problems can be solved by both direct
algorithms and iterative approaches. For example, the eigenvectors of a square matrix can be obtained by finding a
sequence of vectors xn converging to an eigenvector when n tends to infinity.[88]

To choose the most appropriate algorithm for each specific problem, it is important to determine both the
effectiveness and precision of all the available algorithms. The domain studying these matters is called numerical
linear algebra.[89] As with other numerical situations, two main aspects are the complexity of algorithms and their
numerical stability.

Determining the complexity of an algorithm means finding upper bounds or estimates of how many elementary
operations such as additions and multiplications of scalars are necessary to perform some algorithm, for example,
multiplication of matrices. Calculating the matrix product of two n-by-n matrices using the definition given above
needs n3 multiplications, since for any of the n2 entries of the product, n multiplications are necessary. The Strassen
algorithm outperforms this "naive" algorithm; it needs only n2.807 multiplications.[90] Theoretically faster but
impractical matrix multiplication algorithms have been developed,[91] as have speedups to this problem using
parallel algorithms or distributed computation systems such as MapReduce.[92]

In many practical situations, additional information about the matrices involved is known. An important case is
sparse matrices, that is, matrices whose entries are mostly zero. There are specifically adapted algorithms for, say,
solving linear systems Ax = b for sparse matrices A, such as the conjugate gradient method.[93]

An algorithm is, roughly speaking, numerically stable if small deviations in the input values do not lead to large deviations in the result. For example, one can calculate the inverse of a matrix by computing its adjugate matrix: A−1 = adj(A) / det(A).
However, this may lead to significant rounding errors if the determinant of the matrix is very small. The norm of a
matrix can be used to capture the conditioning of linear algebraic problems, such as computing a matrix's inverse.[94]

Decomposition
There are several methods to render matrices into a more easily accessible form. They are generally referred to as
matrix decomposition or matrix factorization techniques. These techniques are of interest because they can make
computations easier.

The LU decomposition factors matrices as a product of a lower triangular matrix (L) and an upper triangular matrix (U).[95] Once this
decomposition is calculated, linear systems can be solved more efficiently by a simple technique called forward and
back substitution. Likewise, inverses of triangular matrices are algorithmically easier to calculate. The Gaussian
elimination is a similar algorithm; it transforms any matrix to row echelon form.[96] Both methods proceed by
multiplying the matrix by suitable elementary matrices, which correspond to permuting rows or columns and adding
multiples of one row to another row. Singular value decomposition expresses any matrix A as a product UDV∗,
where U and V are unitary matrices and D is a diagonal matrix.[97]

The eigendecomposition or diagonalization expresses A as a product VDV−1, where D is a diagonal matrix and V is
a suitable invertible matrix.[98] If A can be written in this form, it is called diagonalizable. More generally, and
applicable to all matrices, the Jordan decomposition transforms a matrix into Jordan normal form, that is to say
matrices whose only nonzero entries are the eigenvalues λ1 to λn of A, placed on the main diagonal and possibly
entries equal to one directly above the main diagonal, as shown at the right.[99] Given the eigendecomposition, the nth power of A (that is, n-fold iterated matrix multiplication) can be calculated via An = V Dn V−1, and the power of a diagonal matrix can be calculated by taking the corresponding powers of the diagonal entries,
which is much easier than doing the exponentiation for A instead. This can be used to compute the matrix
exponential eA, a need frequently arising in solving linear differential
equations, matrix logarithms and square roots of matrices.[100] To avoid
numerically ill-conditioned situations, further algorithms such as the Schur
decomposition can be employed.[101]
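These decompositions are available in standard numerical libraries; the following sketch (an illustration assuming NumPy, with an arbitrary diagonalizable matrix) computes a singular value decomposition and an eigendecomposition and uses the latter to form a matrix power:

    import numpy as np

    A = np.array([[2., 1.],
                  [1., 2.]])

    # Singular value decomposition: A = U @ diag(s) @ Vh
    U, s, Vh = np.linalg.svd(A)
    assert np.allclose(U @ np.diag(s) @ Vh, A)

    # Eigendecomposition: A = V @ D @ V^{-1} (A is diagonalizable here)
    w, V = np.linalg.eig(A)
    D = np.diag(w)
    assert np.allclose(V @ D @ np.linalg.inv(V), A)

    # A^5 via the eigendecomposition: powers of a diagonal matrix are entrywise powers
    A5 = V @ np.diag(w**5) @ np.linalg.inv(V)
    assert np.allclose(A5, np.linalg.matrix_power(A, 5))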

[Figure: an example of a matrix in Jordan normal form; the grey blocks are called Jordan blocks.]

Abstract algebraic aspects and generalizations

Matrices can be generalized in different ways. Abstract algebra uses matrices with entries in more general fields or even rings, while linear algebra codifies properties of matrices in the notion of linear maps. It is possible to consider matrices with infinitely many columns and rows. Another extension is tensors, which can be seen as higher-dimensional arrays of numbers, as opposed to vectors, which can often be realized as sequences of numbers, while matrices are rectangular or two-dimensional arrays of numbers.[102] Matrices, subject to certain requirements, tend to form groups known as matrix groups. Similarly, under certain conditions, matrices form rings known as matrix rings. Though the product of matrices is not in general commutative, certain matrices form fields known as matrix fields. In general, matrices and their multiplication also form a category, the category of matrices.

Matrices with more general entries


This article focuses on matrices whose entries are real or complex numbers. However, matrices can be considered
with much more general types of entries than real or complex numbers. As a first step of generalization, any field,
that is, a set where addition, subtraction, multiplication, and division operations are defined and well-behaved, may
be used instead of ℝ or ℂ, for example rational numbers or finite fields. For example, coding theory makes use of
matrices over finite fields. Wherever eigenvalues are considered, as these are roots of a polynomial they may exist
only in a larger field than that of the entries of the matrix; for instance, they may be complex in the case of a matrix
with real entries. The possibility to reinterpret the entries of a matrix as elements of a larger field (for example, to
view a real matrix as a complex matrix whose entries happen to be all real) then allows considering each square
matrix to possess a full set of eigenvalues. Alternatively one can consider only matrices with entries in an
algebraically closed field, such as ℂ, from the outset.

More generally, matrices with entries in a ring R are widely used in mathematics.[2] Rings are a more general notion
than fields in that a division operation need not exist. The very same addition and multiplication operations of
matrices extend to this setting, too. The set M(n, R) (also denoted Mn(R)[16]) of all square n-by-n matrices over R is
a ring called matrix ring, isomorphic to the endomorphism ring of the left R-module Rn.[103] If the ring R is
commutative, that is, its multiplication is commutative, then the ring M(n, R) is also an associative algebra over R.
The determinant of square matrices over a commutative ring R can still be defined using the Leibniz formula; such a
matrix is invertible if and only if its determinant is invertible in R, generalizing the situation over a field F, where
every nonzero element is invertible.[104] Matrices over superrings are called supermatrices.[105]

Matrices do not always have all their entries in the same ring – or even in any ring at all. One special but common
case is block matrices, which may be considered as matrices whose entries themselves are matrices. The entries need
not be square matrices, and thus need not be members of any ring; but their sizes must fulfill certain compatibility
conditions.

Relationship to linear maps


Linear maps ℝn → ℝm are equivalent to m-by-n matrices, as described above. More generally, any linear map
f : V → W between finite-dimensional vector spaces can be described by a matrix A = (aij), after choosing bases
v1, ..., vn of V, and w1, ..., wm of W (so n is the dimension of V and m is the dimension of W), which is such that f(vj) = a1,j w1 + ... + am,j wm for j = 1, ..., n. In other words, column j of A expresses the image of vj in terms of the basis vectors wi of W; thus this relation
uniquely determines the entries of the matrix A. The matrix depends on the choice of the bases: different choices of
bases give rise to different, but equivalent matrices.[106] Many of the above concrete notions can be reinterpreted in
this light, for example, the transpose matrix AT describes the transpose of the linear map given by A, concerning the
dual bases.[107]

These properties can be restated more naturally: the category of matrices with entries in a field with multiplication
as composition is equivalent to the category of finite-dimensional vector spaces and linear maps over this field.[108]

More generally, the set of m×n matrices can be used to represent the R-linear maps between the free modules Rm
and Rn for an arbitrary ring R with unity. When n = m composition of these maps is possible, and this gives rise to
the matrix ring of n×n matrices representing the endomorphism ring of Rn.

Matrix groups
A group is a mathematical structure consisting of a set of objects together with a binary operation, that is, an
operation combining any two objects to a third, subject to certain requirements.[109] A group in which the objects are
matrices and the group operation is matrix multiplication is called a matrix group.[110][111] Since every element of a group must be invertible, the most general matrix groups are the groups of all invertible matrices of a given size,
called the general linear groups.

Any property of matrices that is preserved under matrix products and inverses can be used to define further matrix
groups. For example, matrices with a given size and with a determinant of 1 form a subgroup of (that is, a smaller
group contained in) their general linear group, called a special linear group.[112] Orthogonal matrices, determined by
the condition ATA = In,
form the orthogonal group.[113] Every orthogonal matrix has determinant 1 or −1. Orthogonal matrices with
determinant 1 form a subgroup called the special orthogonal group.

Every finite group is isomorphic to a matrix group, as one can see by considering the regular representation of the
symmetric group.[114] General groups can be studied using matrix groups, which are comparatively well understood,
using representation theory.[115]

Infinite matrices
It is also possible to consider matrices with infinitely many rows and/or columns.[116] The basic operations
introduced above are defined the same way in this case. Matrix multiplication, however, and all operations stemming
therefrom are only meaningful when restricted to certain matrices, since the sum featuring in the above definition of
the matrix product will contain an infinity of summands. An easy way to circumvent this issue is to restrict to
matrices all of whose rows (or columns) contain only finitely many nonzero terms. As in the finite case (see above),
where matrices describe linear maps, infinite matrices can be used to describe operators on Hilbert spaces, where
convergence and continuity questions arise. However, the explicit point of view of matrices tends to obfuscate the
matter,[117] and the abstract and more powerful tools of functional analysis are used instead, by relating matrices to
linear maps (as in the finite case above), but imposing additional convergence and continuity constraints.

Empty matrix
An empty matrix is a matrix in which the number of rows or columns (or both) is zero.[118][9] Empty matrices help to
deal with maps involving the zero vector space. For example, if A is a 3-by-0 matrix and B is a 0-by-3 matrix, then
AB is the 3-by-3 zero matrix corresponding to the null map from a 3-dimensional space V to itself, while BA is a 0-
by-0 matrix. There is no common notation for empty matrices, but most computer algebra systems allow creating
and computing with them. The determinant of the 0-by-0 matrix is 1, as follows from regarding the empty product
occurring in the Leibniz formula for the determinant as 1. This value is also consistent with the fact that the identity
map from any finite-dimensional space to itself has determinant 1, a fact that is often used as a part of the
characterization of determinants.
Applications
There are numerous applications of matrices, both in mathematics and other sciences. Some of them merely take
advantage of the compact representation of a set of numbers in a matrix. For example, in game theory and
economics, the payoff matrix encodes the payoff for two players, depending on which out of a given (finite) set of
strategies the players choose.[119] Text mining and automated thesaurus compilation make use of document-term
matrices such as tf-idf to track frequencies of certain words in several documents.[120]

Complex numbers can be represented by particular real 2-by-2 matrices via the correspondence a + bi ↔ the matrix with rows (a, −b) and (b, a), under which addition and multiplication of complex numbers and matrices correspond to each other. For example,
2-by-2 rotation matrices represent the multiplication with some complex number of absolute value 1, as above. A
similar interpretation is possible for quaternions[121] and Clifford algebras in general.

Early encryption techniques such as the Hill cipher also used matrices. However, due to the linear nature of matrices,
these codes are comparatively easy to break.[122] Computer graphics uses matrices to represent objects; to calculate
transformations of objects using affine rotation matrices to accomplish tasks such as projecting a three-dimensional
object onto a two-dimensional screen, corresponding to a theoretical camera observation; and to apply image
convolutions such as sharpening, blurring, edge detection, and more.[123] Matrices over a polynomial ring are
important in the study of control theory.[124]

Chemistry makes use of matrices in various ways, particularly since the use of quantum theory to discuss molecular
bonding and spectroscopy. Examples are the overlap matrix and the Fock matrix used in solving the Roothaan
equations to obtain the molecular orbitals of the Hartree–Fock method.[125]

Graph theory
The adjacency matrix of a finite graph is a basic notion of graph theory.[126] It records
which vertices of the graph are connected by an edge. Matrices containing just two different
values (1 and 0 meaning for example "yes" and "no", respectively) are called logical
matrices. The distance (or cost) matrix contains information about the distances of the
edges.[127] These concepts can be applied to websites connected by hyperlinks,[128] or cities
connected by roads etc., in which case (unless the connection network is extremely dense)
the matrices tend to be sparse, that is, contain few nonzero entries. Therefore, specifically
tailored matrix algorithms can be used in network theory.[129]

[Figure: an undirected graph and its adjacency matrix.]
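As a small added illustration (assuming NumPy, with a three-vertex graph chosen here), the adjacency matrix of an undirected graph is symmetric, and its powers count walks between vertices:

    import numpy as np

    # Undirected graph on vertices 0, 1, 2 with edges 0-1 and 1-2.
    adjacency = np.array([[0, 1, 0],
                          [1, 0, 1],
                          [0, 1, 0]])

    assert np.array_equal(adjacency, adjacency.T)   # undirected graphs give symmetric matrices
    walks_of_length_2 = np.linalg.matrix_power(adjacency, 2)
    print(walks_of_length_2[0, 2])                  # 1: one walk of length two from 0 to 2 (via 1)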

Analysis and geometry


The Hessian matrix of a twice-differentiable function f : ℝn → ℝ consists of the second derivatives of f with respect to the several coordinate directions, that is,[130] H(f) = [∂2f / (∂xi ∂xj)].
It encodes information about the local growth behavior of the function: given a critical point x = (x1, ..., xn), that is,
a point where the first partial derivatives of f vanish, the function has a local minimum if the Hessian matrix
is positive definite. Quadratic programming can be used to find global minima or maxima of quadratic functions
closely related to the ones attached to matrices (see above).[131]

Another matrix frequently used in geometrical situations is the Jacobi matrix of a differentiable map f : ℝn → ℝm. If f1, ..., fm denote the components of f, then the Jacobi matrix is defined as[132] the m × n matrix whose (i, j) entry is ∂fi/∂xj.
If n > m, and if the rank of the Jacobi matrix attains its maximal value m, f is
locally invertible at that point, by the implicit function theorem.[133]

Partial differential equations can be classified by considering the matrix of coefficients of the highest-order differential operators of the equation. For elliptic partial differential equations this matrix is positive definite, which has a decisive influence on the set of possible solutions of the equation in question.[134]

[Figure: at the saddle point (x = 0, y = 0) of the function f(x, y) = x2 − y2, the Hessian matrix is indefinite.]

The finite element method is an important numerical method to solve partial differential equations, widely applied in simulating complex physical systems. It attempts to approximate the solution to some equation by piecewise linear functions, where the pieces are chosen with respect to a sufficiently fine grid, which in turn can be recast as a matrix equation.[135]

Probability theory and statistics


Stochastic matrices are square matrices whose rows are probability
vectors, that is, whose entries are non-negative and sum up to one.
Stochastic matrices are used to define Markov chains with finitely many
states.[136] A row of the stochastic matrix gives the probability
distribution for the next position of some particle currently in the state
that corresponds to the row. Properties of the Markov chain—like
absorbing states, that is, states that any particle attains eventually—can
be read off the eigenvectors of the transition matrices.[137]
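A minimal sketch (an illustration assuming NumPy, with an arbitrary two-state transition matrix) iterates a row-stochastic matrix and recovers the stationary distribution from an eigenvector for the eigenvalue 1:

    import numpy as np

    # Row-stochastic transition matrix of a two-state Markov chain.
    P = np.array([[0.9, 0.1],
                  [0.5, 0.5]])
    assert np.allclose(P.sum(axis=1), 1.0)

    distribution = np.array([1.0, 0.0])      # start in state 1 with certainty
    for _ in range(50):
        distribution = distribution @ P      # one step of the chain
    print(distribution)                      # approaches the stationary distribution

    # Stationary distribution: left eigenvector of P for eigenvalue 1, normalized.
    w, V = np.linalg.eig(P.T)
    stationary = np.real(V[:, np.isclose(w, 1)].ravel())
    stationary /= stationary.sum()
    print(stationary)                        # approximately [0.8333, 0.1667]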

[Figure: two different Markov chains. The chart depicts the number of particles (of a total of 1000) in state "2". Both limiting values can be determined from the transition matrices, shown in red and black.]

Statistics also makes use of matrices in many different forms.[138] Descriptive statistics is concerned with describing data sets, which can often be represented as data matrices, which may then be subjected to dimensionality reduction techniques. The covariance matrix encodes the mutual variance of several random variables.[139] Another technique using matrices is linear least squares, a method that approximates a finite set of pairs (x1, y1), (x2, y2), ..., (xN, yN) by a linear function yi ≈ axi + b for i = 1, ..., N, which can be formulated in terms of matrices, related to the singular value decomposition of matrices.[140]

Random matrices are matrices whose entries are random numbers, subject to suitable probability distributions, such
as matrix normal distribution. Beyond probability theory, they are applied in domains ranging from number theory
to physics.[141][142]

Quantum mechanics and particle physics


The first model of quantum mechanics (Heisenberg, 1925) used infinite-dimensional matrices to define the operators
that took over the role of variables like position, momentum and energy from classical physics.[143] (This is
sometimes referred to as matrix mechanics.[144]) Matrices, both finite and infinite-dimensional, have since been
employed for many purposes in quantum mechanics. One particular example is the density matrix, a tool used in
calculating the probabilities of the outcomes of measurements performed on physical systems.[145][146]

Linear transformations and the associated symmetries play a key role in modern physics. For example, elementary
particles in quantum field theory are classified as representations of the Lorentz group of special relativity and, more
specifically, by their behavior under the spin group. Concrete representations involving the Pauli matrices and more
general gamma matrices are an integral part of the physical description of fermions, which behave as spinors.[147]
For the three lightest quarks, there is a group-theoretical representation involving the special unitary group SU(3);
for their calculations, physicists use a convenient matrix representation known as the Gell-Mann matrices, which are
also used for the SU(3) gauge group that forms the basis of the modern description of strong nuclear interactions,
quantum chromodynamics. The Cabibbo–Kobayashi–Maskawa matrix, in turn, expresses the fact that the basic
quark states that are important for weak interactions are not the same as, but linearly related to the basic quark
states that define particles with specific and distinct masses.[148]

Another matrix serves as a key tool for describing the scattering experiments that form the cornerstone of
experimental particle physics: Collision reactions such as occur in particle accelerators, where non-interacting
particles head towards each other and collide in a small interaction zone, with a new set of non-interacting particles
as the result, can be described as the scalar product of outgoing particle states and a linear combination of ingoing
particle states. The linear combination is given by a matrix known as the S-matrix, which encodes all information
about the possible interactions between particles.[149]

Normal modes
A general application of matrices in physics is the description of linearly coupled harmonic systems. The equations of
motion of such systems can be described in matrix form, with a mass matrix multiplying a generalized velocity to give
the kinetic term, and a force matrix multiplying a displacement vector to characterize the interactions. The best way
to obtain solutions is to determine the system's eigenvectors, its normal modes, by diagonalizing the matrix equation.
Techniques like this are crucial when it comes to the internal dynamics of molecules: the internal vibrations of
systems consisting of mutually bound component atoms.[150] They are also needed for describing mechanical
vibrations, and oscillations in electrical circuits.[151]

Geometrical optics
Geometrical optics provides further matrix applications. In this approximative theory, the wave nature of light is
neglected. The result is a model in which light rays are indeed geometrical rays. If the deflection of light rays by
optical elements is small, the action of a lens or reflective element on a given light ray can be expressed as
multiplication of a two-component vector with a two-by-two matrix, an approach known as ray transfer matrix analysis: the vector's
components are the light ray's slope and its distance from the optical axis, while the matrix encodes the properties of
the optical element. There are two kinds of matrices, viz. a refraction matrix describing the refraction at a lens
surface, and a translation matrix, describing the translation of the plane of reference to the next refracting surface,
where another refraction matrix applies. The optical system, consisting of a combination of lenses and/or reflective
elements, is simply described by the matrix resulting from the product of the components' matrices.[152]

Electronics
Electronic circuits that are composed of linear components (such as resistors, inductors and capacitors) obey
Kirchhoff's circuit laws, which leads to a system of linear equations, which can be described with a matrix equation
that relates the source currents and voltages to the resultant currents and voltages at each point in the circuit, and
where the matrix entries are determined by the circuit.[153]

History
Matrices have a long history of application in solving linear equations but they were known as arrays until the 1800s.
The Chinese text The Nine Chapters on the Mathematical Art written in the 10th–2nd century BCE is the first
example of the use of array methods to solve simultaneous equations,[154] including the concept of determinants. In
1545 Italian mathematician Gerolamo Cardano introduced the method to Europe when he published Ars Magna.[155]
The Japanese mathematician Seki used the same array methods to solve simultaneous equations in 1683.[156] The
Dutch mathematician Jan de Witt represented transformations using arrays in his 1659 book Elements of Curves.[157] Between 1700 and 1710 Gottfried Wilhelm Leibniz publicized the use of arrays for recording information
or solutions and experimented with over 50 different systems of arrays.[155] Cramer presented his rule in
1750.[158][159]

The term "matrix" (Latin for "womb", "dam" (non-human female animal kept for breeding), "source", "origin", "list",
and "register", are derived from mater—mother[160]) was coined by James Joseph Sylvester in 1850,[161] who
understood a matrix as an object giving rise to several determinants today called minors, that is to say, determinants
of smaller matrices that derive from the original one by removing columns and rows. In an 1851 paper, Sylvester
explains:[162]
I have in previous papers defined a "Matrix" as a rectangular array of terms, out of which different systems
of determinants may be engendered from the womb of a common parent.

Arthur Cayley published a treatise on geometric transformations using matrices that were not rotated versions of the
coefficients being investigated as had previously been done. Instead, he defined operations such as addition,
subtraction, multiplication, and division as transformations of those matrices and showed the associative and
distributive properties held. Cayley investigated and demonstrated the non-commutative property of matrix
multiplication as well as the commutative property of matrix addition.[155] Early matrix theory had limited the use of
arrays almost exclusively to determinants and Cayley's abstract matrix operations were revolutionary. He was
instrumental in proposing a matrix concept independent of equation systems. In 1858, Cayley published his A
memoir on the theory of matrices[163][164] in which he proposed and demonstrated the Cayley–Hamilton
theorem.[155]

The English mathematician Cuthbert Edmund Cullis was the first to use modern bracket notation for matrices in
1913 and he simultaneously demonstrated the first significant use of the notation A = [ai,j] to represent a matrix
where ai,j refers to the ith row and the jth column.[155]

The modern study of determinants sprang from several sources.[165] Number-theoretical problems led Gauss to
relate coefficients of quadratic forms, that is, expressions such as x2 + xy − 2y2, and linear maps in three dimensions
to matrices. Eisenstein further developed these notions, including the remark that, in modern parlance, matrix
products are non-commutative. Cauchy was the first to prove general statements about determinants, using as the
definition of the determinant of a matrix A = [ai,j] the following: replace the powers aj^k by aj,k in the polynomial a1 a2 ⋯ an ∏i<j (aj − ai), where ∏ denotes the product of the indicated terms. He also showed, in 1829, that the eigenvalues of symmetric
matrices are real.[166] Jacobi studied "functional determinants"—later called Jacobi determinants by Sylvester—
which can be used to describe geometric transformations at a local (or infinitesimal) level, see above. Kronecker's
Vorlesungen über die Theorie der Determinanten[167] and Weierstrass's Zur Determinantentheorie,[168] both
published in 1903, first treated determinants axiomatically, as opposed to previous more concrete approaches such
as the mentioned formula of Cauchy. At that point, determinants were firmly established.[169][165]

Many theorems were first established for small matrices only, for example, the Cayley–Hamilton theorem was
proved for 2×2 matrices by Cayley in the aforementioned memoir, and by Hamilton for 4×4 matrices. Frobenius,
working on bilinear forms, generalized the theorem to all dimensions (1898). Also at the end of the 19th century, the
Gauss–Jordan elimination (generalizing a special case now known as Gauss elimination) was established by Wilhelm
Jordan. In the early 20th century, matrices attained a central role in linear algebra,[170] partially due to their use in
the classification of the hypercomplex number systems of the previous century.[171]

The inception of matrix mechanics by Heisenberg, Born and Jordan led to studying matrices with infinitely many
rows and columns.[172] Later, von Neumann carried out the mathematical formulation of quantum mechanics, by
further developing functional analytic notions such as linear operators on Hilbert spaces, which, very roughly
speaking, correspond to Euclidean space, but with an infinity of independent directions.[173]

Other historical usages of the word "matrix" in mathematics


The word has been used in unusual ways by at least two authors of historical importance.

Bertrand Russell and Alfred North Whitehead in their Principia Mathematica (1910–1913) use the word "matrix" in
the context of their axiom of reducibility. They proposed this axiom as a means to reduce any function to one of lower
type, successively, so that at the "bottom" (0 order) the function is identical to its extension:[174]

Let us give the name of matrix to any function, of however many variables, that does not involve any
apparent variables. Then, any possible function other than a matrix derives from a matrix using
generalization, that is, by considering the proposition that the function in question is true with all possible
values or with some value of one of the arguments, the other argument or arguments remaining
undetermined.
For example, a function Φ(x, y) of two variables x and y can be reduced to a collection of functions of a single
variable, such as y, by "considering" the function for all possible values of "individuals" ai substituted in place of a
variable x. And then the resulting collection of functions of the single variable y, that is, ∀ai: Φ(ai, y), can be reduced
to a "matrix" of values by "considering" the function for all possible values of "individuals" bi substituted in place of
variable y.

Alfred Tarski in his 1941 Introduction to Logic used the word "matrix" synonymously with the notion of truth table
as used in mathematical logic.[175]

See also

Mathematics portal

List of named matrices


Gram–Schmidt process – Orthonormalization of a set of vectors
Irregular matrix
Matrix calculus – Specialized notation for multivariable calculus
Matrix function – Function that maps matrices to matrices

Notes
1. Reyes (2025).
2. Lang (2002), Chapter XIII.
3. Fraleigh (1976), p. 209.
4. Nering (1970), p. 37.
5. Brown (1991), p. 1.
6. Golub & Van Loan (1996), p. 3.
7. Horn & Johnson (1985), p. 5.
8. Gbur (2011), p. 89.
9. "A matrix having at least one dimension equal to zero is called an empty matrix", MATLAB Data Structures (https://system.nada.kth.se/unix/software/matlab/Release_14.1/techdoc/matlab_prog/ch_dat29.html) Archived (https://web.archive.org/web/20091228102653/http://www.system.nada.kth.se/unix/software/matlab/Release_14.1/techdoc/matlab_prog/ch_dat29.html) 2009-12-28 at the Wayback Machine.
10. Ramachandra Rao & Bhimasankaram (2000), p. 71 (https://books.google.com/books?id=ZfJdDwAAQBAJ&pg=PA71).
11. Hamilton (1987), p. 29 (https://books.google.com/books?id=W5o4AAAAIAAJ&pg=PA29).
12. Gentle (1998), pp. 52–53 (https://books.google.com/books?id=2J0ndF_LmqoC&pg=PA52).
13. Bauchau & Craig (2009), p. 915 (https://books.google.com/books?id=GYRX8ZYVNYQC&pg=PA915).
14. Johnston (2021), p. 21 (https://books.google.com/books?id=y24vEAAAQBAJ&pg=PA21).
15. Oualline (2003), Ch. 5.
16. Pop & Furdui (2017).
17. For example, for ⁠ ⁠, see Mello (2017), p. 48 (https://books.google.com/books?id=RC4tDwAAQBAJ&pg=PA48); for ⁠ ⁠, see Axler (1997), p. 50 (https://books.google.com/books?id=ovIYVIlithQC&pg=PA50).
18. Brown (1991), Definition I.2.1 (addition), Definition I.2.4 (scalar multiplication), and Definition I.2.33 (transpose).
19. Whitelaw (1991), p. 29.
20. Whitelaw (1991), p. 30.
21. Maxwell (1969), p. 46 (https://books.google.com/books?id=oQk9AAAAIAAJ&pg=PA46).
22. Brown (1991), Theorem I.2.6.
23. Lancaster & Tismenetsky (1985), p. 9 (https://books.google.com/books?id=4nfNCgAAQBAJ&pg=PA9).
24. Brown (1991), Definition I.2.20.
25. Brown (1991), Theorem I.2.24.
26. Boas (2005), p. 117.
27. Horn & Johnson (1985), Ch. 4 and 5.
28. Perrone (2024), pp. 119–120 (http://books.google.com/books?id=JO8GEQAAQBAJ&pg=PA119).
29. Lang (1986), p. 71 (http://books.google.com/books?id=c_NEBAAAQBAJ&pg=PA71).
30. Watkins (2002), p. 102 (http://books.google.com/books?id=xi5omWiQ-3kC&pg=PA102).
31. Bronson (1970), p. 16.
32. Kreyszig (1972), p. 220.
33. Protter & Morrey (1970), p. 869.
34. Kreyszig (1972), pp. 241, 244.
35. Schneider & Barker (2012).
36. Perlis (1991).
37. Anton (2010).
38. Horn, Roger A.; Johnson, Charles R. (2012), Matrix Analysis (https://books.google.com/books?id=5I5AYeeh0JUC&pg=PA17) (2nd ed.), Cambridge University Press, p. 17, ISBN 978-0-521-83940-2.
39. Brown (1991), I.2.21 and 22.
40. Gbur (2011), p. 95.
41. Grossman (1994), pp. 494–495.
42. Bierens (2004), p. 263 (https://books.google.com/books?id=ZrBaRPVRLRoC&pg=PA263).
43. Johnston (2021), p. 56 (https://books.google.com/books?id=y24vEAAAQBAJ&pg=PA56).
44. Greub (1975), p. 90. Note, however, that Greub follows a transposed convention, representing a transformation by multiplying a row vector by a matrix rather than a matrix by a column vector, which reverses the order of the two matrices in the product representing a composition.
45. Lang (1986), §VI.1.
46. Brown (1991), Definition II.3.3.
47. Greub (1975), Section III.1.
48. Brown (1991), Theorem II.3.22.
49. Anton (2010), p. 27 (https://books.google.com/books?id=YmcQJoFyZ5gC&pg=PA27).
50. Anton (2010), p. 68 (https://books.google.com/books?id=YmcQJoFyZ5gC&pg=PA68).
51. Gbur (2011), p. 91.
52. Boas (2005), p. 118.
53. Horn & Johnson (1985), §0.9.1 Diagonal matrices.
54. Boas (2005), p. 138.
55. Horn & Johnson (1985), Theorem 2.5.6.
56. Conway (1990), pp. 262–263.
57. Brown (1991), Definition I.2.28.
58. Brown (1991), Definition I.5.13.
59. Anton (2010), p. 62 (https://books.google.com/books?id=YmcQJoFyZ5gC&pg=PA62).
60. Gbur (2011), pp. 99–100.
61. Horn & Johnson (1985), Chapter 7.
62. Anton (2010), Thm. 7.3.2.
63. Horn & Johnson (1985), Theorem 7.2.1.
64. Boas (2005), p. 150.
65. Horn & Johnson (1985), p. 169, Example 4.0.6.
66. Lang (1986), Appendix. Complex numbers.
67. Horn & Johnson (1985), pp. 66–67.
68. Gbur (2011), pp. 102–103.
69. Boas (2005), pp. 127, 153–154.
70. Boas (2005), p. 141.
71. Horn & Johnson (1985), pp. 40, 42.
72. Lang (1986), p. 281.
73. Tang (2006), p. 226.
74. Bernstein (2009), p. 94.
75. Horn & Johnson (1985), §0.5 Nonsingularity.
76. Margalit & Rabinoff (2019).
77. "Matrix | mathematics" (https://britannica.com/science/matrix-mathematics), Encyclopedia Britannica, retrieved 2020-08-19.
78. Brown (1991), Definition III.2.1.
79. Brown (1991), Theorem III.2.12.
80. Brown (1991), Corollary III.2.16.
81. Mirsky (1990), Theorem 1.4.1.
82. Brown (1991), Theorem III.3.18.
83. Eigen means "own" in German and in Dutch. See Wiktionary (https://en.wiktionary.org/wiki/eigen).
84. Brown (1991), Definition III.4.1.
85. Brown (1991), Definition III.4.9.
86. Brown (1991), Corollary III.4.10.
87. Bernstein (2009), p. 265.
88. Householder (1975), Ch. 7.
89. Bau III & Trefethen (1997).
90. Golub & Van Loan (1996), Algorithm 1.3.1.
91. Vassilevska Williams et al. (2024).
92. Misra, Bhattacharya & Ghosh (2022).
93. Golub & Van Loan (1996), Chapters 9 and 10, esp. section 10.2.
94. Golub & Van Loan (1996), Chapter 2.3.
95. Press et al. (1992).
96. Stoer & Bulirsch (2002), Section 4.1.
97. Gbur (2011), pp. 146–153.
98. Horn & Johnson (1985), Theorem 2.5.4.
99. Horn & Johnson (1985), Ch. 3.1, 3.2.
100. Arnold (1992), Sections 14.5, 7, 8.
101. Bronson (1989), Ch. 15.
102. Coburn (1955), Ch. V.
103. Lang (2002), p. 643, XVII.1.
104. Lang (2002), Proposition XIII.4.16.
105. Reichl (2004), Section L.2.
106. Greub (1975), Section III.3.
107. Greub (1975), Section III.3.13.
108. Perrone (2024), pp. 99–100.
109. Horn & Johnson (1985), p. 69.
110. Additionally, the group must be closed in the general linear group.
111. Baker (2003), Def. 1.30.
112. Baker (2003), Theorem 1.2.
113. Artin (1991), Chapter 4.5.
114. Rowen (2008), p. 198, Example 19.2.
115. See any reference in representation theory or group representation.
116. See the item "Matrix" in Itô (1987).
117. "Not much of matrix theory carries over to infinite-dimensional spaces, and what does is not so useful, but it sometimes helps." Halmos (1982), p. 23, Chapter 5.
118. "Empty Matrix: A matrix is empty if either its row or column dimension is zero", Glossary (https://omatrix.com/manual/glossary.htm) Archived (https://web.archive.org/web/20090429015728/http://www.omatrix.com/manual/glossary.htm) 2009-04-29 at the Wayback Machine, O-Matrix v6 User Guide.
119. Fudenberg & Tirole (1983), Section 1.1.1.
120. Manning & Schütze (1999), Section 15.3.4.
121. Ward (1997), Ch. 2.8.
122. Stinson (2005), Ch. 1.1.5 and 1.2.4.
123. ISRD Group (2005), Ch. 7.
124. Bhaya & Kaszkurewicz (2006), p. 230 (https://books.google.com/books?id=3X7S_965jywC&pg=PA230).
125. Jensen (1999), pp. 65–69 (https://archive.org/details/introductiontoco0000jens/page/65/mode/2up?q=matrix).
126. Godsil & Royle (2004), Ch. 8.1.
127. Punnen & Gutin (2002).
128. Zhang, Yu & Hou (2006), p. 7 (http://books.google.com/books?id=0xhra9vKCnUC&pg=PA7).
129. Scott & Tůma (2023).
130. Lang (1987), Ch. XVI.6.
131. Nocedal & Wright (2006), Ch. 16.
132. Lang (1987), Ch. XVI.1.
133. Lang (1987), Ch. XVI.5. For a more advanced and more general statement, see Lang (1969), Ch. VI.2.
134. Gilbarg & Trudinger (2001).
135. Šolin (2005), Ch. 2.5. See also stiffness method.
136. Latouche & Ramaswami (1999).
137. Mehata & Srinivasan (1978), Ch. 2.8.
138. Healy, Michael (1986), Matrices for Statistics, Oxford University Press, ISBN 978-0-19-850702-4.
139. Krzanowski (1988), p. 60, Ch. 2.2.
140. Krzanowski (1988), Ch. 4.1.
141. Conrey (2007).
142. Zabrodin et al. (2006).
143. Schiff (1968), Ch. 6.
144. Peres (1993), p. 20.
145. Bohm (2001), sections I.8, II.4, and II.8.
146. Peres (1993), p. 73.
147. Itzykson & Zuber (1980), Ch. 2.
148. Burgess & Moore (2007), section 1.6.3 (SU(3)), section 2.4.3.2 (Kobayashi–Maskawa matrix).
149. Weinberg (1995), Ch. 3.
150. Wherrett (1987), part II.
151. Riley, Hobson & Bence (1997), 7.17.
152. Guenther (1990), Ch. 5.
153. Suresh Kumar (2009), pp. 747–749.
154. Shen, Crossley & Lun (1999), cited by Bretscher (2005), p. 1.
155. Dossey (2002), pp. 564–565.
156. Needham, Joseph; Wang Ling (1959), Science and Civilisation in China (https://books.google.com/books?id=jfQ9E0u4pLAC&pg=PA117), vol. III, Cambridge: Cambridge University Press, p. 117, ISBN 978-0-521-05801-8.
157. Dossey (2002), p. 564.
158. Cramer (1750).
159. Kosinski (2001).
160. "matrix", Merriam-Webster dictionary (https://merriam-webster.com/dictionary/matrix), Merriam-Webster, retrieved April 20, 2009.
161. Although many sources state that J. J. Sylvester coined the mathematical term "matrix" in 1848, Sylvester published nothing in 1848. (For proof that Sylvester published nothing in 1848, see Sylvester (1904), vol. 1 (https://books.google.com/books?id=r-kZAQAAIAAJ&pg=PR6).) His earliest use of the term "matrix" occurs in 1850 in J. J. Sylvester (1850), "Additions to the articles in the September number of this journal, 'On a new class of theorems,' and on Pascal's theorem," The London, Edinburgh, and Dublin Philosophical Magazine and Journal of Science, 37: 363–370. From page 369 (https://books.google.com/books?id=CBhDAQAAIAAJ&pg=PA369): "For this purpose, we must commence, not with a square, but with an oblong arrangement of terms consisting, suppose, of m lines and n columns. This does not in itself represent a determinant, but is, as it were, a Matrix out of which we may form various systems of determinants ..."
162. Sylvester (1904), p. 247, Paper 37 (https://books.google.com/books?id=5GQPlxWrDiEC&pg=PA247).
163. Cayley (1858).
164. Dieudonné (1978), Vol. 1, Ch. III, p. 96.
165. Knobloch (1994).
166. Hawkins (1975).
167. Kronecker (1897).
168. Weierstrass (1915), pp. 271–286.
169. Miller (1930).
170. Bôcher (2004).
171. Hawkins (1972).
172. van der Waerden (2007), pp. 28–40.
173. Peres (1993), pp. 79, 106–107.
174. Whitehead, Alfred North; Russell, Bertrand (1913), Principia Mathematica to *56, Cambridge at the University Press, Cambridge UK (republished 1962); cf. page 162ff.
175. Tarski (1941), p. 40 (https://books.google.com/books?id=5MeNCgAAQBAJ&pg=PA40).

References

Mathematical references
Anton, Howard (2010), Elementary Linear Algebra (https://books.google.com/books?id=YmcQJoFyZ5gC&pg=PA
414) (10th ed.), John Wiley & Sons, p. 414, ISBN 978-0-470-45821-1
Arnold, Vladimir I. (1992), Ordinary differential equations, translated by Cooke, Roger, Berlin, DE; New York, NY:
Springer-Verlag, ISBN 978-3-540-54813-3
Artin, Michael (1991), Algebra, Prentice Hall, ISBN 978-0-89871-510-1
Axler, Sheldon (1997), Linear Algebra Done Right, Undergraduate Texts in Mathematics (2nd ed.), Springer,
ISBN 9780387982595
Baker, Andrew J. (2003), Matrix Groups: An Introduction to Lie Group Theory (https://archive.org/details/matrixgr
oupsintr0000bake), Berlin, DE; New York, NY: Springer-Verlag, ISBN 978-1-85233-470-3
Bau III, David; Trefethen, Lloyd N. (1997), Numerical linear algebra, Philadelphia, PA: Society for Industrial and
Applied Mathematics, ISBN 978-0-89871-361-9
Bernstein, Dennis S. (2009), Matrix mathematics: theory, facts, and formulas (2nd ed.), Princeton, N.J: Princeton
University Press, ISBN 978-1-4008-3334-4
Bhaya, Amit; Kaszkurewicz, Eugenius (2006), Control Perspectives on Numerical Algorithms and Matrix
Problems, Advances in Design and Control, vol. 10, SIAM, ISBN 9780898716023
Bierens, Herman J. (2004), Introduction to the Mathematical and Statistical Foundations of Econometrics,
Cambridge University Press, ISBN 9780521542241
Bretscher, Otto (2005), Linear Algebra with Applications (3rd ed.), Prentice Hall
Bronson, Richard (1970), Matrix Methods: An Introduction, New York: Academic Press, LCCN 70097490 (https://l
ccn.loc.gov/70097490)
Bronson, Richard (1989), Schaum's outline of theory and problems of matrix operations, New York: McGraw–Hill,
ISBN 978-0-07-007978-6
Brown, William C. (1991), Matrices and vector spaces (https://archive.org/details/matricesvectorsp0000brow),
New York, NY: Marcel Dekker, ISBN 978-0-8247-8419-5
Coburn, Nathaniel (1955), Vector and tensor analysis, New York, NY: Macmillan, OCLC 1029828 (https://search.
worldcat.org/oclc/1029828)
Conrey, J. Brian (2007), Ranks of elliptic curves and random matrix theory, Cambridge University Press,
ISBN 978-0-521-69964-8
Conway, John B. (1990), A Course in Functional Analysis, Graduate Texts in Mathematics, vol. 96 (2nd ed.),
Springer, ISBN 0-387-97245-5
Dossey, John A. (2002), Discrete Mathematics (4th ed.), Addison Wesley, ISBN 9780321079121
Fraleigh, John B. (1976), A First Course In Abstract Algebra (2nd ed.), Reading: Addison-Wesley, ISBN 0-201-
01984-1
Fudenberg, Drew; Tirole, Jean (1983), Game Theory, MIT Press
Gentle, James E. (1998), Numerical Linear Algebra for Applications in Statistics, Springer, ISBN 9780387985428
Gilbarg, David; Trudinger, Neil S. (2001), Elliptic partial differential equations of second order (2nd ed.), Berlin,
DE; New York, NY: Springer-Verlag, ISBN 978-3-540-41160-4
Godsil, Chris; Royle, Gordon (2004), Algebraic Graph Theory, Graduate Texts in Mathematics, vol. 207, Berlin,
DE; New York, NY: Springer-Verlag, ISBN 978-0-387-95220-8
Golub, Gene H.; Van Loan, Charles F. (1996), Matrix Computations (3rd ed.), Johns Hopkins, ISBN 978-0-8018-
5414-9
Greub, Werner Hildbert (1975), Linear algebra, Graduate Texts in Mathematics, Berlin, DE; New York, NY:
Springer-Verlag, ISBN 978-0-387-90110-7
Grossman, Stanley I. (1994), Elementary Linear Algebra (5th ed.), Saunders College Pub.,
ISBN 9780030973543
Halmos, Paul Richard (1982), A Hilbert space problem book, Graduate Texts in Mathematics, vol. 19 (2nd ed.),
Berlin, DE; New York, NY: Springer-Verlag, ISBN 978-0-387-90685-0, MR 0675952 (https://mathscinet.ams.org/
mathscinet-getitem?mr=0675952)
Hamilton, A. G. (1987), A First Course in Linear Algebra: With Concurrent Examples, Cambridge University
Press, ISBN 9780521310413
Horn, Roger A.; Johnson, Charles R. (1985), Matrix Analysis, Cambridge University Press, ISBN 978-0-521-
38632-6
Householder, Alston S. (1975), The theory of matrices in numerical analysis, New York, NY: Dover Publications,
MR 0378371 (https://mathscinet.ams.org/mathscinet-getitem?mr=0378371)
ISRD Group (2005), Computer Graphics, Tata McGraw–Hill, ISBN 978-0-07-059376-3
Itô, Kiyosi, ed. (1987), Encyclopedic dictionary of mathematics. Vol. I-IV (2nd ed.), MIT Press, ISBN 978-0-262-
09026-1, MR 0901762 (https://mathscinet.ams.org/mathscinet-getitem?mr=0901762)
Johnston, Nathaniel (2021), Introduction to Linear and Matrix Algebra, Springer Nature, ISBN 9783030528119
Kreyszig, Erwin (1972), Advanced Engineering Mathematics (https://archive.org/details/advancedengineer00kre
y) (3rd ed.), New York: Wiley, ISBN 0-471-50728-8.
Krzanowski, Wojtek J. (1988), Principles of multivariate analysis, Oxford Statistical Science Series, vol. 3, The
Clarendon Press Oxford University Press, ISBN 978-0-19-852211-9, MR 0969370 (https://mathscinet.ams.org/m
athscinet-getitem?mr=0969370)
Lancaster, Peter; Tismenetsky, Miron (1985), The Theory of Matrices: With Applications (2nd ed.), Elsevier,
ISBN 9780080519081
Lang, Serge (1969), Analysis II, Addison-Wesley
Lang, Serge (1986), Introduction to Linear Algebra (2nd ed.), Springer, ISBN 9781461210702
Lang, Serge (1987), Calculus of several variables (https://archive.org/details/calculusofsevera0000lang)
(3rd ed.), Berlin, DE; New York, NY: Springer-Verlag, ISBN 978-0-387-96405-8
Lang, Serge (2002), Algebra, Graduate Texts in Mathematics, vol. 211 (Revised third ed.), New York: Springer-
Verlag, ISBN 978-0-387-95385-4, MR 1878556 (https://mathscinet.ams.org/mathscinet-getitem?mr=1878556)
Latouche, Guy; Ramaswami, Vaidyanathan (1999), Introduction to matrix analytic methods in stochastic
modeling (1st ed.), Philadelphia, PA: Society for Industrial and Applied Mathematics, ISBN 978-0-89871-425-8
Manning, Christopher D.; Schütze, Hinrich (1999), Foundations of statistical natural language processing, MIT
Press, ISBN 978-0-262-13360-9
Margalit, Dan; Rabinoff, Joseph (2019), "Determinants and Volumes" (https://textbooks.math.gatech.edu/ila/deter
minants-volumes.html), Interactive Linear Algebra, Georgia Institute of Technology, retrieved 2025-05-10
Maxwell, E. A. (1969), Algebraic Structure and Matrices, Being Part II of Advanced Algebra, Cambridge
University Press
Mehata, K. M.; Srinivasan, S. K. (1978), Stochastic processes, New York, NY: McGraw–Hill, ISBN 978-0-07-
096612-3
Mello, David C. (2017), Invitation to Linear Algebra, Textbooks in Mathematics, CRC Press,
ISBN 9781498779586
Mirsky, Leonid (1990), An Introduction to Linear Algebra (https://books.google.com/books?id=ULMmheb26ZcC&q
=linear+algebra+determinant&pg=PA1), Courier Dover Publications, ISBN 978-0-486-66434-7
Misra, Chandan; Bhattacharya, Sourangshu; Ghosh, Soumya K. (June 2022), "Stark: Fast and scalable
Strassen's matrix multiplication using Apache Spark", IEEE Transactions on Big Data, 8 (3): 699–710,
arXiv:1811.07325 (https://arxiv.org/abs/1811.07325), doi:10.1109/tbdata.2020.2977326 (https://doi.org/10.1109%
2Ftbdata.2020.2977326)
Nering, Evar D. (1970), Linear Algebra and Matrix Theory (2nd ed.), New York: Wiley, LCCN 76-91646 (https://lcc
n.loc.gov/76-91646)
Nocedal, Jorge; Wright, Stephen J. (2006), Numerical Optimization (2nd ed.), Berlin, DE; New York, NY:
Springer-Verlag, p. 449, ISBN 978-0-387-30303-1
Oualline, Steve (2003), Practical C++ programming, O'Reilly, ISBN 978-0-596-00419-4
Perlis, Sam (1991), Theory of Matrices (https://books.google.com/books?id=5_sxtcnvLhoC&pg=PA103), Dover
books on advanced mathematics, Courier Dover Corporation, p. 103, ISBN 978-0-486-66810-9
Perrone, Paolo (2024), Starting Category Theory (https://www.worldscientific.com/worldscibooks/10.1142/13670),
World Scientific, doi:10.1142/9789811286018_0005 (https://doi.org/10.1142%2F9789811286018_0005),
ISBN 978-981-12-8600-1
Pop; Furdui (2017), Square Matrices of Order 2, Springer International Publishing, ISBN 978-3-319-54938-5
Press, William H.; Flannery, Brian P.; Teukolsky, Saul A.; Vetterling, William T. (1992), "LU Decomposition and Its
Applications" (https://web.archive.org/web/20090906113144/http://www.mpi-hd.mpg.de/astrophysik/HEA/internal/
Numerical_Recipes/f2-3.pdf) (PDF), Numerical Recipes in FORTRAN: The Art of Scientific Computing (2nd ed.),
Cambridge University Press, pp. 34–42, archived from the original on 2009-09-06
Protter, Murray H.; Morrey, Charles B. Jr. (1970), College Calculus with Analytic Geometry (2nd ed.), Reading:
Addison-Wesley, LCCN 76087042 (https://lccn.loc.gov/76087042)
Punnen, Abraham P.; Gutin, Gregory (2002), The traveling salesman problem and its variations, Boston, MA:
Kluwer Academic Publishers, ISBN 978-1-4020-0664-7
Ramachandra Rao, A.; Bhimasankaram, P. (2000), Linear Algebra, Texts and Readings in Mathematics, vol. 19
(2nd ed.), Springer, ISBN 9789386279019
Reyes, Manuel (2025), "A tour of noncommutative spectral theories", Notices of the American Mathematical
Society, 72 (2): 145–153, arXiv:2409.08421 (https://arxiv.org/abs/2409.08421), doi:10.1090/noti3100 (https://doi.
org/10.1090%2Fnoti3100), MR 4854325 (https://mathscinet.ams.org/mathscinet-getitem?mr=4854325)
Rowen, Louis Halle (2008), Graduate Algebra: noncommutative view, Providence, RI: American Mathematical
Society, ISBN 978-0-8218-4153-2
Schneider, Hans; Barker, George Phillip (2012), Matrices and Linear Algebra (https://books.google.com/books?id
=9vjBAgAAQBAJ&pg=PA251), Dover Books on Mathematics, Courier Dover Corporation, p. 251, ISBN 978-0-
486-13930-2
Scott, J.; Tůma, M. (2023), "Sparse Matrices and Their Graphs", Algorithms for Sparse Linear Systems (https://d
oi.org/10.1007/978-3-031-25820-6_2), Nečas Center Series, Cham: Birkhäuser, pp. 19–30, doi:10.1007/978-3-
031-25820-6_2 (https://doi.org/10.1007%2F978-3-031-25820-6_2), ISBN 978-3-031-25819-0
Šolin, Pavel (2005), Partial Differential Equations and the Finite Element Method, Wiley-Interscience, ISBN 978-
0-471-76409-0
Stinson, Douglas R. (2005), Cryptography, Discrete Mathematics and its Applications, Chapman & Hall/CRC,
ISBN 978-1-58488-508-5
Stoer, Josef; Bulirsch, Roland (2002), Introduction to Numerical Analysis (3rd ed.), Berlin, DE; New York, NY:
Springer-Verlag, ISBN 978-0-387-95452-3
Suresh Kumar, K. S. (2009), Electric Circuits and Networks, Dorling Kindersley, ISBN 978-81-317-1390-7
Tang, K. T. (2006), Mathematical Methods for Engineers and Scientists 1: Complex Analysis, Determinants and
Matrices, Springer, ISBN 978-3-540-30273-5
Vassilevska Williams, Virginia; Xu, Yinzhan; Xu, Zixuan; Zhou, Renfei (2024), "New bounds for matrix
multiplication: from alpha to omega", Proceedings of the 2024 Annual ACM-SIAM Symposium on Discrete
Algorithms (SODA), pp. 3792–3835, arXiv:2307.07970 (https://arxiv.org/abs/2307.07970),
doi:10.1137/1.9781611977912.134 (https://doi.org/10.1137%2F1.9781611977912.134), ISBN 978-1-61197-791-2
Ward, J. P. (1997), Quaternions and Cayley numbers (https://archive.org/details/quaternionscayle0000ward),
Mathematics and its Applications, vol. 403, Dordrecht, NL: Kluwer Academic Publishers Group, doi:10.1007/978-
94-011-5768-1 (https://doi.org/10.1007%2F978-94-011-5768-1), ISBN 978-0-7923-4513-8, MR 1458894 (https://
mathscinet.ams.org/mathscinet-getitem?mr=1458894)
Watkins, David S. (2002), Fundamentals of Matrix Computations (https://books.google.com/books?id=xi5omWiQ-
3kC), John Wiley & Sons, ISBN 978-0-471-46167-8
Whitelaw, T. A. (1991), Introduction to Linear Algebra (https://books.google.com/books?id=6M_kDzA7-qIC)
(2nd ed.), CRC Press, p. 29, ISBN 9780751401592
Zhang, Yanchun; Yu, Jeffrey Xu; Hou, Jingyu (2006), Web Communities: Analysis and Construction, Springer,
ISBN 978-3-540-27737-8

Physics references
Bauchau, O. A.; Craig, J. I. (2009), Structural Analysis: With Applications to Aerospace Structures, Solid
Mechanics and Its Applications, vol. 163, Springer, ISBN 9789048125166
Boas, Mary L. (2005), Mathematical Methods in the Physical Sciences (3rd ed.), John Wiley & Sons, ISBN 978-0-
471-19826-0
Bohm, Arno (2001), Quantum Mechanics: Foundations and Applications, Springer, ISBN 0-387-95330-2
Burgess, Cliff; Moore, Guy (2007), The Standard Model. A Primer, Cambridge University Press,
Bibcode:2007smp..book.....B (https://ui.adsabs.harvard.edu/abs/2007smp..book.....B), ISBN 978-0-521-86036-9
Gbur, Greg (2011), Mathematical Methods in Optical Physics and Engineering, Cambridge University Press,
Bibcode:2011mmop.book.....G (https://ui.adsabs.harvard.edu/abs/2011mmop.book.....G), ISBN 978-0-521-
51610-5
Guenther, Robert D. (1990), Modern Optics, John Wiley, ISBN 0-471-60538-7
Itzykson, Claude; Zuber, Jean-Bernard (1980), Quantum Field Theory (https://archive.org/details/quantumfieldthe
o0000itzy), McGraw–Hill, ISBN 0-07-032071-3
Jensen, Frank (1999), Introduction to Computational Chemistry (https://archive.org/details/introductiontoco0000je
ns), John Wiley & Sons, ISBN 0-471-98085-4
Peres, Asher (1993), Quantum Theory: Concepts and Methods, Kluwer, ISBN 978-0-7923-3632-7
Reichl, Linda E. (2004), The transition to chaos: conservative classical systems and quantum manifestations,
Berlin, DE; New York, NY: Springer-Verlag, ISBN 978-0-387-98788-0
Riley, Kenneth F.; Hobson, Michael P.; Bence, Stephen J. (1997), Mathematical methods for physics and
engineering, Cambridge University Press, ISBN 0-521-55506-X
Schiff, Leonard I. (1968), Quantum Mechanics (3rd ed.), McGraw–Hill
Weinberg, Steven (1995), The Quantum Theory of Fields. Volume I: Foundations (https://archive.org/details/quan
tumtheoryoff00stev), Cambridge University Press, ISBN 0-521-55001-7
Wherrett, Brian S. (1987), Group Theory for Atoms, Molecules and Solids, Prentice–Hall International, ISBN 0-
13-365461-3
Zabrodin, Anton; Brezin, Édouard; Kazakov, Vladimir; Serban, Didina; Wiegmann, Paul (2006), Applications of
Random Matrices in Physics (NATO Science Series II: Mathematics, Physics and Chemistry), Berlin, DE; New
York, NY: Springer-Verlag, ISBN 978-1-4020-4530-1

Historical references
Bôcher, Maxime (2004), Introduction to higher algebra, New York, NY: Dover Publications, ISBN 978-0-486-
49570-5, reprint of the 1907 original edition
Cayley, Arthur (December 1858), "A memoir on the theory of matrices", Philosophical Transactions of the Royal
Society of London, 148: 17–37, doi:10.1098/rstl.1858.0002 (https://doi.org/10.1098%2Frstl.1858.0002),
JSTOR 108649 (https://www.jstor.org/stable/108649); reprinted in The collected mathematical papers of Arthur
Cayley, vol. II, Cambridge University Press, 1889, pp. 475–496 (https://archive.org/details/collectedmathema02c
ayluoft/page/474).
Cayley, Arthur (1889), The collected mathematical papers of Arthur Cayley (https://quod.lib.umich.edu/cgi/t/text/p
ageviewer-idx?c=umhistmath;cc=umhistmath;rgn=full%20text;idno=ABS3153.0001.001;didno=ABS3153.0001.00
1;view=image;seq=00000140), vol. I (1841–1853), Cambridge University Press, pp. 123–126
Cramer, Gabriel (1750), Introduction à l'Analyse des lignes Courbes algébriques (https://www.europeana.eu/resol
ve/record/03486/E71FE3799CEC1F8E2B76962513829D2E36B63015) (in French), Geneva: Europeana,
pp. 656–659, retrieved 2012-05-18
Dieudonné, Jean, ed. (1978), Abrégé d'histoire des mathématiques 1700-1900, Paris, FR: Hermann
Hawkins, Thomas (1972), "Hypercomplex numbers, Lie groups, and the creation of group representation theory",
Archive for History of Exact Sciences, 8 (4): 243–287, doi:10.1007/bf00328434 (https://doi.org/10.1007%2Fbf003
28434)
Hawkins, Thomas (1975), "Cauchy and the spectral theory of matrices", Historia Mathematica, 2: 1–29,
doi:10.1016/0315-0860(75)90032-4 (https://doi.org/10.1016%2F0315-0860%2875%2990032-4), ISSN 0315-
0860 (https://search.worldcat.org/issn/0315-0860), MR 0469635 (https://mathscinet.ams.org/mathscinet-getitem?
mr=0469635)
Knobloch, Eberhard (1994), "From Gauß to Weierstraß: determinant theory and its historical evaluations", in
Sasaki, Chikara; Sugiura, Mitsuo; Dauben, Joseph W. (eds.), The Intersection of History and Mathematics,
Science Networks: Historical Studies, vol. 15, Birkhäuser, pp. 51–66, doi:10.1007/978-3-0348-7521-9_5 (https://d
oi.org/10.1007%2F978-3-0348-7521-9_5), ISBN 3-7643-5029-6, MR 1308079 (https://mathscinet.ams.org/maths
cinet-getitem?mr=1308079)
Kosinski, A. A. (2001), "Cramer's Rule is due to Cramer", Mathematics Magazine, 74 (4): 310–312,
doi:10.2307/2691101 (https://doi.org/10.2307%2F2691101), JSTOR 2691101 (https://www.jstor.org/stable/269110
1)
Kronecker, Leopold (1897), Hensel, Kurt (ed.), Leopold Kronecker's Werke (https://quod.lib.umich.edu/cgi/t/text/te
xt-idx?c=umhistmath;idno=AAS8260.0002.001), Teubner
Miller, G. A. (May 1930), "On the history of determinants", The American Mathematical Monthly, 37 (5): 216–219,
doi:10.1080/00029890.1930.11987058 (https://doi.org/10.1080%2F00029890.1930.11987058), JSTOR 2299112
(https://www.jstor.org/stable/2299112)
Shen, Kangshen; Crossley, John N.; Lun, Anthony Wah-Cheung (1999), Nine Chapters of the Mathematical Art,
Companion and Commentary (2nd ed.), Oxford University Press, ISBN 978-0-19-853936-0
Sylvester, J. J. (1904), Baker, H. F. (ed.), The Collected Mathematical Papers of James Joseph Sylvester,
Volume I (1837–1853) (https://archive.org/details/collectedmathem01sylvrich), Cambridge, England: Cambridge
University Press
Tarski, Alfred (1941), Introduction to Logic and the Methodology of Deductive Sciences, Oxford University Press,
MR 0003375 (https://mathscinet.ams.org/mathscinet-getitem?mr=0003375); reprint of 1946 corrected printing,
Dover Publications, 1995, ISBN 0-486-28462-X
van der Waerden, B. L., ed. (2007) [1968], Sources of Quantum Mechanics, Dover, ISBN 978-0-486-45892-2
Weierstrass, Karl (1915), Collected works (https://quod.lib.umich.edu/cgi/t/text/text-idx?c=umhistmath;idno=AAN8
481.0003.001), vol. 3

Further reading
"Matrix" (https://www.encyclopediaofmath.org/index.php?title=Matrix), Encyclopedia of Mathematics, EMS Press,
2001 [1994]
The Matrix Cookbook (https://math.uwaterloo.ca/~hwolkowi//matrixcookbook.pdf) (PDF), retrieved 24 March
2014
Brookes, Mike (2005), The Matrix Reference Manual (https://web.archive.org/web/20081216124433/http://www.e
e.ic.ac.uk/hp/staff/dmb/matrix/intro.html), London: Imperial College, archived from the original (https://ee.ic.ac.uk/
hp/staff/dmb/matrix/intro.html) on 16 December 2008, retrieved 10 Dec 2008

External links
MacTutor: Matrices and determinants (https://mathshistory.st-andrews.ac.uk/HistTopics/Matrices_and_determina
nts/)
Matrices and Linear Algebra on the Earliest Uses Pages (https://economics.soton.ac.uk/staff/aldrich/matrices.ht
m)
Earliest Uses of Symbols for Matrices and Vectors (https://jeff560.tripod.com/matrices.html)
