Lecture 2 Slides
Douglas Turatti
det@business.aau.dk
Aalborg University Business School
Denmark
Information about Supervision

▶ There is no lecture in a supervision session.
Introduction

▶ Mathematical optimization is the selection of a best element (with regard to some criterion) from some set of available alternatives.
▶ In the simplest case, an optimization problem consists of maximizing or minimizing a real function by choosing x from within an allowed domain and computing the value of the function.
Quantitative Methods
▶ Optimization problems are very common in Finance. For example: Douglas Turatti
Information
▶ Suppose two risky assets, A and B. Asset A has expected log-returns
3 Introduction
E(Ra ), and variance σA2 . Asset B has expected log-returns E(RB ), and
Local and Global
variance σB2 . The correlation between assets is ρAB . Extreme Points
Recipe
▶ Suppose you want to form a portfolio with both assets. Let wa be the Multivariate
weight on asset A, and (1 − wa ) the weight on asset B. The mean of the Optimization
Matrix Multiplication
where 0 ≤ wa ≤ 1.
▶ If you want to find the portfolio with the lowest risk, you have to minimize
(2) to find the optimal wa . Aalborg University Business
School
69 Denmark
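The two-asset mean and variance can be sketched in a few lines of Python; the function and variable names here are illustrative, not from the slides.

```python
def portfolio_mean(w_a, mu_a, mu_b):
    """Expected portfolio return for weight w_a on asset A, 1 - w_a on B."""
    return w_a * mu_a + (1 - w_a) * mu_b

def portfolio_variance(w_a, var_a, var_b, rho_ab):
    """Two-asset portfolio variance with correlation rho_ab."""
    sd_a, sd_b = var_a ** 0.5, var_b ** 0.5
    return (w_a**2 * var_a + (1 - w_a)**2 * var_b
            + 2 * w_a * (1 - w_a) * sd_a * sd_b * rho_ab)

mu = portfolio_mean(0.5, 0.10, 0.07)       # equal-weight expected return
var = portfolio_variance(0.5, 16, 4, -0.2) # equal-weight variance
```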
Introduction to Optimization: Notation and Terminology

▶ We are interested in the maximum and minimum points of a function. They are also usually referred to as extreme points.

Definition
c ∈ D is a maximum point for f if f(x) ≤ f(c) for all x ∈ D.
d ∈ D is a minimum point for f if f(x) ≥ f(d) for all x ∈ D.

▶ We call f(c) the maximum value and f(d) the minimum value.
▶ If the value of f at c (d) is strictly larger (smaller) than at any other point in D, then c (d) is the global maximum (minimum) point.
▶ We use the following notation:

min_{x∈R} f(x)   (3)

This means the minimum value of f(x) when x is in the real line.

arg min_{x∈R} f(x)   (4)

This means the argument x which minimizes the function f(x).
▶ If f is a differentiable function that has a maximum or minimum at an interior point c of its domain, then the tangent line to its graph must be horizontal (parallel to the x-axis) at that point. Why?

Theorem
Suppose that a function f is differentiable in an interval I and that c is an interior point of I. For x = c to be a maximum or minimum point for f in I, a necessary condition is that it is a stationary point for f, i.e. that x = c satisfies the equation f′(x) = 0. This is called the first-order condition.
Necessary Condition

▶ The first-order condition f′(c) = 0 is necessary for f to have a maximum or minimum at an interior point c of its domain. It is not a sufficient condition. What does it mean that it is not sufficient?
▶ Necessary vs. sufficient conditions: a necessary condition is one which must be present in order for another condition to occur, while a sufficient condition is one which produces the said condition.
▶ Figure 2 shows the graph of a function f defined in an interval [a, b] having two stationary points, c and d. At c, there is a maximum; at d, there is a minimum.
▶ In Figure 4, the function has three stationary points, x0, x1, x2. x0 is a local maximum, x1 is a local minimum, and x2 is neither a local maximum nor a local minimum. It is an inflection point.
▶ A stationary point can only occur where f′(c) = 0; this is a necessary condition.
▶ Note that if f′(x) ≥ 0 both for x ≤ c and for x ≥ c, then c cannot be a local maximum. On the other hand, it can be an inflection point if f′(c) = 0.

Theorem
If f′(x) ≥ 0 for x ≤ c and f′(x) ≤ 0 for x ≥ c, then x = c is a local maximum point for f.
Necessary Condition: Local Minimum

▶ Now, suppose f′(x) ≤ 0 for all x in I such that x ≤ c, whereas f′(x) ≥ 0 for all x in I such that x ≥ c. Then f(x) is decreasing to the left of c and increasing to the right of c, so x = c is a local minimum point for f.
▶ Recall that if f″(x) < 0 for all x ∈ I, the function is concave in I.
▶ This implies that f′(x) is decreasing for all x ∈ I. Suppose a stationary point c.
▶ If f′(c) = 0 at an interior point c of I, then f′(x) ≥ 0 to the left of c, while f′(x) ≤ 0 to the right of c; hence c is a local maximum.
▶ Now, if f″(x) > 0 for all x ∈ I, the function is convex in I.
▶ This implies that f′(x) is increasing for all x ∈ I. Suppose a stationary point c. Then f′(x) ≤ 0 to the left of c and f′(x) ≥ 0 to the right of c; hence c is a local minimum.
Remark
If f″(x) < 0 for all x ∈ I, the function is then concave in I, and a stationary point c in I must be a local maximum.
If f″(x) > 0 for all x ∈ I, the function is then convex in I, and a stationary point c in I must be a local minimum.
▶ Example: Suppose a portfolio with two risky assets A and B. E(R_A) is 10%, and E(R_B) = 7%. The risk of asset A is σ_A² = 16%, and the risk of asset B is σ_B² = 4%. The assets have a negative correlation ρ_AB = −0.2.
▶ Find the w_A and w_B which yield the lowest risk.
▶ Write the portfolio variance as a function of w_a and simplify this equation to obtain

σ_P² = (116/5) w_a² − (56/5) w_a + 4   (6)

▶ The first-order condition is

(232/5) w_a − 56/5 = 0   (7)

Now, we can find the proportion on asset A which yields the lowest-risk portfolio:

w_a* = 56/232 ≈ 0.24   (8)

▶ The second derivative is

f″(w_a) = 232/5 > 0   (9)

▶ The function is convex, so w_a* is indeed the minimum variance portfolio (MVP).
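The example can be checked numerically with the standard closed form for the two-asset minimum-variance weight, w* = (σ_B² − σ_Aσ_Bρ)/(σ_A² + σ_B² − 2σ_Aσ_Bρ); the correlation −0.2 used below is the value implied by equation (6), and the function name is illustrative.

```python
def mvp_weight(var_a, var_b, rho_ab):
    """Weight on asset A that minimizes the two-asset portfolio variance."""
    sd_a, sd_b = var_a ** 0.5, var_b ** 0.5
    cov = sd_a * sd_b * rho_ab
    return (var_b - cov) / (var_a + var_b - 2 * cov)

w_star = mvp_weight(16, 4, -0.2)   # should agree with 56/232 from eq. (8)
```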
Introduction to Optimization: The Extreme Value Theorem

Theorem
Suppose that f is a continuous function on a closed, bounded interval [a, b]. Then there exist points c and d in [a, b] such that, for all x in [a, b],

f(d) ≤ f(x) ≤ f(c)   (10)

▶ The extreme value theorem states that a function f which is continuous on the closed interval [a, b] must attain a maximum and a minimum there.
▶ This theorem is important when dealing with restricted optimization.
▶ On a closed interval, the minimum/maximum may occur in the interior of the bounded interval, or on its boundaries.
▶ Interior point: if it occurs at an interior point (inside the interval I) and f() is differentiable, then the derivative must be zero at that point.
Recipe: find the maximum and minimum values of a differentiable function f defined on a closed, bounded interval [a, b]:
1. Find all points x ∈ [a, b] that satisfy the equation f′(x) = 0.
2. Evaluate f at these stationary points and at the end points a and b.
3. The largest of these values is the maximum value, and the smallest is the minimum value.
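The recipe can be sketched directly in code: collect the stationary points and the end points as candidates, then compare values. The function name and the quadratic example are illustrative, not from the slides.

```python
def extreme_values(f, stationary_points, a, b):
    """Return (min value, max value) of f on [a, b], given the roots of f'."""
    candidates = [a, b] + [x for x in stationary_points if a <= x <= b]
    values = [f(x) for x in candidates]
    return min(values), max(values)

# Example: f(x) = x^2 - 2x on [0, 3]; f'(x) = 2x - 2 = 0 at x = 1.
lo, hi = extreme_values(lambda x: x**2 - 2 * x, [1], 0, 3)
```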
The Extreme Value Theorem: Application

▶ Suppose a portfolio with two risky assets A and B. E(R_A) is 10%, and E(R_B) = 7%. The risk of asset A is σ_A² = 16%, and the risk of asset B is σ_B² = 4%. The assets have a negative correlation ρ_AB = −0.1. Use the extreme value theorem.
▶ Find the w_A and w_B which yield the lowest risk.
▶ Write the portfolio variance as a function of w_a and simplify to obtain

σ_P² = (108/5) w_a² − (48/5) w_a + 4   (12)
▶ The first-order condition is

(σ_P²)′ = (216/5) w_a − 48/5 = 0,   (13)

w_a* = 48/216 ≈ 0.22   (14)

▶ Check the lower bound w_a = 0 for a possible boundary minimum, and likewise the upper bound w_a = 1, comparing both with the value at w_a*.
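Following the recipe, a small sketch comparing the interior stationary point of equation (12) with the boundary weights w = 0 and w = 1:

```python
def port_var(w):
    """Portfolio variance from equation (12): (108/5) w^2 - (48/5) w + 4."""
    return 108 / 5 * w**2 - 48 / 5 * w + 4

w_star = 48 / 216                  # interior stationary point, eq. (14)
candidates = {0.0: port_var(0.0), 1.0: port_var(1.0), w_star: port_var(w_star)}
w_min = min(candidates, key=candidates.get)   # the interior point wins
```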
Local and Global Extreme Points

▶ A local maximum (minimum) is only required to be the largest (smallest) value in a neighbourhood, in contrast to a global one over the whole domain of the function. Formally,

Definition
f is said to have a local maximum point at the point x*, if there exists an interval (a, b) about x* such that f(x) ≤ f(x*) for all x in (a, b) that belong to the domain of f.

Definition
f is said to have a local minimum point at the point x*, if there exists an interval (a, b) about x* such that f(x) ≥ f(x*) for all x in (a, b) that belong to the domain of f.
▶ In most cases, we are interested in the global maximum (minimum). However, global maxima are not easy to find.
Remark
At a local extreme point in the interior of the domain of a differentiable function, the derivative must be 0.

▶ In order to find possible local maxima and minima for a function f defined in an interval I, we can again search among the following types of point:
1. Interior points in I where f′(x) = 0.
2. End points of I, if included in I.
3. Interior points in I where f′ does not exist.

▶ The first-derivative test is based on studying the sign of the first derivative around the stationary point: if f′(x) ≤ 0 throughout some interval (a, c) to the left of c and f′(x) ≥ 0 throughout some interval (c, b) to the right of c, then x = c is a local minimum point for f.

Theorem
If f′(x) > 0 both throughout some interval (a, c) to the left of c and throughout some interval (c, b) to the right of c, then x = c is not a local extreme point for f. The same conclusion holds if f′(x) < 0 on both sides of c.
Local and Global Extreme Points: The First-Derivative Test

▶ The signs of the derivative in an interval around the critical point determine whether it is a local maximum, a local minimum, or neither.
▶ The first-derivative test requires knowing the sign of f′(x) at points both to the left and to the right of the given stationary point.
▶ However, there is a more convenient way to determine whether a critical point is a maximum, a minimum, or an inflection point.

Theorem
If f′(c) = 0 and f″(c) < 0, then x = c is a strict local maximum point.
If f′(c) = 0 and f″(c) > 0, then x = c is a strict local minimum point.
If f′(c) = 0 and f″(c) = 0, then the test is inconclusive.
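The second-derivative test above is easy to mechanize. A minimal sketch for the illustrative function f(x) = x³ − 3x (not from the slides), whose stationary points are x = −1 and x = 1:

```python
def classify(fpp_at_c):
    """Classify a stationary point c from the sign of f''(c)."""
    if fpp_at_c < 0:
        return "strict local maximum"
    if fpp_at_c > 0:
        return "strict local minimum"
    return "inconclusive"

fpp = lambda x: 6 * x   # second derivative of x^3 - 3x
kinds = {c: classify(fpp(c)) for c in (-1.0, 1.0)}
```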
▶ Recall that we defined a twice differentiable function f(x) to be concave (convex) in an interval I if f″(x) ≤ 0 (≥ 0) for all x in I.
▶ Points at which a function changes from being convex to being concave, or vice versa, are called inflection points.

Definition
The point c is called an inflection point for the function f if there exists an interval (a, b) about c such that f″(x) ≥ 0 in (a, c) and f″(x) ≤ 0 in (c, b), or vice versa.
Recipe

▶ We can now summarize the procedure to find a maximum or minimum of a single-variable function.
1. Find the critical values, i.e. those that satisfy f′(c) = 0 for c ∈ X.
2. Use the second-derivative (or first-derivative) test to classify each critical value as a maximum, minimum, or inflection point.
3. If the domain is a closed interval, also compare with the values of f at the boundary points.
Multivariate Optimization

▶ In finance we often need to find the optimal weight for several assets in the portfolio.
▶ However, optimization with several variables is a matrix-based topic. This means that a good prior understanding of matrix algebra is necessary. Thus, here we focus only on two-variable and three-variable optimization.
▶ Suppose a portfolio with 3 assets, A, B and C. Let σ_i² represent the variance of asset i. The total variance of a portfolio with returns given by E[P] = w_a E(R_a) + w_b E[R_b] + w_c E[R_c] is

σ_P² = w_a² σ_a² + w_b² σ_b² + w_c² σ_c² + 2 w_a w_b σ_a σ_b ρ_AB + 2 w_a w_c σ_a σ_c ρ_AC + 2 w_b w_c σ_b σ_c ρ_BC   (16)

▶ We can reduce the dimension of the optimization as w_c = 1 − w_a − w_b.
▶ The minimum variance portfolio is found by minimizing the above function.
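A sketch of the three-asset variance in equation (16), with the substitution w_c = 1 − w_a − w_b built in; the function name and argument layout are illustrative.

```python
def port_var3(w_a, w_b, var, rho):
    """Eq. (16) with w_c = 1 - w_a - w_b.

    var = (var_a, var_b, var_c); rho = (rho_ab, rho_ac, rho_bc).
    """
    w_c = 1 - w_a - w_b
    sa, sb, sc = (v ** 0.5 for v in var)
    rab, rac, rbc = rho
    return (w_a**2 * var[0] + w_b**2 * var[1] + w_c**2 * var[2]
            + 2 * w_a * w_b * sa * sb * rab
            + 2 * w_a * w_c * sa * sc * rac
            + 2 * w_b * w_c * sb * sc * rbc)

v_all_a = port_var3(1.0, 0.0, (16, 4, 9), (0, 0, 0))   # all weight on A
```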
▶ Recall that for the function f(x) a critical point satisfies f′(x₀) = 0.
▶ For a function f(x, y), a critical point must have partial derivatives equal to 0 with respect to both variables.
▶ Example: The function f is defined for all (x, y). Find the maximum.
▶ Find the partial derivatives and set them equal to 0:

∂f/∂x = −4x − 2y + 36 = 0,   (19)

∂f/∂y = −2x − 4y + 42 = 0   (20)

▶ Solving this system we have that (x, y) = (5, 8). This coordinate is the only critical point in the system.
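The first-order conditions (19)–(20) form the linear system 4x + 2y = 36, 2x + 4y = 42, which a small Cramer's-rule sketch can solve; the helper name is illustrative.

```python
def solve_2x2(a, b, c, d, e, f):
    """Solve the system a*x + b*y = e, c*x + d*y = f by Cramer's rule."""
    det = a * d - b * c
    return (e * d - b * f) / det, (a * f - e * c) / det

x, y = solve_2x2(4, 2, 2, 4, 36, 42)   # the critical point of the example
```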
▶ Recall that the second derivative tells us if the critical point is a maximum or minimum. For two variables there is an analogous second-order test.

Theorem
Suppose that (x₀, y₀) is an interior stationary point for the function f(x, y). If

∂²f/∂x² ≤ 0,  ∂²f/∂y² ≤ 0,  and  (∂²f/∂x²)(∂²f/∂y²) − (∂²f/∂x∂y)² ≥ 0

then (x₀, y₀) is a maximum point for f(x, y).

Theorem
Suppose that (x₀, y₀) is an interior stationary point for the function f(x, y). If

∂²f/∂x² ≥ 0,  ∂²f/∂y² ≥ 0,  and  (∂²f/∂x²)(∂²f/∂y²) − (∂²f/∂x∂y)² ≥ 0

then (x₀, y₀) is a minimum point for f(x, y).
Remark
If a twice differentiable function z = f(x, y) satisfies the first set of inequalities, then it is called concave, whereas it is called convex if it satisfies the second set.

▶ Example: Let's show that the previous example is a maximum.
▶ Solution: We have found that ∂f/∂x = −4x − 2y + 36 = 0 and ∂f/∂y = −2x − 4y + 42 = 0. Thus f″₁₁ = −4, f″₂₂ = −4 and f″₁₂ = −2, so

f″₁₁ < 0,  f″₂₂ < 0,  f″₁₁ f″₂₂ − (f″₁₂)² = 16 − 4 = 12 ≥ 0   (21)
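The second-order check can be expressed as a one-line predicate on the Hessian entries; the function name is illustrative, and the values below are those of the example (f₁₁ = −4, f₂₂ = −4, f₁₂ = −2).

```python
def is_max(f11, f22, f12):
    """Second-order sufficient check for a maximum at a stationary point."""
    return f11 <= 0 and f22 <= 0 and f11 * f22 - f12**2 >= 0

result = is_max(-4, -4, -2)   # the example's critical point is a maximum
```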
Constrained Optimization

▶ There are several examples of constrained optimization in economics, finance, and statistics.
▶ Here, we will study only simple cases and methods. The topic is, in general, matrix-based.
▶ The portfolio problem (with N assets) of finding the MVP is a constrained optimization, as we generally want 0 ≤ w_a ≤ 1, . . . , 0 ≤ w_N ≤ 1, and w_a + · · · + w_N = 1.
▶ In general, the optimization will be subject to equality or inequality constraints.
▶ In portfolio theory, we are interested in the efficient frontier.
▶ The efficient frontier has the best possible expected level of return for its level of risk, which is represented by the standard deviation of the portfolio's return.
▶ Note that, differently from the 2-asset case, larger portfolios can have the same expected return by combining assets using different proportions.
▶ In other words, several weight vectors can yield the same expected returns.
▶ For the 3-variable case the problem can be written as

min σ_P² = w_a² σ_a² + w_b² σ_b² + w_c² σ_c² + 2 w_a w_b σ_a σ_b ρ_AB + 2 w_a w_c σ_a σ_c ρ_AC + 2 w_b w_c σ_b σ_c ρ_BC   (24)

subject to

w_a + w_b + w_c = 1 and w_a r_a + w_b r_b + w_c r_c = R_p   (25)
▶ We start with the problem of maximizing (or minimizing) a function f(x, y) of two variables, when x and y are restricted to satisfy an equality constraint g(x, y) = c.
▶ The basic method for this problem is the Lagrange multiplier. Define the Lagrangian

L(x, y) = f(x, y) − λ (g(x, y) − c)   (26)

▶ Note that the term g(x, y) − c applies the restriction.
Constrained Optimization: Lagrange Multiplier

▶ Note that the term g(x, y) − c applies the restriction, and it has been multiplied by a parameter λ, the Lagrange multiplier.
▶ Note that L(x, y) = f(x, y) for all (x, y) that satisfy the constraint g(x, y) = c.
▶ The optimization is then carried out on the Lagrangian function.
▶ Differentiate the Lagrangian with respect to x and y:

L′₁(x, y) = f′₁(x, y) − λ g′₁(x, y)   (27)

L′₂(x, y) = f′₂(x, y) − λ g′₂(x, y)   (28)
Recipe

▶ Step I: Write down the problem

max (min) f(x, y) subject to g(x, y) = c   (29)

and the Lagrangian

L(x, y) = f(x, y) − λ (g(x, y) − c)   (30)

▶ Step II: Differentiate L(x, y) w.r.t. x and y, and equate the partial derivatives to 0:

L′₁(x, y) = f′₁(x, y) − λ g′₁(x, y) = 0   (31)

L′₂(x, y) = f′₂(x, y) − λ g′₂(x, y) = 0   (32)
▶ Step III: The two equations in (II), together with the constraint, yield a system of three equations:

L′₁(x, y) = f′₁(x, y) − λ g′₁(x, y) = 0   (33)

L′₂(x, y) = f′₂(x, y) − λ g′₂(x, y) = 0   (34)

g(x, y) = c   (35)

▶ Step IV: Solve these three equations simultaneously for the three unknowns x, y, and λ. These triples (x, y, λ) are the solution candidates, at least one of which solves the problem.
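Steps I–IV can be sketched on a toy problem (not from the slides): maximize f(x, y) = xy subject to g(x, y) = x + y = 2. The conditions (33)–(35) become y − λ = 0, x − λ = 0, x + y = 2, whose solution is x = y = λ = 1; the code checks the candidate's residuals.

```python
def lagrange_conditions(x, y, lam):
    """Residuals of eqs. (33)-(35) for f = x*y, g = x + y, c = 2."""
    L1 = y - lam       # dL/dx = f1 - lam*g1 = y - lam
    L2 = x - lam       # dL/dy = f2 - lam*g2 = x - lam
    g = x + y - 2      # constraint residual g(x, y) - c
    return L1, L2, g

residuals = lagrange_conditions(1.0, 1.0, 1.0)   # all zero at the candidate
```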
▶ Consider the problem

max (min) f(x, y) subject to g(x, y) = c   (36)
Recipe: Several Variables

▶ The same recipe applies with n variables. We write the Lagrangian

L(x₁, . . . , xₙ) = f(x₁, . . . , xₙ) − λ (g(x₁, . . . , xₙ) − c)   (41)

▶ The solution proceeds the same way, by taking partial derivatives with respect to each variable and λ.
▶ We will not solve these problems here, as they require matrix algebra.
Matrices and Matrix Operations

▶ A matrix is simply a rectangular array of numbers considered as one mathematical object.
▶ When there are m rows and n columns in the array, we have an m × n matrix.
▶ We usually denote a matrix with bold capital letters such as A, B, and so on.
▶ a_ij denotes the element in the i-th row and the j-th column.
▶ We can write a matrix compactly as A = (a_ij)_{m×n}. This means a matrix of order m × n.
Vectors

▶ A matrix with either only one row or only one column is called a vector.
▶ A row n-dimensional vector has only one row:

a = (a11 a12 . . . a1n)   (43)

▶ A column n-dimensional vector has only one column:

b = [ b11
      b21
      ...
      bn1 ]   (44)
▶ Equality between matrices of the same order means that all elements of the matrices are equal: A and B of size m × n are equal if and only if a_ij = b_ij for all i, j.
▶ Let A and B be matrices of the same order, m × n. The sum A + B is defined as

A + B = (a_ij)_{m×n} + (b_ij)_{m×n} = (a_ij + b_ij)_{m×n}   (46)

and the difference as

A − B = (a_ij)_{m×n} − (b_ij)_{m×n} = (a_ij − b_ij)_{m×n}   (47)
▶ Example: Consider the matrices

A = [ 1  3  2 ]    B = [ 1  0  2 ]
    [ 5 −3  1 ]        [ 0  1  2 ]   (49)

▶ Calculate A + B, 3A, and A − B.
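Elementwise matrix operations are easy to sketch on plain nested lists; the helper names are illustrative, and the matrices are those of equation (49).

```python
def mat_add(X, Y):
    return [[x + y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]

def mat_sub(X, Y):
    return [[x - y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]

def mat_scale(t, X):
    return [[t * x for x in row] for row in X]

A = [[1, 3, 2], [5, -3, 1]]
B = [[1, 0, 2], [0, 1, 2]]
sum_ab = mat_add(A, B)
three_a = mat_scale(3, A)
diff_ab = mat_sub(A, B)
```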
▶ Rules for matrix addition and scalar multiplication:
▶ (A + B) + C = A + (B + C)
▶ A + 0 = A
▶ α(A + B) = αA + αB
▶ Vectors can be row or column vectors. For example, a row n-vector:

a = (a₁, a₂, a₃, . . . , aₙ)   (53)

▶ The elements a_i, i = 1, 2, . . . , n, are called components (or coordinates) of the vector.
▶ The operations introduced for matrices are equally valid for vectors.
▶ Two n-vectors a and b are equal if and only if all their corresponding components are equal.
▶ If a is an n-vector and t is a real number, we define ta as the n-vector whose components are t times the corresponding components in a.
▶ If a and b are two n-vectors and t and s are real numbers, the linear combination ta + sb is the n-vector whose i-th component is t·a_i + s·b_i.
▶ Affine transformation: if a and b are two n-vectors and t is a real number, the combinations ta + (1 − t)b trace the line through a and b.
Douglas Turatti
Information
Introduction
Definition Recipe
Constrained
Optimization
a′ b = a1 b1 + a2 b2 + · · · + an bn (55) Matrices and Vectors
59 Vectors
Matrix Multiplication
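The dot product of equation (55) in one line of Python, with an orthogonality check on an illustrative pair of perpendicular vectors:

```python
def dot(a, b):
    """Inner product a'b = a1*b1 + ... + an*bn."""
    return sum(x * y for x, y in zip(a, b))

a = [1.0, 2.0]
b = [-2.0, 1.0]              # perpendicular to a
orthogonal = dot(a, b) == 0  # zero dot product means a is orthogonal to b
```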
▶ When the angle between two vectors a and b is 90°, they are said to be orthogonal, represented by a ⊥ b.
▶ When 2 vectors are orthogonal, their dot product is equal to 0.
▶ The dot product is closely related to the covariance (and correlation) coefficient. For two demeaned data series, when their dot product is equal to 0, we say the series are uncorrelated (i.e. orthogonal). Recall the covariance coefficient between X and Y.
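The link to covariance can be sketched directly: demean each series, take the dot product, and divide by n − 1 to get the sample covariance. The data below are illustrative.

```python
def demean(xs):
    m = sum(xs) / len(xs)
    return [x - m for x in xs]

def sample_cov(xs, ys):
    """Sample covariance = dot product of demeaned series over n - 1."""
    dx, dy = demean(xs), demean(ys)
    return sum(a * b for a, b in zip(dx, dy)) / (len(xs) - 1)

cov = sample_cov([1.0, 2.0, 3.0], [5.0, 1.0, 3.0])
```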
▶ The rules for adding or subtracting matrices are quite natural and simple. Multiplication requires a specific rule for combining rows and columns. To motivate matrix multiplication, consider the following system of equations.
▶ How can this system be converted into matrix form? The answer must recover the original system. This means multiplying a coefficient matrix A by the vector Y, which is a 2 × 1 vector.
▶ We recover the first element of the original system by multiplying the first row of the matrix A by the vector Y and adding the products.
▶ The first element of the first row of matrix A multiplies the first element of the vector. The second element of the first row of matrix A multiplies the second element of the vector. This gives

a₁₁ y₁ + a₁₂ y₂   (62)

which is a scalar.
Definition
Suppose that A = (a_ij)_{m×n} and that B = (b_ij)_{n×p}. Then the product C = AB is the m × p matrix C = (c_ij)_{m×p}, whose element in the i-th row and the j-th column is the inner product

c_ij = Σ_{r=1}^{n} a_ir b_rj = a_i1 b_1j + a_i2 b_2j + · · · + a_in b_nj   (63)
▶ Example: Are the products AB and BA defined for the matrices below? If so, compute the matrix product AB. What about the product BA?

A = [ 0  1  2 ]    B = [  3  2 ]
    [ 2  3  1 ]        [  1  0 ]
    [ 4 −1  6 ]        [ −1  1 ]   (64)

▶ Let's start with the product AB. For this product to be defined, we need that the number of columns in matrix A is equal to the number of rows in matrix B. Here A is 3 × 3 and B is 3 × 2, so AB is defined and is a 3 × 2 matrix.
▶ The matrix multiplication is then calculated as

[ 0  1  2 ] [  3  2 ]   [ 0×3 + 1×1 + 2×(−1)      0×2 + 1×0 + 2×1 ]   [ −1   2 ]
[ 2  3  1 ] [  1  0 ] = [ 2×3 + 3×1 + 1×(−1)      2×2 + 3×0 + 1×1 ] = [  8   5 ]
[ 4 −1  6 ] [ −1  1 ]   [ 4×3 + (−1)×1 + 6×(−1)   4×2 + (−1)×0 + 6×1 ]   [  5  14 ]
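The row-by-column rule of equation (63) can be sketched on nested lists and checked against the example; the helper name is illustrative.

```python
def mat_mul(X, Y):
    """Product of X (m x n) and Y (n x p) by the row-by-column rule."""
    n, p = len(Y), len(Y[0])
    return [[sum(X[i][r] * Y[r][j] for r in range(n)) for j in range(p)]
            for i in range(len(X))]

A = [[0, 1, 2], [2, 3, 1], [4, -1, 6]]
B = [[3, 2], [1, 0], [-1, 1]]
AB = mat_mul(A, B)
```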
Remark
Differently from scalars, matrix multiplication is not commutative. In the previous example, AB was defined but BA was not. Even in cases in which AB and BA are both defined, they are usually not equal. When we write AB, we say that we premultiply B by A (or postmultiply A by B).
▶ As seen in the previous remark, matrix multiplication is not commutative, i.e. AB ≠ BA in general. However, some rules from scalars still apply:
1. Associative law: (AB)C = A(BC)
2. Left distributive law: A(B + C) = AB + AC
3. Right distributive law: (B + C)A = BA + CA
4. (αA)B = A(αB) = α(AB), where α is a scalar.
5. Power matrix: Aⁿ = A × A × · · · × A (n factors)
▶ The identity matrix is the matrix equivalent of the scalar 1.
▶ The identity matrix of order n, denoted by I_n, is the n × n matrix having ones along the main diagonal and zeros elsewhere:

I_n = [ 1  0  . . .  0 ]
      [ 0  1  . . .  0 ]
      [ .  .  . . .  . ]
      [ 0  0  . . .  1 ]   (65)
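A short sketch building I_n as nested lists and checking that it behaves like the scalar 1 under the row-by-column multiplication rule; helper names are illustrative.

```python
def identity(n):
    """The n x n identity matrix I_n."""
    return [[1 if i == j else 0 for j in range(n)] for i in range(n)]

def mat_mul(X, Y):
    return [[sum(X[i][r] * Y[r][j] for r in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

A = [[1, 3], [5, -3]]
same = mat_mul(identity(2), A) == A   # I_2 * A = A, like 1 * a = a
```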