Lecture 2 Slides

The document discusses optimization techniques in quantitative finance. It introduces single variable and multivariate optimization, as well as constrained optimization. It also discusses matrices and vectors as applied to portfolio optimization problems with multiple assets.


Quantitative Methods in Finance

Lecture 2 : Single-Variable Optimization


Introduction to Multivariate Optimization
Introduction to Constrained Optimization
Matrices and Vectors (part 1)

October 10, 2022

Douglas Turatti
det@business.aau.dk
Aalborg University Business School
Denmark
Information about Supervision

▶ On Monday, we have our first supervision session.

▶ There is no lecture in a supervision session.

▶ The supervision sessions will take place in my office (FIB2, 43). You have to book a time via Moodle.

▶ The supervision sessions will focus on the empirical study.
Introduction to Optimization

▶ Mathematical optimization is the selection of a best element (with regard to some criterion) from some set of available alternatives.

▶ In the simplest case, an optimization problem consists of maximizing or minimizing a real function by choosing x from within an allowed domain and computing the value of the function.

▶ Optimization is one of the most important mathematical techniques in finance.
Introduction to Optimization
Example in Finance

▶ Optimization problems are very common in finance. For example:

▶ Suppose two risky assets, A and B. Asset A has expected log-return E(RA) and variance σA². Asset B has expected log-return E(RB) and variance σB². The correlation between the assets is ρAB.

▶ Suppose you want to form a portfolio with both assets. Let wa be the weight on asset A, and (1 − wa) the weight on asset B. The mean of the portfolio is then

E[P] = wa E(RA) + (1 − wa) E(RB)   (1)

It is easy to show that the variance of this portfolio is

σP² = wa² σA² + (1 − wa)² σB² + 2 wa (1 − wa) σA σB ρAB   (2)

where 0 ≤ wa ≤ 1.

▶ Note that this is a quadratic equation, with σP² = F(wa).

▶ If you want to find the portfolio with the lowest risk, you have to minimize (2) to find the optimal wa.
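The mean and variance formulas (1)–(2) can be checked numerically. A minimal sketch; the particular weight and inputs below are illustrative choices, not prescribed by the slides:

```python
# Two-asset portfolio mean and variance, eq. (1) and (2).
from math import sqrt

def portfolio_mean(wa, mu_a, mu_b):
    # eq. (1): weighted average of expected returns
    return wa * mu_a + (1 - wa) * mu_b

def portfolio_var(wa, var_a, var_b, rho):
    # eq. (2): variance with the covariance cross term
    sa, sb = sqrt(var_a), sqrt(var_b)
    return (wa**2 * var_a + (1 - wa)**2 * var_b
            + 2 * wa * (1 - wa) * sa * sb * rho)

# Illustrative inputs: equal weights, sigma_A^2 = 16, sigma_B^2 = 4, rho = -0.2
m = portfolio_mean(0.5, 10.0, 7.0)
v = portfolio_var(0.5, 16.0, 4.0, -0.2)
print(m, v)  # → 8.5 4.2
```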
Introduction to Optimization
Introduction: Notation and Terminology

▶ The points in the domain of a function where it reaches its largest and smallest values are usually referred to as maximum and minimum points, or collectively as extreme points.

Definition
c ∈ D is a maximum point for f if f(x) ≤ f(c) for all x ∈ D.
d ∈ D is a minimum point for f if f(x) ≥ f(d) for all x ∈ D.

▶ We call f(c) the maximum value and f(d) the minimum value.

▶ If the value of f at c (d) is strictly larger (smaller) than at any other point in D, then c (d) is the global maximum (minimum) point.

▶ We also use the terms optimal points and values, or extreme points and values.
Introduction to Optimization
Introduction: Notation and Terminology

▶ We use the following notation:

min_{x∈R} f(x)   (3)

This means the minimum value of f(x) when x is in the set of real numbers.

▶ Consider the following notation:

arg min_{x∈R} f(x)   (4)

This means the argument x which minimizes the function f(x).

▶ f(x) is usually referred to as the objective function or output.

▶ x is referred to as the argument, arg max (min), or input.
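The distinction between min (the value) and arg min (the argument) can be illustrated with a brute-force grid search. A sketch; the grid and the quadratic f are illustrative choices, not from the slides:

```python
# Numerical illustration of min vs. arg min.
def f(x):
    return (x - 2.0) ** 2 + 1.0  # minimum value 1 attained at x = 2

# Search over a fine grid of real numbers in [-5, 5].
grid = [i / 1000.0 for i in range(-5000, 5001)]
argmin_x = min(grid, key=f)   # arg min: the x that minimizes f
min_val = f(argmin_x)         # min: the smallest value of f

print(argmin_x, min_val)  # → 2.0 1.0
```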
Introduction to Optimization
Necessary Condition

▶ Necessary condition:

▶ If f is a differentiable function that has a maximum or minimum at an interior point c of its domain, then the tangent line to its graph must be horizontal (parallel to the x-axis) at that point. Why?

▶ If f′(x) > 0, the function is increasing; if f′(x) < 0, decreasing. Hence, a maximum/minimum can only occur where f′(x) = 0.

▶ Points c at which f′(c) = 0 are called stationary (or critical) points for f().

Theorem
Suppose that a function f is differentiable in an interval I and that c is an interior point of I. For x = c to be a maximum or minimum point for f in I, a necessary condition is that it is a stationary point for f, i.e. that x = c satisfies the equation f′(x) = 0. This is called the first-order condition.
Introduction to Optimization
Necessary Condition

▶ The previous theorem states that f′(x) = 0 is a necessary condition for a differentiable function f to have a maximum or minimum at an interior point x of its domain. It is not a sufficient condition. What does it mean not to be sufficient?

▶ Necessary vs sufficient conditions: a necessary condition is one which must be present in order for another condition to occur, while a sufficient condition is one which produces the said condition.

▶ This means that f′(c) = 0 does not guarantee that c is an extreme point. Points that are not extreme can also have f′(x) = 0. However, all interior extreme points have f′(x) = 0.

▶ Inflection point: a point with f′(x) = 0 which is not a maximum/minimum.
Introduction to Optimization
Necessary Condition

[Figure: graphs illustrating stationary points, discussed on the next slide.]
Introduction to Optimization
Necessary Condition

▶ Figure 2 shows the graph of a function f defined on an interval [a, b] with two stationary points, c and d. At c there is a maximum; at d there is a minimum.

▶ In Figure 4, the function has three stationary points x0, x1, x2: x0 is a local maximum, x1 is a local minimum, and x2 is neither a local maximum nor a local minimum. It is an inflection point.

▶ Note that x0 and x1 are only local maximum and minimum points. They are not the global extrema of the function over its full domain, only over a neighbourhood.
Introduction to Optimization
Necessary Condition: Local Maximum

▶ A stationary point can only occur where f′(c) = 0; this is a necessary condition.

▶ Suppose f′(x) ≥ 0 for all x in I such that x ≤ c, whereas f′(x) ≤ 0 for all x in I such that x ≥ c. Then f(x) is increasing to the left of c and decreasing to the right of c.

▶ The point c can then be a local maximum, or a maximum in the (restricted) domain x ∈ I.

▶ Note that if f′(x) ≥ 0 both for x ≤ c and for x ≥ c, then c cannot be a local maximum. On the other hand, it can be an inflection point if f′(c) = 0.

Theorem
If f′(x) ≥ 0 for x ≤ c and f′(x) ≤ 0 for x ≥ c, then x = c is a local maximum point for f.
Introduction to Optimization
Necessary Condition: Local Minimum

▶ Now suppose f′(x) ≤ 0 for all x in I such that x ≤ c, whereas f′(x) ≥ 0 for all x in I such that x ≥ c. Then f(x) is decreasing to the left of c and increasing to the right of c.

▶ The point c is then a local minimum, or a minimum in the (restricted) domain x ∈ I.

▶ Note that if f′(x) ≤ 0 both for x ≤ c and for x ≥ c, then c cannot be a local minimum. On the other hand, it can be an inflection point if f′(c) = 0.

Theorem
If f′(x) ≤ 0 for x ≤ c and f′(x) ≥ 0 for x ≥ c, then x = c is a local minimum point for f.
Introduction to Optimization
Necessary Condition: Convex and Concave Functions

▶ Recall that if f′′(x) < 0 for all x ∈ I, the function is concave in I.

▶ This implies that f′(x) is decreasing for all x ∈ I. Suppose a stationary point c.

▶ If f′(c) = 0 at an interior point c of I, then f′(x) ≥ 0 to the left of c, while f′(x) ≤ 0 to the right of c: c is a local maximum.

▶ Now, if f′′(x) > 0 for all x ∈ I, the function is convex in I.

▶ This implies that f′(x) is increasing for all x ∈ I. Suppose a stationary point c.

▶ If f′(c) = 0 at an interior point c of I, then f′(x) ≤ 0 to the left of c, while f′(x) ≥ 0 to the right of c: c is a local minimum.
Introduction to Optimization
Necessary Condition: Convex and Concave Functions

Remark
If f′′(x) < 0 for all x ∈ I, the function is concave in I, and a stationary point c in I must be a local maximum.
If f′′(x) > 0 for all x ∈ I, the function is convex in I, and a stationary point c in I must be a local minimum.
Introduction to Optimization
Example: Portfolio

▶ Suppose a portfolio with two risky assets A and B. E(RA) is 10% and E(RB) = 7%. The risk of asset A is σA² = 16%, and the risk of asset B is σB² = 4%. The assets have a negative correlation ρAB = −0.2.

▶ Find the wA and wB which yield the lowest risk.

▶ Solution: apply the portfolio risk formula from equation (2):

σP² = wa²·16 + (1 − wa)²·4 + 2·wa·(1 − wa)·4·2·(−0.2)   (5)

Simplify this equation to obtain

σP² = (116/5) wa² − (56/5) wa + 4   (6)
Introduction to Optimization
Example: Portfolio

▶ The necessary condition for a minimum is f′(wa) = 0:

(232/5) wa − 56/5 = 0   (7)

Now we can find the proportion in asset A which yields the lowest-risk portfolio:

wa* = 56/232 ≈ 0.241,  wB* = 1 − wa* ≈ 0.759   (8)

▶ To check that this is a minimum, we should check whether the function f(wa) is convex, so we find f′′(wa):

f′′(wa) = 232/5 > 0   (9)

▶ The function is convex, so wa* is indeed the minimum variance portfolio (MVP).
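The first- and second-order conditions above can be verified in a few lines; a sketch using the quadratic coefficients from eq. (6):

```python
# sigma_P^2(w) = (116/5) w^2 - (56/5) w + 4  (eq. 6)
a, b, c = 116 / 5, -56 / 5, 4.0

# Stationary point from the first-order condition f'(w) = 2 a w + b = 0
w_star = -b / (2 * a)
print(round(w_star, 3))  # → 0.241

# Second derivative f''(w) = 2 a > 0, so the function is convex:
# the stationary point is the minimum variance portfolio.
assert 2 * a > 0
```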
Introduction to Optimization
The Extreme Value Theorem

Theorem
Suppose that f is a continuous function over a closed and bounded interval [a, b]. Then there exist a point d in [a, b] where f has a minimum and a point c in [a, b] where f has a maximum, so that

f(d) ≤ f(x) ≤ f(c)   (10)

for all x ∈ [a, b].

▶ The extreme value theorem states that a function f that is continuous on the closed interval [a, b] must attain a maximum and a minimum, each at least once.

▶ This theorem is important when dealing with restricted optimization.

▶ Note that the previous portfolio application is usually on a bounded interval wa ∈ [0, 1].

▶ For example: if we restrict the previous portfolio example to …
Introduction to Optimization
The Extreme Value Theorem

[Figure]
Introduction to Optimization
The Extreme Value Theorem: Interior vs Boundary Optimum

▶ On a closed interval, the minimum/maximum may occur in the interior of the bounded interval or on its boundaries.

▶ Three possibilities:

▶ Interior point: if the optimum occurs at an interior point (inside the interval I) and f() is differentiable, then the derivative f′() is zero at that point.

▶ Boundary: on the boundary the derivative can be defined, but it need not be 0.

▶ Undefined derivative: there could still be interior points with undefined derivatives.
Introduction to Optimization
The Extreme Value Theorem: Finding the Optimum

Find the maximum and minimum values of a differentiable function f defined on a closed, bounded interval [a, b]:

1. Find all points x ∈ [a, b] that satisfy the equation f′(x) = 0.

2. Evaluate f at the end points a and b of the interval and also at all stationary points.

3. The largest function value found in (2) is the maximum value, and the smallest function value is the minimum value of f on [a, b].
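The three steps above can be sketched for a concrete function. Here f(x) = x³ − 3x on [0, 2] is an illustrative choice (not from the slides), with the single interior stationary point x = 1:

```python
# Extreme value theorem recipe on [a, b] for f(x) = x^3 - 3x.
# f'(x) = 3x^2 - 3 = 0  =>  x = ±1; only x = 1 lies in [0, 2].
def f(x):
    return x**3 - 3 * x

a, b = 0.0, 2.0
stationary = [1.0]                   # step 1: interior roots of f'
candidates = [a, b] + stationary     # step 2: endpoints + stationary points
values = {x: f(x) for x in candidates}

x_max = max(values, key=values.get)  # step 3: compare values
x_min = min(values, key=values.get)
print(x_max, values[x_max])  # → 2.0 2.0   (maximum on the boundary)
print(x_min, values[x_min])  # → 1.0 -2.0  (minimum at the interior point)
```

Note the maximum sits on the boundary, where the derivative is not zero — exactly the interior vs boundary distinction made above.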
Introduction to Optimization
The Extreme Value Theorem: Application

▶ Suppose a portfolio with two risky assets A and B. E(RA) is 10% and E(RB) = 7%. The risk of asset A is σA² = 16%, and the risk of asset B is σB² = 4%. The assets have a negative correlation ρAB = −0.1. Using the extreme value theorem, solve:

▶ Find the wA and wB which yield the lowest risk.

▶ Solution: apply the portfolio formula,

σP² = wa²·16 + (1 − wa)²·4 + 2·wa·(1 − wa)·4·2·(−0.1)   (11)

Simplify to obtain

σP² = (108/5) wa² − (48/5) wa + 4   (12)
Introduction to Optimization
The Extreme Value Theorem: Application

▶ Find the interior stationary point:

(σP²)′ = (216/5) wa − 48/5 = 0,   (13)
wa* = 48/216 ≈ 0.22   (14)

▶ Evaluate σP²(wa*):

σP² = 0.22²·16 + (1 − 0.22)²·4 + 2·0.22·(1 − 0.22)·4·2·(−0.1) ≈ 2.93

▶ Check the lower bound for a possible boundary minimum, i.e. wA = 0:

σP² = 0·16 + 1²·4 + 2·0·(1 − 0)·4·2·(−0.1) = 4   (15)

▶ The interior point is the minimum on this bounded interval.

▶ Note that even for positive correlation, it is still necessary to buy both assets to obtain the lowest variance portfolio.
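The candidate evaluations above can be reproduced directly; a sketch using the variance formula in eq. (11):

```python
# Evaluate the portfolio variance at the stationary point and boundaries.
from math import sqrt

def port_var(w, var_a=16.0, var_b=4.0, rho=-0.1):
    sa, sb = sqrt(var_a), sqrt(var_b)
    return w**2 * var_a + (1 - w)**2 * var_b + 2 * w * (1 - w) * sa * sb * rho

w_star = 48 / 216                 # interior stationary point (eq. 14)
for w in (0.0, w_star, 1.0):      # boundaries and the stationary point
    print(round(w, 3), round(port_var(w), 3))
# The smallest variance occurs at the interior point w* ≈ 0.222.
```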
Local and Global Extreme Points

▶ Global maximum/minimum: the maximum (minimum) over the whole domain of the function. Formally,

Definition
A function f defined on a domain X has a global (or absolute) maximum point at x*, if f(x*) > f(x) for all x in X with x ≠ x*.

▶ Local maximum: the maximum (minimum) on the vicinity of an interval I in X. Formally,

Definition
f is said to have a local maximum point at the point x*, if there exist some ε > 0 and I = [x* − ε, x* + ε] such that f(x*) ≥ f(x) for all x in I, with I in the domain X.
Local and Global Extreme Points

[Figure: a function with several extreme points.]

▶ The global maximum is on the boundary, and the global minimum is an interior point.

▶ The local minimum and maximum are interior points.
Local and Global Extreme Points

▶ In most cases we are interested in the global maximum (minimum). However, global maxima are not easy to find.

▶ In general, we drop the idea of finding a global extreme point and focus only on local points.
Local and Global Extreme Points

▶ The general result applies to local extreme points:

Remark
At a local extreme point in the interior of the domain of a differentiable function, the derivative must be 0.

▶ In order to find possible local maxima and minima for a function f defined on an interval I, we can again search among the following types of point:
1. Interior points in I where f′(x) = 0.
2. Boundary points of I.

▶ These are necessary conditions for a local maximum (minimum).

▶ Next, we present some tests to determine whether such a point is a maximum or minimum.
Local and Global Extreme Points
The First-Derivative Test

▶ The first-derivative test is based on studying the sign of the first derivative around the stationary point. Suppose c is a stationary point for y = f(x):

Theorem
If f′(x) ≥ 0 throughout some interval (a, c) to the left of c and f′(x) ≤ 0 throughout some interval (c, b) to the right of c, then x = c is a local maximum point for f.

Theorem
If f′(x) ≤ 0 throughout some interval (a, c) to the left of c and f′(x) ≥ 0 throughout some interval (c, b) to the right of c, then x = c is a local minimum point for f.

Theorem
If f′(x) > 0 both throughout some interval (a, c) to the left of c and throughout some interval (c, b) to the right of c, then x = c is not a local extreme point for f. The same conclusion holds if f′(x) < 0 on both sides of c.
Local and Global Extreme Points
The First-Derivative Test

[Figure: sign patterns of f′ around a critical point.]

▶ The signs of the derivative in an interval around the critical point allow us to determine whether it is a maximum, minimum, or saddle point.
Local and Global Extreme Points
The Second-Derivative Test

▶ The first-derivative test requires knowing the sign of f′(x) at points both to the left and to the right of the given stationary point.

▶ However, there is a more convenient way to determine whether a critical point is a maximum, minimum, or inflection point.

Theorem
If f′(c) = 0 and f′′(c) < 0, then x = c is a strict local maximum point.
If f′(c) = 0 and f′′(c) > 0, then x = c is a strict local minimum point.
If f′(c) = 0 and f′′(c) = 0, the test is inconclusive.
Local and Global Extreme Points
Inflection Points

▶ Recall that we defined a twice differentiable function f(x) to be concave (convex) on an interval I if f′′(x) ≤ 0 (≥ 0) for all x in I.

▶ Points at which a function changes from being convex to being concave, or vice versa, are called inflection points.

▶ Let f(x) be a twice differentiable function.

Definition
The point c is called an inflection point for the function f if there exists an interval (a, b) about c such that:
f′′(x) ≥ 0 on (a, c) and f′′(x) ≤ 0 on (c, b), or
f′′(x) ≤ 0 on (a, c) and f′′(x) ≥ 0 on (c, b).
Local and Global Extreme Points
Inflection Points

[Figure: a function with an inflection point at x = c.]

▶ x = c is an inflection point if f′′(x) changes sign at x = c. Note that f′ need not be zero at this point.
Recipe for Single-Variable Optimization

▶ We can now summarize the procedure to find a maximum or minimum of a single-variable function.

1. Find the critical values, i.e. those that satisfy f′(c) = 0 for c ∈ X.

2. If the domain X is restricted to a bounded interval, i.e. X = [a, b], then check f(a) and f(b) for a possible boundary optimum. If not, c is the interior local optimum.

3. Check the second derivative:
if f′′(c) < 0, this is a local maximum. If f′′(x) < 0 on the whole domain, then c is the global maximum.
if f′′(c) > 0, this is a local minimum. If f′′(x) > 0 on the whole domain, then c is the global minimum.
if f′′(c) = 0, the test is inconclusive. It usually means an inflection point.

▶ This is an exact procedure, i.e. if the local minimum (maximum) can be found, it is the correct one.
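Step 3 of the recipe can be sketched numerically for any smooth function; a central finite difference stands in for the analytic second derivative (the step size and tolerance are illustrative choices):

```python
# Classify a stationary point c via the second-derivative test (step 3).
def second_derivative(f, x, h=1e-5):
    # Central finite-difference approximation of f''(x).
    return (f(x + h) - 2 * f(x) + f(x - h)) / h**2

def classify(f, c, tol=1e-4):
    d2 = second_derivative(f, c)
    if d2 < -tol:
        return "local maximum"
    if d2 > tol:
        return "local minimum"
    return "inconclusive"

print(classify(lambda x: -(x - 1) ** 2, 1.0))  # → local maximum
print(classify(lambda x: (x - 1) ** 2, 1.0))   # → local minimum
print(classify(lambda x: x ** 3, 0.0))         # → inconclusive
```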
Multivariate Optimization
Introduction

▶ Most optimization problems of interest involve several variables.

▶ For example, an asset manager usually needs to find the optimal weights for several assets in the portfolio.

▶ However, optimization with several variables is a matrix-based topic, which requires a good prior understanding of matrix algebra. Thus, here we focus only on two- and three-variable optimization.
Multivariate Optimization
Example in Finance

▶ Suppose a portfolio with 3 assets, A, B and C. Let σi² represent the variance of asset i. The total variance of a portfolio with returns given by E[P] = wa E(RA) + wb E(RB) + wc E(RC) is

σP² = wa² σa² + wb² σb² + wc² σc²
    + 2 wa wb σa σb ρAB + 2 wa wc σa σc ρAC + 2 wb wc σb σc ρBC   (16)

▶ Note that we can write this problem as a two-variable optimization, since wc = 1 − wa − wb.

▶ To find the minimum variance portfolio (MVP), we need to find the arguments (wa*, wb*, 1 − wa* − wb*) which minimize the above function.

▶ The procedure is more complex because we have 2 arguments, so we have to compute 2 partial first-order derivatives and 3 second-order derivatives.
Optimization with Two Variables
Necessary Conditions

▶ Recall that for a function f(x) a critical point satisfies f′(x0) = 0.

▶ For two variables the same idea applies. Let the function be f(x, y). A critical point must have partial derivatives equal to 0 in both the x-axis and y-axis directions.

Theorem
A differentiable function z = f(x, y) can have a maximum or minimum at an interior point (x0, y0) only if it is a stationary point — that is, if the point (x, y) = (x0, y0) satisfies the two equations

∂f/∂x = 0,  ∂f/∂y = 0   (17)
Optimization with Two Variables
Necessary Conditions

[Figure: surface of a two-variable function with its maximum at P.]

▶ The point P is the maximum of this function. At the maximum, both partial derivatives must be equal to 0.
Optimization with Two Variables
Necessary Conditions: Example

▶ The function f is defined for all (x, y) by

f(x, y) = −2x² − 2xy − 2y² + 36x + 42y − 158   (18)

Find the maximum.

▶ Find the partial derivatives and set them to 0:

∂f/∂x = −4x − 2y + 36 = 0,   (19)
∂f/∂y = −2x − 4y + 42 = 0   (20)

▶ Solving this system gives (x, y) = (5, 8). This coordinate is the only critical point of the system.
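The stationary point can be checked by solving the 2×2 linear system from the first-order conditions; a sketch using Cramer's rule:

```python
# Solve the first-order conditions (19)-(20):
#   -4x - 2y + 36 = 0      =>   4x + 2y = 36
#   -2x - 4y + 42 = 0      =>   2x + 4y = 42
a11, a12, b1 = 4.0, 2.0, 36.0
a21, a22, b2 = 2.0, 4.0, 42.0

det = a11 * a22 - a12 * a21          # Cramer's rule for a 2x2 system
x = (b1 * a22 - a12 * b2) / det
y = (a11 * b2 - b1 * a21) / det
print(x, y)  # → 5.0 8.0

# Verify the point is stationary for f(x, y) in eq. (18):
assert -4 * x - 2 * y + 36 == 0
assert -2 * x - 4 * y + 42 == 0
```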
Optimization with Two Variables
Sufficient Conditions

▶ Recall that the second derivative tells us whether a critical point is a maximum or minimum.

▶ A maximum point x0 is obtained in a concave function, with f′′(x0) < 0.

▶ A minimum point is obtained in a convex function, with f′′(x0) > 0.

▶ For functions of two variables, the test for concavity or convexity relies on the second-order partial derivatives.

▶ In addition, we have to check the second-order cross-partial derivative.
Optimization with Two Variables
Sufficient Conditions

Theorem
Suppose that (x0, y0) is an interior stationary point for the function f(x, y). If

∂²f/∂x² ≤ 0,  ∂²f/∂y² ≤ 0,  and  (∂²f/∂x²)(∂²f/∂y²) − (∂²f/∂x∂y)² ≥ 0,

then (x0, y0) is a maximum point for f(x, y).

Theorem
Suppose that (x0, y0) is an interior stationary point for the function f(x, y). If

∂²f/∂x² ≥ 0,  ∂²f/∂y² ≥ 0,  and  (∂²f/∂x²)(∂²f/∂y²) − (∂²f/∂x∂y)² ≥ 0,

then (x0, y0) is a minimum point for f(x, y).
Optimization with Two Variables
Sufficient Conditions

Remark
If a twice differentiable function z = f (x, y ) satisfies the first set of inequalities, it is called concave, whereas it is called convex if it satisfies the second set.

▶ Example: Let us show that the stationary point from the previous example is a maximum.

▶ Solution:
We have found that ∂f /∂x = −4x − 2y + 36 = 0 and ∂f /∂y = −2x − 4y + 42 = 0. Thus f11′′ = −4, f22′′ = −4 and f12′′ = −2, so

f11′′ < 0,  f22′′ < 0,  f11′′ f22′′ − (f12′′ )² = 16 − 4 = 12 ≥ 0 (21)

Thus, we have a maximum.
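The second-order test above is mechanical enough to code. A minimal sketch in Python (the function name `second_order_test` is ours, not from the lecture):

```python
def second_order_test(f11, f22, f12):
    """Classify an interior stationary point from the second-order
    partials f11 = fxx, f22 = fyy and the cross partial f12 = fxy."""
    det = f11 * f22 - f12 ** 2
    if f11 < 0 and f22 < 0 and det >= 0:
        return "maximum"
    if f11 > 0 and f22 > 0 and det >= 0:
        return "minimum"
    return "inconclusive"

# Partials from the example: f11 = -4, f22 = -4, f12 = -2
print(second_order_test(-4, -4, -2))  # maximum
```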

Constrained Optimization
Introduction

▶ Constrained optimization is the process of optimizing an objective function with respect to some variables in the presence of constraints on those variables.

▶ There are several examples in economics, finance, and statistics.

▶ Here, we will study only simple cases and methods. The topic is complex and a full characterization is out of scope. We will not study second-order conditions.

Constrained Optimization
Cases in Finance : MVP

▶ The portfolio problem (with N assets) of finding the MVP is a constrained optimization, as we generally want 0 ≤ wa ≤ 1, . . . , 0 ≤ wN ≤ 1, and wa + · · · + wN = 1.

▶ For the 3-asset case,

arg minwa ,wb ,wc σP² = wa² σa² + wb² σb² + wc² σc² + 2wa wb σa σb ρAB + 2wa wc σa σc ρAC + 2wb wc σb σc ρBC (22)

subject to 0 ≤ wa ≤ 1, 0 ≤ wb ≤ 1, 0 ≤ wc ≤ 1 and wa + wb + wc = 1.

▶ The restriction wa + wb + wc = 1 can be enforced directly on the problem. However, the restrictions 0 ≤ wa ≤ 1, 0 ≤ wb ≤ 1, 0 ≤ wc ≤ 1 are generally difficult to satisfy in larger portfolios without imposing them explicitly as constraints.
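As a sketch of how the MVP problem is solved numerically, the snippet below minimizes the 3-asset portfolio variance under the weight restrictions with `scipy.optimize.minimize`; the volatilities and correlations are made-up illustration values, not course data:

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical inputs for illustration only
sigma = np.array([0.20, 0.15, 0.10])          # asset volatilities
corr = np.array([[1.0, 0.3, 0.1],
                 [0.3, 1.0, 0.2],
                 [0.1, 0.2, 1.0]])            # correlation matrix
cov = np.outer(sigma, sigma) * corr           # covariance matrix

def port_var(w):
    # Portfolio variance sigma_P^2 = w' cov w
    return w @ cov @ w

cons = [{'type': 'eq', 'fun': lambda w: w.sum() - 1.0}]   # weights sum to 1
bounds = [(0.0, 1.0)] * 3                                 # 0 <= w_i <= 1
res = minimize(port_var, x0=np.full(3, 1 / 3), bounds=bounds, constraints=cons)
print(res.x, res.fun)
```

With bounds and an equality constraint present, `minimize` falls back to the SLSQP solver, which handles both.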
Constrained Optimization
Case in Finance : Efficient Frontier

▶ In portfolio theory, we are interested in the efficient frontier.

▶ The efficient frontier has the best possible expected return for each level of risk, where risk is represented by the standard deviation of the portfolio's return.

▶ Note that, differently from the 2-asset case, larger portfolios can attain the same expected return by combining assets in different proportions.

▶ In other words, several weight vectors can yield the same expected return.

▶ The optimization problem is then finding the lowest variance for a given level of return.

▶ This is a constrained optimization problem.


Constrained Optimization
Case in Finance : Efficient Frontier

▶ For the 3-asset case the problem can be written as

arg minwa ,wb ,wc σP² = wa² σa² + wb² σb² + wc² σc² + 2wa wb σa σb ρAB + 2wa wc σa σc ρAC + 2wb wc σb σc ρBC (23)

subject to 0 ≤ wa ≤ 1, 0 ≤ wb ≤ 1, 0 ≤ wc ≤ 1,
wa + wb + wc = 1,
and wa ra + wb rb + wc rc = Rp (24)

▶ The solution is the set of weights which produces the portfolio with the lowest variance for a given level of expected return.
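One point on the frontier can be sketched numerically by adding the expected-return equality to the MVP problem; all numbers below (returns, volatilities, correlations, the target Rp) are hypothetical illustration values:

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical expected returns, volatilities and correlations
r = np.array([0.08, 0.05, 0.03])
sigma = np.array([0.20, 0.15, 0.10])
corr = np.array([[1.0, 0.3, 0.1],
                 [0.3, 1.0, 0.2],
                 [0.1, 0.2, 1.0]])
cov = np.outer(sigma, sigma) * corr
Rp = 0.05                                     # target expected return

cons = [{'type': 'eq', 'fun': lambda w: w.sum() - 1.0},   # weights sum to 1
        {'type': 'eq', 'fun': lambda w: w @ r - Rp}]      # hit the target return
bounds = [(0.0, 1.0)] * 3
res = minimize(lambda w: w @ cov @ w, x0=np.full(3, 1 / 3),
               bounds=bounds, constraints=cons)
print(res.x)   # minimum-variance weights for the target Rp
```

Sweeping Rp over a grid of attainable returns and repeating the minimization traces out the whole frontier.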

Constrained Optimization
Lagrange Multiplier

▶ We start with the problem of maximizing (or minimizing) a function f (x, y ) of two variables, when x and y are restricted to satisfy an equality constraint g(x, y ) = c,

max (min) f (x, y ) subject to g(x, y ) = c (25)

▶ The basic method for this problem is the Lagrange multiplier.

▶ The basic idea of the method is to convert the constrained problem into a form such that the derivative test of an unconstrained problem can still be applied.

▶ We define the Lagrangian function L as

L(x, y ) = f (x, y ) − λ(g(x, y ) − c) (26)

▶ Note that the term g(x, y ) − c applies the restriction.
Constrained Optimization
Lagrange Multiplier

▶ Note that the term g(x, y ) − c applies the restriction, and it has been multiplied by a parameter λ.

▶ Note that L(x, y ) = f (x, y ) for all (x, y ) that satisfy the constraint g(x, y ) = c.

▶ It is possible to think of λ as a penalty parameter, which reduces or increases the objective function.

▶ The optimization is then carried out on the Lagrangian function.

Constrained Optimization
Lagrange Multiplier Method

▶ The Lagrange multiplier λ is a constant. The partial derivatives of L(x, y ) w.r.t. x and y are

L(x, y )′1 = f1′ (x, y ) − λg1′ (x, y ) (27)
L(x, y )′2 = f2′ (x, y ) − λg2′ (x, y ) (28)

▶ We can now state the full method.

Constrained Optimization
Lagrange Multiplier Method

▶ To find the possible solutions to

max (min) f (x, y ) subject to g(x, y ) = c (29)

▶ Step I: Write the Lagrangian function,

L(x, y ) = f (x, y ) − λ(g(x, y ) − c) (30)

▶ Step II: Differentiate L(x, y ) w.r.t. x and y , and equate the partial derivatives to 0,

L(x, y )′1 = f1′ (x, y ) − λg1′ (x, y ) = 0 (31)
L(x, y )′2 = f2′ (x, y ) − λg2′ (x, y ) = 0 (32)

▶ Step III: The two equations in (II), together with the constraint, yield a system of three equations.

Constrained Optimization
Lagrange Multiplier Method

▶ Step III: The two equations in (II), together with the constraint, yield a system of three equations,

L(x, y )′1 = f1′ (x, y ) − λg1′ (x, y ) = 0 (33)
L(x, y )′2 = f2′ (x, y ) − λg2′ (x, y ) = 0 (34)
g(x, y ) = c (35)

These are the first-order conditions.

▶ Step IV: Solve these three equations simultaneously for the three unknowns x, y , and λ. These triples (x, y , λ) are the solution candidates, at least one of which solves the problem.
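Steps I–IV can be carried out symbolically. A sketch with SymPy for the illustrative problem max xy subject to x + y = 6 (our example, not from the slides):

```python
import sympy as sp

x, y, lam = sp.symbols('x y lam')
f = x * y            # illustrative objective
g = x + y            # constraint function g(x, y)
c = 6

L = f - lam * (g - c)                          # Step I: Lagrangian
foc = [sp.diff(L, x), sp.diff(L, y), g - c]    # Steps II-III: FOCs + constraint
sols = sp.solve(foc, [x, y, lam], dict=True)   # Step IV: solve the system
print(sols)  # [{lam: 3, x: 3, y: 3}]
```

Here the FOCs are y − λ = 0, x − λ = 0 and x + y = 6, giving the single candidate x = y = λ = 3.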

Constrained Optimization
Lagrange Multiplier Method. Second-Order Tests

▶ Consider the problem

max (min) f (x, y ) subject to g(x, y ) = c (36)

▶ The first-order conditions are

L(x, y )′1 = f1′ (x, y ) − λg1′ (x, y ) = 0 (37)
L(x, y )′2 = f2′ (x, y ) − λg2′ (x, y ) = 0 (38)
g(x, y ) = c (39)

▶ As a general rule we also have to check the second-order conditions. We will not discuss these conditions here as they require matrix algebra.

▶ As an informal rule of thumb in financial applications, we usually have only a very limited pool of candidates. You can identify the minimum and maximum by evaluating the objective function at each candidate. In many cases, there is only one candidate.
Constrained Optimization
Multivariate Lagrange Multiplier Method

▶ The method also works for problems in higher dimensions. The typical problem with n variables can be written in the form

max (min) f (x1 , . . . , xn ) subject to g(x1 , . . . , xn ) = c (40)

▶ We write the Lagrangian,

L(x1 , . . . , xn ) = f (x1 , . . . , xn ) − λ(g(x1 , . . . , xn ) − c) (41)

▶ The solution proceeds the same way, by taking partial derivatives and solving the resulting system of n + 1 equations.

▶ We will not solve these problems in general as they require matrix algebra.

▶ Some problems in finance with 3 variables can be solved without matrix algebra.
Matrices and Matrix Operations
Matrices

▶ A matrix is simply a rectangular array of numbers considered as one mathematical object.

▶ When there are m rows and n columns in the array, we have an m-by-n matrix (written as m × n).

▶ We usually denote a matrix with bold capital letters such as A, B, and so on.

▶ A matrix of order m × n can be represented as

    ⎡ a11 a12 . . . a1n ⎤
A = ⎢ a21 a22 . . . a2n ⎥ (42)
    ⎢  ..   ..      ..  ⎥
    ⎣ am1 am2 . . . amn ⎦

▶ aij denotes the element in the i-th row and the j-th column.

▶ We can write a matrix compactly as A = (aij )m×n . This means a matrix of order m × n.
Matrices and Matrix Operations
Vectors

▶ A matrix with either only one row or only one column is called a vector.

▶ A row n-dimensional vector has only one row,

a = ( a11 a12 . . . a1n ) (43)

▶ A column n-dimensional vector has only one column,

    ⎡ b11 ⎤
b = ⎢ b21 ⎥ (44)
    ⎢  .. ⎥
    ⎣ bn1 ⎦

▶ It is usual to denote row or column vectors by small bold letters like x or y rather than capital letters.

▶ Scalar: a matrix of dimension 1 × 1, i.e. a number.
Matrices and Matrix Operations
Matrix Operations

▶ Equality between matrices of the same order means that all elements of the matrices are equal. A and B of size m × n are equal if and only if

aij = bij for all i = 1, 2, . . . , m, j = 1, 2, . . . , n (45)

▶ Let A and B be matrices of the same order, m × n. The sum A + B is defined as

A + B = (aij )m×n + (bij )m×n = (aij + bij )m×n (46)

in other words, the element-wise sum of the matrices. The difference,

A − B = (aij )m×n − (bij )m×n = (aij − bij )m×n (47)

▶ The multiplication of a matrix by a scalar, for example 2A, is defined as each element multiplied by the scalar,

2A = (2aij )m×n (48)
Matrices and Matrix Operations
Matrix Operations : Example

▶ Consider the matrices

A = ⎡ 1  3  2 ⎤     B = ⎡ 1  0  2 ⎤ (49)
    ⎣ 5 −3  1 ⎦         ⎣ 0  1  2 ⎦

▶ Calculate A + B, 3A, and A − B.

A + B = ⎡ 1+1   3+0   2+2 ⎤ = ⎡ 2   3  4 ⎤ (50)
        ⎣ 5+0  −3+1  1+2 ⎦    ⎣ 5  −2  3 ⎦

3A = ⎡ 3×1   3×3      3×2 ⎤ = ⎡  3   9  6 ⎤ (51)
     ⎣ 3×5   3×(−3)   3×1 ⎦   ⎣ 15  −9  3 ⎦

A − B = ⎡ 1−1   3−0   2−2 ⎤ = ⎡ 0   3   0 ⎤ (52)
        ⎣ 5−0  −3−1  1−2 ⎦    ⎣ 5  −4  −1 ⎦
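The three results can be verified with NumPy, where `+`, `-` and scalar `*` act element-wise on arrays of the same shape:

```python
import numpy as np

A = np.array([[1, 3, 2], [5, -3, 1]])
B = np.array([[1, 0, 2], [0, 1, 2]])

print(A + B)   # [[ 2  3  4]  [ 5 -2  3]]
print(3 * A)   # [[ 3  9  6]  [15 -9  3]]
print(A - B)   # [[ 0  3  0]  [ 5 -4 -1]]
```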
Matrices and Matrix Operations
Rules of sum of matrices

▶ (A + B) + C = A + (B + C)

▶ A + B = B + A

▶ A + 0 = A

▶ A + (−A) = 0

▶ (α + β)A = αA + βA

▶ α(A + B) = αA + αB

where 0 = (0)m×n , i.e. a matrix where all elements are 0.

Vectors
▶ Vectors can be row or column vectors. For example, a generic row vector,

a = (a1 , a2 , a3 , . . . , an ) (53)

▶ The elements ai , i = 1, 2, . . . , n, are called the components (or coordinates) of the vector.

▶ n-vector: a vector with n elements.

▶ Any vector is a point in an n-dimensional space. For example, a vector with 3 elements is a point in the three-dimensional space defined by the real numbers.

Operation with Vectors
▶ Vectors are just a special type of matrix, so the algebraic operations introduced for matrices are equally valid for vectors.

▶ Two n-vectors a and b are equal if and only if all their corresponding components are equal.

▶ If a and b are two n-vectors, their sum, denoted by a + b, is the n-vector obtained by adding each component of a to the corresponding component of b.

▶ If a is an n-vector and t is a real number, we define ta as the n-vector whose components are t times the corresponding components of a.

Operation with Vectors
▶ If a and b are two n-vectors and t and s are real numbers, the n-vector ta + sb is said to be a linear combination of a and b,

  ⎡ a1 ⎤     ⎡ b1 ⎤   ⎡ ta1 + sb1 ⎤
t ⎢ a2 ⎥ + s ⎢ b2 ⎥ = ⎢ ta2 + sb2 ⎥ (54)
  ⎢ .. ⎥     ⎢ .. ⎥   ⎢     ..    ⎥
  ⎣ an ⎦     ⎣ bn ⎦   ⎣ tan + sbn ⎦

▶ Affine combination: if a and b are two n-vectors and the weights are t and 1 − t, the n-vector ta + (1 − t)b is said to be an affine combination of a and b.

▶ An affine combination imposes t + s = 1, and is a special case of a linear combination.

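A linear combination is computed component by component; a small pure-Python sketch (the function name `combine` is ours):

```python
a = [1.0, 2.0, 3.0]
b = [4.0, 0.0, -2.0]

def combine(t, s, a, b):
    """Linear combination ta + sb, component by component."""
    return [t * ai + s * bi for ai, bi in zip(a, b)]

print(combine(2, 3, a, b))        # [14.0, 4.0, 0.0]
print(combine(0.25, 0.75, a, b))  # affine: the weights sum to 1
```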
Operation with Vectors
Dot product

Definition
Let a = (a1 , a2 , . . . , an )′ and b = (b1 , b2 , . . . , bn )′ be n × 1 column vectors. The inner (dot) product is the scalar product of these vectors, and is defined as

a′ b = a1 b1 + a2 b2 + · · · + an bn (55)

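Equation (55) translates directly into Python:

```python
a = [1, 2, 3]
b = [4, -5, 6]

# a'b = a1*b1 + a2*b2 + a3*b3
dot = sum(ai * bi for ai, bi in zip(a, b))
print(dot)  # 4 - 10 + 18 = 12
```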
Geometry of vectors
Orthogonality of Vectors

▶ When the angle between two vectors a and b is 90◦ , they are said to be orthogonal, written a ⊥ b.

▶ When two vectors are orthogonal, their dot product is equal to 0,

a ⊥ b ⇐⇒ a · b = 0 (56)

▶ A well-known application of this result is the covariance coefficient. For two demeaned data series, when their dot product is equal to 0, we say the series are uncorrelated (i.e. orthogonal). Recall the covariance between X and Y,

E[xy ] = 1/N (x1 y1 + x2 y2 + · · · + xn yn ) (57)

for demeaned X and Y, i.e. x = X − µx where µx is the mean of X.
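The covariance-as-dot-product idea can be sketched in a few lines; the two series below are made-up illustration data chosen so that the demeaned dot product vanishes:

```python
def demean(series):
    m = sum(series) / len(series)
    return [v - m for v in series]

def covariance(X, Y):
    # 1/N times the dot product of the demeaned series, as in (57)
    x, y = demean(X), demean(Y)
    return sum(xi * yi for xi, yi in zip(x, y)) / len(X)

X = [1, 2, 3, 4]        # trending series
Y = [1, -1, -1, 1]      # symmetric around its mean
print(covariance(X, Y))  # 0.0 -> the demeaned series are orthogonal
```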
Matrix Multiplication
▶ The rules for adding or subtracting matrices are quite natural and simple.

▶ The rule for matrix multiplication, however, is more subtle. Matrix multiplication is not element-wise multiplication.

▶ To understand the idea behind matrix multiplication, consider the following system of equations,

a11 y1 + a12 y2 = z1 (58)
a21 y1 + a22 y2 = z2 (59)

▶ Matrix multiplication allows us to write this system as AY = Z , where AY is the matrix product of A and Y .
Matrix Multiplication
▶ How can this system be converted into matrix form?

▶ Let's suppose the answer is

⎡ a11 a12 ⎤ ⎡ y1 ⎤ = ⎡ z1 ⎤ (60)
⎣ a21 a22 ⎦ ⎣ y2 ⎦   ⎣ z2 ⎦

▶ This answer must recover the original system. This means that the matrix multiplication is

⎡ a11 a12 ⎤ ⎡ y1 ⎤ = ⎡ a11 y1 + a12 y2 ⎤ (61)
⎣ a21 a22 ⎦ ⎣ y2 ⎦   ⎣ a21 y1 + a22 y2 ⎦

which is a 2 × 1 vector.

▶ First conclusion: a 2 × 2 matrix multiplied by a 2 × 1 vector must yield a 2 × 1 vector.
Matrix Multiplication
▶ How do we perform the matrix multiplication? In the example, the result is a 2 × 1 vector.

▶ We recover the first element of the original system by multiplying the first row of the matrix A by the vector Y and adding the products.

▶ The first element of the first row of matrix A multiplies the first element of the vector; the second element of the first row of matrix A multiplies the second element of the vector. We then sum these products,

a11 y1 + a12 y2 (62)

which is a scalar.

▶ We recover the second element of the original system in the same way.
Matrix Multiplication
Definition

Definition
Suppose that A = (aij )m×n and that B = (bij )n×p . Then the product C = AB is the m × p matrix C = (cij )m×p , whose element in the i-th row and the j-th column is the inner product

cij = ∑_{r=1}^{n} air brj = ai1 b1j + ai2 b2j + · · · + ain bnj (63)

This means that the element cij is the dot product of the i-th row of A and the j-th column of B.

The matrix product exists if the number of columns in matrix A is equal to the number of rows in matrix B, and the resulting matrix has the number of rows of A and the number of columns of B.

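The definition translates directly into code; a minimal pure-Python sketch (the name `matmul` and the 2 × 2 example matrices are ours):

```python
def matmul(A, B):
    """Product of an m x n and an n x p matrix, given as lists of rows."""
    m, n, p = len(A), len(B), len(B[0])
    assert all(len(row) == n for row in A), "columns of A must equal rows of B"
    # c_ij is the dot product of row i of A and column j of B
    return [[sum(A[i][r] * B[r][j] for r in range(n)) for j in range(p)]
            for i in range(m)]

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
print(matmul(A, B))  # [[19, 22], [43, 50]]
```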
Matrix Multiplication
Example

▶ Consider the matrices A and B. Is the product AB defined? If so, compute the matrix product AB. What about the product BA?

    ⎡ 0  1  2 ⎤       ⎡  3  2 ⎤
A = ⎢ 2  3  1 ⎥   B = ⎢  1  0 ⎥ (64)
    ⎣ 4 −1  6 ⎦       ⎣ −1  1 ⎦

▶ Let's start with the product AB.
For this product to be defined, we need the number of columns in matrix A to equal the number of rows in matrix B, which is satisfied in this case: A is 3 × 3 and B is 3 × 2. The product AB is then defined and is of order 3 × 2.

Matrix Multiplication
Example

▶ The matrix multiplication is then calculated as

⎡ 0  1  2 ⎤ ⎡  3  2 ⎤   ⎡ 0×3 + 1×1 + 2×(−1)       0×2 + 1×0 + 2×1    ⎤   ⎡ −1   2 ⎤
⎢ 2  3  1 ⎥ ⎢  1  0 ⎥ = ⎢ 2×3 + 3×1 + 1×(−1)       2×2 + 3×0 + 1×1    ⎥ = ⎢  8   5 ⎥
⎣ 4 −1  6 ⎦ ⎣ −1  1 ⎦   ⎣ 4×3 + (−1)×1 + 6×(−1)    4×2 + (−1)×0 + 6×1 ⎦   ⎣  5  14 ⎦

▶ What about the product BA?
The product BA is not defined because the number of columns in B (= 2) is not equal to the number of rows in A (= 3). So the matrix BA does not exist.

Matrix Multiplication
Remark
Differently from scalars, matrix multiplication is not commutative. In the previous example, AB was defined but BA was not. Even in cases in which AB and BA are both defined, they are usually not equal. When we write AB, we say that we premultiply B by A, whereas in BA we postmultiply B by A.

Matrix Multiplication
Rules for Matrix Multiplication

We have seen that matrix multiplication is in general not commutative, i.e. AB ̸= BA. However, some rules from scalars still apply:

1. Associative rule: (AB)C = A(BC)

2. Left distributive law: A(B + C) = AB + AC

3. Right distributive law: (A + B)C = AC + BC

4. (αA)B = A(αB) = α(AB), where α is a scalar.

5. Matrix power: An = A × A × · · · × A (n factors)

Matrix Multiplication
The Identity Matrix

▶ The identity matrix is the matrix equivalent of the scalar 1.

▶ The identity matrix of order n, denoted by In , is the n × n matrix having ones along the main diagonal and zeros elsewhere,

     ⎡ 1 0 . . . 0 ⎤
In = ⎢ 0 1 . . . 0 ⎥ (65)
     ⎢ ..  ..   .. ⎥
     ⎣ 0 0 . . . 1 ⎦

▶ For every n × n matrix A the following holds,

A In = In A = A (66)

▶ The identity matrix is always a square matrix.

▶ Square matrix: a matrix with the same number of rows and columns.


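Property (66) is easy to check numerically; `np.eye(n)` builds the n × n identity matrix:

```python
import numpy as np

A = np.array([[1, 3], [5, -3]])
I2 = np.eye(2, dtype=int)   # 2 x 2 identity matrix

print(A @ I2)   # equals A
print(I2 @ A)   # equals A
```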
