
Analogy Principle

Asymptotic Theory — Part II

James J. Heckman
University of Chicago

Econ 312
This draft, April 5, 2006
Consider four methods:

1. Maximum Likelihood Estimation (MLE)
2. (Nonlinear) Least Squares
3. Method of Moments
4. Generalized Method of Moments (GMM)

These methods are not always distinct for a particular problem.


Consider the classical normal linear regression model:

  Y_t = X_t β + U_t.

Under standard assumptions:

1. U_t ~ N(0, σ²_u); U_t i.i.d.;
2. X_t non-stochastic; and
3. Σ_t X_t′ X_t / T full rank.

OLS is all four rolled into one.

In this lecture we will show how one basic principle – the Analogy Principle – underlies all of these modern econometric methods.
1 Analogy Principle: The Large Sample Version

This is originally due to Karl Pearson or Goldberger.

The intuition behind the analogy principle is as follows:

'Suppose we know some properties that are satisfied by the "true parameter" in the population. If we can find a parameter value in the sample that causes the sample to mimic the properties of the population, we might use this parameter value to estimate the true parameter.'

The main points of this principle are set out in §1.1. The conditions for consistency of the estimator are discussed in §1.2. In §1.3, an abstract example with a finite parameter set illustrates the application of the analog principle and the role of regularity conditions in ensuring consistency of the estimator.

1.1 Outline of the Analog Principle

The ideas constituting the analog principle can be grouped under four steps:

1. The model. Suppose there exists a 'true model' in the population – M(Y, X, θ₀) = 0, an implicit equation. θ₀ is the true parameter vector, and the model is defined for (at least some) other values of θ ∈ Θ.

2. Criterion function. Based on the model, construct a 'criterion function' of model and data,

  Q(Y, X, θ),

which has property P in the population when θ = θ₀ (the 'true' parameter vector).

3. Analog in sample. In the sample, construct an 'analog' to Q, Q_T(Y, X, θ), which has the following properties:

  (a) (∀θ ∈ Θ) Q_T(Y, X, θ) → Q(Y, X, θ) a.s. uniformly, where T is the sample size; and

  (b) Q_T(Y, X, θ) mimics in the sample the properties that Q(Y, X, θ)¹ has in the population.

4. The estimator. Let θ̂_T be the estimator of θ₀ in a sample of size T, formed from the sample analog by requiring Q_T(·) to have the property P.

¹ Hereafter we suppress the dependence of the criterion function Q and the sample analog Q_T on the data (Y, X) for notational convenience.
1.2 Consistency and Regularity

Definition 1 (Consistency) We say that an estimator θ̂_T is a consistent estimator of the parameter θ₀ if θ̂_T →p θ₀ (convergence in probability).

In general, to prove that the estimator θ̂_T formed using the analogy principle is consistent, we need to make the following two assumptions, which are referred to as the regularity conditions:

1. Identification condition. Generally this states that only θ₀ causes Q to possess property P, at least in some neighborhood of θ₀; i.e. θ₀ must be at least locally identified.

2. Uniform convergence of the analog function. We need the sample analog Q_T(θ) to converge 'nicely'. In particular, we need the convergence of the sample analog criterion function to the criterion function – (∀θ ∈ Θ) Q_T(θ) → Q(θ) – to be uniform.
The first condition ensures that θ₀ can be identified. If there is more than one value of θ that causes Q to have property P, then we cannot be sure that only one value of θ will cause Q_T to assume property P in the sample. In this case we may not be able to determine what θ̂_T estimates.

The second condition is a technical one ensuring that if Q_T has a property such as continuity or differentiability, this property is inherited by the limit Q.

Specific proofs of consistency depend on which property we suppose Q has when θ = θ₀.
1.3 An Abstract Example

The intuition underlying the analog-principle method and the role of the regularity conditions are illustrated in the simple (non-stochastic) example below.

• Assume an abstract model M(Y, X, θ₀) which leads to a criterion function Q(θ) with the property

  P := Q(θ) is maximized at θ = θ₀ (in the population).

• Construct a sample analog Q_T of the criterion function such that

  (∀θ ∈ Θ) Q_T(θ) → Q(θ).

• Select θ̂_T to maximize Q_T(θ) for each T. Then, if convergence is 'OK' (which we get under the regularity conditions), the maximizer in the sample converges to the maximizer in the population; i.e. θ̂_T → θ₀ as T → ∞.
Now suppose Θ = {1, 2, 3} is a finite set, so that θ̂ assumes only one of three values. Then under regularity condition 1 (q.v. §1.2), Q(θ) is maximized at exactly one of these. Further, by construction we have

  (∀θ ∈ Θ = {1, 2, 3}) Q_T(θ) → Q(θ).

Note that the rule picks

  θ̂_T = argmax_θ Q_T(θ).

This estimator must work well (i.e. be consistent for θ₀) for 'big enough' T, because Q_T → Q for each θ. Why? We loosely sketch a proof of the consistency of θ̂_T for θ₀ below.
Say θ₀ = 2, so that Q(θ) is maximized at θ = 2: Q(2) > Q(1) and Q(2) > Q(3). Now

  (∀θ ∈ Θ) Q_T(θ) → Q(θ).

This implies that as T gets large,

  (∀θ ∈ Θ) |Q_T(θ) − Q(θ)| → 0.

In other words, as T gets 'big' we get Q_T(θ) arbitrarily close to Q(θ).
Now suppose that even for very large T, Q_T(θ) is not maximized at 2 but, say, at 1. Then we have

  (∀T) Q_T(1) − Q_T(2) > 0.

Under regularity condition 2 (q.v. §1.2), taking limits would then imply Q(1) − Q(2) ≥ 0, contradicting Q(2) > Q(1).

Hence the estimator θ̂_T → θ₀.

Here property P is an example of the extremum principle. This principle involves either maximization or minimization of the criterion function. Examples include the maximum likelihood (maximization) and nonlinear least squares (minimization) methods.
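The finite-Θ argument above can be sketched numerically. Everything here is illustrative: an assumed Q maximized at θ₀ = 2, and a hypothetical sample analog Q_T built by adding a perturbation that vanishes as T grows (a deterministic 1/T term stands in for sampling noise).

```python
# Hypothetical finite parameter set Theta = {1, 2, 3} with Q maximized
# at theta_0 = 2, as in the sketch above.
Q = {1: 0.3, 2: 0.9, 3: 1.0 - 0.5}

# Stand-in sample analog: Q plus a perturbation vanishing as T -> infinity,
# chosen so that the argmax is wrong for small T.
c = {1: 0.0, 2: -1.0, 3: 1.5}

def Q_T(theta, T):
    return Q[theta] + c[theta] / T

def theta_hat(T):
    # the analogy-principle estimator: pick the theta giving Q_T property P
    # (maximization), exactly as in the abstract example
    return max(Q, key=lambda theta: Q_T(theta, T))

for T in (2, 10, 100):
    print(T, theta_hat(T))   # wrong at T = 2, then settles at theta_0 = 2
```

For small T the perturbation dominates and the argmax is 3; once T is large enough, Q_T is uniformly close to Q and the argmax is pinned at θ₀ = 2, mirroring the contradiction argument above.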

2 Overview of Convergence Concepts for Non-stochastic Functions
Most estimators involve forming functions of the available data. These functions of data can be viewed as sequences of functions, with the index being the number of data points available.

With sufficient data (large samples), these functions of sample data converge 'nicely' (under certain conditions), ensuring that the estimators we form have good properties, such as 'consistency' and 'asymptotic normality'.
In this section, we examine certain basic concepts about convergence of sequences of functions. The key idea here is that of uniform convergence; we shall broadly motivate why this notion of convergence is required.

For simplicity, we look at non-stochastic functions and non-stochastic convergence notions. Broadly construed, the same ideas also apply to stochastic functions – analogous convergence concepts are applicable in the stochastic case.
2.1 Pointwise Convergence

Let F_n : S → R be a sequence of real-valued functions. For each x ∈ S, form the sequence {F_n(x)}_{n=1}^∞. Let E ⊆ S be the set of points for which F_n(x) converges, and define

  (∀x ∈ E) F(x) := lim_{n→∞} F_n(x).

We then call F(x) the 'limit function', and say '{F_n(x)}_{n=1}^∞ converges pointwise to F(x) on E'.

Nota bene: pointwise convergence is not enough to guarantee that if F_n has a property (continuity, differentiability, etc.), that property is inherited by F.
Inheritance requires uniform convergence. Usually this is a sufficient condition for the properties of F_n to be shared by the limit function. Some of the properties we will be interested in are summarized below:

• Does '(∀n) F_n(x) is continuous' ⟹ 'F(x) is continuous'?

• Does '(∀n) lim_{x→x₀} F_n(x) = F_n(x₀)' ⟹ 'lim_{x→x₀} F(x) = F(x₀)'? I.e., does lim_{x→x₀} lim_{n→∞} F_n(x) = lim_{n→∞} lim_{x→x₀} F_n(x)?

• Does lim_{n→∞} ∫ F_n(x) dx = ∫ lim_{n→∞} F_n(x) dx = ∫ F(x) dx (where F(x) is the limit function)?

The answer to all three questions is: no, not in general. Pointwise convergence of F_n(x) to F(x) is generally not sufficient to guarantee that these properties hold. This is demonstrated by the examples which follow.
Example 1

Consider F_n(x) = xⁿ on 0 ≤ x ≤ 1. Here F_n(x) is continuous for every n, but the limit function is

  F(x) = 0 if 0 ≤ x < 1,
  F(x) = 1 if x = 1.

This is discontinuous at x = 1.
Example 2

Consider F_n(x) = x/(x + n) for x ∈ R. The limit function is F(x) = lim_{n→∞} F_n(x) = 0 for each fixed x, which implies that lim_{x→∞} lim_{n→∞} F_n(x) = 0. But lim_{x→∞} F_n(x) = 1 for every fixed n. This implies that

  lim_{n→∞} lim_{x→∞} F_n(x) = 1 ≠ lim_{x→∞} lim_{n→∞} F_n(x) = 0.

Example 3

Consider F_n(x) = n x (1 − x²)ⁿ on 0 ≤ x ≤ 1. Then the limit function is²

  F(x) = lim_{n→∞} F_n(x) = 0

for each fixed x. This implies that ∫₀¹ F(x) dx = 0. But we also get

  lim_{n→∞} ∫₀¹ F_n(x) dx = lim_{n→∞} n/(2n + 2) = 1/2.

² See Rudin, Chapter 3, for concepts relating to convergence of sequences.
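Example 3 can also be checked numerically; a crude midpoint rule is enough to see the integrals of F_n approach 1/2 while the integral of the limit function is 0.

```python
# Integrate F_n(x) = n*x*(1 - x**2)**n over [0, 1] by the midpoint rule.
def integral_Fn(n, m=100000):
    h = 1.0 / m  # grid step; midpoints are (i + 0.5) * h
    return sum((i + 0.5) * h * n * (1.0 - ((i + 0.5) * h) ** 2) ** n * h
               for i in range(m))

print(integral_Fn(50))   # close to 50/102, approaching 1/2 as n grows
```

The exact value is n/(2n + 2), as stated above, so for n = 50 the midpoint estimate should sit near 50/102 ≈ 0.49.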

2.2 Uniform Convergence

A sequence of functions {F_n} converges uniformly to F on E if

  (∀ε > 0)(∃N)(∀n > N)(∀x ∈ E) |F_n(x) − F(x)| < ε,

where N depends on ε but not on x; i.e.,

  (∀x ∈ E) F(x) − ε < F_n(x) < F(x) + ε.

Intuitively, for any x ∈ E, as n gets large enough F_n(x) lies in a band of uniform thickness around the limit function F(x).
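The band intuition suggests measuring convergence by the sup-distance sup_{x∈E} |F_n(x) − F(x)|; convergence is uniform exactly when this goes to 0. A small sketch (the finite grid is an approximation of E = [0, 1], introduced here for illustration): x/n converges uniformly to 0, while xⁿ from Example 1 does not — its sup-distance to the pointwise limit stays near 1.

```python
def sup_dist(Fn, F, grid):
    # approximate sup over x of |F_n(x) - F(x)| on a finite grid
    return max(abs(Fn(x) - F(x)) for x in grid)

grid = [i / 10000 for i in range(10001)]   # grid on [0, 1]

def zero(x):
    return 0.0

def limit_xn(x):
    # pointwise limit of x**n on [0, 1] (Example 1)
    return 1.0 if x == 1.0 else 0.0

for n in (10, 100):
    print(n, sup_dist(lambda x: x / n, zero, grid),        # shrinks like 1/n
             sup_dist(lambda x: x ** n, limit_xn, grid))   # stays near 1
```

The first column of sup-distances shrinks to 0 (uniform convergence); the second does not, because just below x = 1 the functions xⁿ remain far from the limit for every n.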

Note that the convergence displayed in the examples in §2.1 did not satisfy this notion of 'uniform' convergence.

We have theorems ensuring that the inheritance properties referred to in §2.1 hold when convergence is uniform.³

³ See Rudin, Chapter 7. Theorems 7.12, 7.11, and 7.16 respectively ensure that the three properties listed in §2.1 hold.

3 Applications of the Analog Principle
3.1 Moment Principle

In this case, the criterion function Q is an equation connecting population moments and θ₀:

  Q = Q(population moments of [y, x], θ) = 0 at θ = θ₀.   (1)

Thus the property P here is not maximization or minimization (as in the extremum principle), but setting some function of the population moments (generally) to zero at θ = θ₀, so that solving equation (1) we get the estimator:

  θ̂_T = θ(sample moments of [y, x]).
When forming the estimator 'analog' in the sample, we can consider two cases.

3.1.1 Case A

Here we solve for θ₀ in the population equation, obtaining

  θ₀ = θ₀(population moments of [y, x]).

From this equation we construct the simple analog

  θ̂_T = θ̂(sample moments of [y, x]).

Given regularity condition 1 (q.v. §1.2), we get θ̂_T →p θ₀. Note that we do not need regularity condition 2.
Example 4 (OLS)

1. The model.

  Y_t = X_t β₀ + U_t,   E(U_t) = 0,   Var(U_t) = σ²_u,
  E(X_t′ U_t) = 0,   E(X_t′ X_t) = Σ_xx positive definite,   E(X_t′ Y_t) = Σ_xy.

2. Moment principle (criterion function). In the population,

  E(X_t′ U_t) = 0,
  or X_t′ Y_t = X_t′ X_t β₀ + X_t′ U_t
  ⟹ Q: Σ_xy = Σ_xx β₀ + 0
  ⟹ β₀ = Σ_xx⁻¹ Σ_xy.

3. Analog in sample.

  β̂_T = (Σ_t X_t′ X_t / T)⁻¹ (Σ_t X_t′ Y_t / T).

Now the r.h.s. → Σ_xx⁻¹ Σ_xy, and thus β̂_T →p β₀.
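A minimal simulation sketch of Case A (all constants are illustrative; a scalar regressor is used, so Σ_xx⁻¹ is just division):

```python
import random

random.seed(0)
beta0, T = 2.0, 50000
x = [random.gauss(0, 1) for _ in range(T)]
y = [beta0 * xt + random.gauss(0, 1) for xt in x]   # Y_t = X_t*beta0 + U_t

Sxx = sum(xt * xt for xt in x) / T               # sample analog of E[X_t'X_t]
Sxy = sum(xt * yt for xt, yt in zip(x, y)) / T   # sample analog of E[X_t'Y_t]

# plug sample moments into beta_0 = Sxx^{-1} Sxy, as in step 3 above
beta_hat = Sxy / Sxx
print(beta_hat)   # close to beta0 = 2.0
```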

3.1.2 Case B

Here we form the criterion function, form the sample analog of the criterion function, and then solve for the estimator.

We require condition 1 (q.v. §1.2) and uniform convergence (condition 2) to get consistency of the estimator – i.e. θ̂_T →p θ₀.

Example 5 (OLS – another analogy)

1. The model. As in §3.1.1.

2. Moment principle (criterion function). In the population,

  Q: E(X_t′ U_t) = 0.
3. Analog in sample. Here we define a sample analog of U:

  Û = y − x β̂.

This mimics U in the criterion function, so that we can form the sample analog of the criterion function by substituting Û for U in the expression for Q above:

  Q_T = (1/T) Σ_t X_t′ (Y_t − X_t β̂) = 0.

4. The estimator. Here we pick β̂ by solving Q_T = 0, arriving at the relation

  (1/T) Σ_t X_t′ Y_t = (Σ_t X_t′ X_t / T) β̂.
Remarks.

• Recall that in Case A (q.v. §3.1.1) we did not form Q_T, but first solved for β₀ and then formed β̂ directly, skipping step 3 of Case B.

• Observe that when X_t = 1,

  β̂ = Σ_t Y_t / T

is the sample mean.

• OLS can also be interpreted as an application of the extremum principle (q.v. §3.2).
3.2 Extremum Principle

We saw in §1.3 that under the extremum principle, property P provides that Q(θ) achieves a maximum or minimum at θ = θ₀ in the population.

This principle underlies the nonlinear least squares (NLS) and maximum likelihood estimation (MLE) methods. As their names suggest, MLE takes the property P to be maximization of Q, while NLS takes it to be minimization.

The OLS estimator can be seen as a special case of the NLS estimator, and can be viewed as an extremum estimator.

In this section we analyze OLS and NLS as extremum estimators. MLE is examined in more detail in §4.
Example 6 (OLS as an extremum principle)

1. The model. We assume the true model

  Y_t = X_t β₀ + U_t
  ⟹ Y_t = X_t β + X_t (β₀ − β) + U_t
  ⟹ (Y_t − X_t β)′(Y_t − X_t β) = (β₀ − β)′ X_t′ X_t (β₀ − β) + 2 U_t′ X_t (β₀ − β) + U_t′ U_t.

2. Criterion function.

  Q = E[(Y_t − X_t β)′(Y_t − X_t β)].

From the model assumptions we have E(X_t′ U_t) = 0. Thus

  Q = (β − β₀)′ Σ_xx (β − β₀) + σ²_u.

So Q is minimized (with respect to β) at β = β₀. Here, then, Q is an example of the extremum principle: it is minimized when β = β₀ (the true parameter vector).

3. Analog in sample. A sample analog of the criterion function is constructed as

  Q_T = (1/T) Σ_{t=1}^T (Y_t − X_t β)′(Y_t − X_t β).
We can show that this analog satisfies the key requirement of the analog principle:

  plim Q_T = plim (1/T) Σ_{t=1}^T (Y_t − X_t β)′(Y_t − X_t β)
           = (β − β₀)′ Σ_xx (β − β₀) + σ²_u = Q

(assuming the conditions for the application of some LLN are satisfied; q.v. Part I of these lectures).

4. The estimator. We pick β̂ to minimize Q_T. Under standard regularity conditions, we can show that we get a contradiction unless β̂ →p β₀.
Example 7 (NLS as an extremum principle)

1. The model. We assume that the following model holds in the population:

  Y_t = g(X_t; θ₀) + U_t   (non-linear model)
  ⟹ Y_t = g(X_t; θ) + (g(X_t; θ₀) − g(X_t; θ)) + U_t.

Assume (U_t, X_t, Y_t) i.i.d. Then the independence U_t ⊥ X_t implies that

  (∀θ) U_t ⊥ g(X_t; θ).
2. Criterion function. Choose the criterion function as

  Q = E[(Y_t − g(X_t; θ))²] = E[(g(X_t; θ₀) − g(X_t; θ))²] + σ²_u.

Then Q is minimized at θ = θ₀ (a true parameter value). If θ = θ₀ is the only such value, the model is identified with respect to the Q criterion – regularity condition 1 (q.v. §1.2) is satisfied.

3. Analog in sample.

  Q_T(θ) := (1/T) Σ_{t=1}^T (Y_t − g(X_t; θ))².

As in the OLS case, we can show that plim Q_T = Q.

4. The estimator. We construct the NLS estimator as

  θ̂_T = argmin_θ Q_T(θ).

We thus choose θ̂ to minimize Q_T(θ). Reductio ad absurdum verifies that

  Q_T(θ₀) → Q(θ₀)

and

  (∀θ ∈ Θ) Q_T(θ) → Q(θ) ⟹ θ̂_T → θ₀.
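A hedged sketch of steps 3-4 for an assumed g(X; θ) = exp(θ·X); this particular g, the grid over Θ, and all constants are illustrative, not part of the lecture's setup:

```python
import math
import random

random.seed(1)
theta0, T = 0.5, 2000
x = [random.uniform(0, 1) for _ in range(T)]
y = [math.exp(theta0 * xt) + random.gauss(0, 0.1) for xt in x]

def Q_T(theta):
    # sample analog of Q = E[(Y - g(X; theta))^2]
    return sum((yt - math.exp(theta * xt)) ** 2
               for xt, yt in zip(x, y)) / T

grid = [i / 200 for i in range(201)]   # grid approximating Theta = [0, 1]
theta_hat = min(grid, key=Q_T)         # the extremum (argmin) step
print(theta_hat)   # near theta0 = 0.5
```

A grid search is the crudest way to take the argmin, but it makes the extremum-principle logic transparent: Q_T is evaluated at every candidate θ and the minimizer is selected directly.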

Remark. The NLS estimator can also be derived as a moment estimator, just as in the OLS example (q.v. §3.1).

1. The model. Same as the non-linear model above. We have U_t ⊥ g(X_t; θ).

2. Criterion function. Take Q = E[U_t · g(X_t; θ)] = 0. Note that this uses only one implication of independence. Since Y_t − g(X_t; θ) = U_t at θ = θ₀, we may write

  Q = E[(Y_t − g(X_t; θ)) · g(X_t; θ)] = 0 at θ = θ₀.

3. Analog in sample.

  Q_T := (1/T) Σ_{t=1}^T (Y_t − g(X_t; θ)) · g(X_t; θ).

4. The estimator. Find θ̂ which sets Q_T = 0 (or as close to zero as possible).

4 Maximum Likelihood

The maximum likelihood estimation (MLE) method is an example of the extremum principle. In this section, we look at the ideas underlying MLE and examine the regularity conditions and convergence notions in more detail for this estimator.

4.1 The Model

Suppose that the joint density of the data factors as

  f(y_t, x_t; θ₀) = f(y_t | x_t; θ₀) · f(x_t).

Assume that x_t is 'exogenous' – i.e. the density of x_t is uninformative about θ₀. Also assume random sampling. We arrive at the likelihood function

  L = Π_{t=1}^T f(y_t, x_t; θ₀).

Taking (y_t, x_t) as data, L becomes a function of θ. The log-likelihood function is

  ln L = Σ_{t=1}^T ln f(y_t, x_t; θ) = Σ_{t=1}^T ln f(y_t | x_t; θ) + Σ_{t=1}^T ln f(x_t).
4.2 Criterion Function

In the population, define the criterion function as

  Q = E_{θ₀}[ln f(y_t, x_t; θ)]
    = ∫ ln f(y_t, x_t; θ) f(y_t, x_t; θ₀) dy_t dx_t.

(We assume this integral exists.)

We pick the θ that maximizes L. Note that this is an extremum-principle application of the analogy principle.
Claim. The criterion function is maximized at θ = θ₀.

Proof. First,

  E_{θ₀}[ f(y_t, x_t; θ) / f(y_t, x_t; θ₀) ] = 1

because

  ∫ [ f(y_t, x_t; θ) / f(y_t, x_t; θ₀) ] f(y_t, x_t; θ₀) dy_t dx_t = ∫ f(y_t, x_t; θ) dy_t dx_t = 1.

Applying Jensen's inequality, concavity of the ln function implies that E[ln Z] ≤ ln E[Z] for a positive random variable Z, so

  E_{θ₀}[ ln( f(y_t, x_t; θ) / f(y_t, x_t; θ₀) ) ] ≤ ln 1 = 0
  ⟹ (∀θ) E_{θ₀}[ln f(y_t, x_t; θ)] ≤ E_{θ₀}[ln f(y_t, x_t; θ₀)].

We get global identification in the population if the inequality is strict for all θ ≠ θ₀.
4.3 Analog in Sample

Construct the sample analog Q_T of the criterion function as

  Q_T := (1/T) Σ_{t=1}^T ln f(y_t, x_t; θ).

4.4 The Estimator and its Properties

We form the estimator as

  θ̂_T := argmax_{θ ∈ Θ} Q_T(θ)

(we assume this maximizer exists).
Local form of the principle. In the local form, we use first- and second-order conditions (FOC, SOC) to characterize the maximizer over Θ. Recall that we have the criterion function

  Q(θ) = ∫ ln f(y; θ) f(y; θ₀) dy = E_{θ₀}[ln f(y; θ)].

Maximization of the criterion function yields first- and second-order conditions:

  FOC: ∫ (∂ ln f(y; θ)/∂θ) f(y; θ₀) dy = 0;

  SOC: ∫ (∂² ln f(y; θ)/∂θ ∂θ′) f(y; θ₀) dy negative definite.

Accordingly, we require for the sample analog:

  (1/T) Σ_{t=1}^T ∂ ln f(y_t; θ)/∂θ = 0;

  (1/T) Σ_{t=1}^T ∂² ln f(y_t; θ)/∂θ ∂θ′ negative definite.

For 'local identification', we require that the second-order condition be satisfied locally around the point solving the FOC. For 'global' identification, we need the SOC to hold for every θ ∈ Θ.
Either way (e.g. directly by a grid search, or using the FOC/SOC), we have the same basic idea. For each T, we pick θ̂_T such that

  (∀θ ∈ Θ) Q_T(θ̂_T) ≥ Q_T(θ).

Now if Q_T(θ̂_T) → Q(plim θ̂_T) (uniform convergence), this yields

  Q(plim θ̂_T) ≥ Q(θ₀)

– where Q(θ₀) is assumed to be the unique maximum value – a contradiction unless plim θ̂_T = θ₀.

To be more precise, we must check whether Q_T → Q uniformly (almost surely).

It remains to cover certain concepts and definitions for random functions.
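As a concrete sketch of the global (grid-search) form of the argument: take an assumed N(θ, 1) model, where Q_T(θ) = (1/T) Σ ln f(y_t; θ) and the FOC solution is the sample mean. All constants and the grid over Θ are illustrative.

```python
import math
import random

random.seed(2)
theta0, T = 1.0, 5000
y = [random.gauss(theta0, 1.0) for _ in range(T)]

LOG_NORM = -0.5 * math.log(2 * math.pi)   # log normalizing constant of N(., 1)

def Q_T(theta):
    # sample analog: (1/T) sum of log N(theta, 1) densities
    return sum(LOG_NORM - 0.5 * (yt - theta) ** 2 for yt in y) / T

grid = [i / 100 for i in range(-100, 301)]   # grid approximating Theta = [-1, 3]
theta_hat = max(grid, key=Q_T)
print(theta_hat)   # near theta0 = 1.0, and near the sample mean
```

Because Q_T here is a concave quadratic in θ, the grid maximizer coincides (up to grid resolution) with the FOC solution Σ y_t / T, illustrating that the global and local forms pick out the same estimator.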

5 Some Concepts and Definitions for Random (Stochastic) Functions

In §5.1 we define random functions and examine some fundamental properties of such functions. In §5.2 we define convergence concepts for sequences of random functions.
5.1 Random Functions and Some Properties

Definition 2 (Random Function)
Let (Ω, 𝒜, P) be a probability space and let Θ ⊆ Rᵏ. A real function φ(θ) = φ(θ, ω) on Θ × Ω is called a random function on Θ if

  (∀t ∈ R¹)(∀θ ∈ Θ) {ω ∈ Ω : φ(θ, ω) < t} ∈ 𝒜.

We can then assign a probability to the event φ(θ, ω) < t.
Proposition. If φ(θ, x) is a continuous real-valued function on Θ × Rⁿ, where Θ is compact, then

  g(x) := sup_{θ ∈ Θ} φ(θ, x)   and   h(x) := inf_{θ ∈ Θ} φ(θ, x)

are continuous functions.

Proof. See Stokey, Lucas, and Prescott, Chapter 3, for definitions of sup and inf and for the proof of this proposition (q.v. Theorem 3.6).
Proposition. If for almost all values of x ∈ X, g(x, θ) is continuous with respect to θ at the point θ₀, and if for all θ in a neighborhood of θ₀ we have

  |g(x, θ)| < G₁(x) < ∞   (with E[G₁(x)] < ∞),

then

  lim_{θ→θ₀} ∫ g(x, θ) dF(x) = ∫ g(x, θ₀) dF(x);

i.e.

  lim_{θ→θ₀} E[g(x, θ)] = E[g(x, θ₀)].

Proof. This is a version of the dominated convergence theorem. See, inter alia, Royden, Chapter 4.
Proposition. If for almost all values of x ∈ X and for a fixed value of θ,

  (a) ∂g(x, θ)/∂θ exists (in a neighborhood of θ), and
  (b) |(g(x, θ + h) − g(x, θ)) / h| < G₂(x)   (with E[G₂(x)] < ∞)

for 0 < |h| < h₀, with h₀ independent of x, then

  (∂/∂θ) ∫ g(x, θ) dF(x) = ∫ (∂g(x, θ)/∂θ) dF(x);

i.e.

  (∂/∂θ) E[g(x, θ)] = E[∂g(x, θ)/∂θ].
5.2 Convergence Concepts for Random Functions

In Part I (asymptotic theory) we defined convergence concepts for random variables. Here we define analogous concepts for random functions.

Definition 3 (Almost Sure Convergence)
Let F(θ) and F_n(θ) be random functions on Θ ⊆ Rᵏ for each θ ∈ Θ. Then F_n(θ) converges almost surely to F(θ) as n → ∞ if

  (∀ε > 0) P{ω : lim_{n→∞} |F_n(θ, ω) − F(θ, ω)| < ε} = 1;

i.e., if for every fixed θ the set S_θ ⊆ Ω on which |F_n(θ, ω) − F(θ, ω)| ≥ ε for n ≥ n₀(θ, ε) has no probability.
The union S = ∪_{θ ∈ Θ} S_θ may have non-negligible probability even though any one set S_θ has negligible probability. We avoid this by the following definition.

Definition 4 (Almost Sure Uniform Convergence)
F_n(θ) → F(θ) almost surely uniformly in θ if

  sup_{θ ∈ Θ} |F_n(θ) − F(θ)| → 0

almost surely as n → ∞; i.e., if

  (∀ε > 0) P{ω : lim_{n→∞} sup_{θ ∈ Θ} |F_n(θ, ω) − F(θ, ω)| ≤ ε} = 1.

In this case, the negligible set is not indexed by θ.
Definition 5 (Uniform Convergence in Probability) Let F_n(θ) and F(θ) be random functions on Θ. Then F_n(θ) → F(θ) in probability uniformly in θ on Θ if

  (∀ε > 0) lim_{n→∞} P{ sup_{θ ∈ Θ} |F_n(θ) − F(θ)| > ε } = 0.
Theorem 1 (Strong Uniform Law of Large Numbers)
Let {x_j} be a sequence of i.i.d. random k × 1 vectors. Let F(x, θ) be a continuous real function on Rᵏ × Θ, where Θ is compact (closed and bounded, so that every open cover has a finite subcover). Define

  ψ(a) = sup_{‖x‖ < a} sup_{θ ∈ Θ} |F(x, θ)|.

Let G(x) be the distribution of x. Assume E[ψ(‖x‖)] < ∞. Then

  (1/T) Σ_{j=1}^T F(x_j, θ) → ∫ F(x, θ) dG(x)   a.s. uniformly in θ.

This is a type of LLN, and can be modified for the non-i.i.d. case. In that case, each x_j has its own distribution F_j, and we would require that

  (a) (1/T) Σ_{j=1}^T F_j →d G, and

  (b) sup_T (1/T) Σ_{j=1}^T E[ |ψ(‖x_j‖)|^{1+δ} ] < ∞ for some δ > 0.

Then

  (1/T) Σ_{j=1}^T F(x_j, θ) → ∫ F(x, θ) dG(x)   a.s. uniformly in θ.

Note that in either case we need a bound on E[ψ(‖x_j‖)].
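A hedged simulation of the theorem for an assumed F(x, θ) = cos(θx) with x ~ N(0, 1), chosen because the limit is known in closed form: ∫ cos(θx) dG(x) = exp(−θ²/2). The sup over a (grid approximation of a) compact Θ of the sample-average error shrinks as T grows.

```python
import math
import random

random.seed(3)
thetas = [i / 20 for i in range(41)]   # grid approximating compact Theta = [0, 2]

def sup_error(T):
    # sup over Theta of |(1/T) sum_j F(x_j, theta) - E[F(x, theta)]|
    xs = [random.gauss(0, 1) for _ in range(T)]
    return max(abs(sum(math.cos(th * x) for x in xs) / T
                   - math.exp(-th * th / 2))
               for th in thetas)

for T in (100, 10000):
    print(T, sup_error(T))   # typically much smaller for the larger T
```

Here |F(x, θ)| ≤ 1, so the dominance condition E[ψ(‖x‖)] < ∞ holds trivially, and the sup-error behaves like the pointwise LLN error uniformly over Θ.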

References

1. Amemiya, Advanced Econometrics, 1985, Chapter 3.

2. Greenberg and Webster, Advanced Econometrics, 1991, Chapter 1.

3. Newey and McFadden, 'Large Sample Estimation and Hypothesis Testing', in Handbook of Econometrics, Volume IV, 1994, Chapter 36.

4. Royden, Real Analysis, 1968, Chapter 4.

5. Rudin, Principles of Mathematical Analysis, 1976, Chapters 3 and 7.

6. Stokey, Lucas, and Prescott, Recursive Methods in Economic Dynamics, 1989, Chapter 3.
