TURBULENCE
Lecture Note
2009 Fall
http://vortex.hanyang.ac.kr
• It is important to be able to tell how a r.v. is distributed about the mean (ensemble average).
Define: variance of r.v. $u$
$$\mathrm{var}\{u\} \equiv \sigma_u^2 \equiv \overline{(u-\bar{u})^2}$$
$\sigma_u$ : standard deviation of $u$
The variance is also called the 2nd central moment.
If two r.v.'s are identically distributed, they must have the same variance.
Can define higher moments.
$$n\text{th moment:}\quad \overline{u^n} \equiv \lim_{N\to\infty}\frac{1}{N}\sum_{j=1}^{N} u_j^n$$
n=1 : mean
n=2 : mean square
central nth moment (first subtract the mean value):
$$\overline{(u-\bar{u})^n} \equiv \lim_{N\to\infty}\frac{1}{N}\sum_{j=1}^{N}(u_j - \bar{u})^n$$
$$\overline{u^2} = \overline{[\bar{u} + (u-\bar{u})]^2} = \overline{\bar{u}^2 + 2\bar{u}(u-\bar{u}) + (u-\bar{u})^2}$$
cf. Easy to see from the def. of ensemble average that arithmetic operations and averaging operations commute.
$$\text{e.g.}\quad \bar{u} + \bar{v} = \lim_{N\to\infty}\frac{1}{N}\sum_{i=1}^{N}u_i + \lim_{N\to\infty}\frac{1}{N}\sum_{i=1}^{N}v_i = \lim_{N\to\infty}\frac{1}{N}\sum_{i=1}^{N}(u_i + v_i) = \overline{(u+v)}$$
$$\Rightarrow\ \overline{u^2} = \bar{u}^2 + 2\bar{u}\,\overline{(u-\bar{u})} + \overline{(u-\bar{u})^2} = \bar{u}^2 + 2\bar{u}\cdot 0 + \mathrm{var}\{u\}$$
(mean square) = (square of mean) + variance
or $\mathrm{var}\{u\} = \overline{u^2} - \bar{u}^2$
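As a quick numerical illustration, a minimal NumPy sketch can check the identity $\overline{u^2} = \bar{u}^2 + \mathrm{var}\{u\}$ on a finite ensemble; the Gaussian sample and its parameters are arbitrary choices for illustration.

```python
# Sketch: mean square = (square of mean) + variance, on a finite ensemble.
import numpy as np

rng = np.random.default_rng(0)
u = rng.normal(loc=2.0, scale=0.5, size=100_000)  # ensemble of realizations u_j

mean = u.mean()                      # 1st moment
mean_square = (u**2).mean()          # 2nd moment
variance = ((u - mean)**2).mean()    # 2nd central moment

# the two printed values agree (exactly so as N -> infinity)
print(mean_square, mean**2 + variance)
```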
Build a histogram $H(u, \Delta u)$ of the realizations using windows of width $\Delta u$ about each level $u$. Then
$$\lim_{\Delta u \to 0} H(u, \Delta u) = 0$$
But
$$\lim_{\Delta u \to 0}\frac{H(u, \Delta u)}{\Delta u} = B(u)\ :\ \text{probability density function (p.d.f.)}$$
← a double limit ($N \to \infty$, $\Delta u \to 0$); $B(u)$ is the limiting curve of the histogram.
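A minimal sketch of this double limit, assuming NumPy and an arbitrary Gaussian sample: the normalized histogram $H(u,\Delta u)/\Delta u$ approximates $B(u)$ for large $N$ and small $\Delta u$.

```python
# Sketch: the p.d.f. as the limiting curve of a histogram, H(u, du)/du.
import numpy as np

rng = np.random.default_rng(1)
u = rng.normal(size=1_000_000)       # N realizations, N "large"

counts, edges = np.histogram(u, bins=200)
du = edges[1] - edges[0]             # window width
B = counts / (len(u) * du)           # H(u, du)/du -> B(u) as N -> inf, du -> 0

print(B.sum() * du)                  # ~ 1 : total probability
```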
• Properties of the PDF
① $B(u) \ge 0$
② $\int_{-\infty}^{\infty} B(u)\,du = 1$
③ $P\{u \le c\} = \int_{-\infty}^{c} B(u)\,du \equiv F(c)$ : probability distribution function
← probability average and ensemble average are the same
$$n\text{th moment:}\quad \overline{u^n} = \int_{-\infty}^{\infty} u^n\,B(u)\,du$$
$$\text{variance} \equiv \mathrm{var}\{u\} = \int_{-\infty}^{\infty}(u-\bar{u})^2\,B(u)\,du \ \ge 0 \qquad (\text{cf. } B(u) \ge 0)$$
$$\text{third central moment} = \int_{-\infty}^{\infty}(u-\bar{u})^3\,B(u)\,du$$
$$B(u) = \underbrace{\tfrac{1}{2}\left[B(u) + B(-u)\right]}_{\text{even}} + \underbrace{\tfrac{1}{2}\left[B(u) - B(-u)\right]}_{\text{odd}}$$
- the variance can be zero only when $B(u) = \delta(u - \bar{u})$ (steady signal, or all values exactly the same)
- the third central moment is zero if $B(u)$ is even; it can be non-zero only if $B(u)$ has an odd part.
$$\text{Skewness} \equiv S \equiv \frac{\overline{(u-\bar{u})^3}}{[\mathrm{var}\{u\}]^{3/2}} \quad \leftarrow \text{nondimensionalize}$$
(figures: a skewed PDF, where the contribution of $(u-\bar{u})^3$ is bigger for $u < \bar{u}$, and a PDF with large kurtosis $K$ compared with a Gaussian)
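A short sketch of the sample skewness defined above, assuming NumPy; the exponential distribution is an arbitrary positively skewed choice.

```python
# Sketch: sample skewness S = <(u - ubar)^3> / var{u}^(3/2).
import numpy as np

rng = np.random.default_rng(2)
u = rng.exponential(scale=1.0, size=500_000)

d = u - u.mean()
S = (d**3).mean() / (d**2).mean()**1.5
print(S)   # ~ 2, the skewness of an exponential distribution
```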
• Joint moments
Consider two r.v.'s $u$ & $v$.
$\overline{u^2},\ \overline{v^2}$ : single moments; $\overline{(u-\bar{u})^2}$ : variance of $u$
$\overline{uv^2},\ \overline{u^2 v}$ : joint moments
$\overline{(u-\bar{u})(v-\bar{v})}$ ← (cross-)correlation, (cross-)covariance
(figure: scatter of realizations $(u_1, v_1)$ in the $u$-$v$ plane)
$$B(u,v) = \lim_{\substack{N\to\infty \\ \Delta u\to 0,\ \Delta v\to 0}} \frac{H(u, v, \Delta u, \Delta v)}{\Delta u\,\Delta v}$$
Properties of the JPDF
① $B(u,v) \ge 0$
② $\int\!\!\int_{-\infty}^{\infty} B(u,v)\,du\,dv = 1$
$$B(u,v) = \frac{\partial^2 F(u,v)}{\partial u\,\partial v}$$
where $F(u,v)$ is the joint distribution function, with marginals $F_u = F(u,\infty)$, $F_v = F(\infty, v)$.
(figure: contour lines of $B(u,v)$ in the $u$-$v$ plane, with slices at $u_1$ and $v_1$)
Suppose we want the statistics of one variable given a particular value of the other: conditional probability. Say the particular value of $v$ is $v_1$. Then
$$B(u \mid v = v_1) = \frac{B(u, v_1)}{B_v(v_1)} \quad\leftarrow \text{the profile of } B(u,v) \text{ at } v = v_1 \text{, normalized}$$
$$\bar{u} = \int\!\!\int_{-\infty}^{\infty} u\,B(u,v)\,du\,dv = \int_{-\infty}^{\infty} du\; u\left[\int_{-\infty}^{\infty} B(u,v)\,dv\right] = \int_{-\infty}^{\infty} u\,B_u(u)\,du$$
where $B_u(u) = \int_{-\infty}^{\infty} B(u,v)\,dv$ is the marginal p.d.f.
Suppose $\overline{u'v'} = 0$; then we say $u$ & $v$ (or $u'$ & $v'$) are uncorrelated.
$$\text{Note}\quad \overline{uv} = \overline{(\bar{u}+u')(\bar{v}+v')} = \bar{u}\bar{v} + \underbrace{\overline{u'}\,\bar{v}}_{=0} + \underbrace{\overline{v'}\,\bar{u}}_{=0} + \overline{u'v'}$$
uncorrelated : $\overline{uv} = \bar{u}\,\bar{v}$
uncorrelated : $\rho_{uv} \equiv \overline{u'v'}/(\sigma_u \sigma_v) = 0$
and always $|\rho_{uv}| \le 1$;
perfectly correlated : $\rho_{uv} = +1$; perfectly anti-correlated : $\rho_{uv} = -1$
Statistical independence: the occurrence of $u$ in no way affects the probability of occurrence of $v$, and conversely
$$\Rightarrow\ B(u,v) = B_u(u)\,B_v(v)$$
Assume two r.v.'s $u$ & $v$ with $\overline{u'v'} \ne 0$. These are clearly not statistically independent.
No combination of these two r.v.'s can be statistically independent.
However, we can find a combination which has zero correlation.
e.g.) $x = u + v,\ y = u - v$
$$\Rightarrow\ x' = u' + v',\quad y' = u' - v'$$
$$\Rightarrow\ \overline{x'y'} = \overline{(u'+v')(u'-v')} = \overline{u'^2} + \underbrace{\overline{u'v'} - \overline{u'v'}}_{=0} - \overline{v'^2} = \overline{u'^2} - \overline{v'^2}$$
$x$ & $y$ are not statistically independent since $u$ & $v$ are not. If $\overline{u'^2} = \overline{v'^2}$, then $\overline{x'y'} = 0$ and there is no correlation, in spite of the lack of statistical independence.
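A sketch of this example, assuming NumPy; building $u$ and $v$ from a shared component $w$ is an arbitrary way to get $\overline{u'v'} \ne 0$ with equal variances.

```python
# Sketch: x = u+v and y = u-v are uncorrelated yet not independent.
import numpy as np

rng = np.random.default_rng(3)
w = rng.normal(size=500_000)
u = w + 0.5 * rng.normal(size=w.shape)   # u, v share the component w,
v = w + 0.5 * rng.normal(size=w.shape)   # so <u'v'> != 0, equal variances

x, y = u + v, u - v
print(np.cov(u, v)[0, 1])   # nonzero: u, v are correlated
print(np.cov(x, y)[0, 1])   # ~ 0 (up to sampling noise): x'y' = u'^2 - v'^2
```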
cf. Bivariate Normal (Gaussian) Distribution
Suppose $u$ & $v$ are normally distributed r.v.'s with standard deviations $\sigma_u$ & $\sigma_v$, and with correlation coefficient
$$\rho_{uv} = \frac{\overline{(u-\bar{u})(v-\bar{v})}}{\sigma_u \sigma_v}$$
Either r.v. taken separately has a Gaussian marginal distribution.
$$\text{e.g.}\quad B_u(u) = \frac{1}{\sqrt{2\pi}\,\sigma_u}\, e^{-\frac{(u-\bar{u})^2}{2\sigma_u^2}}$$
Estimation of statistical properties from a finite number of realizations
- we never have an infinite number of realizations
- so how good are our estimators based on a finite number?
The estimator $u_N$ is itself a random variable; averaging it over $M$ repetitions,
$$\lim_{M\to\infty}\frac{1}{M}\sum_{k=1}^{M}(u_N)_k = \overline{u_N} \quad\text{or}\quad \overline{u_N} = \int_{-\infty}^{\infty} u_N\,B_{u_N}(u_N)\,du_N$$
where $B_{u_N}$ is the sampling P.D.F.
Bias: an estimator is unbiased if the average of the estimator yields the true average (no systematic error); it converges to the correct value.
$$\overline{u_N} = \overline{\frac{1}{N}\sum_{i=1}^{N}u_i} = \frac{1}{N}\sum_{i=1}^{N}\overline{u_i} = \frac{1}{N}\cdot N\bar{u} = \bar{u}$$
$$\mathrm{var}\{u_N\} \equiv \overline{\left(u_N - \overline{u_N}\right)^2}$$
Does $\mathrm{var}\{u_N\} \to 0$ as $N \to \infty$? For independent realizations,
$$\mathrm{var}\{u_N\} = \frac{\mathrm{var}\{u\}}{N}$$
i.e. the variability of the estimator is equal to the variability of the r.v. itself divided by the number of independent realizations.
$$\text{relative error}:\quad \varepsilon = \frac{1}{\sqrt{N}}\,\frac{\sigma_u}{\bar{u}}$$
(figure: the sampling p.d.f. $B_{u_N}$ narrows about $\bar{u}$ like $1/\sqrt{N}$)
How many flips are required to measure expected value of ½ to within 1% error?
$$\bar{u} = \frac{1}{2}, \qquad \mathrm{var}\{u\} = \overline{(u-\bar{u})^2} = \frac{1}{4} \quad\text{since } u = \begin{cases}0\\1\end{cases}$$
$$\frac{\sqrt{\mathrm{var}\{u_N\}}}{\bar{u}} = 0.01 \ \Rightarrow\ \varepsilon^2 = 10^{-4}$$
$$N = \frac{1}{\varepsilon^2}\left(\frac{\sigma_u}{\bar{u}}\right)^2 = \frac{1}{10^{-4}}\,\frac{1/4}{(1/2)^2} = 10^4$$
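A Monte Carlo sketch of this estimate, assuming NumPy; the number of repeated experiments $M$ is an arbitrary choice. With $N = 10^4$ flips the relative error of the sample mean comes out near 1%.

```python
# Sketch: relative error of the sample mean of N fair coin flips.
import numpy as np

rng = np.random.default_rng(4)
N, M = 10_000, 10_000                       # flips per experiment, experiments
u_N = rng.binomial(N, 0.5, size=M) / N      # M estimates of ubar = 1/2

eps = u_N.std() / 0.5                       # relative error of the estimator
print(eps)                                  # ~ 0.01, i.e. 1%
```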
Stationary Random Process (Stochastic Process)
(figure: a sample record $u(t)$)
For a stationary random process, the PDF and moments are time-independent (independent of the origin of time). This can only approximate a real process, since a stationary random process must go on forever.
Ensemble average
$$\bar{u} = \lim_{T\to\infty}\frac{1}{T}\int_0^T u(t)\,dt$$
$$u_T = \frac{1}{T}\int_0^T u(t)\,dt, \qquad \overline{u_T} = \frac{1}{T}\int_0^T \overline{u(t)}\,dt$$
$$\overline{u(t)} = \bar{u} = \text{const}$$
$$\Rightarrow\ \overline{u_T} = \frac{\bar{u}}{T}\int_0^T dt = \bar{u} \quad\leftarrow u_T \text{ is an unbiased estimator}$$
Does $\mathrm{var}\{u_T\} \to 0$ as $T \to \infty$? (i.e. $u_T \to \bar{u}$ as $T \to \infty$?)
Define the autocorrelation $C(\tau) \equiv \overline{u'(t)u'(t')}$, where $\tau = t' - t$.
This cannot depend on time itself (i.e. $t$) since the process is assumed stationary.
Why does $C(\tau) \to 0$ as $\tau \to \infty$?
(figures: $u(t)$ and its autocorrelation $c(\tau)$, starting at $c(0) = \overline{u'^2}$ and decaying with $\tau$; the integral scale $I$ is the width of the equivalent rectangle)
$$I \cdot c(0) = \int_0^{\infty} c(\tau)\,d\tau$$
Autocorrelation coefficient:
$$\rho(\tau) = \frac{c(\tau)}{c(0)} = \frac{\overline{u'(t)\,u'(t+\tau)}}{\overline{u'^2}}, \qquad I = \int_0^{\infty}\rho(\tau)\,d\tau$$
(figure: $\rho(\tau)$ with effective integral scales $I_{\mathrm{eff}}$ indicated, e.g. $I_{\mathrm{eff}} = \int_0^{\infty}\rho(\tau)\,d\tau$)
Back to $\mathrm{var}\{u_T\}$:
$$\mathrm{var}\{u_T\} = \frac{1}{T^2}\int_0^T\!\!\int_0^T c(t - t')\,dt'\,dt = \frac{\mathrm{var}\{u\}}{T^2}\int_0^T\!\!\int_0^T \rho(t' - t)\,dt\,dt'$$
After a partial integration,
$$\mathrm{var}\{u_T\} = \frac{2\,\mathrm{var}\{u\}}{T}\int_0^T \rho(\tau)\left(1 - \frac{\tau}{T}\right)d\tau$$
Since $\rho(\tau)\left(1 - \frac{\tau}{T}\right) \to \rho(\tau)$ as $T \to \infty$,
$$\mathrm{var}\{u_T\} \cong \frac{2\,\mathrm{var}\{u\}}{T}\int_0^T \rho(\tau)\,d\tau = \frac{2I}{T}\,\mathrm{var}\{u\} \qquad (\text{for } T \gg I)$$
cf. variability
$$\varepsilon^2 \equiv \frac{\mathrm{var}\{u_T\}}{\bar{u}^2} \cong \frac{2I}{T}\,\frac{\mathrm{var}\{u\}}{\bar{u}^2}$$
Therefore $u_T \to \bar{u}$ as $T \to \infty$.
Hence for stationary random processes we don’t need to perform many experiments to determine statistics.
Compare:
$$\varepsilon^2 = \frac{1}{N}\,\frac{\mathrm{var}\{u\}}{\bar{u}^2} \quad : N \text{ independent realizations}$$
$$\varepsilon^2 = \frac{2I}{T}\,\frac{\mathrm{var}\{u\}}{\bar{u}^2} \quad : I \text{ integral scale},\ T \text{ record time}$$
Obviously the effective number of independent realizations is
$$N_{\mathrm{eff}} = \frac{T}{2I}$$
The segments of our time record of two integral scales in length contribute to the average
as if they were statistically independent.
(figure: record $u(t)$ divided into segments of length $2I$)
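A sketch of this result, assuming NumPy and using an AR(1) process as a stand-in for a correlated stationary signal; its coefficient, record length, and number of records are arbitrary choices. The measured variability of the time average should be comparable to the prediction $\frac{2I}{T}\mathrm{var}\{u\}$.

```python
# Sketch: variance of a time average of a correlated (AR(1)) signal.
import numpy as np

rng = np.random.default_rng(5)
a, T, M = 0.99, 50_000, 200          # AR(1) coefficient, record length, records
I = 0.5 + a / (1.0 - a)              # discrete analogue of I = integral of rho,
                                     # with rho(k) = a^|k| (trapezoidal sum)
x = np.zeros((M, T))
for n in range(1, T):                # x_n = a x_{n-1} + white noise
    x[:, n] = a * x[:, n - 1] + rng.normal(size=M)

var_u = x.var()
var_uT = x.mean(axis=1).var()        # variability of the time average
print(var_uT, 2 * I / T * var_u)     # the two should be comparable
```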
Example. (figure: a flow of source diameter $D$ and exit velocity $U_E$; local width $\delta \cong 0.1x$)
Spatial integral scale: $\ell \cong 0.04x$
$$I = \frac{\ell}{U_c}, \qquad U_c = \text{local convection velocity}$$
$U_c \cong 0.6\,U_E$, with $U_E = 20$ m/s, $D = 0.1$ m
$$u' = 0.15\,U_E \ \Rightarrow\ \frac{\sigma_u}{\bar{u}} \cong \frac{0.15}{0.6} = 0.25$$
$$\varepsilon^2 = \frac{2I}{T}\left(\frac{\sigma_u}{\bar{u}}\right)^2$$
$$T \ge \frac{2I}{\varepsilon^2}\left(\frac{\sigma_u}{\bar{u}}\right)^2 = \frac{2(0.001)}{(0.01)^2}\,(0.25)^2 = 1.25\ \text{s}$$
e.g. a plume: $\ell \cong 0.04x$, $U_c \cong 0.5$ m/s
The integral scale, because of its definition, is mostly concerned with the scales containing most of the energy (variance ⇒ energy). We would also like to identify scales which are much larger or smaller than those containing the energy.
Taylor microscale
(figure: $\rho(\tau)$ near $\tau = 0$ with its osculating parabola)
$$\rho(\tau) \cong 1 + \rho''(0)\,\frac{\tau^2}{2!} = 1 - \frac{\tau^2}{\lambda_\tau^2} \quad\leftarrow \text{osculating parabola}$$
$$\lambda_\tau^2 = -\frac{2}{\rho''(0)}$$
What is the Taylor microscale physically?
$$\overline{\frac{du'(t)}{dt}\,\frac{du'(t')}{dt'}} = \frac{d^2}{dt\,dt'}\,\overline{u'(t)\,u'(t')} = \overline{u'^2}\,\frac{d^2}{dt\,dt'}\,\rho(t'-t)$$
Change variables:
$$\xi = t' + t,\quad \tau = t' - t \quad\Rightarrow\quad 2t' = \xi + \tau,\quad 2t = \xi - \tau, \qquad t' = t'(\xi,\tau),\ t = t(\xi,\tau)$$
$$\frac{\partial}{\partial t'} = \frac{\partial \xi}{\partial t'}\frac{\partial}{\partial \xi} + \frac{\partial \tau}{\partial t'}\frac{\partial}{\partial \tau} = \frac{\partial}{\partial \xi} + \frac{\partial}{\partial \tau}$$
$$\frac{\partial}{\partial t} = \frac{\partial \xi}{\partial t}\frac{\partial}{\partial \xi} + \frac{\partial \tau}{\partial t}\frac{\partial}{\partial \tau} = \frac{\partial}{\partial \xi} - \frac{\partial}{\partial \tau}$$
$$\frac{\partial^2}{\partial t\,\partial t'} = \frac{\partial^2}{\partial \xi^2} - \frac{\partial^2}{\partial \tau^2}$$
$$\frac{\partial^2}{\partial t\,\partial t'}\,\rho(t'-t) = \left(\frac{\partial^2}{\partial \xi^2} - \frac{\partial^2}{\partial \tau^2}\right)\rho(\tau) = \frac{1}{\overline{u'^2}}\,\overline{\frac{du'(t)}{dt}\,\frac{du'(t')}{dt'}}$$
hence
$$\rho''(0) = -\frac{1}{\overline{u'^2}}\,\overline{\left(\frac{du'}{dt}\right)^2} = -\frac{2}{\lambda_\tau^2}$$
$$\therefore\ \lambda_\tau^2 = \frac{2\,\overline{u'^2}}{\overline{\left(\dfrac{du'}{dt}\right)^2}}$$
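A sketch of estimating $\lambda_\tau$ from a sampled signal via this formula, assuming NumPy; the band-limited test signal built from random sinusoids is an arbitrary choice, and the derivative is approximated by finite differences.

```python
# Sketch: Taylor microscale lambda_tau^2 = 2 <u'^2> / <(du'/dt)^2>.
import numpy as np

rng = np.random.default_rng(6)
dt = 1e-3
t = np.arange(0.0, 100.0, dt)

# smooth test signal: a sum of sinusoids with random frequencies and phases
freqs = rng.uniform(0.5, 5.0, size=32)
phases = rng.uniform(0.0, 2 * np.pi, size=32)
u = np.sin(2 * np.pi * freqs[:, None] * t + phases[:, None]).sum(axis=0)

up = u - u.mean()                    # fluctuation u'
dudt = np.gradient(up, dt)           # du'/dt by central differences
lam = np.sqrt(2 * (up**2).mean() / (dudt**2).mean())
print(lam)                           # Taylor microscale estimate (seconds)
```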
Is λτ a time scale for the derivative of the signal?
$$\frac{du}{dt} \sim \frac{\Delta u}{\Delta t} \sim \frac{U_s}{\tau_s} \qquad
\begin{aligned}
U_s &: \text{velocity scale characterizing the change in } u \\
\tau_s &: \text{time scale over which the change in } u \text{ occurs}
\end{aligned}$$
Ans. No, because $u_{\mathrm{rms}} \sim \sqrt{\overline{u'^2}}$ is not the correct velocity scale for the change in the derivative. $\lambda_\tau$ alone tells you nothing about the time scale of the derivative of the signal.
However, when used in conjunction with the variance, it measures the mean square fluctuating derivative:
$$\overline{\left(\frac{du'}{dt}\right)^2} = \frac{2\,\overline{u'^2}}{\lambda_\tau^2}$$
$\lambda_\tau$ is approximately (exactly for a Gaussian process) the average distance between zero-crossings of the signal.
Spectral Analysis
The optimal way to represent a stationary random process is by harmonic functions (cosines and sines). The operation for doing this is called Fourier analysis. Define the Fourier transform of $u(t)$ as
$$F\{u(t)\} = \tilde{u}(\omega) \equiv \frac{1}{2\pi}\int_{-\infty}^{\infty} e^{-i\omega t}\,u(t)\,dt$$
For the F.T. to exist, the integral must exist.
Consider $u(t) = 1$:
$$F\{1\} = \frac{1}{2\pi}\int_{-\infty}^{\infty} e^{-i\omega t}\cdot 1\,dt = \frac{1}{2\pi}\,\frac{1}{-i\omega}\,e^{-i\omega t}\Big|_{-\infty}^{\infty} \quad\leftarrow \text{does not exist}$$
Appropriate function: one which goes to zero at infinity sufficiently rapidly that its F.T. exists.
$$F\{\underbrace{e^{-\frac{t^2}{2T^2}}}_{g_T(t)}\} = \frac{1}{2\pi}\int_{-\infty}^{\infty} e^{-i\omega t}\,e^{-\frac{t^2}{2T^2}}\,dt = \underbrace{\frac{T}{\sqrt{2\pi}}\,e^{-\frac{\omega^2 T^2}{2}}}_{\tilde{g}_T(\omega)}$$
(figure: $g_T(t)$, of width $\sim T$ and height 1, and its transform $\tilde{g}_T(\omega)$, of width $\sim 1/T$)
$$F\{1\} \underset{\mathrm{g.f.}}{=} \lim_{T\to\infty}\frac{1}{2\pi}\int_{-\infty}^{\infty} e^{-i\omega t}\,e^{-\frac{t^2}{2T^2}}\,dt = \lim_{T\to\infty}\frac{T}{\sqrt{2\pi}}\,e^{-\frac{\omega^2 T^2}{2}} = \delta(\omega)$$
We replace the function 1 with a generalized function which is the limit of a sequence of functions whose F.T.s exist. The F.T. of 1 is then defined to be the limit of the F.T.s of the sequence. This limit is said to be the F.T. of 1 in the sense of generalized functions.
$$\delta(\omega) = \lim_{T\to\infty}\frac{T}{\sqrt{2\pi}}\,e^{-\frac{\omega^2 T^2}{2}}$$
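A quick consistency check: each member of this sequence has unit area, as a delta sequence must,
$$\int_{-\infty}^{\infty}\frac{T}{\sqrt{2\pi}}\,e^{-\frac{\omega^2 T^2}{2}}\,d\omega = \frac{T}{\sqrt{2\pi}}\cdot\frac{\sqrt{2\pi}}{T} = 1 \quad\text{for every } T,$$
while its width shrinks like $1/T$ and its peak grows like $T$.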
$$F^{-1}\{\delta(\omega)\} \underset{\mathrm{g.f.}}{=} \lim_{T\to\infty}\int_{-\infty}^{\infty} e^{i\omega t}\cdot\frac{T}{\sqrt{2\pi}}\,e^{-\frac{\omega^2 T^2}{2}}\,d\omega = \lim_{T\to\infty} e^{-\frac{t^2}{2T^2}} = 1$$
$$\tilde{u}(\omega) \underset{\mathrm{g.f.}}{=} \frac{1}{2\pi}\int_{-\infty}^{\infty} e^{-i\omega t}\,u(t)\,dt$$
(diagram: $u(t)$ in the time domain and $\tilde{u}(\omega)$ in the frequency domain, connected by $F$ and $F^{-1}$)
Nothing obvious is gained from the transformation into the frequency domain, except that we have decomposed the stationary time series into its sine & cosine components.
Autocorrelation
$$\overline{u(t)\,u(t')} = \overline{\int_{-\infty}^{\infty} e^{i\omega t}\,\tilde{u}(\omega)\,d\omega \cdot \int_{-\infty}^{\infty} e^{i\omega' t'}\,\tilde{u}(\omega')\,d\omega'}$$
Since $u$ is real, $u = u^*$:
$$\overline{u(t)\,u(t')} = \overline{\int_{-\infty}^{\infty} e^{-i\omega t}\,\tilde{u}^*(\omega)\,d\omega \cdot \int_{-\infty}^{\infty} e^{i\omega' t'}\,\tilde{u}(\omega')\,d\omega'}$$
Therefore
$$\overline{\tilde{u}^*(\omega)\,\tilde{u}(\omega')} = S(\omega)\,\delta(\omega' - \omega)$$
$S(\omega)$ : the (power) spectrum, a deterministic function defined by this relation.
Properties of the spectrum
① F.T. of the autocorrelation:
$$S(\omega) = \frac{1}{2\pi}\int_{-\infty}^{\infty} e^{-i\omega\tau}\,c(\tau)\,d\tau \quad (\ge 0)$$
$$S(0) = \frac{1}{2\pi}\int_{-\infty}^{\infty} c(\tau)\,d\tau = \frac{1}{\pi}\int_0^{\infty} c(\tau)\,d\tau = \frac{\overline{u'^2}}{\pi}\,I$$
Recall
$$\tilde{u}(\omega) \underset{\mathrm{g.f.}}{=} \frac{1}{2\pi}\int_{-\infty}^{\infty} e^{-i\omega t}\,u(t)\,dt$$
Let
$$\tilde{u}_T(\omega) = \frac{1}{2\pi}\int_{-T}^{T} e^{-i\omega t}\,u(t)\,dt \quad : \text{finite-time estimator for } \tilde{u}(\omega)$$
$$S_T(\omega) = \frac{2\pi}{2T}\left|\tilde{u}_T(\omega)\right|^2 \quad : \text{spectral estimator}$$
$$\overline{S_T(\omega)} = \frac{1}{2\pi}\int_{-2T}^{2T} e^{-i\omega\tau}\,c(\tau)\left[1 - \frac{|\tau|}{2T}\right]d\tau$$
$$\Rightarrow\ \lim_{T\to\infty}\overline{S_T(\omega)} = S(\omega) \quad\leftarrow \text{the estimator is unbiased}$$
Cross correlation
$$\overline{u(t)\,v(t')} = C_{uv}(\tau), \qquad \tau = t' - t$$
$$C_{uv}(t'-t) = \overline{u(t)\,v(t')} = \int\!\!\int_{-\infty}^{\infty} e^{i(\omega' t' - \omega t)}\,\overline{\tilde{u}^*(\omega)\,\tilde{v}(\omega')}\,d\omega\,d\omega'$$
$$\overline{\tilde{u}^*(\omega)\,\tilde{v}(\omega')} = S_{uv}(\omega)\,\delta(\omega' - \omega) \quad : \text{cross spectrum}$$
As before,
$$C(\tau) = \overline{u(t)\,u(t+\tau)} = F^{-1}\{S(\omega)\}, \qquad S(\omega) = F\{C(\tau)\}$$
$$\overline{u^2} = \int_{-\infty}^{\infty} S(\omega)\,d\omega$$
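A periodogram sketch of this last relation, assuming NumPy; the white-noise test signal and record length are arbitrary, and the discrete estimator is scaled to match the $1/2\pi$ transform convention used in these notes.

```python
# Sketch: check <u^2> = integral of S(omega) for a sampled signal,
# with S estimated as (dt / (2 pi N)) |FFT(u)|^2.
import numpy as np

rng = np.random.default_rng(7)
N, dt = 2**16, 1e-2
u = rng.normal(size=N)                      # stationary test signal

U = np.fft.fft(u)
S = dt / (2 * np.pi * N) * np.abs(U)**2     # spectral estimator S_T(omega_k)
domega = 2 * np.pi / (N * dt)               # frequency spacing

print((u**2).mean(), S.sum() * domega)      # the two agree (Parseval)
```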