Digital Signal Processing (DSP) with Python Programming
Digital Signal Processing (DSP) with Python Programming - Maurice Charbit
A Few Functions of Python®
To get function documentation, use .__doc__ (e.g. print(range.__doc__)), help (e.g. help(zeros) or help('def')), or, in IPython, the ? suffix (e.g. range.count?)
– def: introduces a function definition
– if, else, elif: an if statement consists of a Boolean expression followed by one or more statements
– for: executes a sequence of statements multiple times
– while: repeats a statement or group of statements while a given condition is true
– 1j or complex: returns a complex value, e.g. a=1.3+1j*0.2 or a=complex(1.3,0.2)
Methods:
– Type A=array([0,4,12,3]), then type A. followed by the Tab key: the list of available methods appears, e.g. A.argmax() returns the index of the maximum. For help on a method, type e.g. A.dot? (in IPython).
Functions:
– int: converts a number or string to an integer
– len: returns the number of items in a container
– range: returns an object that produces a sequence of integers
– type: returns the object type
From numpy:
– abs: returns the absolute value of the argument
– arange: returns evenly spaced values within a given interval
– argwhere: finds the indices of array elements that are non-zero, grouped by element
– array: creates an array
– cos, sin, tan: respectively calculate the cosine, the sine and the tangent
– cosh: calculates the hyperbolic cosine
– cumsum: calculates the cumulative sum of array elements
– diff: calculates the n-th discrete difference along a given axis
– dot: dot product of two arrays
– exp, log: respectively calculate the exponential, the logarithm
– fft: computes the discrete Fourier transform (the function numpy.fft.fft)
– isinf: tests element-wise for positive or negative infinity
– isnan: tests element-wise for nan
– linspace: returns evenly spaced numbers over a specified interval
– loadtxt: loads data from a text file
– matrix: returns a matrix from an array-like object, or from a string of data
– max: returns the maximum of an array or maximum along an axis
– mean, std: respectively return the arithmetic mean and the standard deviation
– min: returns the minimum of an array or minimum along an axis
– nanmean, nanstd: respectively return the arithmetic mean and the standard deviation along a given axis while ignoring NaNs
– nansum: sum of array elements over a given axis, while ignoring NaNs
– ones: returns a new array of given shape and type, filled with ones
– pi: 3.141592653589793
– setdiff1d: returns the sorted, unique values of one array that are not in the other
– size: returns the number of elements along a given axis
– sort: returns a sorted copy of an array
– sqrt: computes the positive square-root of an array
– sum: sum of array elements over a given axis
– zeros: returns a new array of given shape and type, filled with zeroes
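As a quick illustration of a few of the numpy functions listed above (a minimal sketch, not taken from the book; the array values are arbitrary):

```python
import numpy as np

a = np.arange(0, 10, 2)          # evenly spaced values: array([0, 2, 4, 6, 8])
print(np.cumsum(a))              # cumulative sums: [ 0  2  6 12 20]
print(np.diff(a))                # first differences: [2 2 2 2]
print(np.argwhere(a > 4))        # indices of True elements, one row each: [[3] [4]]
print(np.setdiff1d(a, [2, 6]))   # sorted values of a not in the other array: [0 4 8]
```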
From numpy.linalg:
– eig: computes the eigenvalues and right eigenvectors of a square array
– pinv: computes the (Moore–Penrose) pseudo-inverse of a matrix
– inv: computes the (multiplicative) inverse of a matrix
– svd: computes Singular Value Decomposition
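A short check of these numpy.linalg routines on an arbitrary 2 × 2 matrix (an illustrative sketch, not from the book):

```python
import numpy as np
from numpy.linalg import eig, inv, pinv, svd

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])           # arbitrary invertible matrix
w, V = eig(A)                        # eigenvalues w, right eigenvectors (columns of V)
U, s, Vt = svd(A)                    # A = U @ diag(s) @ Vt
print(np.allclose(A @ V[:, 0], w[0] * V[:, 0]))  # eigen-pair check
print(np.allclose(A @ inv(A), np.eye(2)))        # inverse check
print(np.allclose(pinv(A), inv(A)))              # pinv equals inv when A is invertible
print(np.allclose(U @ np.diag(s) @ Vt, A))       # SVD reconstruction
```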
From numpy.random:
– rand: draws random samples from a uniform distribution over (0, 1)
– randn: draws random samples from the standard normal distribution
– randint: draws random integers from ‘low’ (inclusive) to ‘high’ (exclusive)
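A minimal illustration of these three generators (the seed value 0 is arbitrary, chosen only to make the draws reproducible):

```python
import numpy as np

np.random.seed(0)                      # arbitrary seed, for reproducibility
u = np.random.rand(5)                  # uniform over (0, 1)
g = np.random.randn(5)                 # standard normal
k = np.random.randint(0, 10, size=5)   # integers in [0, 10)
print((u >= 0).all() and (u < 1).all())   # True
print(k.min() >= 0 and k.max() < 10)      # True
```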
From scipy.stats:
(for the random distributions, use the methods .pdf, .cdf, .isf, .ppf, etc.)
– norm: Gaussian random distribution
– gamma: gamma random distribution
– f: Fisher random distribution
– t: Student’s random distribution
– chi2: chi-squared random distribution
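For instance, with the norm and chi2 distributions, the .ppf and .isf methods invert the cdf and the survival function 1 − cdf respectively (a minimal sketch, not from the book):

```python
from scipy.stats import norm, chi2

x975 = norm.ppf(0.975)       # about 1.96, the usual two-sided 95% Gaussian bound
print(x975)
print(norm.cdf(x975))        # back to 0.975: cdf inverts ppf
print(norm.isf(0.025))       # same point, reached via the survival function
print(chi2.ppf(0.95, 3))     # 95% quantile of a chi-squared with 3 d.o.f.
```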
From scipy.linalg:
– sqrtm: computes matrix square root
From matplotlib.pyplot:
– box, boxplot, clf, figure, hist, legend, plot, show, subplot
– title, text, xlabel, xlim, xticks, ylabel, ylim, yticks
Datasets:
– statsmodels.api.datasets.co2, statsmodels.api.datasets.nile, statsmodels.api.datasets.star98, statsmodels.api.datasets.heart
– sklearn.datasets.load_boston, sklearn.datasets.load_diabetes
– scipy.misc.ascent
From sympy:
– Symbol, Matrix, diff, Inverse, trace, simplify
1
Useful Maths
1.1. Basic concepts on probability
Without describing in detail the formalism of the probability theory, we simply remind the reader of useful concepts. However, we advise the reader to consult some of the many books with authority on the subject [BIL 12].
In probability theory, we consider a sample space Ω, which is the set of all possible outcomes ω, and a collection of its subsets with a structure of σ-algebra, the elements of which are called the events.
DEFINITION 1.1 (Random variable).– A real random variable X is a (measurable) mapping from Ω to ℝ:
[1.1] X : Ω → ℝ, ω ↦ X(ω)
DEFINITION 1.2 (Discrete random variable).– A random variable X is said to be discrete if it takes its values in a subset of ℝ that is at most countable. If {a0, …, an, …}, where n ∈ ℕ, denotes this set of values, the probability distribution of X is characterized by the sequence:
[1.2] pX(n) = P(X = an)
representing the probability that X is equal to the element an. These values are such that 0 ≤ pX(n) ≤ 1 and Σn pX(n) = 1.
This leads us to the probability for the random variable X to belong to the interval ]a, b]. It is given by:
[1.3] P(X ∈ ]a, b]) = Σ{n : a < an ≤ b} pX(n)
The cumulative distribution function (cdf) of the random variable X is defined, for x ∈ ℝ, by:
[1.4] FX(x) = P(X ≤ x)
It is a monotonic increasing function, with FX(−∞) = 0 and FX(+∞) = 1. Its graph is a staircase function, with jumps located at an with amplitude pX(n).
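The staircase cdf is easy to evaluate numerically by accumulating the probabilities pX(n); a minimal sketch with an arbitrary three-point distribution (the support points and probabilities are made-up example values):

```python
import numpy as np

a = np.array([0.0, 1.0, 2.5])    # support points a_n (arbitrary example)
p = np.array([0.2, 0.5, 0.3])    # probabilities p_X(n); they sum to 1

def F(x):
    """Staircase cdf: sum of p_X(n) over all a_n <= x."""
    return p[a <= x].sum()

print(F(-1.0))            # below the support: F = 0
print(round(F(1.0), 3))   # after the jump of amplitude p_X(1) at a_1: 0.2 + 0.5
print(round(F(10.0), 3))  # above the support: F = 1
```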
DEFINITION 1.3 (q-quantiles).– The k-th q-quantiles, associated with a given cumulative function F(x), are written as:
[1.5] xk = F⁻¹(k/q)
where k goes from 1 to q − 1. Therefore, the number of q-quantiles is q − 1.
The q-quantiles are the limits of the partition of the probability range into q intervals of equal probability 1/q. For example, the 2-quantile is the median.
More specifically, we have:
DEFINITION 1.4 (Median).– The median of the random variable X is the value M such that the cumulative function satisfies FX(M) = 1/2.
The following program computes the q-quantiles of the Gaussian distribution1. Each area under the probability density equals 1/q.
# -*- coding: utf-8 -*-
# Created on Fri Aug 12 09:11:27 2016
# ****** gaussianquantiles
# @author: maurice
from numpy import linspace, arange
from scipy.stats import norm
from matplotlib import pyplot as plt

x = linspace(-3, 3, 100)
y = norm.pdf(x)
plt.clf()
plt.plot(x, y)
q = 5
Qqi = arange(1, q) / q
quantiles = norm.ppf(Qqi)
# note: plt.hold is obsolete; successive plot calls now overlay by default
for iq in range(q - 1):
    print('%i-th of the %i-quantiles is %4.3e' % (iq + 1, q, quantiles[iq]))
    plt.plot([quantiles[iq], quantiles[iq]],
             [0.0, norm.pdf(quantiles[iq])], ':')
plt.title('each area is equal to %4.2f' % (1.0 / q))
plt.show()
DEFINITION 1.5 (Two discrete random variables).– Let {X, Y } be two discrete random variables, with respective sets of values {a0, …, an, …} and {b0, …, bk, …}. The joint probability distribution is characterized by the sequence of positive values:
[1.6] pXY(n, k) = P(X = an, Y = bk)
with 0 ≤ pXY(n, k) ≤ 1 and Σn Σk pXY(n, k) = 1.
This definition can easily be extended to the case of a finite number of random variables.
PROPERTY 1.1 (Marginal probability distribution).– Let {X, Y } be two discrete random variables with their joint probability distribution pXY (n, k). The respective marginal probability distributions of X and Y are written as:
[1.7] pX(n) = Σk pXY(n, k) and pY(k) = Σn pXY(n, k)
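Formula [1.7] translates directly into sums over the rows and columns of the joint probability array; a sketch with arbitrary example values (not from the book):

```python
import numpy as np

# An arbitrary joint distribution p_XY(n, k) on a 2 x 3 grid of values
pXY = np.array([[0.10, 0.20, 0.10],
                [0.15, 0.25, 0.20]])
pX = pXY.sum(axis=1)   # marginal of X: sum over k
pY = pXY.sum(axis=0)   # marginal of Y: sum over n
print(pX)              # close to [0.4, 0.6]
print(pY)              # close to [0.25, 0.45, 0.3]
```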
DEFINITION 1.6 (Continuous random variable).– A random variable is said to be continuous2 if its values belong to ℝ and if, for any real numbers a and b, the probability that X belongs to the interval ]a, b] is given by:
[1.8] P(X ∈ ]a, b]) = ∫_a^b pX(x) dx
where pX(x) is a function that must be positive or equal to zero and such that ∫_−∞^+∞ pX(x) dx = 1. pX(x) is called the probability density function (pdf) of X.
For any x ∈ ℝ, the cumulative distribution function (cdf) of the random variable X is defined by:
[1.9] FX(x) = ∫_−∞^x pX(u) du
It is a monotonic increasing function with FX(−∞) = 0 and FX(+∞) = 1. Notice that pX(x) also represents the derivative of FX(x) with respect to x.
DEFINITION 1.7 (Two continuous random variables).– Let {X, Y} be two random variables with possible values in ℝ². Their probability distribution is said to be continuous if, for any domain Δ of ℝ², the probability that the pair (X, Y) belongs to Δ is given by:
[1.10] P((X, Y) ∈ Δ) = ∫∫_Δ pXY(x, y) dx dy
where the function pXY(x, y) ≥ 0 and is such that ∫∫ pXY(x, y) dx dy = 1. pXY(x, y) is called the joint probability density function of the pair {X, Y}.
PROPERTY 1.2 (Marginal probability distributions).– Let {X, Y } be two continuous random variables with the joint probability distribution pXY (x, y). The respective marginal probability density functions of X and Y can be written as:
[1.11] pX(x) = ∫_−∞^+∞ pXY(x, y) dy and pY(y) = ∫_−∞^+∞ pXY(x, y) dx
It is also possible to have a mixed situation, where one of the two variables is discrete and the other is continuous. This leads to the following:
DEFINITION 1.8 (Mixed random variables).– Let X be a discrete random variable with possible values {a0, …, an, …} and Y a continuous random variable with possible values in ℝ. For any value an, and for any real number pair (a, b), the probability is given by:
[1.12] P(X = an, Y ∈ ]a, b]) = ∫_a^b pXY(n, y) dy
where the function pXY(n, y), with n ∈ ℕ and y ∈ ℝ, is ≥ 0 and verifies Σn ∫_−∞^+∞ pXY(n, y) dy = 1.
DEFINITION 1.9 (Two independent random variables).– Two random variables X and Y are said to be independent if and only if their joint probability distribution is the product of the marginal probability distributions. This can be expressed as:
– for two discrete random variables: pXY (n, k) = pX(n) pY (k)
– for two continuous random variables: pXY (x, y) = pX(x) pY (y)
– for two mixed random variables: pXY (n, y) = pX(n) pY (y)
where the marginal probability distributions are obtained using formulae [1.7] and [1.11].
It is worth noting that, knowing pXY (x, y), we can tell whether or not X and Y are independent. To do this, we need to calculate the marginal probability distributions and check that pXY (x, y) = pX(x)pY (y). If that is the case, then X and Y are independent.
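For discrete variables this check is immediate numerically: build a joint distribution as the outer product of two marginals, then verify the factorization (an illustrative sketch with arbitrary values, not from the book):

```python
import numpy as np

pXmarg = np.array([0.3, 0.7])        # arbitrary marginal of X
pYmarg = np.array([0.5, 0.2, 0.3])   # arbitrary marginal of Y
pXY = np.outer(pXmarg, pYmarg)       # joint of two independent variables

# Recover the marginals from the joint, then test the factorization
mX = pXY.sum(axis=1)
mY = pXY.sum(axis=0)
independent = np.allclose(pXY, np.outer(mX, mY))
print(independent)                   # True: X and Y are independent
```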
The generalization to more than two random variables is given by the following definition.
DEFINITION 1.10 (Independent random variables).– The random variables {X0, …, Xn−1} are jointly independent, if and only if their joint probability distribution is the product of their marginal probability distributions. This can be expressed as
[1.13] pX0X1…Xn−1(x0, x1, …, xn−1) = pX0(x0) pX1(x1) ⋯ pXn−1(xn−1)
where the marginal probability distributions are obtained as integrals with respect to (n − 1) variables, calculated from pX0X1…Xn−1(x0, x1, …, xn−1).
For example, the marginal probability distribution of X0 has the following expression:
pX0(x0) = ∫ ⋯ ∫ pX0X1…Xn−1(x0, x1, …, xn−1) dx1 ⋯ dxn−1
In practice, the following result is a simple method for determining whether or not random variables are independent:
PROPERTY 1.3.– If pX0X1…Xn−1(x0, x1, …, xn−1) is a product of n positive functions of the type f0(x0), f1(x1), …, fn−1(xn−1), then the variables are independent.
It should be noted that, if n random variables are pairwise independent, it does not necessarily mean that they are jointly independent.
DEFINITION 1.11 (Mathematical expectation).– Let X be a random variable and f(x) a function. The mathematical expectation of f(X) is the deterministic value denoted by E{f(X)} and defined as follows:
– for a discrete r.v., by E{f(X)} = Σn f(an) pX(n);
– for a continuous r.v., by E{f(X)} = ∫ f(x) pX(x) dx.
That can be extended to any number of random variables, e.g. for two random variables {X, Y } and a function f(x, y), the definition is:
– for 2 discrete r.v., by E{f(X, Y)} = Σn Σk f(an, bk) pXY(n, k);
– for 2 continuous r.v., by E{f(X, Y)} = ∫∫ f(x, y) pXY(x, y) dx dy,
provided that all expressions exist.
From [1.3] and [1.8], the probability for X to belong to (a, b) may be seen as the expectation of the indicator function 𝟙(X ∈ (a, b)).
PROPERTY 1.4.– If {X0, X1, …, Xn−1} are jointly independent, then for any integrable functions f0, f1, …, fn−1:
[1.14] E{f0(X0) f1(X1) ⋯ fn−1(Xn−1)} = E{f0(X0)} E{f1(X1)} ⋯ E{fn−1(Xn−1)}
DEFINITION 1.12 (Characteristic function).– The characteristic function of the probability distribution of the random variables {X0, X1, …, Xn−1} is the function of (u0, …, un−1) ∈ ℝn defined by:
[1.15] ϕX0…Xn−1(u0, …, un−1) = E{e^(j(u0X0 + ⋯ + un−1Xn−1))}
As |e^(juX)| = 1, the characteristic function exists and is continuous even if the moments E{Xk} do not exist. For example, the Cauchy probability distribution, whose probability density function is pX(x) = 1/(π(1 + x²)), has no moments but has the characteristic function e^(−|u|). Notice that |ϕX0…Xn−1(u0, …, un−1)| ≤ ϕX0…Xn−1(0, …, 0) = 1.
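This closed form can be checked numerically: integrating cos(ux) pX(x) over ℝ (the imaginary part vanishes by symmetry) should reproduce e^(−|u|). A sketch using scipy.integrate.quad (not from the book):

```python
import numpy as np
from scipy.integrate import quad

def phi(u):
    """Characteristic function of the Cauchy distribution, by numerical
    integration of cos(u x) p_X(x) over the whole real line."""
    val, _ = quad(lambda x: np.cos(u * x) / (np.pi * (1.0 + x ** 2)),
                  -np.inf, np.inf, limit=200)
    return val

for u in (0.0, 0.5, 1.0, 2.0):
    print(u, phi(u), np.exp(-abs(u)))   # the two values agree
```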
THEOREM 1.1 (Fundamental).– The random variables {X0, X1, …, Xn−1} are independent if and only if, for any point (u0, u1, …, un−1) of ℝn:
ϕX0…Xn−1(u0, u1, …, un−1) = ϕX0(u0) ϕX1(u1) ⋯ ϕXn−1(un−1)
Notice that the characteristic function ϕXk(uk) of the marginal probability distribution of Xk can be directly calculated using [1.15]. We have ϕXk(uk) = ϕX0…Xn−1(0, …, 0, uk, 0, …, 0).
DEFINITION 1.13 (Mean, variance).– The mean of the random variable X is defined as the first-order moment, i.e. E{X}. If the mean is equal to zero, the random variable is said to be centered. The variance of the random variable X is the quantity defined by:
[1.16] var(X) = E{(X − E{X})²}
The variance is always positive, and its square root is called the standard deviation.
As an exercise, we are going to show that, for any constants a and b:
[1.17] E{aX + b} = aE{X} + b
[1.18] var(aX + b) = a² var(X)
PROOF.– Expression [1.17] is a direct consequence of the linearity of the integral. From Y = aX + b and expression [1.17], we get var(Y) = E{(Y − E{Y})²} = E{a²(X − E{X})²} = a² var(X).
■
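The same two identities hold exactly for the empirical mean and variance of a data array, which gives a quick numerical sanity check (the data and constants are arbitrary example values):

```python
import numpy as np

x = np.array([1.0, 2.0, 4.0, 7.0])   # arbitrary data
a, b = 3.0, -2.0                     # arbitrary constants
y = a * x + b
print(np.isclose(y.mean(), a * x.mean() + b))   # True: [1.17] for the sample mean
print(np.isclose(y.var(), a ** 2 * x.var()))    # True: [1.18] for the sample variance
```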
A generalization of these two results to random vectors (whose components are random variables) will be given by property 1.7.
DEFINITION 1.14 (Covariance, correlation).– Let {X, Y } be two random variables. The covariance of X and Y is the quantity defined by:
[1.19] cov(X, Y) = E{(X − E{X})(Y − E{Y})*}
The correlation coefficient is the quantity defined by:
[1.20] ρ(X, Y) = cov(X, Y) / √(var(X) var(Y))
By applying the Schwarz inequality, we get |ρ(X, Y)| ≤ 1.
X and Y are said to be uncorrelated if cov(X, Y) = 0, i.e. if E{XY} = E{X}E{Y}, and therefore ρ(X, Y) = 0.
DEFINITION 1.15 (Mean vector and covariance matrix).– Let {X0, X1, …, Xn−1} be n random variables with respective means E{Xi}. The mean vector is the n-dimensional vector with the means E{Xi} as its components. The covariance matrix C is the n × n matrix with entries Cij = cov(Xi, Xj) for 0 ≤ i ≤ n − 1 and 0 ≤ j ≤ n − 1.
Using the matrix notation X = [X0 … Xn−1]ᵀ, the mean vector can be expressed as E{X} = [E{X0} … E{Xn−1}]ᵀ, the covariance matrix can be expressed as:
[1.21] C = E{(X − E{X})(X − E{X})ᴴ}
and the correlation matrix can be expressed as:
[1.22] R = D^(−1/2) C D^(−1/2)
with
[1.23] D = diag(C00, C11, …, Cn−1,n−1)
R is obtained by dividing each element Cij of C by √(Cii Cjj), provided that Cii ≠ 0. Therefore, Rii = 1 and |Rij| ≤ 1.
Notice that the diagonal elements of a covariance matrix represent the respective variances of the n random variables. They are therefore positive.
If random variables are uncorrelated, their covariance matrix is diagonal and their correlation matrix is the identity matrix.
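This element-wise normalization is a one-liner in numpy; a sketch with an arbitrary (positive) covariance matrix, not taken from the book:

```python
import numpy as np

# An arbitrary positive covariance matrix
C = np.array([[4.0, 1.2, 0.0],
              [1.2, 9.0, -0.6],
              [0.0, -0.6, 1.0]])
d = np.sqrt(np.diag(C))          # standard deviations sqrt(C_ii)
R = C / np.outer(d, d)           # R_ij = C_ij / sqrt(C_ii * C_jj)
print(np.diag(R))                # all ones
print(np.abs(R).max() <= 1.0)    # True: |R_ij| <= 1
```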
PROPERTY 1.5 (Positivity of the covariance matrix).– Any covariance matrix is positive, meaning that for any vector a ∈ ℂn, we have aᴴCa ≥ 0.
PROPERTY 1.6 (Bilinearity of the covariance).– Let {X0, X1, …, Xm−1} and {Y0, …, Yn−1} be random variables, and v0, …, vm−1, w0, …, wn−1 be arbitrary constants. Hence:
[1.24] cov(Σi viXi, Σj wjYj) = Σi Σj vi wj* cov(Xi, Yj)
PROOF.– Indeed, let V and W be the vectors of components vi and wj, respectively, and let A = VᵀX and B = WᵀY. By definition, cov(A, B) = E{(A − E{A})(B − E{B})*}. Replacing A and B with their respective expressions and using E{A} = VᵀE{X} and E{B} = WᵀE{Y}, we obtain, successively:
cov(A, B) = Vᵀ E{(X − E{X})(Y − E{Y})ᴴ} W* = Vᵀ C W*
thus demonstrating expression [1.24].
■
Using matrix notation, expression [1.24] is written in the following form:
[1.25] cov(VᵀX, WᵀY) = Vᵀ C W*
where C designates the covariance matrix of X and Y.
PROPERTY 1.7 (Linear transformation of a random vector).– Let {X0, …, Xn−1} be n random variables. We denote by X the random vector whose components are the Xi, by E{X} its mean vector and by CX its covariance matrix. Let {Y0, …, Yq−1} be q random variables obtained by the linear transformation:
Y = AX + b
where A is a q × n matrix and b is a non-random vector of the appropriate size. We then have E{Y} = AE{X} + b and CY = A CX Aᴴ.
DEFINITION 1.16 (White sequence).– Let {X0, …, Xn−1} be a set of n random variables. They are said to form a white sequence if var (Xi) = σ² and if cov (Xi, Xj) = 0 for i ≠ j. Hence, their covariance matrix can be expressed as follows:
C = σ²In, where In is the n × n identity matrix.
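A quick simulation: the empirical covariance matrix of a white sequence with σ = 2 should be close to σ²In (a sketch, not from the book; the seed is arbitrary, for reproducibility):

```python
import numpy as np

np.random.seed(0)                        # arbitrary seed, for reproducibility
sigma = 2.0
X = sigma * np.random.randn(3, 100000)   # 3 white variables, 100000 realizations
C = np.cov(X)                            # empirical covariance matrix
print(np.max(np.abs(C - sigma ** 2 * np.eye(3))))   # small: C is close to 4*I
```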
PROPERTY 1.8 (Independence ⇒ non-correlation).– Let {X0, …, Xn−1} be n independent random variables, then they are uncorrelated. Usually, the converse statement is false.
1.2. Conditional expectation
DEFINITION 1.17 (Conditional expectation).– Let X be a random variable and Y a random vector taking values, respectively, in χ ⊂ ℝ and 𝒴 ⊂ ℝq. Let pXY(x, y) be their joint probability density. The conditional expectation of X given Y is a (measurable) real valued function g(Y) such that, for any other real valued function h(Y), we have:
[1.26] E{(X − g(Y))²} ≤ E{(X − h(Y))²}
g(Y) is commonly denoted by E{X|Y}.
PROPERTY 1.9 (Conditional probability distribution).– We consider a random variable X and a random vector Y taking values, respectively, in χ ⊂ ℝ and 𝒴 ⊂ ℝq, with joint probability density pXY(x, y). Then, E{X|Y} = g(Y) with:
g(y) = ∫χ x pX|Y(x, y) dx
where
[1.27] pX|Y(x, y) = pXY(x, y) / pY(y)
pX|Y(x, y) is called the conditional probability distribution of X given Y.
PROPERTY 1.10.– The conditional expectation verifies the following properties:
1) linearity: E{a1X1 + a2X2|Y} = a1E{X1|Y} + a2E{X2|Y};
2) orthogonality: E{(X − E{X|Y})h(Y)} = 0 for any function h : 𝒴 → ℝ;
3) E{h(Y)f(X)|Y} = h(Y)E{f(X)|Y}, for all functions f : χ → ℝ and h : 𝒴 → ℝ;
4) E{E{f(X, Y)|Y}} = E{f(X, Y)} for any function f : χ × 𝒴 → ℝ; specifically, E{E{X|Y}} = E{X};
5) refinement by conditioning: it can be shown (see page 14) that
[1.28] var(E{X|Y}) ≤ var(X)
This has a clear meaning: the variance is reduced by conditioning;
6) if X and Y are independent, then E{f(X)|Y} = E{f(X)}. Specifically, E{X|Y} = E{X}. The converse is not true;
7) E{X|Y} = X if and only if X is a function of Y.
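A small simulation illustrating points 4, 5 and 6, under the assumed model X = Y + N with Y and N independent standard normal variables, for which E{X|Y} = Y (an illustrative sketch, not from the book):

```python
import numpy as np

np.random.seed(1)                # arbitrary seed, for reproducibility
n = 100000
Y = np.random.randn(n)           # Y ~ N(0, 1)
N = np.random.randn(n)           # noise, independent of Y
X = Y + N                        # for this model, E{X|Y} = Y
print(np.var(Y) < np.var(X))             # True: var(E{X|Y}) <= var(X), see [1.28]
print(abs(np.mean(Y) - np.mean(X)) < 0.02)  # True: E{E{X|Y}} = E{X}, both near 0
```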
1.3. Projection theorem
DEFINITION 1.18 (Dot product).– Let ℋ be a vector space over ℂ. The dot product is a mapping (X, Y) ∈ ℋ × ℋ ↦ (X, Y) ∈ ℂ
which verifies the following properties:
– (X, Y) = (Y, X)*;
– (αX + βY, Z) = α(X, Z) + β(Y, Z);
– (X, X) ≥ 0. The equality occurs if and only if X = 0.
A vector space is a Hilbert space if it is complete with respect to its dot product3. The norm of X is defined by ‖X‖ = (X, X)^(1/2) and the distance between