Computational Biology As Another Scienti
Iterative method
Rate of convergence — the speed at which a convergent sequence
approaches its limit
Order of accuracy — rate at which numerical solution of differential equation
converges to exact solution
Series acceleration — methods to accelerate the speed of convergence of a
series
Aitken's delta-squared process — most useful for linearly converging sequences (see the sketch below)
Minimum polynomial extrapolation — for vector sequences
Richardson extrapolation
Shanks transformation — similar to Aitken's delta-squared process, but
applied to the partial sums
Van Wijngaarden transformation — for accelerating the convergence of an
alternating series
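As a concrete illustration of series acceleration, the following sketch (plain Python, with illustrative names) applies Aitken's delta-squared process to the partial sums of the slowly converging Leibniz series for π/4; it is a minimal example, not a library routine.

    def aitken(seq):
        # Aitken's delta-squared process: from three consecutive terms
        # x0, x1, x2 form x2 - (x2 - x1)^2 / (x2 - 2*x1 + x0).
        out = []
        for i in range(len(seq) - 2):
            x0, x1, x2 = seq[i], seq[i + 1], seq[i + 2]
            denom = x2 - 2 * x1 + x0
            out.append(x2 if denom == 0 else x2 - (x2 - x1) ** 2 / denom)
        return out

    # Partial sums of the Leibniz series 1 - 1/3 + 1/5 - ... = pi/4.
    sums, s = [], 0.0
    for k in range(12):
        s += (-1) ** k / (2 * k + 1)
        sums.append(s)

    print(4 * sums[-1])           # crude estimate of pi
    print(4 * aitken(sums)[-1])   # accelerated estimate, noticeably closer to pi

Applying the transform again to the accelerated sequence improves the estimate further.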
Abramowitz and Stegun — book containing formulas and tables of many
special functions
Digital Library of Mathematical Functions — successor of book by
Abramowitz and Stegun
Curse of dimensionality
Local convergence and global convergence — whether you need a good
initial guess to get convergence
Superconvergence
Discretization
Difference quotient
Loss of significance
Numerical error
Numerical stability
Error propagation:
Propagation of uncertainty
Significance arithmetic
Residual (numerical analysis)
Relative change and difference — the relative difference between x and y is |
x − y| / max(|x|, |y|)
Significant figures
False precision — giving more significant figures than appropriate
Truncation error — error committed by doing only a finite number of steps
Affine arithmetic
Elementary and special functions
Summation:
Kahan summation algorithm (see the sketch below)
Pairwise summation — slightly worse than Kahan summation but cheaper
Binary splitting
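A minimal sketch of compensated (Kahan) summation in plain Python; the function name is illustrative. The compensation variable carries the low-order bits that each naive addition would otherwise discard.

    def kahan_sum(values):
        total = 0.0
        c = 0.0                  # running compensation for lost low-order bits
        for v in values:
            y = v - c            # correct the next term by the stored error
            t = total + y        # low-order bits of y may be lost here
            c = (t - total) - y  # recover what was lost
            total = t
        return total

    data = [0.1] * 10**6
    print(sum(data), kahan_sum(data))  # the compensated result is more accurate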
Multiplication:
Multiplication algorithm — general discussion, simple methods
Karatsuba algorithm — the first algorithm which is faster than
straightforward multiplication
Toom–Cook multiplication — generalization of Karatsuba multiplication
Schönhage–Strassen algorithm — based on Fourier transform, asymptotically
very fast
Fürer's algorithm — asymptotically slightly faster than Schönhage–Strassen
Division algorithm — for computing quotient and/or remainder of two
numbers
Long division
Restoring division
Non-restoring division
SRT division
Newton–Raphson division: uses Newton's method to find the reciprocal of D, and multiplies that reciprocal by N to find the final quotient Q (see the sketch below).
Goldschmidt division
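A minimal sketch of Newton–Raphson division, assuming the divisor has already been scaled into [0.5, 1) as hardware implementations do; the initial estimate 48/17 − 32/17 · D is the standard linear seed, and the function names are illustrative.

    def reciprocal(d, iterations=5):
        # Newton's method on f(x) = 1/x - d gives x <- x * (2 - d * x);
        # each step roughly doubles the number of correct digits.
        x = 48.0 / 17.0 - 32.0 / 17.0 * d   # linear initial estimate for d in [0.5, 1)
        for _ in range(iterations):
            x = x * (2.0 - d * x)
        return x

    def divide(n, d):
        # Quotient Q = N * (1/D).
        return n * reciprocal(d)

    print(divide(3.0, 0.7), 3.0 / 0.7)   # both ~4.285714285714286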
Exponentiation:
Exponentiation by squaring (see the sketch below)
Addition-chain exponentiation
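A short sketch of exponentiation by squaring (square-and-multiply) for non-negative integer exponents; illustrative code only.

    def power(base, exponent):
        # Process the exponent bit by bit: square for every bit, and
        # multiply the result in when the bit is set.
        # Uses O(log exponent) multiplications.
        result = 1
        while exponent > 0:
            if exponent & 1:
                result *= base
            base *= base
            exponent >>= 1
        return result

    print(power(3, 13), 3 ** 13)   # both 1594323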
Multiplicative inverse algorithms: for computing a number's multiplicative inverse (reciprocal).
Newton's method
Polynomials:
Horner's method (see the sketch below)
Estrin's scheme — modification of the Horner scheme with more possibilities
for parallelization
Clenshaw algorithm
De Casteljau's algorithm
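A minimal sketch of Horner's method; the coefficient order (highest degree first) is a choice made for this example.

    def horner(coeffs, x):
        # Evaluate c0*x^n + c1*x^(n-1) + ... + cn with n multiplications
        # and n additions by repeatedly folding in one coefficient.
        result = 0.0
        for c in coeffs:
            result = result * x + c
        return result

    # 2x^3 - 6x^2 + 2x - 1 at x = 3 equals 5.
    print(horner([2.0, -6.0, 2.0, -1.0], 3.0))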
Square roots and other roots:
Integer square root (see the sketch below)
nth root algorithm
Shifting nth root algorithm — similar to long division
hypot — the function (x^2 + y^2)^(1/2)
Alpha max plus beta min algorithm — approximates hypot(x,y)
Fast inverse square root — calculates 1 / √x using details of the IEEE
floating-point system
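A small sketch of an integer square root via Newton's (Heron's) iteration on integers; recent Python versions ship an equivalent math.isqrt, so this is purely illustrative.

    def integer_sqrt(n):
        # Largest integer r with r*r <= n, using integer-only Newton steps.
        if n < 0:
            raise ValueError("negative input")
        if n == 0:
            return 0
        x = n
        y = (x + 1) // 2
        while y < x:
            x = y
            y = (x + n // x) // 2
        return x

    print(integer_sqrt(10**20 + 2024))   # 10000000000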
Elementary functions (exponential, logarithm, trigonometric functions):
Trigonometric tables — different methods for generating them
CORDIC — shift-and-add algorithm using a table of arc tangents (see the sketch below)
BKM algorithm — shift-and-add algorithm using a table of logarithms and
complex numbers
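A floating-point sketch of rotation-mode CORDIC for sine and cosine, valid for angles in [−π/2, π/2]; real implementations use fixed-point arithmetic so that the multiplications by 2^(−i) become shifts, and the table size here is an arbitrary choice.

    import math

    def cordic_sin_cos(theta, n=40):
        # Table of rotation angles atan(2^-i) and the combined scaling factor K.
        angles = [math.atan(2.0 ** -i) for i in range(n)]
        k = 1.0
        for i in range(n):
            k /= math.sqrt(1.0 + 2.0 ** (-2 * i))
        x, y, z = 1.0, 0.0, theta
        for i in range(n):
            d = 1.0 if z >= 0.0 else -1.0     # rotate toward the remaining angle
            x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
            z -= d * angles[i]
        return x * k, y * k                   # (cos(theta), sin(theta))

    print(cordic_sin_cos(0.5))
    print(math.cos(0.5), math.sin(0.5))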
Gamma function:
Lanczos approximation
Spouge's approximation — modification of Stirling's approximation; easier to
apply than Lanczos
AGM method — computes arithmetic–geometric mean; related methods
compute special functions
FEE method (Fast E-function Evaluation) — fast summation of series like the power series for e^x
Gal's accurate tables — table of function values with unequal spacing to
reduce round-off error
Spigot algorithm — algorithms that can compute individual digits of a real
number
Approximations of π:
Liu Hui's π algorithm — first algorithm that can compute π to arbitrary
precision
Leibniz formula for π — alternating series with very slow convergence
Wallis product — infinite product converging slowly to π/2
Viète's formula — more complicated infinite product which converges faster
Gauss–Legendre algorithm — iteration which converges quadratically to π, based on arithmetic–geometric mean (see the sketch below)
Borwein's algorithm — iteration which converges quartically to 1/π, and
other algorithms
Chudnovsky algorithm — fast algorithm that calculates a hypergeometric
series
Bailey–Borwein–Plouffe formula — can be used to compute individual
hexadecimal digits of π
Bellard's formula — faster version of Bailey–Borwein–Plouffe formula
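A sketch of the Gauss–Legendre (arithmetic–geometric mean) iteration for π using Python's decimal module; the digit count and iteration count below are arbitrary choices, but each iteration roughly doubles the number of correct digits.

    from decimal import Decimal, getcontext

    def gauss_legendre_pi(digits=50, iterations=6):
        getcontext().prec = digits + 10          # a few guard digits
        a = Decimal(1)
        b = Decimal(1) / Decimal(2).sqrt()
        t = Decimal(1) / 4
        p = Decimal(1)
        for _ in range(iterations):
            a_next = (a + b) / 2
            b = (a * b).sqrt()
            t -= p * (a - a_next) ** 2
            a = a_next
            p *= 2
        return (a + b) ** 2 / (4 * t)

    print(gauss_legendre_pi())    # pi to roughly 50 digits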
Gaussian elimination
Row echelon form — matrix in which all entries below a nonzero entry are
zero
Bareiss algorithm — variant which ensures that all entries remain integers if
the initial matrix has integer entries
Tridiagonal matrix algorithm — simplified form of Gaussian elimination for
tridiagonal matrices
LU decomposition — write a matrix as a product of an upper- and a lower-
triangular matrix
Crout matrix decomposition
LU reduction — a special parallelized version of an LU decomposition algorithm
Block LU decomposition
Cholesky decomposition — for solving a system with a positive definite
matrix
Minimum degree algorithm
Symbolic Cholesky decomposition
Iterative refinement — procedure to turn an inaccurate solution into a more accurate one
Direct methods for sparse matrices:
Frontal solver — used in finite element methods
Nested dissection — for symmetric matrices, based on graph partitioning
Levinson recursion — for Toeplitz matrices
SPIKE algorithm — hybrid parallel solver for narrow-banded matrices
Cyclic reduction — eliminate even or odd rows or columns, repeat
Iterative methods:
Jacobi method (see the sketch below)
Gauss–Seidel method
Successive over-relaxation (SOR) — a technique to accelerate the Gauss–
Seidel method
Symmetric successive overrelaxation (SSOR) — variant of SOR for symmetric
matrices
Backfitting algorithm — iterative procedure used to fit a generalized additive
model, often equivalent to Gauss–Seidel
Modified Richardson iteration
Conjugate gradient method (CG) — assumes that the matrix is positive
definite
Derivation of the conjugate gradient method
Nonlinear conjugate gradient method — generalization for nonlinear
optimization problems
Biconjugate gradient method (BiCG)
Biconjugate gradient stabilized method (BiCGSTAB) — variant of BiCG with
better convergence
Conjugate residual method — similar to CG but only assumes that the matrix is symmetric
Generalized minimal residual method (GMRES) — based on the Arnoldi
iteration
Chebyshev iteration — avoids inner products but needs bounds on the
spectrum
Stone's method (SIP – Strongly Implicit Procedure) — uses an incomplete LU
decomposition
Kaczmarz method
Preconditioner
Incomplete Cholesky factorization — sparse approximation to the Cholesky
factorization
Incomplete LU factorization — sparse approximation to the LU factorization
Uzawa iteration — for saddle point problems
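As an illustration of the stationary iterative methods above, here is a minimal Jacobi iteration; it assumes NumPy is available and that the matrix is, for example, strictly diagonally dominant so that the iteration converges.

    import numpy as np

    def jacobi(A, b, iterations=100):
        # Solve each equation for its diagonal unknown, using the
        # previous iterate for the other unknowns.
        A = np.asarray(A, dtype=float)
        b = np.asarray(b, dtype=float)
        x = np.zeros_like(b)
        D = np.diag(A)                 # diagonal entries
        R = A - np.diagflat(D)         # off-diagonal part
        for _ in range(iterations):
            x = (b - R @ x) / D
        return x

    A = [[4.0, 1.0, 0.0],
         [1.0, 4.0, 1.0],
         [0.0, 1.0, 4.0]]
    b = [5.0, 6.0, 5.0]
    print(jacobi(A, b))                # approaches [1, 1, 1]
    print(np.linalg.solve(A, b))       # direct solution for comparison

The Gauss–Seidel method differs only in that it uses updated components as soon as they are available within a sweep.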
Underdetermined and overdetermined systems (systems that have no or
more than one solution):
Numerical computation of null space — find all solutions of an
underdetermined system
Moore–Penrose pseudoinverse — for finding solution with smallest 2-norm
(for underdetermined systems) or smallest residual
Sparse approximation — for finding the sparsest solution (i.e., the solution
with as many zeros as possible)
Bilinear interpolation
Trilinear interpolation
Bicubic interpolation
Tricubic interpolation
Padua points — set of points in R^2 with unique polynomial interpolant and minimal growth of Lebesgue constant
Hermite interpolation
Birkhoff interpolation
Abel–Goncharov interpolation
Approximation theory:
Orders of approximation
Lebesgue's lemma
Curve fitting
Vector field reconstruction
Modulus of continuity — measures smoothness of a function
Least squares (function approximation) — minimizes the error in the L2-norm
Minimax approximation algorithm — minimizes the maximum error over an
interval (the L∞-norm)
Equioscillation theorem — characterizes the best approximation in the L∞-
norm
Unisolvent point set — function from given function space is determined
uniquely by values on such a set of points
Stone–Weierstrass theorem — continuous functions can be approximated
uniformly by polynomials, or certain other function spaces
Approximation by polynomials:
Linear approximation
Bernstein polynomial — basis of polynomials useful for approximating a function (see the sketch below)
Bernstein's constant — error when approximating |x| by a polynomial
Remez algorithm — for constructing the best polynomial approximation in
the L∞-norm
Bernstein's inequality (mathematical analysis) — bound on maximum of
derivative of polynomial in unit disk
Mergelyan's theorem — generalization of Stone–Weierstrass theorem for
polynomials
Müntz–Szász theorem — variant of Stone–Weierstrass theorem for
polynomials if some coefficients have to be zero
Bramble–Hilbert lemma — upper bound on Lp error of polynomial
approximation in multiple dimensions
Discrete Chebyshev polynomials — polynomials orthogonal with respect to a
discrete measure
Favard's theorem — polynomials satisfying suitable 3-term recurrence
relations are orthogonal polynomials
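A small sketch of approximation by Bernstein polynomials on [0, 1], which underlies one proof of the Weierstrass approximation theorem; the test function and names are chosen only for illustration.

    from math import comb

    def bernstein(f, n, x):
        # Degree-n Bernstein approximation:
        # sum of f(k/n) * C(n, k) * x^k * (1 - x)^(n - k).
        return sum(f(k / n) * comb(n, k) * x ** k * (1 - x) ** (n - k)
                   for k in range(n + 1))

    f = lambda x: abs(x - 0.5)    # continuous but not differentiable at 0.5
    for n in (10, 100, 1000):
        print(n, bernstein(f, n, 0.5))   # converges (slowly) to f(0.5) = 0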
Approximation by Fourier series / trigonometric polynomials:
Jackson's inequality — upper bound for best approximation by a
trigonometric polynomial
Bernstein's theorem (approximation theory) — a converse to Jackson's
inequality
Fejér's theorem — Cesàro means of partial sums of Fourier series converge
uniformly for continuous periodic functions
Erdős–Turán inequality — bounds distance between probability and
Lebesgue measure in terms of Fourier coefficients
Different approximations:
Moving least squares
Padé approximant
Padé table — table of Padé approximants
Hartogs–Rosenthal theorem — continuous functions can be approximated
uniformly by rational functions on a set of Lebesgue measure zero
Szász–Mirakyan operator — approximation by e^(−nx) x^k on a semi-infinite interval
Szász–Mirakjan–Kantorovich operator
Baskakov operator — generalize Bernstein polynomials, Szász–Mirakyan
operators, and Lupas operators
Favard operator — approximation by sums of Gaussians
Surrogate model — application: replacing a function that is hard to evaluate
by a simpler function
Constructive function theory — field that studies connection between degree
of approximation and smoothness
Universal differential equation — differential–algebraic equation whose
solutions can approximate any continuous function
Fekete problem — find N points on a sphere that minimize some kind of
energy
Carleman's condition — condition guaranteeing that a measure is uniquely
determined by its moments
Krein's condition — condition that exponential sums are dense in weighted
L2 space
Lethargy theorem — about distance of points in a metric space from
members of a sequence of subspaces
Wirtinger's representation and projection theorem
Extrapolation:
Linear predictive analysis — linear extrapolation
Unisolvent functions — functions for which the interpolation problem has a
unique solution
Regression analysis
Isotonic regression
Curve-fitting compaction
Interpolation (computer graphics)
Finding roots of nonlinear equations
See Numerical linear algebra for linear equations
Root-finding algorithm — algorithms for solving the equation f(x) = 0 (see the sketch at the end of this section)
Aberth method
Bairstow's method
Durand–Kerner method
Graeffe's method
Jenkins–Traub algorithm — fast, reliable, and widely used
Laguerre's method
Splitting circle method
Analysis:
Wilkinson's polynomial
Numerical continuation — tracking a root as one parameter in the equation
changes
Piecewise linear continuation
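For the scalar case, the classic example of a root-finding algorithm is Newton's method; the sketch below is a minimal, illustrative implementation with a crude stopping test, and it only converges locally (a good initial guess is required).

    def newton(f, dfdx, x0, tol=1e-12, max_iter=50):
        # Newton's method: x <- x - f(x)/f'(x); quadratic convergence
        # near a simple root with a nonzero derivative.
        x = x0
        for _ in range(max_iter):
            fx = f(x)
            if abs(fx) < tol:
                break
            x -= fx / dfdx(x)
        return x

    # Positive root of x^2 - 2 = 0, i.e. sqrt(2).
    print(newton(lambda x: x * x - 2.0, lambda x: 2.0 * x, 1.0))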
Optimization
Mathematical optimization — algorithm for finding maxima or minima of a
given function
Basic concepts
Active set
Candidate solution
Constraint (mathematics)
Constrained optimization — studies optimization problems with constraints
Binary constraint — a constraint that involves exactly two variables
Corner solution
Feasible region — contains all solutions that satisfy the constraints but may
not be optimal
Global optimum and Local optimum
Maxima and minima
Slack variable
Continuous optimization
Discrete optimization
Decompositions:
Benders' decomposition
Dantzig–Wolfe decomposition
Theory of two-level planning
Variable splitting
Basic solution (linear programming) — solution at vertex of feasible region
Fourier–Motzkin elimination
Hilbert basis (linear programming) — set of integer vectors in a convex cone
which generate all integer vectors in the cone
LP-type problem
Linear inequality
Vertex enumeration problem — list all vertices of the feasible set
Convex optimization
Convex optimization
Quadratic programming
Linear least squares (mathematics)
Total least squares
Frank–Wolfe algorithm
Sequential minimal optimization — breaks up large QP problems into a series
of smallest possible QP problems
Bilinear program
Basis pursuit — minimize L1-norm of vector subject to linear constraints
Basis pursuit denoising (BPDN) — regularized version of basis pursuit
In-crowd algorithm — algorithm for solving basis pursuit denoising
Linear matrix inequality
Conic optimization
Semidefinite programming
Second-order cone programming
Sum-of-squares optimization
Quadratic programming (see above)
Bregman method — row-action method for strictly convex optimization
problems
Proximal gradient method — uses a splitting of the objective function into a sum of possibly non-differentiable pieces
Subgradient method — extension of steepest descent for problems with a
non-differentiable objective function
Biconvex optimization — generalization where objective function and
constraint set can be biconvex
Nonlinear programming
Nonlinear programming — the most general optimization problem in the
usual framework
Special cases of nonlinear programming:
See Linear programming and Convex optimization above
Geometric programming — problems involving signomials or posynomials
Signomial — similar to polynomials, but exponents need not be integers
Posynomial — a signomial with positive coefficients
Quadratically constrained quadratic program
Linear-fractional programming — objective is ratio of linear functions,
constraints are linear
Fractional programming — objective is ratio of nonlinear functions,
constraints are linear
Nonlinear complementarity problem (NCP) — find x such that x ≥ 0, f(x) ≥ 0 and x^T f(x) = 0
Least squares — the objective function is a sum of squares
Non-linear least squares
Gauss–Newton algorithm
BHHH algorithm — variant of Gauss–Newton in econometrics
Generalized Gauss–Newton method — for constrained nonlinear least-
squares problems
Levenberg–Marquardt algorithm
Iteratively reweighted least squares (IRLS) — solves a weighted least-squares problem at every iteration
Partial least squares — statistical techniques similar to principal components
analysis
Non-linear iterative partial least squares (NIPALS)
Mathematical programming with equilibrium constraints — constraints
include variational inequalities or complementarities
Univariate optimization:
Golden section search (see the sketch below)
Successive parabolic interpolation — based on quadratic interpolation
through the last three iterates
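A minimal sketch of golden section search for a unimodal function on an interval; the tolerance and test function are arbitrary choices for the example.

    import math

    def golden_section(f, a, b, tol=1e-8):
        # Shrink the bracket [a, b] by the inverse golden ratio each step,
        # reusing one interior function value per iteration.
        invphi = (math.sqrt(5.0) - 1.0) / 2.0
        c = b - invphi * (b - a)
        d = a + invphi * (b - a)
        fc, fd = f(c), f(d)
        while b - a > tol:
            if fc < fd:                     # minimum lies in [a, d]
                b, d, fd = d, c, fc
                c = b - invphi * (b - a)
                fc = f(c)
            else:                           # minimum lies in [c, b]
                a, c, fc = c, d, fd
                d = a + invphi * (b - a)
                fd = f(d)
        return (a + b) / 2.0

    print(golden_section(lambda x: (x - 2.0) ** 2 + 1.0, 0.0, 5.0))  # ~2.0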
Guess value — the initial guess for a solution with which an algorithm starts
Line search
Backtracking line search
Wolfe conditions
Gradient method — method that uses the gradient as the search direction
Gradient descent (see the sketch below)
Stochastic gradient descent
Landweber iteration — mainly used for ill-posed problems
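A bare-bones sketch of gradient descent with a fixed step size; the quadratic objective and the step size are chosen only so that the example converges, and in practice the step would come from a line search such as backtracking with the Wolfe conditions listed above.

    def gradient_descent(grad, x0, step=0.1, iterations=200):
        # Move against the gradient by a fixed step each iteration.
        x = list(x0)
        for _ in range(iterations):
            g = grad(x)
            x = [xi - step * gi for xi, gi in zip(x, g)]
        return x

    # Minimize f(x, y) = (x - 1)^2 + 2*(y + 2)^2; the gradient is given analytically.
    grad = lambda p: [2.0 * (p[0] - 1.0), 4.0 * (p[1] + 2.0)]
    print(gradient_descent(grad, [0.0, 0.0]))   # approaches [1.0, -2.0]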
Successive linear programming (SLP) — replace problem by a linear
programming problem, solve that, and repeat
Sequential quadratic programming (SQP) — replace problem by a quadratic
programming problem, solve that, and repeat
Newton's method in optimization
See also under Newton algorithm in the section Finding roots of nonlinear
equations
Nonlinear conjugate gradient method
Derivative-free methods
Coordinate descent — move in one of the coordinate directions
Adaptive coordinate descent — adapt coordinate directions to objective
function
Random coordinate descent — randomized version
Nelder–Mead method
Pattern search (optimization)
Powell's method — derivative-free method based on conjugate directions
Rosenbrock methods — derivative-free method, similar to Nelder–Mead but
with guaranteed convergence
Augmented Lagrangian method — replaces constrained problems by
unconstrained problems with a term added to the objective function
Ternary search
Tabu search
Guided Local Search — modification of search algorithms which builds up
penalties during a search
Reactive search optimization (RSO) — the algorithm adapts its parameters
automatically
MM algorithm — majorize-minimization, a wide framework of methods
Least absolute deviations
Expectation–maximization algorithm
Ordered subset expectation maximization
Adaptive projected subgradient method
Nearest neighbor search
Space mapping — uses "coarse" (ideal or low-fidelity) and "fine" (practical or
high-fidelity) models
Optimal control and infinite-dimensional optimization
Optimal control
Pontryagin's minimum principle — infinite-dimensional version of Lagrange
multipliers
Costate equations — equation for the "Lagrange multipliers" in Pontryagin's
minimum principle
Hamiltonian (control theory) — minimum principle says that this function
should be minimized
Types of problems:
Linear-quadratic regulator — system dynamics is a linear differential
equation, objective is quadratic
Linear-quadratic-Gaussian control (LQG) — system dynamics is a linear SDE
with additive noise, objective is quadratic
Optimal projection equations — method for reducing dimension of LQG
control problem
Algebraic Riccati equation — matrix equation occurring in many optimal
control problems
Bang–bang control — control that switches abruptly between two states
Covector mapping principle
Differential dynamic programming — uses locally-quadratic models of the
dynamics and cost functions
DNSS point — initial state for certain optimal control problems with multiple
optimal solutions
Legendre–Clebsch condition — second-order condition for solution of optimal
control problem
Pseudospectral optimal control
Bellman pseudospectral method — based on Bellman's principle of optimality
Chebyshev pseudospectral method — uses Chebyshev polynomials (of the
first kind)
Flat pseudospectral method — combines Ross–Fahroo pseudospectral
method with differential flatness
Gauss pseudospectral method — uses collocation at the Legendre–Gauss
points
Legendre pseudospectral method — uses Legendre polynomials
Pseudospectral knotting method — generalization of pseudospectral methods
in optimal control
Ross–Fahroo pseudospectral method — class of pseudospectral method
including Chebyshev, Legendre and knotting
Ross–Fahroo lemma — condition to make discretization and duality
operations commute
Ross' π lemma — there is a fundamental time constant within which a control solution must be computed for controllability and stability
Sethi model — optimal control problem modelling advertising
Infinite-dimensional optimization
Semi-infinite programming — infinite number of variables and finite number of constraints, or the other way around
Shape optimization, Topology optimization — optimization over a set of
regions
Topological derivative — derivative with respect to a change in the shape
Generalized semi-infinite programming — finite number of variables, infinite
number of constraints
Geometric median — the point minimizing the sum of distances to a given set
of points
Chebyshev center — the centre of the smallest ball containing a given set of
points
Iterated conditional modes — maximizing joint probability of Markov random
field
Response surface methodology — used in the design of experiments
Automatic label placement
Compressed sensing — reconstruct a signal from knowledge that it is sparse
or compressible
Cutting stock problem
Demand optimization
Destination dispatch — an optimization technique for dispatching elevators
Energy minimization
Entropy maximization
Highly optimized tolerance
Hyperparameter optimization
Inventory control problem
Newsvendor model
Extended newsvendor model
Assemble-to-order system
Linear programming decoding
Linear search problem — find a point on a line by moving along the line
Low-rank approximation — find best approximation, constraint is that rank of
some matrix is smaller than a given number
Meta-optimization — optimization of the parameters in an optimization
method
Multidisciplinary design optimization
Optimal computing budget allocation — maximize the overall simulation
efficiency for finding an optimal decision
Paper bag problem
Process optimization
Recursive economics — individuals make a series of two-period optimization
decisions over time.
Stigler diet
Space allocation problem
Stress majorization
Trajectory optimization
Transportation theory
Wing-shape optimization
Miscellaneous
Combinatorial optimization
Dynamic programming
Bellman equation
Hamilton–Jacobi–Bellman equation — continuous-time analogue of Bellman
equation
Backward induction — solving dynamic programming problems by reasoning
backwards in time
Optimal stopping — choosing the optimal time to take a particular action
Odds algorithm
Robbins' problem
Global optimization:
BRST algorithm
MCS algorithm
Multi-objective optimization — there are multiple conflicting objectives
Benson's algorithm — for linear vector optimization problems
Bilevel optimization — studies problems in which one problem is embedded
in another
Optimal substructure
Dykstra's projection algorithm — finds a point in intersection of two convex
sets
Algorithmic concepts:
Barrier function
Penalty method
Trust region
Test functions for optimization:
Rosenbrock function — two-dimensional function with a banana-shaped
valley
Himmelblau's function — two-dimensional with four local minima, defined by
f(x, y) = (x^2+y-11)^2 + (x+y^2-7)^2
Rastrigin function — two-dimensional function with many local minima
Shekel function — multimodal and multidimensional
Mathematical Optimization Society
Numerical quadrature (integration)
Numerical integration — the numerical evaluation of an integral
Rectangle method — first-order method, based on (piecewise) constant
approximation
Trapezoidal rule — second-order method, based on (piecewise) linear
approximation
Simpson's rule — fourth-order method, based on (piecewise) quadratic approximation (see the sketch below)
Adaptive Simpson's method
Boole's rule — sixth-order method, based on the values at five equidistant
points
Newton–Cotes formulas — generalizes the above methods
Romberg's method — Richardson extrapolation applied to trapezium rule
Gaussian quadrature — highest possible degree with given number of points
Chebyshev–Gauss quadrature — extension of Gaussian quadrature for integrals with weight (1 − x^2)^(±1/2) on [−1, 1]
Gauss–Hermite quadrature — extension of Gaussian quadrature for integrals with weight exp(−x^2) on (−∞, ∞)
Gauss–Jacobi quadrature — extension of Gaussian quadrature for integrals with weight (1 − x)^α (1 + x)^β on [−1, 1]
Gauss–Laguerre quadrature — extension of Gaussian quadrature for integrals with weight exp(−x) on [0, ∞)
Gauss–Kronrod quadrature formula — nested rule based on Gaussian
quadrature
Gauss–Kronrod rules
Tanh-sinh quadrature — variant of Gaussian quadrature which works well
with singularities at the end points
Clenshaw–Curtis quadrature — based on expanding the integrand in terms of
Chebyshev polynomials
Adaptive quadrature — adapting the subintervals in which the integration
interval is divided depending on the integrand
Monte Carlo integration — takes random samples of the integrand
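A minimal sketch of the composite Simpson's rule on an even number of subintervals; the test integral is chosen because its exact value is known.

    import math

    def simpson(f, a, b, n):
        # Piecewise-quadratic (Simpson) rule; n must be even.
        if n % 2:
            raise ValueError("n must be even")
        h = (b - a) / n
        total = f(a) + f(b)
        for i in range(1, n):
            total += (4 if i % 2 else 2) * f(a + i * h)
        return total * h / 3.0

    # Integral of sin(x) over [0, pi] is exactly 2.
    print(simpson(math.sin, 0.0, math.pi, 10))    # close to 2
    print(simpson(math.sin, 0.0, math.pi, 100))   # closer still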
Analysis:
Truncation error (numerical integration) — local and global truncation
errors, and their relationships
Lady Windermere's Fan (mathematics) — telescopic identity relating local
and global truncation errors
Stiff equation — roughly, an ODE for which unstable methods need a very
short step size, but stable methods do not
L-stability — method is A-stable and stability function vanishes at infinity
Dynamic errors of numerical methods of ODE discretization — logarithm of
stability function
Adaptive stepsize — automatically changing the step size when that seems
advantageous
Cell lists
Coupled cluster
Density functional theory
DIIS — direct inversion in (or of) the iterative subspace
Now, if one were to pick any given set of approaches, one would first have to ensure that the set of approaches is commutative: many of these approaches assume slightly different ontological premises and thus cannot be used together. This is not an issue for mathematics per se, since mathematics does not claim any relation to empirical data; however, it becomes an issue when computational approaches are applied to empirical data.
Even within the simplest mathematical systems, for instance regular
arithmetic and simple linear algebra, there is no rational transition between
the two. We tend to assume there is primarily because most of us made the
transition initially as children and habituated the change in the basis of
understanding the symbology involved. In terms of the process of learning,
this is reinforced by the sense that simple algebra is somehow based on
arithmetic, since arithmetic is always an assumed prerequisite. However this
is illusory. The prerequisite practice of arithmetic is simply the habituating
of the ability to manipulate mathematical symbols in the most general sense,
it in no way implies that there is any necessary or even contingent relation
between the symbology of arithmetic and simple algebra. Within
computationl mathematics, which is only a small subset of mathematical
systems in general, there are dozens of underlying mathematical systems
that have no rational transitions, i.e. are non-commutative. Within
computation, which generally runs on a 'good enough' approach, this only
occasionally creates issues. However if one is trying to model a given system
accurately rather than simply using a 'good enough' simulation to provide an
optimization to a purely computational problem, this simultaneous use of
non-commutative approaches cannot be permitted.
Once it is confirmed that the set is fully commutative (which we have no quick or simple means of doing), one would have to ensure that the operative
ontology in all the approaches is a sensible one in terms of understanding
and manipulating empirical biological data, i.e. determining that the
operative ontology of the mathematical approaches is identical to the actual
operative ontology of real biological systems. We have no theoretical means
of accomplishing this, never mind a practical method.
Even within the small set of biologically inspired computational approaches,
while it is true that the models behave in a manner that is somewhat similar
to the biological systems they were inspired by, it is also true that they do not
do so with any accuracy. This lack of realistic precision could be due to the
model being a relatively closed system when compared with the actual
biological system, or it could be due to the model failing to take into account
or failing to accurately determine initial conditions for all relevant
parameters, or it could be that the model is based on invalid ontological
assumptions and merely mimics a certain aspect of reality without implying
anything ontologically valid about reality. There's no means to distinguish
between these potential origins of inaccuracy except by modeling the system
in question and its entire spatio-temporal environment, which is nothing
other than the rest of reality itself, with all parameters accurately
determined. The only feasible model we can ever have for that is reality
itself.
While even the more complex single celled systems are beyond the modeling
capability of computational mathematics, multicellular cell differentiated
systems are beyond the capability of computation in a more general sense. In
a similar manner to the lack of commutativity between different
mathematical systems, there is no commutativity between a single cell
system and a multicellular, cell differentiated system, i.e. there is no rational
transition, since the generation of a more comprehensive generic view is
itself an ontological, not a rational exercise.