Hinch, Perturbation Methods, 1995

Cambridge Texts in Applied Mathematics

Maximum and minimum principles: a unified approach with applications  M.J. SEWELL
Introduction to numerical linear algebra and optimization  P.G. CIARLET
Solitons: an introduction  P.G. DRAZIN AND R.S. JOHNSON
Integral equations: from spectral theory to applications  D. PORTER AND DAVID S.G. STIRLING
Perturbation methods  E.J. HINCH

Perturbation Methods

E.J. HINCH
Lecturer in Mathematics, University of Cambridge

CAMBRIDGE UNIVERSITY PRESS

Published by the Press Syndicate of the University of Cambridge
The Pitt Building, Trumpington Street, Cambridge CB2 1RP
40 West 20th Street, New York, NY 10011-4211, USA
10 Stamford Road, Oakleigh, Melbourne 3166, Australia

© Cambridge University Press 1991
First published 1991
Reprinted 1992, 1994, 1995
Printed in the United States of America

Library of Congress Cataloging-in-Publication Data is available.
A catalogue record for this book is available from the British Library.
ISBN 0-521-37310-7 hardback

Contents

Preface

1 Algebraic equations
  1.1 Iteration and expansion
      Iterative method
      Expansion method
  1.2 Singular perturbations and rescaling
      Iterative method
      Expansion method
      Rescaling in the expansion method
  1.3 Non-integral powers
      Finding the expansion sequence
      Iterative method
  1.4 Logarithms
  1.5 Convergence
  1.6 Eigenvalue problems
      Second order perturbations
      Multiple roots
      Degenerate roots

2 Asymptotic approximations
  2.1 Convergence and asymptoticness
  2.2 Definitions
  2.3 Uniqueness and manipulations
  2.4 Why asymptotic?
      Numerical use of diverging series
  2.5 Parametric expansions
  2.6 Stokes phenomenon in the complex plane

3 Integrals
  3.1 Watson's lemma
      Application and explanation
  3.2 Integration by parts
  3.3 Steepest descents
      Global considerations
      Local considerations
      Example: Stirling's formula
      Example: Airy function
  3.4 Non-local contributions
      Example 1
      Example 2
      Splitting a range of integration
      Logarithms
  3.5 An integral equation: the electrical capacity of a long slender body

4 Regular perturbation problems in partial differential equations
  4.1 Potential outside a near sphere
  4.2 Deformation of a slowly rotating self-gravitating liquid mass
  4.3 Nearly uniform inertial flow past a cylinder

5 Matched asymptotic expansions
  5.1 A linear problem
      5.1.1 The exact solution
      5.1.2 The outer approximation
      5.1.3 The inner approximation (or boundary layer solution)
      5.1.4 Matching
      5.1.5 Van Dyke's matching rule
      5.1.6 Choice of stretching
      5.1.7 Where is the boundary layer?
      5.1.8 Composite approximations
  5.2 Logarithms
      5.2.1 The problem and initial observations
      5.2.2 Approximation for r fixed as ε ↘ 0
      5.2.3 Approximation for ρ = εr fixed as ε ↘ 0
      5.2.4 Matching by intermediate variable
      5.2.5 Further terms
      5.2.6 Failure of Van Dyke's matching rule
      5.2.7 Composite approximations
      5.2.8 A worse problem
      5.2.9 A terrible problem
  5.3 Slow viscous flow
      5.3.1 Past a sphere
      5.3.2 Past a cylinder
  5.4 Slender body theory
      5.4.1 Electrical capacitance
      5.4.2 Axisymmetric potential flow
  5.5 Moon space ship problem
  5.6 van der Pol relaxation oscillator

6 Method of strained co-ordinates
  6.1 A model problem
      6.1.1 Solution by strained co-ordinates
      6.1.2 Solution by matched asymptotic expansions
  6.2 Duffing's oscillator
  6.3 Shallow water waves

7 Method of multiple scales
  7.1 van der Pol oscillator
  7.2 Instability of the Mathieu equation
  7.3 A diffusion-advection equation
  7.4 Homogenised media
  7.5 The WKBJ approximation
      7.5.1 Solution by multiple scales
      7.5.2 Higher approximations
      7.5.3 Turning points
      7.5.4 Examples
      7.5.5 Use of the WKBJ approximation to study an exponentially small term
      7.5.6 The small reflected wave in the WKBJ approximation
  7.6 Slowly varying waves
      7.6.1 A model problem
      7.6.2 Ray theory
      7.6.3 Averaged Lagrangian

8 Improved convergence
  8.1 Radius of convergence and the Domb-Sykes plot
  8.2 Improved series
      Inversion
      Taking a root
      Multiplicative extraction
      Additive extraction
      Euler transformation
      Shanks transformation
      Padé approximants

Bibliography
Index

Preface

Making precise approximations to solve equations is an occupation of applied mathematicians which distinguishes them from pure mathematicians, physicists and engineers. A precise approximation is not a contradiction in terms, but rather an approximation with an error which is understood and controllable; in particular the error could be made smaller by some rational procedure. There are two methods for obtaining precise approximations to the solutions of an equation, numerical methods and analytic methods, and this book is about the latter. The analytic approximations are obtained when some parameter of the problem is small, and hence the name perturbation methods. The perturbation and numerical methods are not however in competition, but rather complement one another, as the following example illustrates.

The van der Pol oscillator is governed by the equation

    ẍ + kẋ(x² − 1) + x = 0

In time the solution tends to an oscillation with a particular amplitude which does not depend on the initial conditions. The period of this limit oscillation is of interest, and is plotted in figure 1 as a function of the strength of the nonlinear friction, k.
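A numerical calculation of this kind can be sketched in a few lines (an illustrative sketch, not the book's program: a classical fourth-order Runge-Kutta integration, with the period measured from upward zero crossings of x after the transient has decayed; the small-k comparison value 2π(1 + k²/16) is the perturbation result quoted in the text):

```python
import math

def vdp_period(k, dt=0.002, t_transient=30.0, t_max=100.0):
    """Period of the limit oscillation of x'' + k x'(x^2 - 1) + x = 0,
    by classical 4th-order Runge-Kutta, averaging the gaps between
    upward zero crossings of x once the transient has decayed."""
    def accel(x, v):
        return -k * v * (x * x - 1.0) - x
    x, v, t = 2.0, 0.0, 0.0   # start near the limit cycle (amplitude ~ 2)
    crossings = []
    while t < t_max:
        x_old = x
        k1x, k1v = v, accel(x, v)
        k2x, k2v = v + 0.5 * dt * k1v, accel(x + 0.5 * dt * k1x, v + 0.5 * dt * k1v)
        k3x, k3v = v + 0.5 * dt * k2v, accel(x + 0.5 * dt * k2x, v + 0.5 * dt * k2v)
        k4x, k4v = v + dt * k3v, accel(x + dt * k3x, v + dt * k3v)
        x += dt * (k1x + 2 * k2x + 2 * k3x + k4x) / 6
        v += dt * (k1v + 2 * k2v + 2 * k3v + k4v) / 6
        t += dt
        if t > t_transient and x_old < 0.0 <= x:
            # linear interpolation of the crossing time
            crossings.append(t - dt * x / (x - x_old))
    gaps = [b - a for a, b in zip(crossings, crossings[1:])]
    return sum(gaps) / len(gaps)

print(vdp_period(0.5), 2 * math.pi * (1 + 0.5**2 / 16))
```

For moderate k the measured period agrees with the small-k perturbation result to a few parts in a thousand, which is exactly the kind of cross-check described below.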
The circles give the numerical results obtained by a Runge-Kutta method. The dashed curves give the first and second order perturbation approximations

    Period = 2π(1 + k²/16 + O(k⁴))                           as k → 0
    Period = (3 − 2 ln 2)k + 7.0143 k^(−1/3) + O(k⁻¹ ln k)   as k → ∞

At intermediate values of the parameter k, from 2 to 6, the numerical method is most useful. At extreme values, however, the numerical method loses its accuracy rapidly; for example by k = 10 the time-step must be reduced to 0.01 in order to obtain 5-figure accuracy. The analytic approximations take over in the extreme conditions. Further, they give an explicit dependence on the parameter k rather than the isolated results at particular values from the numerical method.

[Fig. 1 The period of the limit oscillation of the van der Pol oscillator as a function of the strength of the nonlinear friction k.]

But the most important feature of the figure is the satisfying agreement between the numerical approximation and the two independent perturbation approximations; such checks are essential in research.

Obtaining good numerical values for the solution is not the only quest of a perturbation approximation. One can hope that the analysis will reveal some physical insights through the simplified physics of the limiting problem. In this book I will however suppress the physics in the problems discussed.

Finding perturbation approximations is an art rather than a science. In research it is useful to be responsive to suggestions from the physics. There is certainly no routine method appropriate to all problems, or even classes of problems. Instead one needs a determination to exploit the smallness of the parameter. This book attempts to present many of the weapons which have been found useful, but they should not be viewed as exhaustive.

While this book is mathematical, no attempt has been made to make the arguments fully rigorous. In general I have tried to explain why the results are correct.
Often these reasons can be turned into strict theorems, albeit with some difficulty in the case of singular problems. My own opinion is that such superficial rigour rarely adds to the understanding of the problem, and that of greater use is a numerical statement about the range of applicability achieving some specified accuracy.

This book is based on a course of lectures which I gave for a number of years to first-year graduate students in the University of Cambridge. In its turn it was based on my own education from a course of lectures by L.E. Fraenkel and from the book on the subject by M. Van Dyke. These two inspiring teachers asked many interesting questions which I have attempted to answer in this book; questions such as why are some results convergent whilst others only asymptotic, why is matching possible, what selection criterion should be used with strained co-ordinates, and what characterises problems to be tackled by multiple scales.

While no previous knowledge of perturbation methods is assumed, some previous experience is probable. The students who attended my lecture course would have seen several examples (small friction on projectiles, perturbed energy levels in quantum mechanics, adiabatic invariants in Hamiltonian systems, Watson's lemma, and viscous boundary layers in fluid mechanics), usually presented in an informal way relying heavily on physical insight. They would not however have seen a formal and organised approach to a perturbation problem.

The eventual goal of this book is to present the method of matched asymptotic expansions and the method of multiple scales, progressing to an advanced level in considering the more difficult issues such as the occurrence of logarithms and the occurrence of more than two scales. Tackling differential equations with such singular perturbation problems is certainly not easy.
Fortunately many of the essential concepts can be presented in the simpler context of algebraic equations, and later with integrals. Thus issues such as iterations and expansions, singular problems and rescaling, non-integral powers and logarithms will be presented well before the difficult singular differential equations are encountered. Finally I should observe that most of the chapters follow the basic method with an advanced application whose understanding is not essential to the following chapters; thus §§ 1.6, 3.5, 5.3, 5.4, 5.5, 5.6, 6.3, 7.3, 7.4 and 7.6 should be viewed as optional.

E.J. Hinch
Cambridge, 1990

1 Algebraic equations

Many of the techniques of perturbation analysis can be introduced in the simple setting of algebraic equations. By starting with some particularly easy algebraic equations, three quadratics, we can benefit from the luxury of the existence of exact answers, taking useful hints from them to overcome difficulties.

1.1 Iteration and expansion

We start with the equation for x which contains the parameter ε,

    x² + εx − 1 = 0

This has exact solutions

    x = −ε/2 ± √(1 + ε²/4)

which can be expanded for small ε as

    x = 1 − ε/2 + ε²/8 − ε⁴/128 + O(ε⁶)
    x = −1 − ε/2 − ε²/8 + ε⁴/128 + O(ε⁶)

These binomial expansions converge if |ε| < 2. More important than converging, the truncated series give a good approximation if ε is small. The first few terms give a result within 3% of the exact result if

    number of terms    1      2     3     4
    |ε| <              0.05   0.5   1.2   1.6

the last 1.6 being not too far from the convergence boundary at 2. Alternatively, for the fixed value of ε = 0.1 the first few terms give

    0.95,  0.95125,  0.95124921…    exact = 0.95124922…

Often the numerical summation of these short expansions involves less computer time than the evaluation of the exact answer with its costly surds.

We started by finding the exact solution of the quadratic equation and then we expanded the exact solution. In most problems, however, it is not possible to find the exact solution.
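The ε = 0.1 numbers above are easily reproduced (a sketch; the coefficients are those of the expansion just found):

```python
import math

eps = 0.1
# exact positive root of x^2 + eps*x - 1 = 0
exact = -eps / 2 + math.sqrt(1 + eps**2 / 4)

# partial sums of x = 1 - eps/2 + eps^2/8 + 0*eps^3 - eps^4/128 + ...
coeffs = [1.0, -1.0 / 2, 1.0 / 8, 0.0, -1.0 / 128]
partial = 0.0
for n, a in enumerate(coeffs):
    partial += a * eps**n
    print(n + 1, "terms:", partial, "error:", abs(partial - exact))
```

With five terms the error is already below 10⁻⁸, far smaller than one usually needs, while the evaluation avoids the square root entirely.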
We must therefore develop techniques which first make the approximations and then, only afterwards, involve a calculation. There are two distinct methods of first approximating and then calculating, the iterative method and the expansion method. Each method has its own advantages and disadvantages.

Iterative method

We start with the iterative method, because it is a method which is often overlooked although it has much to offer. The first step of the iterative method is to find a rearrangement of the original equation which will become the basis of an iterative process. This first step involves a certain amount of inspiration, which must therefore count as a major drawback of the method. A suitable rearrangement of our present quadratic is

    x = ±√(1 − εx)

Any solution of the original equation is a solution of this rearrangement and vice versa. Working with just the positive root, we thus adopt the iterative process

    xₙ₊₁ = √(1 − εxₙ)

The iterative process needs a starting point, the value of the root when ε = 0, x₀ = 1. Making the first iteration, we find

    x₁ = √(1 − ε)

which can be expanded in a binomial series

    x₁ = 1 − ε/2 − ε²/8 − ε³/16 + ···

Looking at the exact answer, we see that the ε² and higher terms are erroneous. We therefore truncate the series for x₁ after the second term:

    x₁ = 1 − ε/2 + ···

Proceeding to the next iteration, we find

    x₂ = √(1 − ε(1 − ε/2))

which can be expanded, this time retaining only terms up to ε²:

    x₂ = 1 − (1/2)ε(1 − ε/2) − (1/8)ε²(1 + ···)² + ···
       = 1 − ε/2 + ε²/8 + ···

We note that the ε² term is now correct after two iterations. Iterating again, we find

    x₃ = √(1 − ε(1 − ε/2 + ε²/8))
       = 1 − (1/2)ε(1 − ε/2 + ε²/8) − (1/8)ε²(1 − ε/2 + ···)² − (1/16)ε³(1 + ···)³ + ···
       = 1 − ε/2 + ε²/8 + 0ε³ + ···

It is clear that progressively more work is required to obtain the higher order terms by the iterative method. The method also has the undesirable feature that in the early iterations it gives erroneous values to the higher terms.
One can only check that a term is correct by making one more iteration, which of course is usually convincing but no rigorous proof (but see §1.5).

Expansion method

The first step of the expansion method is to set ε = 0 and find the unperturbed roots x = ±1. Then one poses an expansion about one of these roots, say x = +1, expanding in powers of ε, i.e.

    x(ε) = 1 + εx₁ + ε²x₂ + ε³x₃ + ···

This expansion is formally substituted into the governing quadratic equation:

    1 + 2εx₁ + ε²(x₁² + 2x₂) + ε³(2x₁x₂ + 2x₃) + ···
      + ε + ε²x₁ + ε³x₂ + ···
      − 1 = 0

Here one ignores potential difficulties in making the substitution, such as the limitations in multiplying series term by term. The coefficients of the powers of ε on the two sides of the equation are now compared.

At ε⁰: 1 − 1 = 0. This level is satisfied automatically, because we started the expansion from the correct value x = 1 at ε = 0.

At ε¹: 2x₁ + 1 = 0, i.e. x₁ = −1/2.

At ε²: x₁² + 2x₂ + x₁ = 0, i.e. x₂ = 1/8. Here the previously determined value of x₁ has been used.

At ε³: 2x₁x₂ + 2x₃ + x₂ = 0, i.e. x₃ = 0, again using the previously determined values of x₁ and x₂.

The expansion method is much easier than the iterative method when working to higher orders. To use the expansion method, however, it is necessary to assume that the result can be expanded in powers of ε, and that the formal substitution and associated manipulations are permitted.

Exercise 1.1. Find four terms in the expansion of the root near x = −1, using both the iterative and expansion methods.

1.2 Singular perturbations and rescaling

In this section we study the quadratic

    εx² + x − 1 = 0

If ε = 0 there is just one root, at x = 1, whereas when ε ≠ 0 there are two roots. This is an example of a singular perturbation problem, in which the limit point ε = 0 differs in an important way from the approach to the limit ε → 0. Interesting problems are often singular. Problems which are not singular are said to be regular.
To resolve the paradox of the behaviour of the second root we take the exact solutions of the quadratic and expand them for small ε (convergent if |ε| < 1/4). The two roots are

    x = 1 − ε + 2ε² − 5ε³ + ···
    x = −1/ε − 1 + ε − 2ε² + 5ε³ − ···

Thus the singular second root evaporates off to x = ∞ in the limit ε → 0.

Iterative method

To set up an iterative process for the singular root we argue as follows. In order to retain the second solution of the governing quadratic, it is necessary to keep the εx² term as a main term rather than as a small correction. Thus x must be large. Hence at leading order the −1 term in the equation will be negligible when compared with the x term, i.e.

    εx² + x = 0   with solution   x ≈ −1/ε

Hence we are led to the rearrangement of the quadratic

    x = −1/ε + 1/(εx)

and the iterative process

    xₙ₊₁ = −1/ε + 1/(εxₙ)

with a starting point x₀ = −1/ε. Iterating once we find

    x₁ = −ε⁻¹ − 1

and iterating again

    x₂ = −1/ε − 1/(1 + ε) = −ε⁻¹ − 1 + ε − ε² + ···

A further iteration is needed to obtain the ε² term correctly.

Expansion method

The expansion method can be applied to the singular root by posing a power series in ε which starts with an ε⁻¹ term instead of the usual ε⁰. The way in which one determines the correct starting point is left until later in this section. Thus substituting

    x(ε) = ε⁻¹x₋₁ + x₀ + εx₁ + ···

into the governing quadratic yields

    ε⁻¹x₋₁² + 2x₋₁x₀ + ε(2x₋₁x₁ + x₀²) + ···
      + ε⁻¹x₋₁ + x₀ + εx₁ + ···
      − 1 = 0

Comparing coefficients of εⁿ, we have

At ε⁻¹: x₋₁² + x₋₁ = 0, i.e. x₋₁ = −1 or 0. The root x₋₁ = 0 leads to the regular root, so we ignore it.

At ε⁰: 2x₋₁x₀ + x₀ − 1 = 0, i.e. x₀ = −1.

At ε¹: 2x₋₁x₁ + x₀² + x₁ = 0, i.e. x₁ = 1.

where at each stage the values of the previously determined xₙ have been used.

Rescaling in the expansion method

Instead of starting the expansion with the unusual ε⁻¹ term, a very useful idea for singular problems is to rescale the variables before making the expansion.
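Before rescaling, the singular-root approximations just obtained are worth checking numerically (a sketch; the iteration and series are those found above, and the exact root comes from the quadratic formula):

```python
import math

eps = 0.01
# exact singular root of eps*x^2 + x - 1 = 0
exact = (-1 - math.sqrt(1 + 4 * eps)) / (2 * eps)

# iterate x_{n+1} = -1/eps + 1/(eps*x_n), starting from x_0 = -1/eps
x = -1 / eps
for n in range(4):
    x = -1 / eps + 1 / (eps * x)
    print(n + 1, x, "error:", abs(x - exact))

# truncated expansion -1/eps - 1 + eps - 2*eps^2
series = -1 / eps - 1 + eps - 2 * eps**2
print("series error:", abs(series - exact))
```

The error shrinks by roughly a factor ε per iteration, which anticipates the convergence discussion of §1.5.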
Thus introducing the rescaling x = X/ε into the originally singular equation for x produces an equation for X,

    X² + X − ε = 0

which is regular. Thus the problem of finding the correct starting point for the expansion can be viewed as a problem of finding a suitable rescaling to regularise the singular problem.

There is a simple procedure to find all useful rescalings. First one poses a general rescaling with a scaling factor δ(ε),

    x = δX

in which one insists that X is strictly of order unity as ε → 0. Unfortunately the standard notation X = O(1) does not describe this limitation on X, because O(1) permits X to be vanishingly small as ε → 0. Thus we are forced to adopt the less familiar notation X = ord(1) to stand for X is strictly of order unity as ε → 0.

Substituting the general rescaling into the governing quadratic equation gives

    εδ²X² + δX − 1 = 0

We now consider the dominant balance of this equation for δ of different magnitudes, starting the search for sensible rescalings with δ very small and progressing to δ very large.

• δ ≪ 1. If δ is very small, then the left hand side of the equation is

    εδ²X² + δX − 1 = small + small − 1

This cannot balance the zero on the right hand side, and so a small δ is an unacceptable rescaling. As δ is increased, it is the X term which first breaks the domination of the 1 term, and this occurs when δ = 1. Hence the range of the unacceptable rescalings when δ is too small is δ ≪ 1, as declared above.

• δ = 1. The left hand side of the equation is now

    εδ²X² + δX − 1 = small + X − 1

This can balance the zero on the right hand side to produce the regular root X = 1 + small.

• 1 ≪ δ ≪ 1/ε. Now the δX term dominates both the other terms, so that the equation requires X to be small, which contradicts X = ord(1). These rescalings are unacceptable.

• δ = 1/ε. The first two terms now balance,

    εδ²X² + δX − 1 = ε⁻¹(X² + X) − 1

giving at leading order X² + X = 0. The root X = −1 is acceptable and leads to the singular root x ≈ −1/ε; the root X = 0 is not acceptable.

• δ ≫ 1/ε. The εδ²X² term dominates all the others, again requiring X to be small, which is unacceptable.

Thus all the roots of the quadratic are recovered by examining the possible dominant balances of the rescaled equation.

1.3 Non-integral powers

In this section we study the quadratic

    (1 − ε)x² − 2x + 1 = 0

which has the exact roots

    x = 1/(1 ∓ ε^(1/2)) = 1 ± ε^(1/2) + ε ± ε^(3/2) + ···

Thus the expansion proceeds in powers of ε^(1/2) rather than in integral powers of ε.

Finding the expansion sequence

Suppose that we did not know in advance that the expansion proceeds in powers of ε^(1/2). One poses an expansion with an unknown sequence of gauge functions,

    x(ε) = 1 + δ₁x₁ + δ₂x₂ + ···

with 1 ≫ δ₁(ε) ≫ δ₂(ε) ≫ ··· and x₁, x₂, … = ord(1) as ε → 0. Substituting into the governing quadratic yields

    1 + 2δ₁x₁ + δ₁²x₁² + 2δ₂x₂ + 2δ₁δ₂x₁x₂ + δ₂²x₂² + ···
      − ε − 2εδ₁x₁ − εδ₁²x₁² − 2εδ₂x₂ − ···
      − 2 − 2δ₁x₁ − 2δ₂x₂ − ···
      + 1 = 0

While the relative magnitude of some of the terms is clear, e.g. 2δ₁x₁ ≫ δ₁²x₁² and 2δ₁x₁ ≫ 2δ₂x₂ because 1 ≫ δ₁ and δ₁ ≫ δ₂ respectively, there is considerable uncertainty about the ordering of other terms, e.g. between δ₁²x₁² and 2δ₂x₂. Removing the cancelling terms one is left with

    δ₁²x₁² + 2δ₁δ₂x₁x₂ + δ₂²x₂² + ··· − ε − 2εδ₁x₁ − εδ₁²x₁² − 2εδ₂x₂ − ··· = 0

Using 1 ≫ δ₁ ≫ δ₂ one can see that the leading order terms from the two lines are δ₁²x₁² and −ε. Therefore there are three possible leading order balances:

    either  δ₁²x₁² = 0        if δ₁² ≫ ε
    or      δ₁²x₁² − ε = 0    if δ₁² = ε
    or      −ε = 0            if δ₁² ≪ ε

Clearly the last option is unacceptable, and so too is the first because we require x₁ = ord(1). Hence we conclude that

    δ₁ = ε^(1/2)  and  x₁ = ±1

Removing these two balancing terms leaves as leading order terms 2δ₁δ₂x₁x₂ and −2εδ₁x₁. Repeating the above arguments:

    either  2ε^(1/2)δ₂x₁x₂ = 0                  if δ₂ ≫ ε
    or      2ε^(1/2)δ₂x₁x₂ − 2ε^(3/2)x₁ = 0     if δ₂ = ε
    or      −2ε^(3/2)x₁ = 0                     if δ₂ ≪ ε

The only acceptable option is

    δ₂ = ε  and  x₂ = 1   (for both x₁ roots)

Because the above determination of the expansion sequence involves some messy intermediate details, in practice one would take two attempts at the problem to determine δ₁ and δ₂. First one would substitute x = 1 + δ₁x₁ and find δ₁ = ε^(1/2). Then one would substitute x = 1 + ε^(1/2)x₁ + δ₂x₂ and find δ₂ = ε. Splitting the problem up into stages, one has to consider at each stage fewer terms of undetermined magnitude.

Iterative method

Finally the superiority of the iterative method should be noted in cases where the expansion sequence is not known. A suitable rearrangement of the original quadratic is

    (x − 1)² = εx²

which leads to the iterative process

    xₙ₊₁ = 1 + ε^(1/2)xₙ

Starting with x₀ = 1, the positive root gives

    x₁ = 1 + ε^(1/2)   and   x₂ = 1 + ε^(1/2) + ε

Not only is this considerably quicker, but there is also no awkward step like the ε^(1/2) level in the expansion method, which is satisfied automatically and leaves x₁ undetermined until the next order.

Exercise 1.3. Find the first two terms of x(ε), the solution near 0 of

    √(1 + sin(x + ε)) − 1 − x/2 + x²/8 = −ε/2

Exercise 1.4. Find the first two terms for all four roots of

    εx⁴ − x² − 2x + 2 = 0

Exercise 1.5 (Stone).
Find the first two terms for all three roots of

    εx³ + x² + (2 + ε)x + 1 = 0   and of   εx³ + x² + (2 − ε)x + 1 = 0

1.4 Logarithms

In this section we shall find the solution as ε → 0 (through positive values) of the transcendental equation

    x e⁻ˣ = ε

One root is near x = ε, which is easy to obtain. The other root becomes large as ε → 0 and is more difficult to find. We concentrate on this large root. As the expansion sequence is distinctly unclear, we employ the iterative method.

First we observe that if ε is small (ε < 1/4 is sufficient) the root must lie between x = ln(1/ε) (for which xe⁻ˣ = ε ln(1/ε) > ε) and x = 2 ln(1/ε) (for which xe⁻ˣ = ε² · 2 ln(1/ε) < ε). Over this range of x, the x factor merely doubles, while the e⁻ˣ factor falls by an order of magnitude from ε to ε². Thus we can view the x factor as varying weakly, and concentrate on the rapid variation in the e⁻ˣ factor. This suggests the rearrangement of the original equation

    e⁻ˣ = ε/x

leading to the iterative scheme

    xₙ₊₁ = ln(1/ε) + ln xₙ

Further, from the above observations it is clear that the root must lie quite near ln(1/ε) when ε is small. Thus we start the iteration from

    x₀ = ln(1/ε)

Then

    x₁ = ln(1/ε) + ln ln(1/ε) = L₁ + L₂

where we have introduced the shorthand notation

    L₁ = ln(1/ε)   and   L₂ = ln ln(1/ε)

Iterating again,

    x₂ = L₁ + ln(L₁ + L₂) = L₁ + L₂ + ln(1 + L₂/L₁)
       = L₁ + L₂ + L₂/L₁ − L₂²/2L₁² + ···

And again,

    x₃ = L₁ + L₂ + ln(1 + L₂/L₁ + L₂/L₁² − ···)
       = L₁ + L₂ + (L₂/L₁ + L₂/L₁² − ···) − ½(L₂/L₁ + ···)² + ···
       = L₁ + L₂ + L₂/L₁ + (−½L₂² + L₂)/L₁² + ···

The expansion sequence needed by the expansion method is clearly a tough one to guess. Moreover the iterative method produces more than one extra term from each iteration.

The appearance of ln ln(1/ε) means that remarkably small values of ε are required to achieve a good numerical accuracy of the approximate expressions. Usually one hopes for a tolerable agreement with ε < 0.5 or at worst ε < 0.1. In order for ln ln(1/ε) > 3, however, one needs ε < 10⁻⁹.
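The slow improvement of these approximations can be watched directly (a sketch; the iteration is the one above, and the 'exact' root is found by bisection between the two bounds just established):

```python
import math

eps = 1e-3
f = lambda x: x * math.exp(-x) - eps

# 'exact' large root by bisection on [ln(1/eps), 2 ln(1/eps)],
# where f is positive at the left end and negative at the right
lo, hi = math.log(1 / eps), 2 * math.log(1 / eps)
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if f(mid) > 0:
        lo = mid
    else:
        hi = mid
root = 0.5 * (lo + hi)

# iteration x_{n+1} = ln(1/eps) + ln(x_n), started from x_0 = ln(1/eps)
x = math.log(1 / eps)
for n in range(5):
    x = math.log(1 / eps) + math.log(x)
    print(n + 1, x, "error:", abs(x - root))
```

Each iteration reduces the error only by a factor of about 1/ln(1/ε), in keeping with the convergence estimate of §1.5.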
The table below gives the percentage errors at various ε for the first five approximations to the large root of our transcendental equation.

    ε        L₁     +L₂    +L₂/L₁   −½L₂²/L₁²   +L₂/L₁²
    10⁻¹     36     12     2        4           0.4
    10⁻³     24     3      0.02     0.4         0.04
    10⁻⁵     19     1      0.04     0.1         0.007

The table shows that acceptable accuracy is only achieved with many terms of the approximation, or with extremely small values of ε. The table also demonstrates another common feature of expansions which involve ln ln(1/ε): it is unwise to split (−½L₂² + L₂)/L₁² into two terms, because the error is made worse by the first part before it is eventually improved by the addition of the second part (at least at values of ε not astronomically small).

Exercise 1.6. Find several terms in an approximation for the solution of

1.5 Convergence

The expansion method offers little opportunity of proving that an approximation converges. In straightforward problems the form of the nth term will be clear, e.g. εⁿ, and so one can be satisfied that the expansion is consistent. Just occasionally one can write down the problem for the general nth term, find a strong bound on the magnitude of the term, and thence prove convergence of the expansion. In more difficult problems, however, the expansion sequence will not be clear, and one would have no idea of the form of the general term. In these problems one can only be satisfied that the expansion is consistent as far as one has proceeded.

The iterative method on the other hand provides a simple proof of convergence. Suppose x = x* is the root of the equation

    x = f(x)

where f is used in an iterative process xₙ₊₁ = f(xₙ). Then one iteration will take x = x* + δ to

    f(x* + δ) = x* + δf′(x*) + o(δ)

if δ is small.
Thus one iteration will decrease the error if

    |f′(x*)| < 1

Hence by the contraction mapping theorem, the iterative process will converge onto the root x* if |f′(x*)| < 1 and if the iteration is started sufficiently near to the root. (The standard theorem needs a small modification to take account of the truncation of the higher order terms, which are known to be incorrect after insufficient iterations.)

In the previous sections we had iterative schemes which converge:

    In §1.1   f = √(1 − εx)         with x* ≈ 1         so f′(x*) ≈ −ε/2
    In §1.2   f = −1/ε + 1/(εx)     with x* ≈ −1/ε      so f′(x*) ≈ −ε
    In §1.3   f = 1 + ε^(1/2)x      with x* ≈ 1         so f′(x*) = ε^(1/2)
    In §1.4   f = ln(1/ε) + ln x    with x* ≈ ln(1/ε)   so f′(x*) ≈ 1/ln(1/ε)

The negative sign of f′ in the first two cases means that the error changes sign, and so two successive iterations must bracket the answer. Also, from the magnitude of f′ one can work out how many terms will be correct after a given number of iterations.

1.6 Eigenvalue problems

In this section we consider the eigenvalue problem for the eigenvalue λ associated with the eigenvector x,

    Ax + εB(x) = λx

In order for this to qualify as an algebraic equation, A ought to be a matrix. The techniques of this section, however, can be applied to any linear operator A with adequate compactness. As εB(x) is a small perturbation, there is no need for B(x) to be linear.

We look for the perturbed eigensolution near to a given unperturbed eigensolution with eigenvalue a and associated eigenvector e:

    Ae = ae

If the matrix A is not symmetric, its transpose will have a different eigenvector e† associated with the same eigenvalue:

    e†A = ae†

Initially we restrict attention to the case where a is a single root with only one independent eigenvector e. Then e† is orthogonal to all the other eigenvectors of A. In the standard way we pose an expansion in powers of ε starting from the unperturbed eigensolution:

    x(ε) = e + εx₁ + ε²x₂ + ···
    λ(ε) = a + ελ₁ + ε²λ₂ + ···
Substituting into the governing equation and comparing coefficients of εⁿ produces

    at ε⁰:  Ae = ae,  which is automatically satisfied
    at ε¹:  Ax₁ + B(e) = ax₁ + λ₁e

It is useful to rearrange the last equation as

    (A − a)x₁ = λ₁e − B(e)

Now the left hand side of this equation can have no component in the direction of e, because for all x₁

    e† · [(A − a)x₁] = [e†(A − a)] · x₁ = (a − a) e† · x₁ = 0

using the eigenvector property of e†. Thus there can exist no solution of the equation for x₁ unless the right hand side of the equation also has no component in the direction of e, i.e.

    e† · [λ₁e − B(e)] = 0

Thus we have found the first perturbation of the eigenvalue,

    λ₁ = e† · B(e) / (e† · e)

Note that this expression shows that if B(e) is nonlinear, then the eigenvalue is not independent of the magnitude of the eigenvector.

We can now return to the equation for x₁ and substitute the expression for λ₁, to yield

    (A − a)x₁ = −B(e) + (e† · B(e) / (e† · e)) e = −B(e)_⊥

where the notation B(e)_⊥ has been introduced for that part of B(e) perpendicular to e. Now that the right hand side has no component in the direction of e, it is possible to invert (A − a) to obtain a solution for x₁, although this solution is not unique because it is possible to add an arbitrary multiple of e to x₁ without changing (A − a)x₁. Thus with k₁ an arbitrary scalar,

    x₁ = −(A − a)⁻¹ B(e)_⊥ + k₁e

where the restricted inverse (A − a)⁻¹ does exist in the space orthogonal to e. If the complete eigendecomposition of A is known, then x₁ can be represented as a sum over all the other eigenvectors e^(j):

    x₁ = Σ_j [ e^(j)† · B(e) / ((a − a^(j))(e^(j)† · e^(j))) ] e^(j) + k₁e

This completes the first order perturbation.

Second order perturbation

The second order perturbation is governed by

    Ax₂ + B₁ = ax₂ + λ₁x₁ + λ₂e

Here εB₁ is the ord(ε) change from B(e) to B(e + εx₁). If B is linear, then B₁ = Bx₁. If B is nonlinear, then B₁ = x₁ · B′(e), where B′ is the first derivative of B.
Rearranging the equation for x₂ we have

    (A − a)x₂ = λ₂e + λ₁x₁ − B₁

As in the problem for x₁, we must require that the right hand side has no component in the direction of e. This leads to the second perturbation in the eigenvalue,

    λ₂ = e† · (B₁ − λ₁x₁) / (e† · e)

and thence, with k₂ an arbitrary scalar, the second perturbation in the eigenvector is

    x₂ = −(A − a)⁻¹ (B₁ − λ₁x₁)_⊥ + k₂e

We see in the expressions for λ₂ and x₂ that it would have been convenient to remove the non-uniqueness in x₁ by requiring it to have no component in the direction of the unperturbed eigenvector, i.e. e† · x₁ = 0. In some problems, however, there are more pressing claims than this convenient normalisation.

If the complete eigendecomposition of A is known, and also if B is linear, then our result for λ₂ can be written in the more familiar form

    λ₂ = Σ_j (e^(j)† · Be)(e† · Be^(j)) / ((a − a^(j))(e^(j)† · e^(j))(e† · e))

Multiple roots

Suppose that the eigenvalue a of A is associated with more than one independent, non-degenerate eigenvector, e₁, e₂, …, eₙ. We must now consider perturbing around a general eigenvector in the eigenspace,

    x = Σⱼ αⱼeⱼ + εx₁ + ···
    λ = a + ελ₁ + ···

Substituting into the governing equation and comparing coefficients of εⁿ produces at ε¹

    (A − a)x₁ = λ₁ Σⱼ αⱼeⱼ − B(Σⱼ αⱼeⱼ)

In this case of multiple roots, the left hand side can have no component in the eigenspace. Thus requiring the right hand side to have no component in each of the independent directions eᵢ produces

    λ₁α₁ = e₁† · B(Σⱼ αⱼeⱼ) / (e₁† · e₁)
    ⋮
    λ₁αₙ = eₙ† · B(Σⱼ αⱼeⱼ) / (eₙ† · eₙ)

These equations are a new eigenvalue problem, in the eigenspace of A, to find the eigenvalue λ₁ and eigenvector α. If B is linear, there will exist n eigenvalues and, except in some degenerate cases, n associated independent eigenvectors. If B is nonlinear, it is possible that no eigensolutions exist. In such cases the original eigenproblem will have no perturbed eigensolutions near the eigenspace of the unperturbed problem.
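The first-order formula λ₁ = e†·B(e)/(e†·e) is easy to check on a small example (a sketch with a hypothetical 2×2 matrix of my own choosing, not one from the text; B is taken linear, and the exact eigenvalue of A + εB comes from the quadratic formula):

```python
import math

# non-symmetric A with eigenvalue a = 2, eigenvector e = (1, 0);
# the transpose of A has eigenvector edag = (1, 1) for the same eigenvalue
A = [[2.0, 1.0], [0.0, 1.0]]
e = (1.0, 0.0)
edag = (1.0, 1.0)
B = [[1.0, 2.0], [3.0, 4.0]]

Be = (B[0][0] * e[0] + B[0][1] * e[1], B[1][0] * e[0] + B[1][1] * e[1])
lam1 = (edag[0] * Be[0] + edag[1] * Be[1]) / (edag[0] * e[0] + edag[1] * e[1])

# exact eigenvalue of A + eps*B for a small eps, via the 2x2 quadratic formula
eps = 1e-6
M = [[A[i][j] + eps * B[i][j] for j in range(2)] for i in range(2)]
tr = M[0][0] + M[1][1]
det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
lam_exact = 0.5 * (tr + math.sqrt(tr * tr - 4 * det))
print(lam1, (lam_exact - 2.0) / eps)
```

The finite-difference slope (λ(ε) − a)/ε agrees with λ₁ to the expected O(ε); note that the symmetric formula e·Be/(e·e) would give the wrong answer here, which is why the adjoint eigenvector e† is needed.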
Degenerate roots

Degenerate multiple roots can lead to an expansion in non-integral powers of ε. Consider the n-degenerate eigensolution in the Jordan Normal Form

    Ae₁ = ae₁
    Ae₂ = ae₂ + c₂e₁
    ⋮
    Aeₙ = aeₙ + cₙeₙ₋₁

Then if the perturbation εB(eₙ) has a component in the direction e₁, say εB₁, an expansion is needed in powers of ε^(1/n), i.e.

    x(ε) = eₙ + ε^(1/n)x₁eₙ₋₁ + ε^(2/n)x₂eₙ₋₂ + ··· + ε^((n−1)/n)xₙ₋₁e₁ + ···
    λ(ε) = a + ε^(1/n)λ₁ + ···

with solution

    λ₁ = (c₂c₃ ··· cₙB₁)^(1/n)

If the components of εB(eₙ) vanish in the directions of eₖ₊₁, eₖ₊₂, …, eₙ, then an expansion is needed in powers of ε^(1/k).

Exercise 1.7. Find the second order perturbations of the eigenvalues of the matrix

    ( E₁  0 )     ( 0  w )
    ( 0   E₂ )  + ( w  0 )

for small w and for large w. Consider to first order the 3 × 3 version of this problem.

Exercise 1.8. Find the first order perturbations of the eigenvalues λ of the differential equation

    y″ + λy + εy³ = 0   in 0 < x < 1

2 Asymptotic approximations

2.1 Convergence and asymptoticness

The sum Σ fₙ(z) is said to converge at a fixed value of z if, given an arbitrary ε > 0, it is possible to find a number N₀(z, ε) such that

    |f_M(z) + f_{M+1}(z) + ··· + f_N(z)| < ε   for all M, N > N₀

Thus an expansion converges if its terms eventually decay sufficiently rapidly. This property of convergence is less useful in practice than one is usually led to believe. Consider the error function

    Erf(z) = (2/√π) ∫_0^z e^(−t²) dt

Now e^(−t²) is analytic in the entire complex plane, i.e. it can be expanded in a Taylor series Σ (−t²)ⁿ/n! which converges to the correct value with an infinite radius of convergence. Hence one can integrate term by term to form a series for Erf which also converges with an infinite radius of convergence:

    Erf(z) = (2/√π) Σₙ (−1)ⁿ z^(2n+1) / ((2n+1) n!)
           = (2/√π) (z − z³/3 + z⁵/10 − z⁷/42 + z⁹/216 − z¹¹/1320 + ···)

Just eight terms of this series will give an accuracy of 10⁻⁵ up to z = 1. As z increases, progressively more terms are needed to maintain this accuracy, e.g. 16 terms to z = 2, 31 terms to z = 3 and 75 terms to z = 5. As well as requiring many terms, some of the intermediate terms are very large when z is large.
Thus a computer working with a round-off error of 10^{-7} cannot give an answer correct to 10^{-4} at z = 3, because the largest term is about 214, and at z = 5 a largest term of 6.6 × 10^8 leads the computer to 'converge' on an erroneous answer of several hundred. A converging expansion with the terms eventually decaying is thus seen to be of limited practical value when some of the truncated sums are wildly different from the converged limit.

An alternative expression for Erf at large z can be obtained from

    Erf(z) = 1 − (2/√π) ∫_z^∞ e^{−t²} dt

Integrating by parts

    ∫_z^∞ e^{−t²} dt = ∫_z^∞ d(−e^{−t²}) / (2t) = e^{−z²}/(2z) − ∫_z^∞ (e^{−t²}/(2t²)) dt

and again three more times gives

    ∫_z^∞ e^{−t²} dt = (e^{−z²}/(2z)) (1 − 1/(2z²) + 1·3/(2z²)² − 1·3·5/(2z²)³) + R

where the remainder can be bounded by

    |R| = ∫_z^∞ (1·3·5·7/2⁴) e^{−t²} t^{−8} dt ≤ (1·3·5·7/2⁵) e^{−z²} z^{−9}

using t^{−9} < z^{−9} in the range t > z. Thus we have proven that as z → ∞

    Erf(z) = 1 − (e^{−z²}/(z√π)) (1 − 1/(2z²) + 1·3/(2z²)² − 1·3·5/(2z²)³ + O(z^{−8}))

This alternative expansion for the error function diverges. The truncated series, however, is useful: at z = 2.5 three terms give an accuracy of 10^{−5}, while above z = 3 only two terms are necessary. Our alternative expansion has the important property that the leading term is roughly correct and further terms are corrections of decreasing size. This property is called asymptoticness.

2.2 Definitions

The sum Σ₀^N f_n(ε) is said to be an asymptotic approximation to f(ε) (or alternatively an asymptotic representation of f(ε)) as ε → 0, if for each M ≤ N

    [f(ε) − Σ₀^M f_n(ε)] / f_M(ε) → 0 as ε → 0

i.e. the remainder is smaller than the last term included once ε is sufficiently small.
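The contrast between this divergent series and the convergent Taylor series can be seen in a few lines of Python; a sketch (the truncation points 3 and 40 are chosen here for illustration):

```python
import math

# The integration-by-parts series for Erf is asymptotic but divergent:
# a short truncation is very accurate at moderate z, while taking many
# terms ruins the answer completely.
def erf_asym(z, nterms):
    s, term = 0.0, 1.0
    for n in range(nterms):
        s += term
        term *= -(2 * n + 1) / (2 * z * z)   # next factor (2n+1)/(2z^2)
    return 1.0 - math.exp(-z * z) / (z * math.sqrt(math.pi)) * s

z = 2.5
print(abs(erf_asym(z, 3) - math.erf(z)))    # three terms: very accurate
print(abs(erf_asym(z, 40) - math.erf(z)))   # forty terms: divergence has set in
```

With only three terms the error at z = 2.5 is already below 10⁻⁵; with forty terms the growing factorials take over and the 'approximation' is wildly wrong.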
If the sum has this asymptotic property, one writes He ~ VF ase 70 => 0 ase—0 2.8 Uniqueness and manipulation 21 The words asymptotic erpansion are sometimes reserved for the case where one can obtain, at least in principle, an indefinite number of terms (N = oo), although it should be emphasised that worrying about the higher terms as N — oo runs counter to the philosophy of asymp- toticness in which the first term is virtually correct as « — 0. Often the terms f,(€) are powers of € multiplied by some coefficient, ie. f~ x a,€" which is called an asymptotic power series. We have seen examples in chapter 1, however, which require fractional powers of ¢ and also other functions of ¢€ like In(1/e) and In In(1/e). In these cases the asymptotic approximations take the form f ~ py a,,6,(€) using an asymptotic sequence 69,6,,... which has the property 6, ,,/6, — 0 as e€ — 0. Note that in order to keep the 6,(e€) single-valued € has to be restricted to some sector of the complex e-plane. When working with ¢ real and positive, the usual condition, a use- ful class of functions to use for the asymptotic sequence are Hardy’s logarithmico—erponential functions, which are those functions obtained by a finite number of applications of the operators +,—, x,+,exp and log with the restriction that all the intermediate quantities are real. This class of functions has the important property that any two members can be ordered, i.e. it is possible to decide of two members f and g whether one is smaller than the other, f = o(g) or g = o(f), or whether they are of the same magnitude f = ord(g). Exercise 2.1. Why are cos(1/e) and sin(1/e) not logarithmico-expon- ential functions, and why can they not be ordered? 2.3. 
Uniqueness and manipulation If a function possesses an asymptotic approximation in terms of an asymptotic sequence, then that approximation is unique for that par- ticular sequence: given the existence of an approximation f(e) ~ De a,6,(€) in terms of a given sequence, the coefficients can be evaluated inductively from _ 4 fe) - De a6, (6) starting at ay and proceeding to ay. Note that the uniqueness is for one given asymptotic sequence. Thus a single function can have many asymptotic approximations, each in 22 2 Asymptotic approximations terms of a different asymptotic sequence. For example tanec) ~ e+fe+ 206 ~ sine + }(sine)® + 3(sine)® 5 ~ e cosh(1/2e) + 3 ( = ase— 0 oo ¢” exp(€) + exp(—l/e) ~ » — ase \,0 Here « \, 0 means € tends down to zero through positive values. In this example the two functions have an infinite number of terms the same. Two functions sharing the same asymptotic power series, as above, can only differ by a quantity which is not analytic, because two analytic functions with the same power series are identical. Asymptotic approximations can be naively added (subtracted, multi- plied or divided) resulting in the correct asymptotic expression for the sum (difference, product or quotient), perhaps based on an enlarged asymptotic sequence. Note the need to be able to order the new asymp- totic sequence. If appropriate to the limiting processes, one asymptotic approximation can be substituted into another, although care is needed if an erroneous result is to be avoided. Typically an error occurs when an insufficiently accurate approximation is used in an exponential. For example consider f(z) =exp(z?) ands z(e) =e 1 +e and the appropriate limits « + 0 and z — oo. Then without error f(2(e)) = exp(e7? +246?) ~ exp(e™?) e? (1 +e? + de4 + ---) The poorer, but asymptotic, approximation z ~ e~! however produces the erroneous result exp(e~*) for the leading term for f, which is out by the factor e?. 
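The exponential pitfall just described can be checked directly; a minimal sketch (ε = 0.01 is an arbitrary choice, and the computation is arranged in log space to avoid overflowing exp(z²)):

```python
import math

# With z = 1/eps + eps, using only the leading term z ~ 1/eps in the
# exponent of f(z) = exp(z^2) loses the O(1) part of the exponent,
# and hence an O(1) factor in f -- here the factor e^2.
eps = 0.01
z = 1 / eps + eps
poor_exponent = (1 / eps) ** 2             # z^2 kept only to leading order
factor = math.exp(z * z - poor_exponent)   # ratio exp(z^2) / exp(eps^{-2})
print(factor, math.e ** 2)                 # the two agree to O(eps^2)
```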
In order to avoid this error, the exponent must be obtained correct to O(1) and not just to leading order. Finally one must remember that cos and sin are exponentials as far as this potential difficulty is concerned. 2.4 Why asymptotic? 23 Asymptotic approximations can be integrated term by term with re- spect to € resulting in the correct asymptotic expression for the integral. Asymptotic approximations cannot however be differentiated in general with safety. The trouble comes in differentiating terms like € cos(1/e) which has a differential with respect to ¢ which is not the expected size O(1) but the much larger O(e—!). Such troublesome terms are not ana- lytic. If f(e) is analytic in some sector of the complex e-plane, one can differentiate term by term in that sector. 2.4 Why asymptotic? In §1.5 an iterative process was shown to produce a convergent expansion when a certain derivative was less than unity. We now examine the different conditions necessary to make an expansion asymptotic. Consider the iterative process based on the equation for f(e) f = g+cAlf] with g(e) a given function and A[f] a given operator which acts on the function f(e). Iterating from fy = g yields f= gteAteA- A+ (LAA: A" +(A-A)- A) +> in which A and its first and second derivatives with respect to f, A’ and A", are evaluated at f = g(e). The iteration can be continued as far as the necessary derivatives of A exist. After N iterations there will be a remainder N Ry(-) = fO-So ef which will be governed by a problem of the form Ry + €By[Ry] = eN*hy in which the operator By and the function hy can be related to the original A and g. To demonstrate that the expansion is asymptotic one needs to prove that, when « is sufficiently small, it is possible to invert the left hand side operator 1+¢By for the particular right hand side eNt+ip Nv» 80 that Ry exists, and further one must show that the resulting Ry is smaller than the last term included, ie. Ry = 0(e%). 
This condition for asymptoticness that 1+ ¢B, can be inverted as € — 0 for the particular hy should be contrasted with the requirement 24 2 Asymptotic approximations for convergence that |eA’| < 1 for A’ evaluated near g, i.e. lef-A’] < |f| for all f Thus convergence can be lost if the operator A’ is unbounded. An example of an operator which is invertible but unbounded is the differential operator. Consider the differential equation for f (x) f = go} ae f' This equation is of the advertised form f = g + €A[f] as x > oo with =},g=1 and A=z- 4. Iterating one obtains the expansion f~ cota? +2773 42.3274 + 2.3.4275 The problem for the remainder is Ry+Ry = (N+1)!27%-? which can be solved oat dt ae Ry = ke? + (N+ yf _ Thus |Ry| < |kle"? + (N +1)!a-N-?, ie. Ry = 0(a7%-}) as tz on. This proves that the expansion is asymptotic. The factorial in the nu- merator, however, means that the expansion diverges. Exercise 2.2. Find the behaviour as z — oo of f(x) which satisfies if - To find the remainder explicitly (and hence prove asymptoticness), it is useful to know that f is related to the Error function by Erf(z) = 1—- z (; +f) Numerical use of diverging series We have now seen some examples of expansions which are asymptotic and which fail to converge. As explained earlier in this section they are not uncommon in the solution of differential equations. If an expansion is asymptotic, then the leading term is virtually correct once ¢ is sufficiently small. A practical problem arises therefore if the leading term is not sufficiently accurate or if the function is to be evaluated at a value of € which is not sufficiently small. Adding a few extra terms can help, but there is a limit to the number of terms which can be used if the expansion 2.5 Parametric expansions 25 eventually diverges as N — oo at a fixed €. It is obviously not sensible to include extra terms once they stop decreasing in magnitude. See chapter 8 on the improvement of the convergence of series. 
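The differential-equation example above can be checked numerically (here I read the garbled equation as f + f′ = 1/x², which is consistent with the expansion f ∼ x⁻² + 2x⁻³ + 2·3x⁻⁴ + ··· and the remainder equation quoted). The error after N terms is comparable to the first omitted term, even though the series diverges. A sketch:

```python
import math

# The decaying solution of f + f' = 1/x^2 (with lower limit 1, my choice)
# is f(x) = int_0^{x-1} e^{-u} (x-u)^{-2} du, whose divergent expansion
# f ~ sum_{n>=1} n! x^{-n-1} is asymptotic: the truncation error is of
# the size of the first neglected term.
def f_exact(x, m=4000):
    # Simpson's rule on [0, x-1]; m must be even
    a, b = 0.0, x - 1.0
    h = (b - a) / m
    g = lambda u: math.exp(-u) / (x - u) ** 2
    s = g(a) + g(b)
    for i in range(1, m):
        s += (4 if i % 2 else 2) * g(a + i * h)
    return s * h / 3.0

x, N = 20.0, 3
partial = sum(math.factorial(n) * x ** (-n - 1) for n in range(1, N + 1))
next_term = math.factorial(N + 1) * x ** (-N - 2)
err = abs(f_exact(x) - partial)
print(err, next_term)   # err is of the size of the first omitted term
```

The choice of lower limit only changes f by an exponentially small multiple of e⁻ˣ, which at x = 20 is far below the algebraic terms being tracked.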
2.5 Parametric expansions So far we have only considered functions of a single variable and their approximation. Such problems occur in solving partial differential equa- tions when finding the far field behaviour, and there the approximations are called co-ordinate expansions. For much of this text, however, we will be concerned with functions of two (or more) variables f(z,¢) and their asymptotic behaviour when one of the variables, ¢, is small. Typically f(z, €) will satisfy a differen- tial equation with respect to x in which € occurs as a non-dimensional number or parameter — hence the name parametric expansion. For functions of two variables we make the obvious generalisation of the definition in §2.2 by allowing the coefficients a,, to become functions of the non-limiting variable z, i.e. fae) ~ Sr a,(2)6,(-) ase 0 If the approximation is asymptotic as « — 0 for each fixed x, then it is called variously a Poincaré, classical or straightforward asymptotic approximation. One can ask if the above pointwise asymptoticness is uniform in z. It is not uncommon, however, for there to be an awkward double limit such as x — 0 and € — 0 which requires as x decreases to 0 ever tighter restrictions to be placed on e for it to be sufficiently small, e.g. € < x. In such problems a more general form of approximation is necessary: (oa ees 0 for example with a,(x,¢) = b,(z/e). While the uniqueness theorem of §2.3 extends immediately to asymptotic approximations of the Poincaré type, there is no uniqueness for the more general approximations to functions of two or more variables. 26 2 Asymptotic approzimations 2.6 Stokes phenomenon in the complex plane An analytic function f(e) has a power series which converges. Thus an asymptotic expansion which diverges must involve some non-analyticity, e.g. an essential singularity, and so must be restricted to a sector of the complex plane. 
There is therefore the interesting possibility of a single function possessing several asymptotic expansions, each restricted to a different sector of the complex plane. This is called a Stokes phe- nomenon. Returning to the error function, we found in §2.1 that exp(—z?) afr Looking now on the complex plane, the contour for the integral f° exp(—t?) dt can be deformed to any complex z with no change in the result so long as the contour is kept in the sector where exp(—z”) — 0 as z — oo. Thus the above result is applicable to the sector | arg z| < $. A similar expression for the quarter plane about the negative real axis follows from the fact that Erf(z) is an odd function of z: Erf(z) ~ l1- as z — oo with z real exp(—z?) z/n An expression for Erf which is valid in the top and bottom quarter planes, where Erf is very large, can be found by evaluating the original integral from 0 to z using a method given in the next section. The result is Erf(z) ~ -1- as z — oo with 3% < argz < 2 mn 3a exp(— 4 00 with 7 : 7 2/u Tr 0, then Watson’s lemma says that one can integrate term by term to produce the asymptotic result A n | e7*T f(z)dz ~ >> TP (oy +1) as T > 00 0 0 Proof: Given an arbitrary « > 0, from the asymptoticness of the ap- proximation to f(z) it is possible to find a z,(€) such that fae k=0 Moreover the complex argument of z, can be chosen to be that of 1 /T so that zy is in S and Tz, is real and positive. Because f is analytic, the path of the integration can be deformed to go radially from 0 to z) and then towards the right from zy to A, all within S. < e€fz*| for all z in S' with |z| < |zo| 28 8 Integrals From the definition of the gamma function I'(z) z0 Co [ ze T dz —T-*"1T(a, +1) = -{ 2k eT? dz 0 Zo Using |e~7?| < |e~(7—1)0|,|e~*| for z to the right of zp, the last integral can be bounded. Thus Zo n eT? 
f(z) dz— T--1p 1 [ete Ds (a, +1) oe bG 41) oo on Ff Slayer + ele2°0) fem] zo 0 The integral from z to A can also be bounded simply: A [ e~T* f(z) dz x |e“P-00e < Fe7T* with F the bound on |f(z)| along the contour. Finally the exponentially small terms are o(T~°"~!) as T — oo, and € is arbitrarily small, which proves that the approximation to the integral is asymptotic. s Application and explanation In §2.1 we found an asymptotic approximation to the complement of the error function 2 f? as Erfe(z) = =f e* dt zx The substitution t = z +7 puts the integral into the form for Watson’s lemma 2. fe 2 = —=e%* e727 e-T dr va 0 For an alternative derivation, we first note that the integrand drops exponentially in a small region r = ord(1/xr). To concentrate attention on this region we introduce a rescaling as in §1.2, rT = u/s so that Qe-2” tJr Jo Be 22 en te /E dy Erfe(z) = 8.2 Integration by parts 29 Now the significant contributions to the integral come from u = ord(1) when x — oo, so that the second exponential can be expanded 2 2e-= 00 u2 ut us ae ol reece du Evaluating the integral term by term, this is e-* 2 13 1.3.5 ~ ave ( ~ 258 * (aaa)2 ai) The use of the rescaling exposes clearly the importance of the small region, which was effectively hidden in the standard proof of Watson’s lemma. The rescaling also gives an early indication of the magnitude of the correction terms. There is however a lack of rigour in the term-by- term integration, because the range of integration includes u > ord(z) for which the expansion is not asymptotic, in addition to u = ord(1) which gives all the contributions to the integral up to terms of exponen- tial smallness. Exercise 3.1. Use the method of rescaling to obtain an asymptotic approximation to the exponential integral as 1 — oo, co E,(z) = / t"e* dt 3.2 Integration by parts Integrals can sometimes be integrated by parts repeatedly to generate an asymptotic approximation. 
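Watson's lemma is also easy to check on a concrete case; a minimal Python sketch (the integrand 1/(1 + z), with a_k = (−1)^k and α_k = k, is my choice for illustration):

```python
import math

# Watson's lemma for f(z) = 1/(1+z) ~ sum_k (-1)^k z^k predicts
# int_0^inf e^{-Tz}/(1+z) dz ~ sum_k (-1)^k k! / T^{k+1}  as T -> infinity.
def laplace_integral(T, m=200000):
    zmax = 50.0 / T                  # the e^{-50} tail is negligible
    h = zmax / m
    g = lambda z: math.exp(-T * z) / (1.0 + z)
    s = g(0.0) + g(zmax)             # Simpson's rule
    for i in range(1, m):
        s += (4 if i % 2 else 2) * g(i * h)
    return s * h / 3.0

T = 20.0
I = laplace_integral(T)
for N in (1, 2, 4, 6):
    approx = sum((-1) ** k * math.factorial(k) / T ** (k + 1) for k in range(N))
    print(N, abs(I - approx))        # each extra term improves the estimate
```

Each truncation error is roughly the first omitted term k!/T^{k+1}, exactly the asymptotic behaviour the lemma guarantees.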
We have already seen one example in §2.1 with the complement of the error function. The exponential integral in the above exercise can also be integrated by parts. When this method works it has the advantage that it retains an explicit expression for the remainder which can then be bounded to prove the asymptoticness. A further example of the application of integrating by parts is provided by Fourier series. If the function f(x) is periodic with period 27 and has a continuous integrable derivative, then the expression for the Fourier amplitudes can be integrated by parts once: 1 2x fr = = f(x) cos nz dz T Jo : 2x Qn = Sie)einne = f'(z)sinnz dz 0 30 8 Integrals By the periodicity of f the boundary term vanishes, and by the Riemann-Lebesgue lemma the final integral vanishes as n — oo, rather than being the more apparent ord(1) estimate. Hence for a periodic function with an integrable derivative f, = 0(1/n) as N — 00 It is instructive to look behind this result. If f has a derivative then for x in the small interval [z)—2, 29+ =] the function f(z) is roughly equal to the constant value f(z) with a change from this value O(f’ m/n). Mul- tiplying by cosnz and integrating over the small interval, the constant part integrates to zero due to the exact cancellation of the oscillation while the changing part contributes a term O(f’1?/n2) from the small interval. Adding together the n intervals yields f, = O( f'n? /n). An immediate generalisation of the above result is for a periodic func- tion with an integrable k** derivative f,, = o(n-*) as n > oo. 3.3 Steepest descents The various methods of stationary phase, Laplace, saddle points and steepest descents are all basically the same when viewed as contour integrals on the complex plane, and all are closely related to Watson’s lemma. 
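The decay rates predicted by the integration-by-parts argument for Fourier amplitudes are easy to observe numerically; a sketch (the square and triangle waves are example functions chosen here):

```python
import cmath
import math

# The smoother the periodic function, the faster its Fourier coefficients
# decay: ~1/n for a jump discontinuity, ~1/n^2 once f is continuous with
# an integrable derivative.
def coeff_mag(f, n, m=4096):
    # equispaced sum; spectrally accurate for smooth periodic integrands
    h = 2 * math.pi / m
    c = sum(f(i * h) * cmath.exp(-1j * n * i * h) for i in range(m))
    return abs(c * h / (2 * math.pi))

square = lambda x: 1.0 if x < math.pi else -1.0   # jump at x = pi: |c_n| ~ 1/n
triangle = lambda x: abs(x - math.pi)             # continuous, kink only: |c_n| ~ 1/n^2
for n in (11, 33):
    print(n, coeff_mag(square, n), coeff_mag(triangle, n))
```

Tripling n reduces the square-wave coefficient by about 3 and the triangle-wave coefficient by about 9, as the o(1/n) and o(n⁻ᵏ) estimates suggest.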
The problem is to evaluate the asymptotic behaviour of an integral of the form [ ef (4) g(t) dt as real, positive z — 00 c with f and g analytic functions of t and C a given contour on the complex t-plane, most often from infinity in one direction to infinity in another direction. Global considerations We start by attempting to estimate the order of magnitude of the inte- gral. When z is real, positive and large the integrand is largest where the real part of f is largest along C. It is therefore useful to contour the function Re(f) on the complex t-plane to find where it is largest. Now one of the Cauchy-Riemann conditions on the analytic f is V? (Re(f)) = 0 3.3 Steepest descents 31 Hence Re(f) can have no maxima or minima (except at singular points or branch points where f is not analytic), and so the gradient of Re(f) can only vanish at saddle points. We are thus led to the canonical picture of figure 3.1 for Re(f). The integration contour C must start and end where Re(f) < 0 in order for an infinite integral to converge. Let zRe(f) attain its maximum on C at ty and sustain its maximum over a range At, about tj. (At this stage it is not necessary to be precise about the definition of the width of the maximum Ato, although it can readily be found as [—zRe(f’(tg))]"?.) The order of magnitude estimate for the integral [ eM Og(t)at = O(c g(t,)Atp) proves to be wildly wrong. The trouble comes from the imaginary part of f, whose influence has been ignored so far. Now when z is large, zIm(f(t)) varies rapidly within Aty of ty, and so the integrand oscillates rapidly. A nearly complete cancellation in the integration, as occurred with the Fourier amplitudes in §3.2, nullifies the initial order of mag- nitude estimate. That this first estimate is too high can be seen by deforming the contour ~ see figure 3.1 - from C to C, (which does not change the value of the integral if f and g are analytic) to produce the ee Fig. 3.1 Contours of Re(f) on the t-plane. 
The continuous curves are for Positive values and the dotted curves for negative values. 32 8 Integrals lower estimate O (e*/(9(t, At) which is also in error. The integration contour can be pushed further and further down the ridge, producing progressively lower estimates, until the lowest crossing of the ridge is reached at the saddle point t = t,, with the estimate O (e*f)9(t,)At,) To demonstrate that this estimate is genuine and that the rapid oscilla- tions from the imaginary part no longer cause enormous cancellations, consider the path of the steepest ascent up to the saddle point and the steepest descent away from the saddle. The path is defined to be every- where parallel to VRe(f). By a Cauchy-Riemann condition, the path is therefore perpendicular to VIm/(f), i.e. the imaginary part of f, Im(f), is constant along the path. There is thus strictly no oscillation of the integrand along the path of steepest ascent and descent, and so the order of magnitude estimate is good. In practice it is not necessary to use the path of steepest ascent and descent: any path descending away from the saddle will produce an integral which converges locally to the correct answer. Analytic methods of evaluating the integral are usually independent of the path, while numerically it is often more expensive to find the steepest path in the vicinity of the point where VRe(f) = 0 than it is to integrate down an arbitrary descending path, which will require a slightly longer range to resolve the slower than steepest decay and also require slightly smaller step sizes to resolve the modest oscillation. The final global consideration is to decide which is the highest saddle point through which the integration contour must pass, as it progresses across various ridges from one fixed end point to the other. This highest saddle will dominate the integral up to terms of exponential smallness as z —+ oo. 
Note that there can be higher saddles through which the contour does not pass, these saddles leading to valleys isolated from the fixed end points. Local considerations We now calculate the leading order contribution to the integral from a saddle point at t = t,. At the saddle f’ = 0. We assume initially that 3.3 Steepest descents 33 f"(t,) #0. Then near to the saddle zf(t) = zf(t,) + g2(t—t,)?F"(t,) + belt -—t, 9 f'"(t,) + --- Along any descending path the term z(t —t,)*f"(t,) has a negative real part, and when exponentiated it leads to the integrand dropping by a factor of 10 before |t — t,| > 4|zf”|"7. This suggests a rescaling of the important small region as z — oo, t= 27ir Then at fixed r as z — 00 zf(t) ee) ee ee) g(t) = g(t.) + O(z74) The t-integration across the saddle becomes a line integral for 7 in a direction with Re(r?f”(t,)) < 0. The range of the 7 integration is to some large number like 4|f”(t,)|"?, by which point the integrand has dropped in value by a factor of 10°. We write this large number as ‘oo’, although it corresponds to a small value of t as z — oo. Thence ll ef g(t) dt saddle t, ‘ ; oo’ et(tsdg(t,) (25) (a a o(z-4)) Higher order asymptotic approximations can be obtained by retaining the correction terms in zf(t) and g(t), as will be shown in the examples below. Doubts about the justification for the range of the r-integration can be dispelled by invoking Watson’s lemma. Example : Stirling’s formula Evaluating the integral representation of the factorial function oo co ZS [ Ve 'dt = [ eZ int—t gy 0 0 as real, positive z — oo provides an example of the case of entirely real quantities where the method of steepest descents is usually called Laplace’s method. As a function of t, the integrand is largest where (zInt — t)’ = 0, ie. at ¢ = z. The simple maximum of the integrand 34 8 Integrals has a width (the distance over which the integrand drops by a factor of order unity) [—(zInt — t)"]-1/? = 21/2. 
This is large as z > 00, but is small compared with the distance z to the end point of the integration. With the rescaling t= 2z+2!r the exponent becomes zint—t = zinz—z+zIn(1+ z~tr) —ztr = zinz—z- 47? 4427973 —hz-tyt ye... Breaking the exponential into a product of exponentials, and expanding the exponential of the small terms for r fixed as z > 00 'Q9’ tee ermrcs f et” (1+ (ge7be? ~ gevtet +...) i 2 +3 (g273r9 +.) te )ebar ~ athe Vin (1 + b77') as z — 00 This asymptotic approximation for large z is remarkably accurate, with a relative error of less than 10~° down even to z = 1. Finally note in this example the minor generalisation of the method to saddle points which move in the complex t-plane as z varies. Example : Airy function The first Airy function can be defined by an integral ee tz— 40° Ai(z) = Omi Ee dt with the contour C starting from oo with argt = —3n and ending at oo with argt = 2n. First we find the asymptotic behaviour for z real, positive and large. Figure 3.2 shows the contours of the exponent Re(tz — $t°) on the complex ¢-plane. There are two saddle points at t= +21/?, where tz — 343 = +3z5/?, It is necessary only to go through the lower, left hand saddle in order to go over the ridge separating the fixed end points of the integration. The width of the peak of the integrand at t = —z1/? on C is [—(tz — 343)"]-1/2 = 2-1/22-1/4, With the rescaling t= —2/2 4 2-1/4, 3.3 Steepest descents 35 SO (= Fig. 3.2 Contours of Re(tz — 3°). The continuous curves are for positive values and the dotted curves for negative values. the exponent becomes tz — 508 = 329/24 7? — 12 -3/478 Breaking the exponential into a product of exponentials and expanding the exponential of the small terms for 7 fixed as z — oo 2 3/2 Goo! 22 Ai(z) = . Ont [ en (2 ~ $275/473 4 pz 8/778 + +) 27/4 dr —t00 eW 8”? 5 4-3/2 ~ Figg t/a (1- & ) as z— 00 When z becomes complex the two saddles move in the complex t- plane, as shown in figure 3.3. 
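The Stirling result obtained above by Laplace's method is easy to check numerically; a minimal sketch (the z values are chosen for illustration):

```python
import math

# Stirling's formula with one correction term:
# z! = Gamma(z+1) ~ z^(z+1/2) e^(-z) sqrt(2 pi) (1 + 1/(12 z)).
def stirling(z):
    return z ** (z + 0.5) * math.exp(-z) * math.sqrt(2 * math.pi) * (1 + 1 / (12 * z))

for z in (1.0, 5.0, 10.0):
    exact = math.gamma(z + 1)
    print(z, abs(stirling(z) - exact) / exact)   # relative error ~1e-3 even at z = 1
```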
To go over the ridge between the two fixed end points of the integration, it is necessary just to go through the same ‘left hand’ saddle up to argz = 7. Thus the above asymptotic approximation is valid in | arg z| < 7. Note that this saddle is the larger in the range in 00 36 8 Integrals arg (2) = 71/4 XS ( arg (z) = 37/4 Y Fig. 3.3 The changing contour C for Ai when z is complex. 8.4 Non-local contributions 37 Beyond arg z = 77 it is necessary to switch to the second saddle, which is higher in the range m7 < argz < Sn. The last asymptotic approximation for arg z = 7 is in fact asymptotic in the sector |argz — | < 2m. Thus we have an example of the Stokes phenomenon with different asymptotic approximations applicable to different sectors of the complex z-plane; in this case the sectors overlap. Exercise 3.2. Find the asymptotic behaviour of K (z) = rf evt—zcosht gy v 2 eS for real and positive v and z with z = ord(1) and v — oo. Exercise 3.3. Find the asymptotic behaviour of 1 coin Ppa oa vzsinht—vt gy J, (vz) a i e for real vy and z as v — oo with first 0 < z < 1 and second z = 1. (The case of z = 1 has a cubic saddle where three ridges meet and f” = 0.) Exercise 3.4. Find the asymptotic behaviour of 1 (t? — 1)" Piz) = gnt+l qj [ (t — z+} dt with the contour C enclosing t = z, for real z and n with n — oo, first for 0 < z < 1 and then for z > 1. 3.4 Non-local contributions The integrals so far have had the form of Watson’s lemma in which all the terms in the asymptotic expansion up to those of exponential smallness come from a small region. This need not be the case: at leading order, as in the first example below, or in some higher order correction, as in the second example below, the entire range of the integration can contribute significantly. Terms crucially involving the whole range of integration will be called global contributions in contradistinction to local contributions from a small region. 
Example 1

Consider the simple integral

    ∫₀¹ (ε + x)^{−1/2} dx with exact result 2((1 + ε)^{1/2} − ε^{1/2})

We estimate the contribution to the integral as the magnitude of the function multiplied by the width of the region, for the region near x = 0 where x = ord(ε) and for the majority of the range of integration outside this small region.

    If x = ord(ε), (ε + x)^{−1/2} = ord(ε^{−1/2}) with ∫ = ord(ε^{1/2})
    If x = ord(1), (ε + x)^{−1/2} = ord(1) with ∫ = ord(1)

From these estimates we can conclude that the leading order term is a global contribution, for which the integrand can be approximated by x^{−1/2} and the range of integration is to 1 from a small value outside x = ord(ε) which we write as '0'. Thus

    ∫₀¹ (ε + x)^{−1/2} dx ∼ ∫_{'0'}¹ x^{−1/2} dx = 2

To obtain a correction to this leading approximation, we subtract off from the original integrand a function whose integral is known exactly and which is equal (or at least asymptotic) to the leading order term. Thus with no approximation

    ∫₀¹ (ε + x)^{−1/2} dx = 2 + ∫₀¹ ((ε + x)^{−1/2} − x^{−1/2}) dx

The contributions to the new integral are now estimated.

    If x = ord(ε), integrand = ord(ε^{−1/2}) with ∫ = ord(ε^{1/2})
    If x = ord(1), integrand = ord(ε) with ∫ = ord(ε)

Hence the major contribution comes from the small region near x = 0. To examine the small region it is useful to introduce a rescaling x = εξ, so that ξ = ord(1) as ε → 0:

    ∫₀¹ ((ε + x)^{−1/2} − x^{−1/2}) dx ∼ ε^{1/2} ∫₀^{'∞'} ((1 + ξ)^{−1/2} − ξ^{−1/2}) dξ = −2ε^{1/2}

It is difficult to proceed to higher terms with further subtractions.

Example 2

Consider the integral

    ∫₀^{π/4} dθ / (ε² + sin²θ) with exact result (1/(ε√(1+ε²))) tan^{−1}(√(1+ε²)/ε)

We estimate the contributions as in the previous example.

    If θ = ord(ε), integrand = ord(ε^{−2})
with f = ord(e~*) If 9 =ord(1), integrand = ord(1) with f = ord(1) Hence the leading order term is the local contribution, which can be evaluated using the rescaling 6 = ew (so that the leading contribution comes from u = ord(1) as € — 0) ie do tr edu T 0 €2 +s8in?6 o «CF tur tO The next term is a global contribution. This can be seen by making a subtraction with ™/4 6p 1 [ oo (z) Alternatively one can note that the correction term to the integrand in the small region where 6 = ord(e), 1 1 e2 + sin? 6 e? + eu? — detut +... - ift_,_eut @\it¢w 3i+w) _diverges when integrated from u = 0 to u = ‘oo’, with of course ‘oo’ = m/4e. The major part of this ‘divergence’ comes from outside u = ord(1), which indicates the importance of the whole range of integration to the correction term. While the method of subtracting off the leading behaviour can be used to obtain one correction term, it becomes cumbersome at higher orders and the alternative method below is easier and more systematic. The idea of subtracting off the singular behaviour of an integral in a small region is, however, frequently used in the numerical evaluation of such integrals, with the subtraction being separately evaluated, usually analytically. Summing a split range of integration An alternative method for evaluating integrals with both local and global contributions is to split the range of integration into two at some point 6 = 6 which is large compared with the small region but small compared with the whole interval, i. ¢ < 6 < 1. Asymptotic approximations to the integral over the two ranges are then found separately, using an 40 8 Integrals appropriate rescaling for the small region. When the two parts of the integral are finally combined the result should be independent of the artificially introduced 6, which provides a useful way of detecting slips in algebra. 
In order to keep a check on the different errors in the approximations to the integrals over the two ranges, the errors being differently small in the small 6 and smaller e, it is useful to tie 6 to € as « — 0, in a way consistent with e < 6 <1. This is sometimes called gearing 6 to «. In this example we consider 6 = ord(e!/?). Note that it is not necessary to set 6 = e!/2 precisely, and that there are advantages to the self-checking feature in not being so precise. To approximate the integral over the small range from 0 to 6 we use the earlier rescaling 9 = eu and expand sin@ for small 0 as@ <6 <1. 6 dé 1 é/e 1 ut - ») @+sno € y2 ’ d [ e2 + sin? 6 =f (seat aaa 6 e“u’) } du The substitution u = tan enables one to evaluate the integral exactly 1 OO eo ee ce ore)| = = [tan ny . 5 tan A ee + O(€6") We now expand all the terms for large 6/e, using tan7! 6/e ~ 7/2—€/6+ e3/363. Collecting together terms of a similar order, with 6 = ord(e!/?), au e 6\ me 3/2 = 2-. <+2)-™+0 2e 5 +0+(s5+5) g Ole”) These terms decrease in powers of e!/? from e~! to 3/2, To approximate the integral over the remainder of the range from 6 to /4, we expand (? + sin? 6)~! for small € as 0 > 6 > e. x /4 dé n/4 1 «2 4 ) a ee fo ))\ 20 [ e2 + sin? 6 [ (= 6 sin*@ (58 ) The integral can be evaluated exactly 4 7 cot d—1— et (eee 4 20086 3) +9 (5) 3sin°6 | 3sind 3 rs We now expand all the terms for small 6. Collecting together terms of a similar magnitude, with 6 = ord(e!/), we have 1 é? 6 3/2 = 5-1~ (stg) t9<+ 0% ) These terms decrease in powers of e!/? from e~1/? to €9/?. 3.4 Non-local contributions 41 Bringing together the approximations for the two parts. of the range of the integration, we find that the terms involving 6 cancel and so n/4 dé fs T sooo = er 1 - ee + (68? [ e2 + sin? 6 2€ ri (e") Looking back, we can see that the leading order local contribution is followed by a global first contribution and then a local second correction. 
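Both worked examples have closed-form values, so the asymptotic results above can be verified directly; a minimal Python sketch (my arrangement of the checks):

```python
import math

# Example 1: int_0^1 (eps+x)^(-1/2) dx = 2 - 2 sqrt(eps) + O(eps)
def example1_error(eps):
    exact = 2 * (math.sqrt(1 + eps) - math.sqrt(eps))
    return abs(exact - (2 - 2 * math.sqrt(eps)))

# Example 2: int_0^{pi/4} dtheta/(eps^2 + sin^2 theta) = pi/(2 eps) - 1 + O(eps)
def example2_error(eps):
    exact = math.atan(math.sqrt(1 + eps ** 2) / eps) / (eps * math.sqrt(1 + eps ** 2))
    return abs(exact - (math.pi / (2 * eps) - 1))

for eps in (1e-2, 1e-4):
    print(eps, example1_error(eps), example2_error(eps))   # both errors are O(eps)
```

In both cases the discrepancy shrinks in proportion to ε, consistent with the error estimates found above.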
Logarithms

There is a curious intermediate case in which the dominant contribution is neither truly local nor truly global. Consider evaluating the integral
$$\int_0^1 f(x;\epsilon)\,dx$$
with a special integrand which has a power-law dependence $x^{-\alpha}$ in $\epsilon \ll x \ll 1$, and which maintains this order of magnitude at the ends of this range, i.e. $f = O(\epsilon^{-\alpha})$ in $x = \operatorname{ord}(\epsilon)$ and $f = O(1)$ in $x = \operatorname{ord}(1)$. If $\alpha < 1$, the integral is dominated by $x = \operatorname{ord}(1)$, i.e. a global contribution. If $\alpha > 1$, the integral is dominated by the small region $x = \operatorname{ord}(\epsilon)$, i.e. a local contribution, e.g.
$$\int_0^1 \frac{dx}{(\epsilon+x)^{3/2}(1+x)} \sim \int_0^\infty \frac{dx}{(\epsilon+x)^{3/2}} = \frac{2}{\epsilon^{1/2}}.$$
If $\alpha = 1$, neither $x = \operatorname{ord}(\epsilon)$ nor $x = \operatorname{ord}(1)$ wins, but instead the dominant contribution comes from the large gap in the orders of magnitude between $x = \operatorname{ord}(\epsilon)$ and $\operatorname{ord}(1)$, with a value $\ln(1/\epsilon)$. The two ends contribute slightly smaller $O(1)$ corrections, e.g.
$$\int_0^1 \frac{dx}{(\epsilon+x)(1+x)} \sim \ln\frac{1}{\epsilon}.$$
(The exact answer is $\ln[(1+\epsilon)/2\epsilon]/(1-\epsilon)$.) In this case the leading-order contribution requires little effort to evaluate. The correction terms from the two ends, however, are only $O(1/\ln(1/\epsilon))$ smaller, and so they usually have to be evaluated, unless $\epsilon$ is extremely small.

The above results for power-law $x^{-\alpha}$ integrands in the range $\epsilon \ll x \ll 1$ hold equally for the slightly more general integrands of the form $x^{-\alpha}(\ln x)^\beta$, with the dominant contribution depending only on whether $\alpha >$ or $=$ or $< 1$.

An example: slender-body theory

In that problem the potential $\phi$ on the surface $r = \epsilon R(z)$ of a slender body is generated by a line distribution of poles of strength $q(z';\epsilon)$ on $-1 < z' < 1$. For $\epsilon \ll |z'-z|$ we may approximate the distance to a point on the surface, $[\epsilon^2 R^2(z) + (z-z')^2]^{1/2}$, by the axial separation $|z'-z|$. Hence
$$\frac{q(z';\epsilon)}{\sqrt{\epsilon^2 R^2(z) + (z-z')^2}} \approx \frac{q(z;\epsilon)}{|z'-z|} \quad \text{for } \epsilon \ll |z'-z| \ll 1.$$
This integrand is an example of the intermediate case $\alpha = 1$ of the previous section. So long as $z$ is not within $\operatorname{ord}(\epsilon)$ of the ends $z = \pm 1$, there will be two logarithms, one from the $z'$-integration on each side of $z$, i.e.
$$\phi(\epsilon R(z), z;\epsilon) \sim 2 q(z;\epsilon) \ln(1/\epsilon) + O(q).$$
The $O(q)$ correction comes from both $|z'-z| = \operatorname{ord}(1)$ and $\operatorname{ord}(\epsilon)$. Imposing the equipotential condition on the surface, $\phi = 1$, we obtain the leading-order approximation to the pole strengths
$$q(z;\epsilon) \approx \frac{1}{2\ln(1/\epsilon)}.$$
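To see numerically how weak a logarithmic separation is, one can compare the $\alpha = 1$ model integral with its leading term $\ln(1/\epsilon)$: the $O(1)$ end corrections remain noticeable even for very small $\epsilon$. (A sketch, not from the text; the closed form below comes from partial fractions and is our own check.)

```python
import math

def exact(eps):
    # Partial fractions: 1/((eps+x)(1+x)) = [1/(eps+x) - 1/(1+x)] / (1-eps),
    # integrated over 0 <= x <= 1
    return math.log((1 + eps) / (2 * eps)) / (1 - eps)

for eps in (1e-2, 1e-4, 1e-8):
    lead = math.log(1 / eps)
    # absolute error tends to the O(1) constant ln 2;
    # relative error decays only like 1/ln(1/eps)
    print(f"eps={eps:g}  exact={exact(eps):.4f}  ln(1/eps)={lead:.4f}  "
          f"rel err={abs(exact(eps) - lead) / exact(eps):.3f}")
```

Even at $\epsilon = 10^{-8}$ the one-term relative error is still a few per cent, which is why the $O(1)$ end corrections usually have to be computed.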
To proceed to higher approximations, we pose an expansion for $q$ in powers of $[\ln(1/\epsilon)]^{-1}$, starting from the known leading order:
$$q(z;\epsilon) \sim \frac{q_1(z)}{\ln(1/\epsilon)} + \frac{q_2(z)}{[\ln(1/\epsilon)]^2} + \frac{q_3(z)}{[\ln(1/\epsilon)]^3} + \cdots$$
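Because the expansion parameter here is $[\ln(1/\epsilon)]^{-1}$ rather than $\epsilon$ itself, successive terms shrink very slowly. The short sketch below (an illustration, not from the text) tabulates the first few powers for some small $\epsilon$, showing why several terms of such an expansion — or an extremely small $\epsilon$ — are needed for decent accuracy.

```python
import math

for eps in (1e-3, 1e-6, 1e-12):
    p = 1 / math.log(1 / eps)   # the expansion parameter [ln(1/eps)]^(-1)
    print(f"eps={eps:g}: 1/ln(1/eps)={p:.3f}  squared={p*p:.4f}  cubed={p**3:.5f}")
# even at eps = 1e-12 the parameter is ~0.036, so the 'small' corrections
# decrease far more slowly than any power of eps
```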
