Numerical Methods For Partial Differential Equations

Contents

1  Background Mathematics
   1.1  Introduction
   1.2  Vector and Matrix Norms
   1.3  Gerschgorin's Theorems
   1.4  Iterative Solution of Linear Algebraic Equations
   1.5  Further Results on Eigenvalues and Eigenvectors
   1.6  Classification of Second Order Partial Differential Equations

2  Finite Differences and Parabolic Equations
   2.1  Finite Difference Approximations to Derivatives
   2.2  Parabolic Equations
   2.3  Local Truncation Error
   2.4  Consistency
   2.5  Convergence
   2.6  Stability
   2.7  The Crank-Nicolson Implicit Method
   2.8  Parabolic Equations in Cylindrical and Spherical Polar Coordinates

3  Hyperbolic Equations and Characteristics
   3.1  First Order Quasi-linear Equations
   3.2  Lax-Wendroff and Wendroff Methods
   3.3  Second Order Quasi-linear Hyperbolic Equations
   3.4  Rectangular Nets and Finite Difference Methods for Second Order Hyperbolic Equations

4  Elliptic Equations
   4.1  Laplace's Equation
   4.2  Curved Boundaries
   4.3  Solution of Sparse Systems of Linear Equations

5  Finite Element Method for Ordinary Differential Equations
   5.1  Introduction
   5.2  The Collocation Method
   5.3  The Least Squares Method
   5.4  The Galerkin Method
   5.5  Symmetric Variational Formulation
   5.6  Finite Element Method
   5.7  Some Worked Examples

6  Finite Elements for Partial Differential Equations
   6.1  Introduction
   6.2  Variational Methods
   6.3  Some Specific Elements
   6.4  Assembly of the Elements
   6.5  Worked Example
   6.6  A General Variational Principle
   6.7  Assembly and Solution
   6.8  Solution of the Worked Example
   6.9  Further Interpolation Functions
   6.10 Quadrature Methods and Storage Considerations
   6.11 Boundary Element Method

A  Solutions to Exercises
References and Further Reading
Index

1 Background Mathematics

1.1 Introduction

The companion volume of this book, 'Analytic Methods for Partial Differential Equations', is concerned with the solution of partial differential equations using classical methods which result in analytic solutions. These equations arise when almost any physical situation is modelled, ranging from fluid mechanics problems, through electromagnetic problems, to models of the economy. Some specific and fundamental problems were highlighted in the earlier volume, namely the three major classes of linear second order partial differential equations. The heat equation, the wave equation and Laplace's equation form a basis for study from a numerical point of view for the same reason as they did in the analytic case: the three equations are the canonical forms to which any quasi-linear second order equation may be reduced using the characteristic transformation.

The history of the numerical solution of partial differential equations is much more recent than the analytic approaches, and the development of the numerical approach has been heavily influenced by the advent of high speed computing machines. This progress is still being seen. In the pre-computer days, the pressures of war were instrumental in forcing hand-worked numerical solutions to problems such as blast waves to be attempted.
Large numbers of electro-mechanical machines were used, with the 'programmer' controlling the machine operators. Great ingenuity was required to allow checks to be made on human error in the process. The relaxation method was one of the results of these efforts.

Once reliable and fast electronic means were available, the solution of more and more complex partial differential equations became feasible. The earliest method involved discretising the partial derivatives and hence converting the partial differential equation into a difference equation. This could either be solved in a step-by-step manner, as with the wave equation, or required the solution of a large set of sparse linear algebraic equations, as in the case of Laplace's equation. Hence speed was not the only requirement of the computing machine; storage was also crucial. At first, matrix blocks were moved in and out of backing store to be processed in the limited high speed store available. Today, huge storage requirements can be met relatively cheaply, and the progress in cheap high speed store and fast processing capability is enabling more and more difficult problems to be attempted. Weather forecasting is a very well known area in which computer power is improving the accuracy of forecasts, admittedly now combined with the knowledge of chaos which gives some degree of forecast reliability.

In the chapters which follow, the numerical solution of partial differential equations is considered, first by using the three basic problems as cases which demonstrate the methods. The finite difference method is considered first. This is the method which was first applied in the early hand-computed work, and is relatively simple to set up. The area or volume of interest is broken up into a grid system on which the partial derivatives can be expressed as simple differences. The problem then reduces to finding the solution at the grid points as a set of linear algebraic equations. Hence some attention is paid to solving linear algebraic equations in which many elements in each row are zero. The use of iterative solutions can be very effective under these circumstances.

A number of theoretical considerations need to be made. First, it needs to be established that by taking a finer and finer grid, the difference equation solution does indeed converge to the solution of the approximated partial differential equation. This is the classical problem of accuracy in numerical analysis. However, real computers execute their arithmetic operations to a finite word length, and hence all stored real numbers are subject to a rounding error. The propagation of these errors is the second main theme of numerical analysis in general, and of partial differential equations in particular. This is the problem of numerical stability. Do small errors, introduced as an inevitable consequence of the use of finite word length machines, grow in the development of the solution? Questions of this sort will be considered as part of the stability analysis for the methods presented. There will be exercises in which the reader will be encouraged to see just how instability manifests itself in an unstable method, and how the problem can be circumvented.

Using finite differences is not the only way to tackle a partial differential equation. In 1960, Zienkiewicz used a rather different approach to structural problems in civil engineering, and this work has developed into a completely separate method of solution.
This method is the finite element method (Zienkiewicz, 1977). It is based on a variational formulation of the partial differential equation, and the first part of the description of the method requires some general ways of obtaining a suitable variational principle. In many problems there is a natural such principle, often some form of energy conservation. The problem is then one of minimising an integral by the choice of a dependent function. The classic method which then follows is the Rayleigh-Ritz method. In the finite element method, the volume over which the integral is taken is split up into a set of elements, which may be triangular or prismatic for example. On each element a simple solution form may be assumed, such as a linear form. By summing over each element, the condition for a minimum reduces to a large set of linear algebraic equations for the solution values at key points of the element, such as the vertices of the triangle. Again the use of sparse linear equation solvers is required.

This first chapter is concerned with some of the mathematical preliminaries which are required in the numerical work. For the most part this chapter is quite independent of the equivalent chapter in the first volume, but the section on classification reappears here for completeness.

1.2 Vector and Matrix Norms

The numerical analysis of partial differential equations requires the use of vectors and matrices, both in setting up the numerical methods and in analysing their convergence and stability properties. There is a practical need for measures of the 'size' of a vector or matrix which can be realised computationally, as well as be used theoretically. Hence the first section of this background chapter deals with the definition of the norm of a vector and the norm of a matrix, and realises some specific examples.

The norm of a vector x is a real positive number, ||x||, which satisfies the axioms:

(i)   ||x|| > 0 if x ≠ 0, and ||x|| = 0 if x = 0;
(ii)  ||cx|| = |c| ||x|| for a real or complex scalar c; and
(iii) ||x + y|| ≤ ||x|| + ||y||.

If the vector x has components x_1, ..., x_n then there are three commonly used norms:

(i)   The one-norm of x is defined as

          ||x||_1 = |x_1| + |x_2| + ... + |x_n| = Σ_{i=1}^n |x_i|.   (1.2.1)

(ii)  The two-norm of x is

          ||x||_2 = ( |x_1|² + |x_2|² + ... + |x_n|² )^{1/2} = [ Σ_{i=1}^n |x_i|² ]^{1/2}.   (1.2.2)

(iii) The infinity norm of x is the maximum of the moduli of the components,

          ||x||_∞ = max_i |x_i|.   (1.2.3)

In a similar manner, the norm of a matrix A is a real positive number giving a measure of the 'size' of the matrix, which satisfies the axioms:

(i)   ||A|| > 0 if A ≠ O, and ||A|| = 0 if A = O;
(ii)  ||cA|| = |c| ||A|| for a real or complex scalar c;
(iii) ||A + B|| ≤ ||A|| + ||B||; and
(iv)  ||AB|| ≤ ||A|| ||B||.

Vectors and matrices occur together, and so they must satisfy a condition equivalent to (iv); with this in mind, matrix and vector norms are said to be compatible or consistent if

    ||Ax|| ≤ ||A|| ||x||   for all x ∈ R^n (or C^n).
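These definitions translate directly into short computations. The following minimal sketch (Python with NumPy is assumed here, since the book itself gives no code; the particular vector and matrix are arbitrary illustrative values) evaluates the three vector norms of (1.2.1)-(1.2.3) and checks the compatibility condition numerically for the infinity norm.

    import numpy as np

    x = np.array([1.0, -2.0, 3.0])
    A = np.array([[1.0, 1.0,  0.0],
                  [2.0, 3.0, -1.0],
                  [0.0, 1.0,  2.0]])

    # The three common vector norms of (1.2.1)-(1.2.3).
    one_norm = np.sum(np.abs(x))               # ||x||_1
    two_norm = np.sqrt(np.sum(np.abs(x)**2))   # ||x||_2
    inf_norm = np.max(np.abs(x))               # ||x||_inf

    # Infinity norm of A: maximum row sum of moduli (the subordinate
    # matrix norm characterised in the paragraphs that follow).
    A_inf = np.max(np.sum(np.abs(A), axis=1))

    # Compatibility: ||Ax||_inf <= ||A||_inf * ||x||_inf.
    assert np.max(np.abs(A @ x)) <= A_inf * inf_norm + 1e-12
    print(one_norm, two_norm, inf_norm, A_inf)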
There is a class of matrix norms whose definition depends on an underlying vector norm. These are the subordinate matrix norms. Let A be an n × n matrix and x ∈ S, where S = {(n × 1) vectors x : ||x|| = 1}; in general ||Ax|| varies as x varies over S. Let x_0 ∈ S be such that ||Ax|| attains its maximum value. Then the norm of the matrix A, subordinate to the vector norm ||·||, is defined by

    ||A|| = ||Ax_0|| = max_{||x||=1} ||Ax||.   (1.2.4)

The matrix norm that is subordinate to the vector norm automatically satisfies the compatibility condition since, if x_1 = x/||x|| ∈ S for any non-zero x, then ||Ax_1|| ≤ ||Ax_0|| = ||A||, and therefore ||Ax|| ≤ ||A|| ||x|| for any x ∈ R^n. Note that for all subordinate matrix norms

    ||I|| = max_{||x||=1} ||Ix|| = max_{||x||=1} ||x|| = 1.   (1.2.5)

The definitions of the subordinate one, two and infinity norms with ||x|| = 1 lead to:

- The one norm of a matrix A is the maximum column sum of the moduli of the elements of A, and is denoted by ||A||_1.
- The infinity norm of a matrix A is the maximum row sum of the moduli of the elements of A, and is denoted by ||A||_∞.
- The two norm of a matrix A is the square root of the spectral radius of A^H A, where A^H = (Ā)^T (the transpose of the complex conjugate of A). This norm is denoted by ||A||_2. The spectral radius of a matrix B is denoted by ρ(B) and is the modulus of the eigenvalue of maximum modulus of B.

Hence, for example, if

    A = ( 1  1 )    then    A^H A = ( 5   7 )
        ( 2  3 )                    ( 7  10 )

has eigenvalues 14.93 and 0.067. Then, using the above definitions,

    ||A||_1 = 1 + 3 = 4,    ||A||_∞ = 3 + 2 = 5,    ||A||_2 = √14.93 = 3.86.

Note that if A is real and symmetric then A^H = A and

    ||A||_2 = [ρ(A^H A)]^{1/2} = [ρ(A²)]^{1/2} = [ρ²(A)]^{1/2} = ρ(A) = max_i |λ_i|.

A number of other equivalent definitions of ||A||_2 appear in the literature. For example, the eigenvalues of A^H A are denoted by σ_1², σ_2², ..., σ_n², and the σ_i are called the singular values of A. By their construction the singular values are real and non-negative. Hence, from the above definition,

    ||A||_2 = σ    where    σ = max_i σ_i.

For a symmetric A, the singular values of A are precisely the eigenvalues of A, apart from a possible sign change, and

    ||A||_2 = |λ_1|,

where λ_1 is the eigenvalue of A of largest absolute value.

A bound for the spectral radius can also be derived in terms of norms. Let λ_i and x_i be a corresponding eigenvalue and eigenvector of the n × n matrix A; then Ax_i = λ_i x_i and

    ||Ax_i|| = ||λ_i x_i|| = |λ_i| ||x_i||.

For all compatible matrix and vector norms

    |λ_i| ||x_i|| = ||Ax_i|| ≤ ||A|| ||x_i||.

Therefore |λ_i| ≤ ||A||, i = 1(1)n. Hence ρ(A) ≤ ||A|| for any matrix norm that is compatible with a vector norm.

A few illustrative exercises which are based on the previous section now follow.

EXERCISES

1.1 A further matrix norm is ||A||_E, the Euclidean norm, defined by

        ||A||_E = [ Σ_i Σ_j |a_ij|² ]^{1/2}.

    Prove that ||A||_2 ≤ ||A||_E.

...

    P_s + |a_ss| < 1.   (1.3.2)

Now P_s is the sum of the moduli of the elements of A in the sth row (excluding a_ss), and a_ss may be positive or negative. Hence inequality (1.3.2) is equivalent to

    Σ_{i=1}^n |a_si| < 1.   (1.3.3)

...

Thus r, the number of iterations required to reduce the error in each component of x^(r) to ε, is inversely proportional to −ln μ, where μ denotes the spectral radius of the iteration matrix. How do we know when to terminate the iteration? Realistically, our only measure is to test x^(r) − x^(r−1). Now the error e^(r) = x^(r) − x satisfies

    e^(r) = A e^(r−1) = ... = A^r e^(0),

so that

    x^(r) − x^(r−1) = e^(r) − e^(r−1) = A^{r−1}(A − I) e^(0).

Thus, for sufficiently large r,

    x ≈ x^(r) + A(I + A + A² + ...)(x^(r) − x^(r−1)).   (1.4.19)

It follows that if we are to expect errors no greater than ε in the components of x^(r), we must continue our iterations until

    ( μ/(1 − μ) ) max_i | x_i^(r) − x_i^(r−1) | ≤ ε.
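In practice the termination test is applied to the largest component change between successive iterates. The sketch below (Python with NumPy assumed; the Jacobi splitting, the test system and the tolerance are illustrative choices, not taken from the text) shows one way of stopping an iteration of this kind; the factor μ/(1 − μ) discussed above can be applied to the measured change to estimate the true error.

    import numpy as np

    def jacobi(A, b, tol=1e-10, max_iter=500):
        """Jacobi iteration for Ax = b, stopped on the largest component change."""
        n = len(b)
        x = np.zeros(n)
        D = np.diag(A)                  # diagonal of A
        R = A - np.diagflat(D)          # off-diagonal part
        for r in range(max_iter):
            x_new = (b - R @ x) / D     # next iterate, componentwise
            if np.max(np.abs(x_new - x)) < tol:   # max_i |x_i^(r+1) - x_i^(r)|
                return x_new, r + 1
            x = x_new
        return x, max_iter

    # Illustrative diagonally dominant system.
    A = np.array([[4.0, 1.0, 0.0],
                  [2.0, 4.0, 1.0],
                  [0.0, 2.0, 4.0]])
    b = np.array([1.0, 2.0, 3.0])
    x, iters = jacobi(A, b)
    print(x, iters)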
... Experiment with values of ω slightly away from the optimum to show the sensitivity of the convergence rate to the ω value used.

1.15 Consider the matrix and right-hand side ..., which differ from those in Exercise 1.14 by just the element (1,4). Apply the SOR iteration to this matrix to see how much the change of one element affects the optimum ω. The new matrix is not tridiagonal, so the theorem of Exercise 1.14 does not apply.

1.5 Further Results on Eigenvalues and Eigenvectors

In this section various results and proofs concerning eigenvalues and eigenvectors are collected together. These results are used freely in the following chapters. Let a square matrix A have eigenvector x and corresponding eigenvalue λ; then Ax = λx. Hence

    A(Ax) = A²x = A(λx) = λAx = λ²x,   (1.5.1)

so that A² has eigenvalue λ² and eigenvector x. Similarly,

    A^p x = λ^p x,   p = 3, 4, ...,   (1.5.2)

and A^p has eigenvalue λ^p and eigenvector x. These results may be generalised by defining

    f(A) = α_p A^p + α_{p−1} A^{p−1} + ... + α_0 I,

which is a polynomial in A when α_p, ..., α_0 are scalars. Then

    f(A)x = (α_p λ^p + ... + α_0)x = f(λ)x,   (1.5.3)

and f(A) has eigenvalue f(λ) and eigenvector x. More generally we have the following simple theorem.

Theorem 1.4

The eigenvalue of [f_1(A)]^{−1} f_2(A) corresponding to the eigenvector x is f_2(λ)/f_1(λ), where f_1(A) and f_2(A) are polynomials in A.

Proof

We have

    f_1(A)x = f_1(λ)x,    f_2(A)x = f_2(λ)x.

Pre-multiply by [f_1(A)]^{−1} to give

    [f_1(A)]^{−1} f_1(A)x = f_1(λ)[f_1(A)]^{−1}x,

and hence

    [f_1(A)]^{−1}x = (1/f_1(λ))x    and    [f_1(A)]^{−1} f_2(A)x = f_2(λ)[f_1(A)]^{−1}x.

Eliminating [f_1(A)]^{−1}x gives

    [f_1(A)]^{−1} f_2(A)x = (f_2(λ)/f_1(λ))x.

Similarly, the eigenvalue of f_2(A)[f_1(A)]^{−1} corresponding to the eigenvector x is f_2(λ)/f_1(λ).

The second set of results concerns the eigenvalues of an order n tridiagonal matrix and forms the next theorem.

Theorem 1.5

The eigenvalues of the order n tridiagonal matrix

    A = ( a  b             )
        ( c  a  b          )
        (    c  a  b       )
        (       .  .  .    )
        (          c  a    )

are

    λ_s = a + 2√(bc) cos( sπ/(n+1) ),   s = 1(1)n,   (1.5.4)

where a, b and c may be real or complex. This class of matrices arises commonly in the study of stability of the finite difference processes, and a knowledge of its eigenvalues leads immediately to useful stability conditions.

Proof

Let λ represent an eigenvalue of A and v the corresponding eigenvector with components v_1, v_2, ..., v_n. Then the eigenvalue equation Av = λv gives

    (a − λ)v_1 + b v_2 = 0
    c v_1 + (a − λ)v_2 + b v_3 = 0
    ...
    c v_{j−1} + (a − λ)v_j + b v_{j+1} = 0
    ...
    c v_{n−1} + (a − λ)v_n = 0.

Now define v_0 = v_{n+1} = 0, and these n equations can be combined into the one difference equation

    c v_{j−1} + (a − λ)v_j + b v_{j+1} = 0,   j = 1, ..., n.   (1.5.5)

The solution is of the form v_j = B m_1^j + C m_2^j, where B and C are arbitrary constants and m_1, m_2 are the roots of the equation

    b m² + (a − λ)m + c = 0.   (1.5.6)

Hence the conditions v_0 = v_{n+1} = 0 give

    0 = B + C    and    0 = B m_1^{n+1} + C m_2^{n+1},

which implies

    (m_1/m_2)^{n+1} = 1,    so that    m_1/m_2 = e^{2πis/(n+1)},   s = 1, ..., n.

From (1.5.6), m_1 m_2 = c/b and m_1 + m_2 = −(a − λ)/b. Hence

    m_1 = (c/b)^{1/2} e^{iπs/(n+1)}    and    m_2 = (c/b)^{1/2} e^{−iπs/(n+1)},

which gives

    λ = a + b(m_1 + m_2) = a + b(c/b)^{1/2} ( e^{iπs/(n+1)} + e^{−iπs/(n+1)} ).

Hence the n eigenvalues are

    λ_s = a + 2√(bc) cos( sπ/(n+1) ),   s = 1, ..., n,   (1.5.7)

as required. The jth component of the corresponding eigenvector is (c/b)^{j/2} sin( jsπ/(n+1) ), so the eigenvector v_s corresponding to λ_s is

    v_s = [ (c/b)^{1/2} sin( sπ/(n+1) ), (c/b) sin( 2sπ/(n+1) ), ..., (c/b)^{n/2} sin( nsπ/(n+1) ) ]^T.   (1.5.8)

As an example, consider the tridiagonal matrix

    ( 1−2r    r                    )
    (   r   1−2r    r              )
    (         r   1−2r    r        )
    (               .    .    .    )
    (                    r   1−2r  )

of order n − 1, with a = 1 − 2r and b = c = r. Then the previous theorem tells us that the eigenvalues are

    λ_s = 1 − 2r + 2r cos( sπ/n ) = 1 − 4r sin²( sπ/(2n) ),   s = 1, ..., n − 1.
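Formula (1.5.4) is easy to confirm numerically. In the sketch below (Python with NumPy assumed; the values of a, b, c and n are arbitrary test values), the constant tridiagonal matrix is assembled and the formula is compared with eigenvalues computed directly.

    import numpy as np

    # Arbitrary test values for the constant tridiagonal matrix.
    a, b, c, n = 1.0, 2.0, 0.5, 6

    # Assemble the n x n tridiagonal matrix with a on the diagonal,
    # b on the superdiagonal and c on the subdiagonal.
    A = (np.diag(np.full(n, a))
         + np.diag(np.full(n - 1, b), 1)
         + np.diag(np.full(n - 1, c), -1))

    # Eigenvalues from formula (1.5.4): a + 2*sqrt(bc)*cos(s*pi/(n+1)).
    s = np.arange(1, n + 1)
    formula = a + 2.0 * np.sqrt(b * c) * np.cos(s * np.pi / (n + 1))

    direct = np.linalg.eigvals(A).real
    print(np.sort(formula))
    print(np.sort(direct))   # the two lists agree to rounding error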
Many of the methods which arise in the solution of partial differential equations require the solution of a tridiagonal set of linear equations, and for this special case the usual elimination routine can be simplified. The algorithm which results is called the Thomas algorithm for tridiagonal systems, and is described below. Suppose that it is required to solve

    ( b_1  c_1                              ) ( x_1     )   ( d_1     )
    ( a_2  b_2  c_2                         ) ( x_2     )   ( d_2     )
    (      a_3  b_3  c_3                    ) ( x_3     ) = ( d_3     )
    (           .    .    .                 ) ( ...     )   ( ...     )
    (         a_{n−1}  b_{n−1}  c_{n−1}     ) ( x_{n−1} )   ( d_{n−1} )
    (                  a_n      b_n         ) ( x_n     )   ( d_n     )

The algorithm is based on Gauss elimination. In each column only one subdiagonal element is to be removed, and in each equation b_i and d_i, i = 2, ..., n, change as a result of the elimination. Denote the quantities that replace b_i and d_i by α_i and s_i respectively. For convenience set α_1 = b_1 and s_1 = d_1; then

    α_2 = b_2 − (a_2/α_1) c_1,    s_2 = d_2 − (a_2/α_1) s_1,

and in general

    α_i = b_i − (a_i/α_{i−1}) c_{i−1},    s_i = d_i − (a_i/α_{i−1}) s_{i−1},   i = 2, ..., n.   (1.5.9)

Once the elimination is complete, the x_i are found recursively by back substitution. The complete algorithm may be expressed as:

    α_1 = b_1,   s_1 = d_1,
    α_i = b_i − (a_i/α_{i−1}) c_{i−1},   s_i = d_i − (a_i/α_{i−1}) s_{i−1},   i = 2, ..., n,
    x_n = s_n/α_n,
    x_i = (s_i − c_i x_{i+1})/α_i,   i = n − 1, ..., 1.

Conditions for the applicability of the method are considered next. We have not used partial pivoting, and so we need to investigate the conditions under which the multipliers a_i/α_{i−1}, i = 2, ..., n, have magnitude not exceeding unity for stable forward elimination, and c_i/α_i, i = 2, ..., n − 1, have magnitude not exceeding unity for stable back substitution. Suppose that a_i > 0, b_i > 0 and c_i > 0; then

(i)  assuming that b_i > a_i + c_i, i = 2, ..., n, the forward elimination is stable; and
(ii) assuming that b_i > a_i + c_i, i = 1, ..., n − 1, the back substitution is stable.

The proof can be found in Smith (1978).
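As a concrete illustration, here is a minimal sketch of the algorithm (Python with NumPy assumed; this is not code from the book), applied to the small tridiagonal system with diagonal 4, subdiagonal 2, superdiagonal 1 and right-hand side (1, 2, 3), which also appears in Exercise 1.18 below.

    import numpy as np

    def thomas(a, b, c, d):
        """Solve a tridiagonal system with subdiagonal a, diagonal b,
        superdiagonal c and right-hand side d (a[0] and c[-1] are unused)."""
        n = len(d)
        alpha = b.astype(float)          # the alpha_i of (1.5.9)
        s = d.astype(float)              # the s_i of (1.5.9)
        for i in range(1, n):            # forward elimination
            m = a[i] / alpha[i - 1]
            alpha[i] -= m * c[i - 1]
            s[i] -= m * s[i - 1]
        x = np.zeros(n)
        x[-1] = s[-1] / alpha[-1]        # back substitution
        for i in range(n - 2, -1, -1):
            x[i] = (s[i] - c[i] * x[i + 1]) / alpha[i]
        return x

    # The system of Exercise 1.18.
    a = np.array([0.0, 2.0, 2.0])
    b = np.array([4.0, 4.0, 4.0])
    c = np.array([1.0, 1.0, 0.0])
    d = np.array([1.0, 2.0, 3.0])
    print(thomas(a, b, c, d))

Both sweeps are a single pass over the unknowns, so the work grows only linearly with n; comparing this cost with that of full Gaussian elimination and of iterative methods is the point of Exercises 1.19 and 1.20 below.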
Some assorted exercises on these ideas are now presented.

EXERCISES

1.16 Use the characteristic polynomial directly to confirm that the eigenvalues given in (1.5.4) are correct for n = 2.

1.17 Find the characteristic polynomial, and hence the eigenvalues, of the matrix

        ( 4  1  0 )
        ( 2  4  1 )
        ( 0  2  4 )

     and compare the result with the formula (1.5.4).

1.18 Use the Thomas algorithm to solve the tridiagonal set of equations

        ( 4  1  0 )       ( 1 )
        ( 2  4  1 ) x  =  ( 2 ).
        ( 0  2  4 )       ( 3 )

1.19 By counting operations, establish that Gaussian elimination requires of the order of n³/3 multiplication and division operations. This is a measure of the work load in the algorithm. The easiest way to establish this result is to code up the algorithm (which will be a useful tool for later anyway) and then use the formulae

        Σ_{i=1}^n i = n(n+1)/2,    Σ_{i=1}^n i² = n(n+1)(2n+1)/6.

     What is the equivalent count for Thomas's algorithm?

1.20 Compare the work load in Thomas's algorithm with that for, say, m iterations of Gauss-Seidel. Given the convergence rate from the eigenvalues of the G matrix of (1.4.12), construct advice for prospective users on whether to use the Thomas algorithm or Gauss-Seidel.

1.21 Extend the Thomas algorithm to deal with upper Hessenberg matrices of the form

        ( a_1  a_12  a_13  ...  a_1n )
        ( b_2  a_2   a_23  ...  a_2n )
        (      b_3   a_3   ...  a_3n )
        (             .     .    .   )
        (                  b_n   a_n )

     which is tridiagonal with non-zero elements in the top right-hand part of the matrix.

1.22 Extend Thomas's algorithm to quindiagonal matrices, which in general have two non-zero elements on either side of the diagonal in each row, except in the first two and last two rows: the first row has just two non-zero elements to one side of the diagonal, and the second row has, in addition, one non-zero element on the opposite side.

1.6 Classification of Second Order Partial Differential Equations

Consider a general second order quasi-linear equation defined by

    R r + S s + T t = W,   (1.6.1)

where

    p = ∂z/∂x,   q = ∂z/∂y,   r = ∂²z/∂x²,   s = ∂²z/∂x∂y,   t = ∂²z/∂y²,   (1.6.2)

with

    R = R(x, y),   S = S(x, y),   T = T(x, y)   and   W = W(x, y, z, p, q).   (1.6.3)

Then the characteristic curves for this equation are defined as curves along which the highest partial derivatives are not uniquely defined. In this case these derivatives are the second order derivatives r, s and t. The set of linear algebraic equations which these derivatives satisfy can be written down in terms of differentials, and the condition for this set of linear equations to have a non-unique solution will yield the equations of the characteristics, whose significance will then become more apparent. Hence the differential relations follow as dz = p dx + q dy and also

    dp = r dx + s dy,
    dq = s dx + t dy,   (1.6.4)

to give the linear equations

    R r + S s + T t = W,
    r dx + s dy = dp,
    s dx + t dy = dq,   (1.6.5)

and there will be no unique solution when

    | R   S   T  |
    | dx  dy  0  |  =  0,   (1.6.6)
    | 0   dx  dy |

which expands to give the differential equation

    R (dy/dx)² − S (dy/dx) + T = 0.   (1.6.7)

But when the determinant in (1.6.6) is zero, the other determinants in Cramer's rule for the solution of (1.6.5) will also be zero, for we assume that (1.6.5) does not have a unique solution. Hence the condition

    | R   T   W  |
    | dx  0   dp |  =  0   (1.6.8)
    | 0   dy  dq |

also holds, and gives an equation which holds along a characteristic, namely

    −R dy dp − T dx dq + W dx dy = 0   (1.6.9)

or

    R (dy/dx)(dp/dx) + T (dq/dx) = W (dy/dx).   (1.6.10)

Returning now to (1.6.6): this equation is a quadratic in dy/dx, and there are three possible cases which arise. If the roots are real, the characteristics form two families of real curves. A partial differential equation resulting in real characteristics is said to be hyperbolic; the condition is that

    S² − 4RT > 0.   (1.6.11)

The second case is when the roots are equal, giving the parabolic case with the condition

    S² − 4RT = 0,   (1.6.12)

and when the roots are complex the underlying equation is said to be elliptic, with the condition

    S² − 4RT < 0.   (1.6.13)

The importance of characteristics only becomes apparent at this stage. The first feature is the use of characteristics to classify equations: the methods that will be used subsequently are quite different from type to type. In the case of hyperbolic equations, the characteristics are real and are used directly in the solution.
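Since the test (1.6.11)-(1.6.13) is purely algebraic, it is easily mechanised at a point. The sketch below (Python assumed; the function name and the sample coefficients are illustrative) classifies an equation from the sign of the discriminant S² − 4RT, using the wave, heat and Laplace equations as the three test cases.

    def classify(R, S, T):
        """Classify R r + S s + T t = W at a point from the discriminant S^2 - 4RT."""
        disc = S * S - 4.0 * R * T
        if disc > 0.0:
            return "hyperbolic"   # two real families of characteristics
        if disc == 0.0:
            return "parabolic"    # coincident characteristics
        return "elliptic"         # complex characteristics

    # The three canonical examples.
    print(classify(1.0, 0.0, -1.0))  # wave equation u_xx - u_tt:  hyperbolic
    print(classify(1.0, 0.0, 0.0))   # heat equation (R = 1, S = T = 0): parabolic
    print(classify(1.0, 0.0, 1.0))   # Laplace's equation u_xx + u_yy:   elliptic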
Characteristics also play a role in reducing equations to a standard or canonical form. Consider the operator

    L ≡ R ∂²/∂x² + S ∂²/∂x∂y + T ∂²/∂y²,   (1.6.14)

and put ξ = ξ(x, y), η = η(x, y) and z = ζ to see what a general change of variable yields. The result is the operator

    A(ξ_x, ξ_y) ∂²ζ/∂ξ² + 2B(ξ_x, ξ_y, η_x, η_y) ∂²ζ/∂ξ∂η + A(η_x, η_y) ∂²ζ/∂η² = F(ξ, η, ζ, ζ_ξ, ζ_η),   (1.6.15)

where

    A(u, v) = R u² + S u v + T v²   (1.6.16)

and

    B(u_1, v_1, u_2, v_2) = R u_1 u_2 + (1/2) S (u_1 v_2 + u_2 v_1) + T v_1 v_2.   (1.6.17)

The question is now asked: for what ξ and η do we get the simplest form? Certainly if ξ and η can be found to make the coefficients A equal to zero, then a simplified form will result. However, the condition that A should be zero is a partial differential equation of first order which can be solved analytically (Sneddon, 1957). Different cases arise in the three classifications.

In the hyperbolic case, when S² − 4RT > 0, let Rλ² + Sλ + T = 0 have roots λ_1 and λ_2; then ξ = f_1(x, y) and η = f_2(x, y), where f_1(x, y) and f_2(x, y) are the solutions of the two factors in the related ordinary differential equations

    [ dy/dx + λ_1(x, y) ] [ dy/dx + λ_2(x, y) ] = 0.   (1.6.18)

Hence the required transformations are precisely the defining functions of the characteristic curves. It follows that with this change of variable the partial differential equation becomes

    ∂²ζ/∂ξ∂η = φ(ξ, η, ζ, ζ_ξ, ζ_η),   (1.6.19)

which is the canonical form for the hyperbolic case.

In the parabolic case, S² − 4RT = 0, there is now only one root, and any independent function is used for the other variable in the transformation. Hence A(ξ_x, ξ_y) = 0, but it is easy to show in general that

    A(ξ_x, ξ_y) A(η_x, η_y) − B²(ξ_x, ξ_y, η_x, η_y) = (1/4)(4RT − S²)(ξ_x η_y − ξ_y η_x)²,

and therefore, as S² = 4RT, we must have B(ξ_x, ξ_y, η_x, η_y) = 0 and A(η_x, η_y) ≠ 0, since η is an independent function of x and y. Hence when S² = 4RT, the transformation ξ = f_1(x, y), with η any independent function, yields

    ∂²ζ/∂η² = φ(ξ, η, ζ, ζ_ξ, ζ_η),   (1.6.20)

which is the canonical form for a parabolic equation.

In the elliptic case there are again two sets of characteristics, but they are now complex. Writing ξ = α + iβ and η = α − iβ gives the real form

    ∂²ζ/∂ξ∂η = (1/4) ( ∂²ζ/∂α² + ∂²ζ/∂β² ),   (1.6.21)

and hence the elliptic canonical form

    ∂²ζ/∂α² + ∂²ζ/∂β² = ψ(α, β, ζ, ζ_α, ζ_β).   (1.6.22)

Note that Laplace's equation is in canonical form, as is the heat equation, but the wave equation is not.

As an example of reduction to canonical form, consider a linear second order partial differential equation with principal part

    ∂²u/∂x² + 2 ∂²u/∂x∂y + ∂²u/∂y²   (1.6.23)

together with lower order terms. Then the equation of the characteristic curves is

    (dy/dx)² − 2 (dy/dx) + 1 = 0   (1.6.24)

or, factorising,

    (dy/dx − 1)² = 0.   (1.6.25)

Therefore the characteristics are the single family y − x = constant, and the transformation for the canonical form takes ξ constant along these curves, together with any independent choice of η (1.6.26)-(1.6.27). Transforming the second derivatives into the new variables (1.6.28) and substituting into the equation (1.6.29) leaves only the ∂²u/∂η² second order term, and the transformed equation (1.6.30) is then in the parabolic canonical form, with ∂²u/∂η² balanced by the first order derivatives.

From a numerical point of view, the canonical forms reduce the number of different types of equation for which solutions need to be found. Effectively, effort can be concentrated on the canonical forms alone, though this is not always the best strategy, and in this spirit the parabolic type will now be considered in detail in the next chapter. Before starting that work, the reader may wish to pursue some of the ideas of the previous section in the following exercises.

EXERCISES

1.23 Classify the following partial differential equations as parabolic, elliptic or hyperbolic:

     (a) ∂²φ/∂x² + ∂²φ/∂x∂y + ∂²φ/∂y² = 0;
     (b) ...;
     (c) ...;
     (d) ....

1.24 Find the regions of parabolicity, ellipticity and hyperbolicity for the partial differential equation

     ...

     and sketch the resulting regions in the (x, y) plane.

1.25 Find the form of the characteristic curves for the partial differential equation

     ...

     and hence categorise the equation.

1.26 Reduce the equation

     ...

     to canonical form.

1.27 Reduce the equation

     ...

     to canonical form, and hence find the general analytic solution.

1.28 Reduce the equation

     ...

     to canonical form. Make a further transformation to obtain a real canonical form.
