# Problems, Resources, and Issues in Mathematics Undergraduate Studies: FOOD FOR THOUGHT: A FEW NUMERICAL DELICACIES

L. Hong and J. B. Thoo

ABSTRACT:

Many students, when they take an elementary differential equations course for the first time, bring with them misconceptions from numerical methods that they had learnt in their calculus courses, most notable of which concerns the mesh width in using a numerical method. It is important that we strive to dispel any of these misconceptions as well as to make students aware that there is often a trade-off between the complexity of a numerical method and its computational cost. In this note, we provide examples to do that.

KEYWORDS: Differential equations, first-order initial value problem, numerical methods, the Euler method, the Runge-Kutta method, errors, computational costs.

1 INTRODUCTION

Two methods that many introductory textbooks [1, 2, 3] discuss are the Euler method and the 4th-order Runge-Kutta method. Some introductory textbooks also discuss a few other methods, such as the backward Euler method, the Adams-Bashforth method, the Adams-Moulton method, and, more generally, predictor-corrector methods and adaptive methods. In this note, we restrict our discussion to the Euler method and the 4th-order Runge-Kutta method, applied to the first-order initial value problem

y' = f(x, y),  y(x_0) = y_0.  (1)

When solving (1) numerically, one has to be careful not to be misled by the numerical solution, for errors may arise in subtle ways. For example, it may be that the accumulated round-off error grows large or that the solution is unbounded in the interval of interest. In general, few problems arise when f is linear, but things become trickier when f is nonlinear. In the latter case, a careful mix of analytical, graphical, and numerical work may provide useful information about what is happening in a particular solution. See [1, 4] for a discussion of errors.

An important aspect of any numerical method is the mesh width that is used, for it influences both the error and the computational cost. Due to the limited scope of a typical elementary differential equations textbook, its discussion on the errors and the computational costs associated with a numerical method and the importance of the choice of a mesh width may be very basic; on the other hand, the presentation of these topics in a typical numerical analysis textbook may be rather technical, involving the use of Taylor series, for instance. In this note, we seek to bridge these two levels of discussion on errors and computational costs by providing simple, concrete examples that illustrate clearly how a chosen mesh width may affect both. This note is organized as follows.

In Section 3, we reinforce Fox's point [5] that what may be considered to be a small mesh width in one instance may not be small in another. We show in Section 4 that even when the graph of a numerical solution captures the overall behavior of the exact solution, the numerical solution may provide a poor approximation of the solution itself. We also illustrate in that section a trade-off between the complexity of a numerical method and its computational cost: a method that is less complex to implement may incur more computational cost than a method that is more complex to implement. Finally, we illustrate in Sections 5 and 6 that, in fact, in some cases no mesh width may be small enough to allow a numerical method to capture the behavior of the exact solution on the solution's entire domain.

Today, students can easily explore IVPs of the type we discuss here graphically using software such as Matlab or even a programmable graphing calculator such as the Texas Instruments TI-89 [8]. The figures that appear here were generated using Matlab 6; the add-on package dfield6.m [7] was used to generate Figure 5(a). Like Fox [5], we prefer that our students program the numerical methods themselves; the Matlab code we used is available on the Web at http://math.ucdavis.edu/~1hong/num_sol_ode.

2 THE NUMERICAL METHODS

The Euler method for (1) is given by

y_(n+1) = y_n + h f(x_n, y_n),  (2)

where h is the mesh width (or step size), and the 4th-order Runge-Kutta method is given by

y_(n+1) = y_n + (h/6)(k_1 + 2k_2 + 2k_3 + k_4),  (3)

where k_1 = f(x_n, y_n), k_2 = f(x_n + h/2, y_n + (h/2)k_1), k_3 = f(x_n + h/2, y_n + (h/2)k_2), and k_4 = f(x_n + h, y_n + h k_3). A derivation can be found in many elementary differential equations textbooks; one derivation using Simpson's rule can be found in [4], for example. The 4th-order Runge-Kutta method may be thought of as using a weighted average of the function f evaluated at different points.

It should not be a surprise that the 4th-order Runge-Kutta method belongs to what is called the Runge-Kutta class of methods; however, it may be a surprise that the Euler method also belongs to this class and is, in fact, the 1st-order Runge-Kutta method. The 4th-order Runge-Kutta method is the method originally developed by Runge and Kutta, and it is often referred to simply as the Runge-Kutta method.

We remark that the order of a method refers to its truncation error [4, p. 331] (or global truncation error)^1: the Euler method, being a 1st-order method, has a truncation error that is a constant times the step size h, and the Runge-Kutta method, being a 4th-order method, has a truncation error that is a constant times h^4. Loosely speaking, the error in an nth-order numerical method is reduced by 1/2^n every time the mesh width is halved. Hence the error in the Euler method is reduced by 1/2 every time the mesh width is halved, and the error in the Runge-Kutta method is reduced by 1/2^4 = 1/16. See [1, 4], for example, for a discussion of errors.
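These error reductions can be checked empirically. The following sketch is ours, not the article's (the article's own Matlab code lives at the URL above): it implements the Euler method and the classic 4th-order Runge-Kutta method for y' = f(x, y), and uses the test problem y' = y, y(0) = 1 (exact solution e^x), a stand-in of our choosing, to watch the error at x = 1 shrink as the mesh width is halved.

```python
import math

def euler(f, x0, y0, h, n):
    """Euler's method: advance y' = f(x, y) through n steps of width h."""
    x, y = x0, y0
    for _ in range(n):
        y += h * f(x, y)
        x += h
    return y

def rk4(f, x0, y0, h, n):
    """Classic 4th-order Runge-Kutta method for y' = f(x, y)."""
    x, y = x0, y0
    for _ in range(n):
        k1 = f(x, y)
        k2 = f(x + h/2, y + h/2 * k1)
        k3 = f(x + h/2, y + h/2 * k2)
        k4 = f(x + h, y + h * k3)
        y += (h/6) * (k1 + 2*k2 + 2*k3 + k4)
        x += h
    return y

if __name__ == "__main__":
    f = lambda x, y: y  # stand-in test problem y' = y, y(0) = 1; exact solution e^x
    for method in (euler, rk4):
        err_h = abs(method(f, 0.0, 1.0, 0.1, 10) - math.e)    # h = 0.1, out to x = 1
        err_h2 = abs(method(f, 0.0, 1.0, 0.05, 20) - math.e)  # h = 0.05
        print(f"{method.__name__}: error reduced by a factor of {err_h / err_h2:.1f}")
```

Run as a script, each method's error shrinks by roughly the factor its order predicts when h is halved: about 2 for Euler and about 16 for Runge-Kutta.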

3 AN EXAMPLE IN FOX [5]

One objective of Fox's "Lab 2" [5] is to provide students with an example where a mesh width of h = 1 for the Euler method "causes the graph of numerical solution [sic] to lose all semblance of capturing the reasonableness to the actual solution graph," the point being that what students may consider to be small (h = 1) may not be small enough. It turns out, in this example, that h = 0.1 is small enough to produce a numerical solution whose graph does resemble that of the exact solution quite well. This is illustrated in Figure 5 of [5], which we reproduce here in Figure 1. Fox then remarks that

The temptation, at this point, for the student is to have their misconception reinforced that Euler’s method is always better for smaller values of h.

Fox's remark rings true, and he provides an example in "Lab 3" [5] where reducing the mesh width does not improve the numerical result; but his remark also raises the question, What do we do if we do not reduce the mesh width? One answer is to try a different numerical method. To see this, we apply the Runge-Kutta method with h = 1 to solve (4). (Of course, if we obtain an unsatisfactory result using a particular numerical method, we can always retry the method with a smaller mesh width, whereas we may not know of another method to try. Nevertheless, trying a different method is an alternative to reducing the mesh width that students should keep in mind.)

We see in Figure 2 that the Runge-Kutta method, a 4th-order method compared to the 1st-order Euler method, yields a numerical solution with h = 1 whose graph resembles that of the exact solution, albeit not quite as well as the Euler method does with h = 0.1 (Figure 1).

Now, the computational cost of a numerical method may depend on one’s programming prowess to streamline the code; for our purposes, however, we shall satisfy ourselves with counting the number of operations used in the algorithm. Thus, looking at (2), we see that the Euler method requires three operations at each step, whereas, looking at (3), we see that the Runge-Kutta method requires 25 operations at each step. Table 1 compares the number of operations for the Euler method with that for the Runge-Kutta method in solving (4) for several different mesh widths. What may a student conclude from Table 1?

On the face of it, Table 1 seems to indicate that the Runge-Kutta method is considerably more expensive than the Euler method; however, if we recall that the error in the Euler method roughly halves whenever the mesh width is halved, we see that we would need^4 a much smaller mesh width h for the Euler method to achieve an error comparable to that of the Runge-Kutta method, and at such a mesh width the Euler method is no longer the cheaper method.
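The operation counts above come from the formulas (2) and (3) themselves. A rough, machine-checkable proxy, independent of the particular right-hand side, is to count evaluations of f: Euler costs one evaluation per step and Runge-Kutta four, so Runge-Kutta with mesh width h costs as many evaluations as Euler with mesh width h/4. The sketch below is ours, and the right-hand side y - x is a hypothetical stand-in, not equation (4).

```python
class CountingF:
    """Wrap a right-hand side f(x, y) and count how often it is evaluated."""
    def __init__(self, f):
        self.f = f
        self.calls = 0
    def __call__(self, x, y):
        self.calls += 1
        return self.f(x, y)

def euler_solve(f, x0, y0, h, n):
    """Euler's method: one evaluation of f per step."""
    y = y0
    for i in range(n):
        y += h * f(x0 + i * h, y)
    return y

def rk4_solve(f, x0, y0, h, n):
    """Classic 4th-order Runge-Kutta: four evaluations of f per step."""
    y = y0
    for i in range(n):
        x = x0 + i * h
        k1 = f(x, y)
        k2 = f(x + h/2, y + h/2 * k1)
        k3 = f(x + h/2, y + h/2 * k2)
        k4 = f(x + h, y + h * k3)
        y += (h/6) * (k1 + 2*k2 + 2*k3 + k4)
    return y

if __name__ == "__main__":
    for solver in (euler_solve, rk4_solve):
        g = CountingF(lambda x, y: y - x)  # hypothetical right-hand side
        solver(g, 0.0, 1.0, 0.1, 10)
        print(f"{solver.__name__}: {g.calls} evaluations of f over 10 steps")
```

Over 10 steps this reports 10 evaluations for Euler and 40 for Runge-Kutta, mirroring the 1-to-4 per-step cost ratio.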

4 AN EXPONENTIAL EXAMPLE

We plot the numerical solution of (5) using the Euler method with h = 0.1 in Figure 3.

We see that the graph of the numerical solution resembles a decaying exponential as we expect; however, an analysis of the error reveals that, even with a mesh width of h = 0.1, the numerical solution is a poor approximation to the exact solution; see Table 2.

From our earlier experience, our choices to obtain a better numerical solution are to reduce the mesh width or to try a different method.

Table 2 gives the max-norm errors for both methods for three different mesh widths. As before, we may estimate that we would need a much smaller mesh width h for the Euler method to match the accuracy of the Runge-Kutta method.
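Since (5) is not reproduced above, the following sketch uses a stand-in of our own, y' = -5y, y(0) = 1 on [0, 2], to show how a max-norm error such as the one in Table 2 can be computed: take the largest absolute difference between the numerical and exact solutions over the grid points. The numbers it produces belong to the stand-in, not to Table 2.

```python
import math

def max_norm_error(step, y0, exact, h, n):
    """Largest |numerical - exact| over the grid points x_i = i*h, i = 1..n."""
    y, err = y0, 0.0
    for i in range(n):
        y = step(i * h, y, h)
        err = max(err, abs(y - exact((i + 1) * h)))
    return err

LAM = -5.0                           # hypothetical stand-in: y' = -5y, y(0) = 1
f = lambda x, y: LAM * y
exact = lambda x: math.exp(LAM * x)  # exact solution e^(-5x)

def euler_step(x, y, h):
    return y + h * f(x, y)

def rk4_step(x, y, h):
    k1 = f(x, y)
    k2 = f(x + h/2, y + h/2 * k1)
    k3 = f(x + h/2, y + h/2 * k2)
    k4 = f(x + h, y + h * k3)
    return y + (h/6) * (k1 + 2*k2 + 2*k3 + k4)

if __name__ == "__main__":
    h, n = 0.1, 20                   # solve on [0, 2]
    print("Euler max-norm error:", max_norm_error(euler_step, 1.0, exact, h, n))
    print("RK4   max-norm error:", max_norm_error(rk4_step, 1.0, exact, h, n))
```

Even on this tame stand-in, the Euler solution with h = 0.1 is off by about 0.1 in the max norm while its graph still "looks like" a decaying exponential, which is the point of Section 4.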

We restrict ourselves to using the Runge-Kutta method in the remainder of this note.

5 AN EXAMPLE IN HOSKING et al. [6]

One thing the last two examples, (4) and (5), have in common is that we were able to improve the numerical solution, whether using the Euler method or the Runge-Kutta method, by reducing the mesh width.

This example is presented in Hosking et al. [6, p. 149] to motivate the need for numerical methods to solve differential equations; however, they do not apply any numerical methods to solve (6).

The numerical solution with h = 0.02 is shown in Figure 4. In the figure, we see that something strange begins to happen between x = 1.5 and x = 1.6: the numerical solution begins to behave erratically. This behavior appears to be qualitatively different from that shown in Figure 1, in which the numerical solution with h = 1 appears to oscillate about the exact solution.

To get a clearer picture of what is happening with the numerical solution of (6), we plot the numerical solution together with the slope field in Figure 5(a).

We see in Figure 5(a) that wherever the numerical solution approaches a point where the slope field is nearly vertical, the numerical solution jumps. Figure 5(b) shows that this is the case even if we reduce the mesh width to h = 0.005. Indeed, because y' becomes unbounded as y → x, which is clear from (6), neither reducing the mesh width nor trying another method will eliminate the erratic behavior.
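Since (6) is not reproduced above, the sketch below uses a stand-in of our own with the same feature: y' = -x/y, y(0) = 1, whose exact solution y = sqrt(1 - x^2) lives on (-1, 1) and whose slope field turns vertical as y → 0. Pushing the Runge-Kutta method past x = 1 with h = 0.02 produces the same kind of jump across the curve.

```python
def rk4_path(f, x0, y0, h, n):
    """Return the lists of RK4 grid points and iterates for y' = f(x, y)."""
    xs, ys = [x0], [y0]
    for _ in range(n):
        x, y = xs[-1], ys[-1]
        k1 = f(x, y)
        k2 = f(x + h/2, y + h/2 * k1)
        k3 = f(x + h/2, y + h/2 * k2)
        k4 = f(x + h, y + h * k3)
        xs.append(x + h)
        ys.append(y + (h/6) * (k1 + 2*k2 + 2*k3 + k4))
    return xs, ys

# Hypothetical stand-in for (6): y' = -x/y, y(0) = 1, exact y = sqrt(1 - x^2);
# the slope field turns vertical as y -> 0, i.e. at the edge x = 1 of the domain.
f = lambda x, y: -x / y

if __name__ == "__main__":
    xs, ys = rk4_path(f, 0.0, 1.0, 0.02, 60)  # integrate out to x = 1.2
    bad = [i for i, y in enumerate(ys) if y < 0]
    if bad:
        i = bad[0]
        print(f"iterates jump across y = 0 near x = {xs[i]:.2f} (y = {ys[i]:.3f})")
```

The iterates track the upper semicircle closely until the slope blows up near x = 1, then leap below the axis, which no reduction of h prevents, only postpones.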

This observation may lead students to believe that such erratic behavior occurs only outside a solution’s domain and, therefore, may be ignored safely; however, the next example shows that such erratic behavior may also occur inside a solution’s domain.

6 A CUBE ROOT EXAMPLE

Note that the solution exists on the entire real line and, in particular, it exists on the interval [-2, 2].

We plot the numerical solution of (9) together with the exact solution (10) in Figure 7.

In the figure, we see that the numerical solution behaves erratically, in the same way as before, beginning somewhere between x = 0.3 and x = 0.5, and that the numerical solution fails to approximate the exact solution beyond that point despite the fact that the exact solution exists. Again, this is because y' becomes unbounded as x → 3^(-1/2) - 3^(-3/2) ≈ 0.3849. Hence, again, neither reducing the mesh width nor trying another method will eliminate the erratic behavior.
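Since (9) and (10) are not reproduced above, the sketch below uses a stand-in of our own with the same feature: y' = 1/(3y^2), y(-2) = -2^(1/3), whose exact solution y = x^(1/3) exists on all of [-2, 2] even though y' is unbounded as x → 0, a point inside the interval. The Runge-Kutta iterates track the cube root well to the left of 0 and then jump erratically across it.

```python
import math

def rk4_path(f, x0, y0, h, n):
    """Return the lists of RK4 grid points and iterates for y' = f(x, y)."""
    xs, ys = [x0], [y0]
    for _ in range(n):
        x, y = xs[-1], ys[-1]
        k1 = f(x, y)
        k2 = f(x + h/2, y + h/2 * k1)
        k3 = f(x + h/2, y + h/2 * k2)
        k4 = f(x + h, y + h * k3)
        xs.append(x + h)
        ys.append(y + (h/6) * (k1 + 2*k2 + 2*k3 + k4))
    return xs, ys

# Hypothetical stand-in: y' = 1/(3y^2), y(-2) = -2^(1/3); exact solution y = x^(1/3),
# which is defined on all of [-2, 2] even though y' blows up as x -> 0.
f = lambda x, y: 1.0 / (3.0 * y * y)
cbrt = lambda x: math.copysign(abs(x) ** (1 / 3), x)  # real cube root

if __name__ == "__main__":
    xs, ys = rk4_path(f, -2.0, cbrt(-2.0), 0.02, 200)  # integrate across [-2, 2]
    worst = max(abs(y - cbrt(x)) for x, y in zip(xs, ys))
    print(f"largest deviation from the exact solution on [-2, 2]: {worst:.3f}")
```

The large deviation is concentrated around x = 0, where the iterates overshoot the cube root in one violent step even though the exact solution passes smoothly through that point.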

7 CONCLUSION

Many students, when they take an elementary differential equations course for the first time, bring with them misconceptions from numerical methods that they had learnt in their calculus courses, most notable of which concerns the mesh width in using a numerical method [5]. In particular, students may not realize that the notion of a "small" mesh width is not absolute, but is relative to a problem; and, after they realize this, they may believe that if "small" is good, then "smaller" must be better. It is important that we strive to dispel any of these misconceptions as well as to make students aware that there is often a trade-off between the complexity of a numerical method and its computational cost; we should also make students aware that there is an increased chance of round-off error accumulating disproportionately as the number of operations is increased. The examples we provided are intended to do that.

1 See [4, p. 330] for a brief discussion on the disparity in terminology among authors.

2 This error is proportional to the truncation error for any “reasonable” method provided that the solution is sufficiently smooth [4, p. 330].

3 We should not compare the errors in Table 1 "diagonally" (for instance, we should not compare the error in the Runge-Kutta method with h = 1 with the error in the Euler method with h = 0.1), because the errors are computed at the grid points, which are not the same for different mesh widths.

4 This bound and a similar one in Section 4 are not least upper bounds.

REFERENCES

1. Boyce, William E. and Richard C. DiPrima. 2003. Elementary Differential Equations and Boundary Value Problems, 7th ed. New York: John Wiley & Sons, Inc.

2. Diacu, Florin. 2000. An Introduction to Differential Equations: Order and Chaos. New York: W. H. Freeman and Co.

3. Edwards, C. Henry and David E. Penney. 2000. Differential Equations: Computing and Modeling, 2nd ed. Upper Saddle River: Prentice-Hall, Inc.

4. Epperson, James F. 2002. An Introduction to Numerical Methods and Analysis. New York: John Wiley & Sons, Inc.

5. Fox, William P. 2002. Student misconceptions in using Euler’s method in ordinary differential equations and the importance of step size. PRIMUS. 12(3): 262-276.

6. Hosking, R. J., S. Joe, D. C. Joyce, and J. G. Turner. 1996. First Steps in Numerical Analysis, 2nd ed. London: Arnold.

7. Polking, J. C. ODE Software for Matlab (Matlab add-on package), http://math.rice.edu/~dfield/.

8. Texas Instruments TI-89. dsolve (solve and graph differential equations), http://education.ti.com/us/product/tech/89/down/89tips07.html.

L. Hong1 and J. B. Thoo2

ADDRESS: (1) Department of Mathematics, University of California, Davis CA 95616-8633 USA and (2) Mathematics Department, Yuba College, Marysville CA 95901-7699 USA.

BIOGRAPHICAL SKETCHES

L. Hong: “L” is for Lan and Lan is a freshly-minted PhD who finished her thesis on the Prandtl equations just last year. Currently, Lan is a lecturer at the University of California, Davis. Besides mathematics, Lan is also interested in computers. In fact, Lan learnt a great deal about her PC after she crashed it twice within three months. Lan’s excuse for messing up her computer is that she likes to try new things, about which she quotes Albert Einstein: “Anyone who has never made a mistake has never tried anything new.”

J. B. Thoo: “J” is for John and John hails from the whacky state of California. Today, John teaches at a community college and tries his best to enjoy his second childhood. His favorite toy today is a motorcycle, the Rodney Dangerfield (“no respect”) of the BMW line, an R1150RS. In his recent adulthood, John wrote a thesis in PDEs on nonlinear waves in random media, but he also has a T-shirt that has “The truly educated never graduate” (©High Cotton, Inc.) printed on the front.

POSTSCRIPT

Lan and John are doctoral siblings, both having been sent forth by J. K. Hunter, their advisor.