
Section 1.4 Analyzing Equations Numerically

Just as numerical algorithms are useful when finding the roots of polynomials, numerical methods will prove very useful in our study of ordinary differential equations. Consider the polynomial f(x) = x^2 - 2. We do not need a numerical algorithm to see that the roots of this polynomial are x = √2 and x = -√2. However, a numerical method such as the Newton-Raphson Algorithm is very useful for approximating √2 as a decimal.[1]
[1] See any calculus text for a description of the Newton-Raphson Algorithm.
Similarly, it may be easier to generate a numerical solution for differential equations if our goal is simply to plot a solution. In addition, there will be differential equations for which it is impossible to find a solution in terms of elementary functions such as polynomials, trigonometric functions, and exponential functions.

Subsection 1.4.1 Euler's Method

Suppose that we wish to solve the initial value problem
(1.4.1)   y' = f(t, y) = y + t,
(1.4.2)   y(0) = 1.
The equation y' = y + t is not separable, and separation of variables is currently the only analytic technique at our disposal. However, we can try to find a numerical approximation for the solution. A numerical approximation is simply a table (possibly very large) of t and y values.
We will attempt to find a numerical solution for (1.4.1)–(1.4.2) on the interval [0,1]. Even with the use of a computer, we cannot approximate the solution at every single point on an interval. For the initial value problem
y' = f(t, y),
y(t0) = y0,
we might be able to find approximations at a = t0, t1, t2, …, tN = b in [a,b] at best. If we choose t1, t2, …, tN to be equally spaced on [a,b], we can write
tk = t0 + kh,
where h = (b - a)/N and k = 1, 2, …, N. We say that h is the step size for our approximation.
Given an approximation Yk for the solution yk=y(tk), the question is how to find an approximate solution Yk+1 at tk+1. To generate the second approximation, we will construct a tangent line to the solution at y(t0)=y0. If we use the slope of the solution curve at t0, then
y'(t0) = f(t0, y0).
Making use of the fact that
(y(t0 + h) - y(t0))/h ≈ y'(t0) = f(t0, y0),
or equivalently
y(t0 + h) = y(t1) ≈ y(t0) + h f(t0, y0),
the estimate for our solution at t1=t0+h is
Y1=Y0+hf(t0,Y0).
Similarly, the approximation at t2=t0+2h will be
Y2=Y1+hf(t1,Y1).
Our general algorithm is
Yk+1=Yk+hf(tk,Yk).
The idea is to compute tangent lines at each step and use this information to get our next approximation.
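As a sketch, this iteration can be written in a few lines of Python (the function name `euler` and its return format are our own illustration, not something prescribed by the text):

```python
def euler(f, t0, y0, h, n):
    """Approximate the solution of y' = f(t, y), y(t0) = y0
    by taking n Euler steps of size h."""
    ts, Ys = [t0], [y0]
    for _ in range(n):
        # Y_{k+1} = Y_k + h f(t_k, Y_k)
        Ys.append(Ys[-1] + h * f(ts[-1], Ys[-1]))
        ts.append(ts[-1] + h)
    return ts, Ys

# the initial value problem y' = y + t, y(0) = 1 on [0, 1] with h = 0.1
ts, Ys = euler(lambda t, y: y + t, 0.0, 1.0, 0.1, 10)
```

Each pass through the loop produces one new row of the table of (t, Y) values described above.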
The algorithm that we have described is known as Euler's method. Let us estimate a solution to (1.4.1)–(1.4.2) on the interval [0,1] with step size h = 0.1. Since y(0) = 1, we can make our first approximation exact,
Y0=y(0)=1.
To generate the second approximation, we will construct a tangent line to the solution at y(0)=1. If we use the slope of the solution curve at t0=0,
y'(0) = f(0, y(0)) = y(0) + 0 = 1 + 0 = 1,
and make use of the fact that
(y(h) - y(0))/h ≈ y'(0)    or    y(h) ≈ y(0) + h y'(0),
the estimate for our solution at t=0.1 is
Y1=Y0+hf(t0,Y0)=Y0+h[Y0+t0]=1+(0.1)[1+0]=1.1000.
Similarly, the approximation at t=0.2 will be
Y2=Y1+hf(t1,Y1)=Y1+h[Y1+t1]=1.1000+(0.1)[1.1000+0.1]=1.2200.
Our general algorithm is
Yk+1=Yk+hf(tk,Yk)=Yk+h[Yk+tk]=(1.1)Yk+(0.01)k.
The initial value problem (1.4.1)–(1.4.2) is, in fact, solvable analytically with solution y(t) = 2e^t - t - 1. We can compare our approximation to the exact solution in Table 1.4.1. We can also see graphs of the approximate and exact solutions in Figure 1.4.2; the graph of the approximation for h = 0.001 is obscured by the graph of the exact solution. Notice that the error grows as we get further away from our initial value. In addition, a smaller step size gives us a more accurate approximation (Table 1.4.3).
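As a check, the following Python sketch regenerates the approximations using the recursion Yk+1 = (1.1)Yk + (0.01)k derived above and compares them with the exact solution y(t) = 2e^t - t - 1 (the variable names are our own):

```python
from math import exp

rows, Y = [], 1.0
for k in range(11):
    t = k / 10
    y = 2 * exp(t) - t - 1            # exact solution y(t) = 2e^t - t - 1
    rows.append((k, t, Y, y, abs(y - Y)))
    Y = 1.1 * Y + 0.01 * k            # Y_{k+1} = (1.1)Y_k + (0.01)k

for k, t, Yk, yk, err in rows:
    print(f"{k:2d}  {t:3.1f}  {Yk:.4f}  {yk:.4f}  {err:.4f}")
```

The printed rows should match the first five columns of Table 1.4.1.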
Table 1.4.1. Euler's approximation for y' = y + t
k tk Yk yk |yk - Yk| Percent Error
0 0.0 1.0000 1.0000 0.0000 0.00%
1 0.1 1.1000 1.1103 0.0103 0.93%
2 0.2 1.2200 1.2428 0.0228 1.84%
3 0.3 1.3620 1.3997 0.0377 2.69%
4 0.4 1.5282 1.5836 0.0554 3.50%
5 0.5 1.7210 1.7974 0.0764 4.25%
6 0.6 1.9431 2.0442 0.1011 4.95%
10 1.0 3.1875 3.4366 0.2491 7.25%
Figure 1.4.2. Euler's approximation for y' = y + t
Table 1.4.3. Step sizes for Euler's approximation
tk h=0.1 h=0.02 h=0.001 Exact Solution
0.1 1.1000 1.1082 1.1102 1.1103
0.2 1.2200 1.2380 1.2426 1.2428
0.3 1.3620 1.3917 1.3993 1.3997
0.4 1.5282 1.5719 1.5831 1.5836
0.5 1.7210 1.7812 1.7966 1.7974
0.6 1.9431 2.0227 2.0431 2.0442
0.7 2.1974 2.2998 2.3261 2.3275
0.8 2.4872 2.6161 2.6493 2.6511
0.9 2.8159 2.9757 3.0170 3.0192
1.0 3.1875 3.3832 3.4238 3.4366

Activity 1.4.1. Euler's Method and Error.

Consider the initial value problem
y' = x + xy,   y(0) = 1.
(c)
Use Euler's method to approximate solutions to the initial value problem for x = 0, 0.2, 0.4, …, 1.

Subsection 1.4.2 Finding an Error Bound

To fully understand Euler's method, we will need to recall Taylor's theorem from calculus.
Given the initial value problem
y' = f(t, y),   y0 = y(t0),
choose t1, t2, …, tN to be equally spaced on [t0, a]. Then we can write
tk = t0 + kh,
where h = (a - t0)/N and k = 1, 2, …, N. Taylor's Theorem tells us that
y(tk+1) = y(tk + h) = y(tk) + y'(tk)h + (y''(tk)/2!)h^2 + ⋯.
If we know the values of y and its derivatives at tk, then we can determine the value of y at tk+1.
The simplest approximation can be obtained by taking the first two terms of the Taylor series. That is, we will use a linear approximation,
yk+1 = y(tk+1) ≈ y(tk) + y'(tk)h = y(tk) + f(tk, yk)h.
This gives us Euler's method,
Y0 = y(t0)
Y1 = Y0 + h f(t0, Y0)
Y2 = Y1 + h f(t1, Y1)
⋮
Yk+1 = Yk + h f(tk, Yk).
The terms that we are omitting all contain powers of h of degree at least two. If h is small, then h^n for n = 2, 3, … will be very small, and these terms will not matter much.
We can actually estimate the error incurred by Euler's method if we make use of Taylor's Theorem.
The condition that there exists a constant L > 0 such that
|f(t, y1) - f(t, y2)| ≤ L|y1 - y2|,
whenever (t, y1) and (t, y2) are in D = [a,b] × R is called a Lipschitz condition. Many of the functions that we will consider satisfy such a condition. If the condition is satisfied, we can usually say a great deal about the function.
Table 1.4.6. Error bound and actual error
k tk Yk yk=y(tk) |yk - Yk| Estimated Error
0 0.0 1.0000 1.0000 0.0000 0.0000
1 0.1 1.1000 1.1103 0.0103 0.0286
2 0.2 1.2200 1.2428 0.0228 0.0602
3 0.3 1.3620 1.3997 0.0377 0.0951
4 0.4 1.5282 1.5836 0.0554 0.1337
5 0.5 1.7210 1.7974 0.0764 0.1763
6 0.6 1.9431 2.0442 0.1011 0.2235
7 0.7 2.1974 2.3275 0.1301 0.2756
8 0.8 2.4872 2.6511 0.1639 0.3331
9 0.9 2.8159 3.0192 0.2033 0.3968
10 1.0 3.1875 3.4366 0.2491 0.4671
We can now compare the estimated error from our theorem to the actual error of our example. We first need to determine M and L. Since
|f(t, y1) - f(t, y2)| = |(y1 + t) - (y2 + t)| = |y1 - y2|,
we can take L to be one. Since y'' = 2e^t, we can bound y'' on the interval [0,1] by M = 2e. Thus, we can bound the error by
|y(ti) - Yi| ≤ (hM/2L)[e^(L(ti - a)) - 1] = 0.1e(e^ti - 1)
for h=0.1. Our results are in Table 1.4.6.
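The comparison in Table 1.4.6 can be reproduced with a short Python sketch (the variable names are our own; L = 1 and M = 2e are the constants computed above):

```python
from math import e, exp

h, L, M = 0.1, 1.0, 2 * e        # step size, Lipschitz constant, bound on |y''|
Y, results = 1.0, []
for i in range(11):
    t = i / 10
    actual = abs((2 * exp(t) - t - 1) - Y)        # |y(t_i) - Y_i|
    bound = (h * M / (2 * L)) * (exp(L * t) - 1)  # 0.1e(e^t - 1)
    results.append((t, actual, bound))
    Y += h * (Y + t)                              # Euler step for y' = y + t
```

At every step the actual error should sit below the theoretical bound, as the table shows.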

Subsection 1.4.3 Improving Euler's Method

If we wish to improve upon Euler's method, we could add more terms of the Taylor series. For example, we can obtain a more accurate approximation by using a quadratic Taylor polynomial,
y(t1) ≈ y0 + f(t0, y0)h + (y''(t0)/2)h^2.
However, we need to know y''(t0) in order to use this approximation. Using the chain rule from multivariable calculus, we can differentiate both sides of y' = f(t, y) to obtain
y'' = (∂f/∂t)(dt/dt) + (∂f/∂y)(dy/dt) = ft + f fy.
Thus, our approximation becomes
y(t1) ≈ y0 + f(t0, y0)h + (1/2)(ft(t0, y0) + f(t0, y0) fy(t0, y0))h^2.
The problem is that some preliminary analytic work must be done. That is, before we can write a program to compute our solution, we must find ∂f/∂t and ∂f/∂y, although this is less of a problem with the availability of computer algebra systems such as Sage.
Around 1900, two German mathematicians, Carle Runge and Martin Kutta, independently invented several numerical algorithms to solve differential equations. These methods, known as Runge-Kutta methods, estimate the higher-order terms of the Taylor series to find an approximation that does not depend on computing derivatives of f(t,y).
If we consider the initial value problem
y' = f(t, y),   y(t0) = y0,
then
y(t1) = y(t0) + ∫_{t0}^{t1} f(s, y(s)) ds
or
(1.4.3)   y1 - y0 = y(t1) - y(t0) = ∫_{t0}^{t1} f(s, y(s)) ds
by the Fundamental Theorem of Calculus. In Euler's method, we approximate the right-hand side of (1.4.3) by
y1 - y0 = f(t0, y0)h.
In terms of the definite integral, this is simply a left-hand sum. In the improved Euler's method or the second-order Runge-Kutta method we will estimate the right-hand side of (1.4.3) using the trapezoid rule from calculus,
y(t1) - y(t0) = ∫_{t0}^{t1} f(s, y(s)) ds ≈ (f(t0, y0) + f(t1, y1)) h/2.
Thus, our algorithm becomes
(1.4.4)   y1 = y0 + (f(t0, y0) + f(t1, y1)) h/2.
However, we have a problem since y1 appears in the right-hand side of our approximation. To get around this difficulty, we will replace y1 in the right-hand side of (1.4.4) with the Euler approximation for y1. Thus,
y1 = y0 + (f(t0, y0) + f(t1, y0 + h f(t0, y0))) h/2.
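A minimal Python sketch of this predictor-corrector step (the names `improved_euler`, `k1`, and `y_euler` are ours):

```python
def improved_euler(f, t0, y0, h, n):
    """Second-order Runge-Kutta (improved Euler) steps for y' = f(t, y)."""
    t, y = t0, y0
    for _ in range(n):
        k1 = f(t, y)                 # slope at the left endpoint
        y_euler = y + h * k1         # Euler prediction for y(t + h)
        y += (h / 2) * (k1 + f(t + h, y_euler))   # average the two slopes
        t += h
    return y

# y' = y + t, y(0) = 1: one step and ten steps of size 0.1
Y1 = improved_euler(lambda t, y: y + t, 0.0, 1.0, 0.1, 1)
Y10 = improved_euler(lambda t, y: y + t, 0.0, 1.0, 0.1, 10)
```

With h = 0.1 a single step gives Y1 = 1.1100 against the exact value 1.1103, already closer than Euler's 1.1000.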
To understand why the second-order Runge-Kutta method is actually an improvement over the traditional Euler's method, we will need to use the Taylor approximation for a function of two variables. Let us assume that f(x, y) is defined on some rectangle and that all of its partial derivatives are continuous. Then
f(x + h, y + k) = f(x, y) + h ∂f/∂x + k ∂f/∂y + (1/2!)(h^2 ∂^2f/∂x^2 + 2hk ∂^2f/∂x∂y + k^2 ∂^2f/∂y^2) + (1/3!)(h^3 ∂^3f/∂x^3 + 3h^2k ∂^3f/∂x^2∂y + 3hk^2 ∂^3f/∂x∂y^2 + k^3 ∂^3f/∂y^3) + ⋯.
As in the case of the single variable Taylor series, we can write a Taylor polynomial if the Taylor series is truncated,
f(x + h, y + k) = Σ_{n=0}^{N} (1/n!)(h ∂/∂x + k ∂/∂y)^n f(x, y) + (1/(N + 1)!)(h ∂/∂x + k ∂/∂y)^(N+1) f(x̄, ȳ),
where the second term is the remainder term and (x̄, ȳ) lies on the line segment joining (x, y) and (x + h, y + k).
In the improved Euler's method, we adopt a formula
y(t + h) = y(t) + w1 F1 + w2 F2,
where
F1 = h f(t, y),
F2 = h f(t + αh, y + βF1).
That is,
(1.4.5)   y(t + h) = y(t) + w1 h f(t, y) + w2 h f(t + αh, y + βh f(t, y)).
The idea is to choose the constants w1, w2, α, and β as accurately as possible in order to duplicate as many terms as possible in the Taylor series
(1.4.6)   y(t + h) = y(t) + h y'(t) + (h^2/2!) y''(t) + (h^3/3!) y'''(t) + ⋯.
We can make equations (1.4.5) and (1.4.6) agree through the term in h if we choose w1 = 1 and w2 = 0. Since y' = f, we obtain Euler's method.
If we are more careful about choosing our parameters, we can obtain agreement up through the h^2 term. If we use the two variable Taylor series to expand f(t + αh, y + βhf), we have
f(t + αh, y + βhf) = f + αh ft + βh f fy + O(h^2),
where O(h^2) means that all of the subsequent terms have a factor of h^n with n ≥ 2. Using this expression, we obtain a new form for (1.4.5),
(1.4.7)   y(t + h) = y(t) + (w1 + w2) h f + α w2 h^2 ft + β w2 h^2 f fy + O(h^3).
Since y'' = ft + fy f by the chain rule, we can rewrite (1.4.6) as
(1.4.8)   y(t + h) = y(t) + h f + (h^2/2!)(ft + f fy) + O(h^3).
We can make equations (1.4.7) and (1.4.8) agree up through the quadratic terms if we require that
w1 + w2 = 1,   α w2 = 1/2,   β w2 = 1/2.
If we choose α = β = 1 and w1 = w2 = 1/2, these equations are satisfied, and we obtain the improved Euler's method
y(t + h) = y(t) + (h/2) f(t, y) + (h/2) f(t + h, y + h f(t, y)).
The improved Euler's method or the second-order Runge-Kutta method is a more sophisticated algorithm that is less prone to error due to the step size h. Euler's method is based on truncating the Taylor series after the linear term. Since
y(t + h) = y(t) + h y'(t) + O(h^2),
we know that the error depends on h. On the other hand, the error for the improved Euler's method depends on h^2, since
y(t + h) = y(t) + h y'(t) + (h^2/2!) y''(t) + O(h^3).
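These orders of convergence can be checked numerically. In the sketch below (the helper names are ours), halving the step size roughly halves Euler's error at t = 1 for our example problem, while it cuts the improved Euler's error roughly by a factor of four:

```python
from math import exp

EXACT = 2 * exp(1) - 2           # y(1) for y' = y + t, y(0) = 1

def euler_error(h):
    t, y = 0.0, 1.0
    for _ in range(round(1 / h)):
        y += h * (y + t)
        t += h
    return abs(y - EXACT)

def improved_error(h):
    t, y = 0.0, 1.0
    for _ in range(round(1 / h)):
        k1 = y + t                                # f(t, y)
        y += (h / 2) * (k1 + (y + h * k1) + (t + h))  # f at predicted point
        t += h
    return abs(y - EXACT)

r_euler = euler_error(0.1) / euler_error(0.05)        # ratio near 2
r_improved = improved_error(0.1) / improved_error(0.05)  # ratio near 4
```

The ratios are not exactly 2 and 4 because the error constants also change slightly with h, but the first- versus second-order behavior is clearly visible.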
If we use Simpson's rule to estimate the integral in
y(t1) - y(t0) = ∫_{t0}^{t1} f(s, y(s)) ds,
we can improve our accuracy up to h^4. The idea is exactly the same, but the algebra becomes much more tedious. This method is known as the Runge-Kutta method of order 4 and is given by
y(t + h) = y(t) + (1/6)(F1 + 2F2 + 2F3 + F4),
where
F1 = h f(t, y),
F2 = h f(t + h/2, y + F1/2),
F3 = h f(t + h/2, y + F2/2),
F4 = h f(t + h, y + F3).
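A direct Python transcription of these formulas (the function name `rk4` is ours):

```python
def rk4(f, t0, y0, h, n):
    """Classical fourth-order Runge-Kutta for y' = f(t, y), y(t0) = y0."""
    t, y = t0, y0
    for _ in range(n):
        F1 = h * f(t, y)
        F2 = h * f(t + h / 2, y + F1 / 2)
        F3 = h * f(t + h / 2, y + F2 / 2)
        F4 = h * f(t + h, y + F3)
        y += (F1 + 2 * F2 + 2 * F3 + F4) / 6
        t += h
    return y

# y' = y + t, y(0) = 1 with h = 0.1: ten steps reach t = 1
approx = rk4(lambda t, y: y + t, 0.0, 1.0, 0.1, 10)
```

For our running example, ten steps of size 0.1 agree with the exact value y(1) = 2e - 2 ≈ 3.4366 to better than four decimal places, compared with Euler's 3.1875.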

Subsection 1.4.4 Important Lessons

  • We can use Euler's method to find an approximate solution to the initial value problem
    y' = f(t, y),   y(a) = α
    on an interval [a,b]. If we wish to find approximations at N equally spaced points t1, …, tN, where h = (b - a)/N and ti = a + ih, our approximations should be
    Y0 = α,
    Y1 = Y0 + h f(a, Y0),
    Y2 = Y1 + h f(t1, Y1),
    ⋮
    Yk+1 = Yk + h f(tk, Yk),
    ⋮
    YN = YN-1 + h f(tN-1, YN-1).
    In practice, no one uses Euler's method. The Runge-Kutta methods are better algorithms.
  • Taylor's Theorem is a very useful tool for studying differential equations. If x > x0, then
    f(x) = f(x0) + f'(x0)(x - x0) + (f''(x0)/2!)(x - x0)^2 + ⋯ + (f^(n)(ξ)/n!)(x - x0)^n,
    where ξ ∈ (x0, x).
  • Error analysis and the rate of convergence are very important for any numerical algorithm. Our approximation is more accurate for smaller values of h. Under reasonable conditions we can also bound the error by
    |y(ti) - Yi| ≤ (hM/2L)[e^(L(ti - a)) - 1],
    where y is the unique solution to the initial value problem
    y' = f(t, y),   y(a) = α.
  • The condition that there exists a constant L > 0 such that
    |f(t, y1) - f(t, y2)| ≤ L|y1 - y2|,
    whenever (t, y1) and (t, y2) are in D = [a,b] × R is called a Lipschitz condition.
  • Using Taylor series, we can develop better numerical algorithms to compute solutions of differential equations. The Runge-Kutta methods are an important class of these algorithms.
  • The improved Euler's method is given by
    y(t + h) = y(t) + (h/2) f(t, y) + (h/2) f(t + h, y + h f(t, y))
    with the error bound depending on h^2.
  • The Runge-Kutta method of order 4 is given by
    y(t + h) = y(t) + (1/6)(F1 + 2F2 + 2F3 + F4),
    where
    F1 = h f(t, y),
    F2 = h f(t + h/2, y + F1/2),
    F3 = h f(t + h/2, y + F2/2),
    F4 = h f(t + h, y + F3)
    with the error bound depending on h^4.

Reading Questions 1.4.5 Reading Questions

1.

We can use Taylor polynomials to approximate a function f(x) near a point x0. Explain why this approximation can only be expected to be accurate near x0.

2.

Should we always use Euler's method when approximating a solution to an initial value problem? Why or why not?

Exercises 1.4.6 Exercises

Finding Solutions.

For each initial value problem of the form
y' = f(t, y),   y(t0) = y0,
  1. Write the Euler's method iteration Yk+1 = Yk + h f(tk, Yk) for the given problem, identifying the values t0 and y0.
  2. Using a step size of h=0.1, compute the approximations for Y1, Y2, and Y3.
  3. Solve the problem analytically if possible. If it is not possible for you to find the analytic solution, use Sage.
  4. Use the results of parts (2) and (3) to construct a table of errors for Yi - yi for i = 1, 2, 3.

7.

Consider the differential equation
dy/dt = 3y - 1
with initial value y(0)=2.
  1. Find the exact solution of the initial value problem.
  2. Use Euler's method with step size h = 0.5 to approximate the solution to the initial value problem on the interval [0,2]. Your solution should include a table of approximate values of the dependent variable as well as the exact values of the dependent variable. Make sure that your approximations are accurate to four decimal places.
  3. Sketch the graph of the approximate and exact solutions.
  4. Use the error bound theorem (Theorem 1.4.5) to estimate the error at each approximation. Your solution should include a table of approximate values of the dependent variable, the exact values of the dependent variable, the error estimates, and the actual error. Make sure that your approximations are accurate to four decimal places.

8.

In this series of exercises, we will prove the error bound theorem for Euler's method (Theorem 1.4.5).
  1. Use Taylor's Theorem to show that for all x ≥ -1 and any positive m,
    0 ≤ (1 + x)^m ≤ e^(mx).
  2. Use part (1) and geometric series to prove the following statement: If s and t are positive real numbers, and {ai} (i = 0, 1, …, k) is a sequence satisfying
    a0 ≥ -t/s,
    ai+1 ≤ (1 + s)ai + t,  for i = 0, 1, 2, …, k,
    then
    ai+1 ≤ e^((i+1)s)(t/s + a0) - t/s.
  3. When i = 0, y(t0) = Y0 = α and the theorem is true. Use Euler's method and Taylor's Theorem to show that
    |yi+1 - Yi+1| ≤ |yi - Yi| + h|f(ti, yi) - f(ti, Yi)| + (h^2/2)|y''(ξi)|,
    where ξi ∈ (ti, ti+1) and yk = y(tk).
  4. Show that
    |yi+1 - Yi+1| ≤ |yi - Yi|(1 + hL) + Mh^2/2.
  5. Let aj = |yj - Yj| for j = 0, 1, …, N, s = hL, and t = Mh^2/2, and apply part (2) to show
    |yi+1 - Yi+1| ≤ e^((1+i)hL)(|y0 - Y0| + Mh^2/(2hL)) - Mh^2/(2hL),
    and derive that
    |yi+1 - Yi+1| ≤ (Mh/2L)(e^((ti+1 - a)L) - 1)
    for each i = 0, 1, …, N - 1.
Hint.
Hints for part (2):
  • For fixed i show that
    ai+1 ≤ (1 + s)ai + t
        ≤ (1 + s)[(1 + s)ai-1 + t] + t
        ≤ (1 + s){(1 + s)[(1 + s)ai-2 + t] + t} + t
        ⋮
        ≤ (1 + s)^(i+1) a0 + [1 + (1 + s) + (1 + s)^2 + ⋯ + (1 + s)^i] t.
  • Now use a geometric sum to show that
    ai+1 ≤ (1 + s)^(i+1) a0 + (t/s)[(1 + s)^(i+1) - 1] = (1 + s)^(i+1)(t/s + a0) - t/s.
  • Apply part (1) to derive
    ai+1 ≤ e^((i+1)s)(t/s + a0) - t/s.

Subsection 1.4.7 Sage: Numerical Routines for solving ODEs

Not all differential equations can be solved using algebra and calculus even if we are very clever. If we encounter an equation that we cannot solve or use Sage to solve, we must resort to numerical algorithms like Euler's method or one of the Runge-Kutta methods, which are best implemented using a computer. Fortunately, Sage has some very good numerical solvers. To solve a differential equation numerically, Sage will need to know the right-hand side f of the equation, the initial condition, the step size, and the interval on which to approximate the solution.
Suppose we wish to solve the initial value problem dy/dx=x+y, y(0)=1. We can use Sage to find an algebraic solution.
We can also use Euler's method to find a solution for our initial value problem.
The syntax of eulers_method for the initial value problem
y' = f(x, y),   y(x0) = y0
with step size h on the interval [x0, x1] is eulers_method(f, x0, y0, h, x1). Notice that we obtained a table of values. However, we can use the line command from Sage to plot the values (x, y).
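Outside of Sage, a plain-Python stand-in with the same argument order can be sketched as follows (the function name is ours; Sage's eulers_method prints its table of values, while this sketch returns a list of points):

```python
def eulers_method_points(f, x0, y0, h, x1):
    """Plain-Python stand-in for Sage's eulers_method: returns the
    list of (x, y) pairs rather than printing a table."""
    pts, x, y = [(x0, y0)], x0, y0
    while x < x1 - h / 2:       # tolerate floating-point drift in x
        y += h * f(x, y)
        x += h
        pts.append((x, y))
    return pts

# dy/dx = x + y, y(0) = 1 on [0, 1] with h = 0.1
pts = eulers_method_points(lambda x, y: x + y, 0.0, 1.0, 0.1, 1.0)
```

The returned list of pairs can then be passed directly to a plotting command such as Sage's line.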
As we pointed out, eulers_method is not very sophisticated. We have to use a very small step size to get good accuracy, and the method can generate errors if we are not careful. Fortunately, Sage has much better algorithms for solving initial value problems. One such algorithm is desolve_rk4, which implements the fourth order Runge-Kutta method.
Again, we just get a list of points. However, desolve_rk4 has some nice built-in graphing utilities.
Write the Sage commands to compare the graphs obtained using eulers_method and desolve with the exact solution.
Not only is desolve_rk4 more accurate, it is much easier to use. For more information, see www.sagemath.org/doc/reference/calculus/sage/calculus/desolvers.html.