Can we differentiate or integrate a Taylor series in the same way that we differentiate or integrate a polynomial? If so, how does doing so affect the values of $x$ for which the Taylor series converges?
What is an alternating series, and how can we determine whether or not it converges? How accurately do the partial sums of a convergent alternating series estimate its exact sum?
So far, we have focused our attention on a collection of five basic functions and their Taylor series centered at $a = 0$. One of the reasons we were able to find these Taylor series is the patterns that arise in the derivatives of each of these functions. While we can always use Definition 8.4.1 to find the first few terms of the Taylor series, for most functions it is challenging to find a pattern among the various derivatives that allows us to state the general $n$th term of the series.
We know the coefficients of the Taylor series for a function $f$ centered at $x = a$ are given by $c_k = \frac{f^{(k)}(a)}{k!}$, so we start calculating some derivatives. Note that we can rewrite $f(x)$ in an equivalent form that is easier to differentiate repeatedly. Calculate $f'(x)$, $f''(x)$, and $f'''(x)$.
Since the derivatives of $f$ become so complicated, we consider a different approach to finding the Taylor series for $f$. We take advantage of the fact that one of our basic functions has a Taylor series expansion we already know, and observe that $f$ has an algebraic structure similar to that function.
Note that the series produced by this substitution represents $f(x)$, so we just found an infinite series representation for $f$ without calculating derivatives of $f$, by using a series we had already calculated for another function. For what values of $x$ do you expect this series for $f$ to converge?
Subsection 8.5.1 Using substitution and algebra to find new Taylor series expressions
The substitution technique we used in Preview Activity 8.5.1 can be used to find the Taylor series of any function whose algebraic structure is similar to that of a function whose Taylor series we already know.
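To make the substitution idea concrete, here is a minimal sketch in Python with SymPy. The choice of the geometric series together with the target $\frac{1}{1+x^2}$ is purely illustrative (it is not necessarily the function from the preview activity); the point is that the substituted series matches the Taylor polynomial computed directly from derivatives.

```python
import sympy as sp

x = sp.symbols('x')

# Known basic series: 1/(1 - u) = 1 + u + u^2 + ... for |u| < 1.
# Substituting u = -x**2 suggests a series for the structurally similar 1/(1 + x^2).
substituted = sum((-x**2) ** k for k in range(5))          # 1 - x^2 + x^4 - x^6 + x^8

# Taylor polynomial of 1/(1 + x^2) computed directly from derivatives.
direct = sp.series(1 / (1 + x**2), x, 0, 10).removeO()

print(sp.expand(substituted - direct))   # 0, so the two agree through degree 9
```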
Then, multiplying both sides of Equation (8.5.2) by the appropriate power of $x$, we obtain a Taylor series for the original function.
Since Equation (8.5.1) is valid for every real number, the substitution tells us that Equation (8.5.2) is also valid for every real number $x$. The Ratio Test can be used to show that multiplying every term of a Taylor series by the same power of $x$ does not change the set of $x$-values for which the series converges, so the new Taylor series also converges for every value of $x$.
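As a small check of the multiplication step (again with SymPy; the pair $x^2$ and $\sin(x)$ is our own illustrative choice, not necessarily the functions in Example 8.5.1), multiplying a known series by a power of $x$ reproduces the Taylor polynomial of the product:

```python
import sympy as sp

x = sp.symbols('x')

# Multiply the degree-7 Taylor polynomial of sin(x) by x^2 ...
sin_poly = sp.series(sp.sin(x), x, 0, 8).removeO()          # x - x^3/6 + x^5/120 - x^7/5040
by_multiplication = sp.expand(x**2 * sin_poly)

# ... and compare with the Taylor polynomial of x^2*sin(x) computed directly.
direct = sp.series(x**2 * sp.sin(x), x, 0, 10).removeO()

print(sp.expand(by_multiplication - direct))   # 0, so the two agree through degree 9
```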
Because the approaches in Preview Activity 8.5.1 and Example 8.5.1 each require us to use a known Taylor series, we restate the Taylor series we’ve established so far for important functions in Table 8.5.2.
Important Taylor series representations.
Table 8.5.2. Taylor series and the $x$-values where they converge for important functions.
Use known Taylor series, along with substitution and algebraic techniques, to find a Taylor series representation for each of the following functions. In addition, state the interval of $x$-values for which you expect each Taylor series to converge.
Subsection 8.5.2 Differentiating and integrating Taylor series
In Chapter 5, we discussed the challenge posed by definite integrals whose integrands do not have simple algebraic antiderivatives. Because we are unable to find such an antiderivative, we cannot use the First Fundamental Theorem of Calculus to evaluate the integral exactly. We learned in Section 5.2 that the Second Fundamental Theorem of Calculus provides us with an antiderivative of a given function by using an integral function: one antiderivative of $f$ is $F(x) = \int_a^x f(t) \, dt$.
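The following sketch (Python with SciPy; the integrand $e^{-t^2}$ is only an assumed stand-in for a function with no elementary antiderivative) illustrates the integral-function idea numerically: the integral function behaves like an antiderivative even though we cannot write it with elementary functions.

```python
import numpy as np
from scipy.integrate import quad

# A stand-in integrand with no elementary antiderivative (assumption for illustration).
f = lambda t: np.exp(-t**2)

# The Second FTC antiderivative as an integral function: F(x) = integral of f from 0 to x.
F = lambda x: quad(f, 0, x)[0]

# A centered difference quotient for F'(1) should match f(1).
h = 1e-5
print((F(1 + h) - F(1 - h)) / (2 * h), f(1.0))   # both approximately 0.367879
```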
Our recent work with Taylor series now suggests another way to find such an antiderivative, and this approach also provides new options for finding additional Taylor series. In Activity 8.5.2, as part of our work in finding a Taylor series for the integrand, we found that
(8.5.3)
and that this representation of the integrand is valid for every value of $x$. This infinite series representation suggests that we could find an antiderivative by using Equation (8.5.3) and integrating the series term by term, just as we would integrate a polynomial. Doing so produces an antiderivative that is itself expressed as a Taylor series.
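Here is a minimal SymPy sketch of integrating a Taylor polynomial term by term, again treating $e^{-x^2}$ as an assumed example of such an integrand (consistent with the stand-in used above):

```python
import sympy as sp

x = sp.symbols('x')

# Taylor polynomial of the assumed integrand e^(-x^2) about x = 0.
integrand_poly = sp.series(sp.exp(-x**2), x, 0, 8).removeO()   # 1 - x^2 + x^4/2 - x^6/6

# Integrating term by term, exactly as we would for a polynomial, gives an
# antiderivative expressed as the opening terms of a new Taylor series.
antiderivative = sp.integrate(integrand_poly, x)
print(sp.expand(antiderivative))   # x - x^3/3 + x^5/10 - x^7/42
```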
While it is natural to wonder how integrating a Taylor series might change the values of $x$ for which the series converges, it turns out that integrating a Taylor series has almost no effect 1
It is possible for the convergence status at the endpoints of the interval to change, but we are normally not concerned with those specific $x$-values.
on where the series converges (nor does differentiating such a series). This fact is stated formally in a result called the Power Series Differentiation and Integration Theorem. (A power series is any series of the form $\sum_{k=0}^{\infty} c_k (x-a)^k$; every Taylor series is a power series, and a famous result called Borel’s Theorem tells us that every power series is in fact the Taylor series of a related function.)
Stated more informally, the Power Series Differentiation and Integration Theorem tells us that when it comes to differentiating or integrating a Taylor series, we can do so just as if it were a finite polynomial: we can differentiate or integrate the Taylor series term-wise following the Power Rule for differentiating or integrating $x^n$. Moreover, doing so doesn’t change the interval on which the Taylor series converges 2
Differentiating or integrating can change the convergence status at the endpoints of the interval, but we again will not concern ourselves with that issue in this course.
. Since polynomials are the easiest of all functions to differentiate and integrate, we can now find many more Taylor series of interesting functions.
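As a quick check of the term-wise differentiation claim, here is a minimal SymPy sketch; the sine and cosine pair is our own illustrative choice. Differentiating the Taylor polynomial of $\sin(x)$ term by term reproduces the Taylor polynomial of $\cos(x)$:

```python
import sympy as sp

x = sp.symbols('x')

sin_poly = sp.series(sp.sin(x), x, 0, 12).removeO()   # degree-11 Taylor polynomial of sin(x)
cos_poly = sp.series(sp.cos(x), x, 0, 11).removeO()   # degree-10 Taylor polynomial of cos(x)

# Differentiating term by term, as if the series were a polynomial, matches the cosine series.
print(sp.expand(sp.diff(sin_poly, x) - cos_poly))     # 0
```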
In Section 8.4, we used Definition 8.4.1 to find the Taylor series expansion of a familiar function directly from its derivatives. Here we use substitution and integration to develop that same Taylor series in a different way.
In probability theory, this integral is important because of its connection to the normal distribution, which is represented by a bell curve. Indeed, it represents the fraction of a normally distributed characteristic in a population that lies between two given values. How can you use your result in (c) to estimate that fraction?
Near the end of Section 8.4, we noted the important big-picture perspective that for familiar basic functions that are infinitely differentiable, not only can we find the function’s Taylor series and determine the $x$-values for which the Taylor series converges, but the Taylor series converges to the function itself. Furthermore, we noted that these representations play a key role in how computers provide decimal approximations to important quantities by evaluating partial sums of such series.
Our most recent work with Taylor series shows that the news is better still: now we can easily represent even more complicated functions with Taylor series and determine their antiderivatives by using their infinite Taylor series and treating that representation just like a polynomial. In the last portion of this section, we investigate further how certain infinite series of numbers can be easily and accurately approximated using partial sums that result from evaluating Taylor series.
While we are unable to find an elementary algebraic antiderivative of the integrand, if we use its Taylor series and apply the Fundamental Theorem of Calculus to that series representation, we find that
(8.5.4)
The infinite series in Equation (8.5.4) is an example of an alternating series of real numbers. It turns out to be straightforward to determine whether or not an alternating series converges, and also to estimate the value of a convergent alternating series.
We will only consider alternating series for which the sequence of positive terms decreases to $0$. The following example illustrates two general results that hold for any alternating series whose terms decrease to $0$. We use a geometric series so that we know its exact sum and can compare certain computations to that sum.
Investigate the partial sums of an alternating geometric series. How do the partial sums compare to the exact sum of the series, and how can the partial sums be used to accurately estimate the value of that sum?
In both the table and the figure, we see how consecutive partial sums move back and forth above and below the exact sum of the infinite series, and moreover how the amount by which the next partial sum lies above or below that sum is less than the amount by which the previous partial sum deviated. For instance, the second partial sum is closer to the exact sum than the first, the third is closer than the second, and so on.
Moreover, since the difference between consecutive partial sums is exactly the next term of the series, and that term is the smallest one (in absolute value) used so far, its absolute value is the total vertical distance between consecutive points in Figure 8.5.6. This distance is greater than the error made in using the earlier partial sum to approximate the exact sum. Said differently, we are guaranteed that the error in a given partial sum is less than the absolute value of the next term in the series, so that next term provides a bound on the error in a given partial sum. A similar argument can be made for any value of $n$.
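A short numerical sketch in plain Python shows both behaviors. Here we take ratio $-\tfrac{1}{2}$ as an assumed example of an alternating geometric series (so the exact sum is $\tfrac{1}{1-(-1/2)} = \tfrac{2}{3}$): the partial sums alternate above and below the exact sum, and each error is bounded by the first omitted term.

```python
# Alternating geometric series with ratio -1/2 (an assumption for illustration);
# the exact sum is 1/(1 - (-1/2)) = 2/3.
exact = 2 / 3

partial, term = 0.0, 1.0
for n in range(1, 11):
    partial += term                 # S_n, the nth partial sum
    next_term = -0.5 * term         # the first omitted term
    error = partial - exact         # alternates in sign: S_n is above, then below, the sum
    print(n, round(partial, 6), round(error, 6), abs(error) <= abs(next_term))
    term = next_term
```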
If we were to compute the partial sums of any alternating series whose terms decrease to zero and plot the points as we did in Figure 8.5.6, we would see a similar picture: the partial sums alternate above and below the value to which the infinite alternating series converges. In addition, because the terms go to zero, the amount a given partial sum deviates from the total sum is at most the next term in the series. This result is formally stated as the Alternating Series Estimation Theorem.
Again, this result simply says: if we use a partial sum to estimate the exact sum of an alternating series, the absolute error of the approximation is less than the next term in the series.
so the 100th partial sum is within 0.0099 of the exact value of the series. In addition, if we compute the 100th partial sum and the exact sum of the series, we see that the difference between them is indeed less than the error bound of 0.0099 from the Alternating Series Estimation Theorem.
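The numbers quoted above are consistent with the alternating harmonic series $1 - \tfrac{1}{2} + \tfrac{1}{3} - \cdots = \ln(2)$, whose 101st term is $\tfrac{1}{101} \approx 0.0099$. Assuming that is the series in question, a quick computation confirms the bound:

```python
from math import log

# 100th partial sum of the alternating harmonic series (assumed to be the series above).
S_100 = sum((-1) ** (k + 1) / k for k in range(1, 101))

print(S_100)                    # approximately 0.688172
print(log(2))                   # approximately 0.693147
print(abs(log(2) - S_100))      # approximately 0.004975, below the bound 1/101 (about 0.0099)
```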
Use the given series fact to estimate the indicated quantity to within the stated accuracy. Do so without entering the quantity itself on a computational device. After you find your estimate, enter the quantity on a computational device and compare the results.
Use this series representation to estimate the value of the definite integral to within the stated accuracy. Then, compare your result to what a computational device reports when you use it to estimate the definite integral.
Find the Taylor series for the integrand, and then use that Taylor series to estimate the value of the definite integral to within the stated accuracy. Compare your result to what a computational device reports when you use it to estimate the definite integral.
The Power Series Differentiation and Integration Theorem tells us that we can differentiate or integrate a Taylor series in the natural way and that doing so has essentially no impact on the set of $x$-values for which the series converges. For instance, if we start from the Taylor series of one of our basic functions, which converges for every value of $x$, then differentiating that series term by term produces precisely the Taylor series of the function's derivative that we found by taking derivatives and applying Definition 8.4.1.
An alternating series is one whose terms alternate in sign, usually represented by $\sum_{k=1}^{\infty} (-1)^k a_k$ where $a_k > 0$ for all values of $k$. Any alternating series whose terms decrease to zero as $k \to \infty$ is guaranteed to converge. Moreover, the Alternating Series Estimation Theorem tells us that we can estimate the exact value of a convergent alternating series by using a partial sum, and the error of that approximation is at most the absolute value of the next term in the series. That is, if $S$ denotes the exact sum and $S_n$ the $n$th partial sum, then $|S - S_n| \le a_{n+1}$.
Note that when the constant in the denominator is not 1, we can still use the geometric series, but we have to multiply by a form of 1 that helps us rewrite the denominator in the form $1 - u$.
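For instance, here is a worked example of that manipulation; the denominator constant 2 is an arbitrary choice for illustration, not necessarily the one in the exercise.

```latex
\[
\frac{1}{2 - x}
  = \frac{1}{2} \cdot \frac{1}{1 - \frac{x}{2}}
  = \frac{1}{2} \sum_{k=0}^{\infty} \left(\frac{x}{2}\right)^{k}
  = \sum_{k=0}^{\infty} \frac{x^{k}}{2^{k+1}},
  \qquad \text{valid when } \left|\frac{x}{2}\right| < 1, \text{ that is, for } -2 < x < 2.
\]
```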
For the following indefinite integral, find the full power series centered at $x = 0$ and then give the first 5 nonzero terms of the power series and the open interval of convergence.
For the following indefinite integral, find the full power series centered at $x = 0$ and then give the first 5 nonzero terms of the power series and the open interval of convergence.
how many terms do you have to compute in order for your approximation (your partial sum) to be within 0.0000001 of the convergent value of that series?
In this exercise we find the Taylor series representations for two famous functions, the Fresnel integral functions
$S(x) = \int_0^x \sin(t^2) \, dt$
and
$C(x) = \int_0^x \cos(t^2) \, dt$.
The Fresnel integral functions are important in optics and are used in the design of Fresnel lenses such as those found in lighthouses along the Lake Michigan shore.
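A brief SymPy sketch shows how substitution followed by term-by-term integration produces the opening terms of the Taylor series for $S(x)$. It assumes the unnormalized definition $S(x) = \int_0^x \sin(t^2)\,dt$ written above; some references include a factor of $\pi/2$ inside the sine.

```python
import sympy as sp

t, x = sp.symbols('t x')

# Taylor polynomial of sin(t^2) about t = 0, obtained by substituting u = t^2
# into the sine series.
sin_t2 = sp.series(sp.sin(t**2), t, 0, 16).removeO()

# Integrate term by term from 0 to x to get the opening terms of S(x).
S_poly = sp.integrate(sin_t2, (t, 0, x))
print(S_poly)   # contains the terms x**3/3 - x**7/42 + x**11/1320 - x**15/75600
```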
Perhaps the most important property of the function $e^x$ is that $\frac{d}{dx}\left[e^x\right] = e^x$; that is, the function is its own derivative. Suppose that we didn’t yet know the coefficients of the Taylor series expansion for $e^x$, so we just said
$e^x = c_0 + c_1 x + c_2 x^2 + c_3 x^3 + \cdots$. (8.5.5)
Let $x = 0$ in Equation (8.5.5); what does this tell us about the value of $c_0$?
Take the derivative of both sides of Equation (8.5.5) and call the resulting equation “Equation 2”. Why do Equation (8.5.5) and Equation 2 together tell us that $c_1 = c_0$? Combine this observation with your conclusion in (b) and note that you now know the numerical value of both $c_0$ and $c_1$.
Observe that your result in (a) is an alternating series. Estimate the value of that alternating series to within 0.01. How many terms of the series are needed to do so?
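A small helper like the following (plain Python; the function name and the example series for $\sin(1)$ are our own hypothetical choices) automates this kind of estimate by stopping as soon as the first omitted term drops below the desired tolerance, exactly as the Alternating Series Estimation Theorem allows:

```python
from math import factorial, sin

def alternating_sum(term, tol):
    """Approximate sum_{k>=0} (-1)^k * term(k) for positive terms decreasing to 0.

    Stops once the next term is below tol; by the Alternating Series Estimation
    Theorem, the returned partial sum is then within tol of the exact sum.
    """
    total, k = 0.0, 0
    while term(k) >= tol:
        total += (-1) ** k * term(k)
        k += 1
    return total, k   # the estimate and the number of terms used

# Example: 1 - 1/3! + 1/5! - ... = sin(1), estimated to within 0.01.
approx, n_terms = alternating_sum(lambda k: 1 / factorial(2 * k + 1), 0.01)
print(approx, n_terms, abs(approx - sin(1)) < 0.01)   # 0.8333..., 2, True
```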