Taylor Traitor Functions
Stanislav Sýkora, Extra Byte, Via R.Sanzio 22C, Castano Primo, Italy 20022
in Stan's Library, Ed.S.Sykora, Vol.I. First release January 10, 2006
Permalink via DOI:  10.3247/SL1Math06.000

I have often encountered difficulties making students grasp some of the less evident characteristics of the Taylor power series. In particular, it is sometimes difficult to make it clear that a function f(x) may, at a given point of its domain, possess derivatives of all orders and give rise to a convergent formal Taylor series, and yet the limit of that series need not coincide with f(x). Since a simple example is worth tens of pages, I have thought up one and present it in the first part of this educational Note. The Note then proceeds with the analysis of some not-so-trivial aspects of this special case.

1. Introduction

Let f(x) be a real function of a real variable, I = (a,b) an interval of the set of real numbers R, and y a fixed element of I. We will assume that the following conditions are satisfied:
(a) f(x) has in I derivatives f^(k)(x) of all orders
(b) the following formal Taylor series converges everywhere in I to a function u_y(x)

(1)   u_y(x) = \sum_{k=0}^{\infty} \frac{f^{(k)}(y)}{k!}\,(x-y)^k

At this point, most students feel safe in identifying u_y(x) with f(x), forgetting that such a step requires an independent proof of the convergence to zero of the residue term R_{n,y}(x) in Taylor's formula

(2)   f(x) = \sum_{k=0}^{n} \frac{f^{(k)}(y)}{k!}\,(x-y)^k + R_{n,y}(x)

and that such a convergence is by no means guaranteed (see Appendix A for more details).

Among all functions which satisfy the conditions (a) and (b), we shall call Taylor friendly at x = y those for which f(x) is identical with u_y(x), while those for which f(x) differs from u_y(x) will be called Taylor traitors.

For the sake of completeness - and before proceeding further - we reproduce Taylor's theorem in its standard textbook form, using the two most commonly used forms of the residue:

Theorem: Let f(x) possess continuous derivatives up to order n+1 in the whole interval [y,x]. Then, expressing f(x) as indicated in Equation (2), there exist in [y,x] values θ and η such that
(3a)   R_{n,y}(x) = \frac{f^{(n+1)}(\theta)}{(n+1)!}\,(x-y)^{n+1}
(3b)   R_{n,y}(x) = \frac{f^{(n+1)}(\eta)}{n!}\,(x-\eta)^{n}\,(x-y)

Equations (3a) and (3b) are, respectively, the Lagrange form and the Cauchy form of the residue.
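
For readers who like to see numbers, here is a minimal numerical check of the Lagrange form (3a), not taken from the original Note (the choices f(x) = exp(x), y = 0, x = 1.5 and n = 4 are arbitrary): since the residue equals exp(θ)·x^(n+1)/(n+1)! for some θ between 0 and x, it must lie between the values obtained with θ = 0 and θ = x.

    import math

    # Illustrative check of the Lagrange residue (3a) for f(x) = exp(x), y = 0.
    # R_{n,0}(x) = exp(theta) * x**(n+1) / (n+1)!  for some theta in [0, x],
    # so the actual residue must lie between its theta = 0 and theta = x values.
    x, n = 1.5, 4
    p = sum(x**k / math.factorial(k) for k in range(n + 1))   # Taylor polynomial of degree n
    R = math.exp(x) - p                                       # actual residue
    low  = x**(n + 1) / math.factorial(n + 1)                 # value with theta = 0
    high = math.exp(x) * x**(n + 1) / math.factorial(n + 1)   # value with theta = x
    print(low <= R <= high)                                   # prints True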

2. Example of a Taylor traitor

A representative example of a Taylor traitor is the function

(4)   F(x) = e^{-1/|x|} \quad (x \neq 0), \qquad F(0) = 0

in the vicinity of x = 0 (see Figure 1).

It is easy to verify by elementary methods that F(x) satisfies condition (a). Moreover, it is shown in Appendix B that the k-th derivative of F(x) at x = 0 is 0 for every k = 0, 1, 2, 3, ... This means that all terms of the formal Taylor series of F(x) at x = 0 are zero. Consequently, the series certainly converges and its limit is U(x) = 0 which, however, is manifestly different from F(x)!!!
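
A quick numerical illustration of this flatness (a sketch only; the sample points are arbitrary): F(x)/x^n tends to zero for every n, which is exactly why all Taylor coefficients of F(x) at the origin vanish, even though F(x) itself is clearly non-zero away from the origin.

    import math

    def F(x):
        # The Taylor traitor of Eq.(4): exp(-1/|x|) for x != 0, and 0 at x = 0.
        return 0.0 if x == 0.0 else math.exp(-1.0 / abs(x))

    # F is completely 'flat' at the origin: F(x)/x**n -> 0 for every n.
    x = 0.01
    for n in (1, 2, 5, 10):
        print(n, F(x) / x**n)      # tiny numbers, about 3.7e-24 even for n = 10

    print(F(0.5))                  # ~0.135: F is clearly non-zero away from 0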

The only way to reconcile this fact with Taylor's theorem is to admit/realize that, in the case of F(x), the residue is equal to the function itself - a fact which may look unusual but which is not at all incompatible with the theorem.

The problem with the theorem is that it does not guarantee that the residue converges to anything at all, much less to zero!

3. Can the situation be even worse than that?

Well, it can. Let the function f(x) be Taylor friendly at y and let T(x) be any Taylor traitor at x = 0 whose derivatives all vanish there, such as the F(x) of Eq.(4). Then any function g(x) = f(x) + cT(x-y), where c is a non-zero constant, is necessarily a Taylor traitor at x = y. It is evident, in fact, that regardless of c, the formal Taylor series for g(x) converges to f(x):

(5)   \sum_{k=0}^{\infty} \frac{g^{(k)}(y)}{k!}\,(x-y)^k = \sum_{k=0}^{\infty} \frac{f^{(k)}(y) + c\,T^{(k)}(0)}{k!}\,(x-y)^k = \sum_{k=0}^{\infty} \frac{f^{(k)}(y)}{k!}\,(x-y)^k = f(x)

In this way, given any function f(x) which is Taylor friendly at a point y, one can construct a whole set S of distinct functions, each of which, at x = y, possesses all derivatives and gives rise to a convergent Taylor series. However, all these series are identical and converge to a common function which differs from every member of S! Notice also that the derivatives of g(x) at x = y coincide with those of f(x) and thus no longer need to be null. Moreover, by a proper choice of c, the discrepancy between g(x) and its formal Taylor series can be made as large as desired. It therefore begins to look as though 'normal' functions with 'well-behaved' Taylor expansions might be a tiny minority!!!
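
The following sketch illustrates the construction numerically (the choices f(x) = sin(x), y = 1 and c = 100 are arbitrary): g(x) = sin(x) + c·F(x-y) has at x = y exactly the same derivatives, and hence the same Taylor series, as sin(x), yet it deviates from that series by as much as one pleases.

    import math

    def F(x):
        # The Taylor traitor of Eq.(4).
        return 0.0 if x == 0.0 else math.exp(-1.0 / abs(x))

    # f(x) = sin(x) is Taylor friendly everywhere; add a scaled, shifted traitor.
    y, c = 1.0, 100.0
    def g(x):
        return math.sin(x) + c * F(x - y)

    # The common Taylor series at y converges to sin(x); g differs from it by c*F(x-y).
    for x in (1.0, 1.3, 2.0):
        print(x, math.sin(x), g(x), g(x) - math.sin(x))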

To make the situation even worse, notice that all the functions

(6)   F_k(x) = e^{-1/|x|^k} \quad (x \neq 0), \qquad F_k(0) = 0, \qquad k = 1, 2, 3, \ldots

exhibit at x = 0 qualitatively the same behavior, i.e., all their derivatives exist and are equal to zero (see Appendix B). Consequently, any function of the form

(7)   G(x) = \sum_{k=1}^{n} c_k\, F_k(s_k x)

with constant coefficients c_k and scale factors s_k behaves at x = 0 like the F(x) of Eq.(4) and has there a vanishing formal Taylor series.

This extends even to suitably convergent infinite series (convergent together with all their term-by-term derivatives), giving rise to a set of Taylor traitors of staggering complexity. Since, except for fortuitous cancellations, the sum of a Taylor friendly function and a Taylor traitor is again a Taylor traitor, Taylor traitors prevail by an amazingly wide margin.
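
As a small numerical sketch of the last two paragraphs (the coefficients, scale factors and the sample point below are purely illustrative), a finite combination of the functions F_k of Eq.(6) is still completely flat at the origin:

    import math

    def Fk(x, k):
        # The functions of Eq.(6): exp(-1/|x|^k) for x != 0, and 0 at x = 0.
        return 0.0 if x == 0.0 else math.exp(-1.0 / abs(x)**k)

    def G(x):
        # An illustrative combination of the F_k with coefficients and scale factors.
        return 2.0 * Fk(3.0 * x, 1) - 5.0 * Fk(x, 2) + Fk(0.5 * x, 3)

    # G(x)/x**n still tends to zero for every n, so every Taylor coefficient
    # of G at x = 0 vanishes, just as for F(x) itself.
    x = 0.001
    for n in (2, 5, 10):
        print(n, G(x) / x**n)      # all extremely small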

4. An application to physics

The author first had to face nasty Taylor traitors when studying thermodynamic functions by the methods of statistical physics. In that context, the temperature dependence of physical quantities is often expressed through combinations of functions of the Boltzmann type

(8)   e^{-E/(kT)}

with T denoting the absolute temperature, k the Boltzmann constant and E the energy differences between the system's energy levels. As one approaches the absolute zero of temperature, temperature derivatives of any order of any such physical quantity approach zero. The fact that the functions (8) are Taylor traitors at T = 0 implies that it is very difficult to estimate the behavior of physical systems at low or moderate temperatures by extrapolating ultra-low-temperature data.
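
A toy illustration of this difficulty (all numbers are invented for the sketch): fitting a low-order polynomial in T to 'data' taken from a single Boltzmann factor at ultra-low temperatures and extrapolating it upwards misses the true value by orders of magnitude, because near T = 0 the data are numerically indistinguishable from zero.

    import numpy as np

    # Invented example: a single Boltzmann factor with E/k = 100 K.
    def b(T):
        return np.exp(-100.0 / T)

    # 'Measure' b(T) at ultra-low temperatures (1 K to 5 K) and fit a cubic in T.
    T_low = np.linspace(1.0, 5.0, 50)
    coeffs = np.polyfit(T_low, b(T_low), deg=3)

    # Extrapolating to a moderate temperature grossly underestimates the true value.
    T = 50.0
    print(np.polyval(coeffs, T), b(T))    # a very small number  vs  exp(-2) ~ 0.135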

Appendix A. Counter-intuitive pitfalls of Taylor series expansions

The basic idea behind the Taylor series is extremely simple. If, having selected a fixed point x = y within the interval I = (a,b), we replace f(x) by f(y), we obtain a zeroth-order approximation which, given the continuity of f(x) in I, cannot be too bad in at least some small neighborhood of y.

Since f(x) has a derivative in I, we can go a step further and incorporate the slope of f(x) at y into our approximation, replacing f(x) by the expression f(y) + f'(y)(x-y). For 'reasonable' functions f(x) we expect such a linear approximation to be much better than the previous one.

When f(x) has in I derivatives up to the n-th order, one can proceed along this path and find an n-th order polynomial p(x) whose first n+1 derivatives (including the 0-th) coincide at y with those of f(x). It turns out that such a polynomial is unique and that it is given by the polynomial part on the right-hand side of Eq.(2). Again, for 'reasonable' functions f(x) one expects that, as n increases, the polynomials p(x) become ever better approximations of f(x). When f(x) is expressed as the sum of p(x) and a residue R(x), as in Eq.(2), this expectation amounts to saying that, as n grows to infinity, R(x) converges to zero.

Alas, human intuition is often fallible and therefore can never replace a mathematical proof. A rigorous analysis shows that, in this case, our intuition can fail in two distinct ways:

(1) R(x) may converge to zero for x-values close enough to y, but once the distance |x-y| exceeds a certain radius of convergence, |R(x)| typically decreases only up to some value of n and then starts to diverge to infinity!

Nice, well-understood examples of this kind of behavior can be found among functions which have an analytic continuation into the complex plane with a discrete set of poles. In such cases the radius of convergence equals the distance between the complex point (y,0) and the nearest pole. Thus the function 1/(1+x^2) has an analytic continuation with poles at +i and -i, so that its Taylor expansion around, say, y = 1 converges only for x lying in the interval (1-√2, 1+√2).
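
A short numerical sketch of this effect (not from the original Note; the closed-form coefficient used below follows from the partial-fraction decomposition 1/(1+x^2) = [1/(x-i) - 1/(x+i)]/(2i)): inside the radius √2 the partial sums settle down to the true value, while outside it they oscillate with ever-growing amplitude.

    # Partial sums of the Taylor series of f(x) = 1/(1+x^2) about y = 1.
    # From the partial fractions, the k-th Taylor coefficient at y is
    #     a_k = (-1)**k * Im( 1/(y - i)**(k+1) ).
    y = 1.0

    def coeff(k):
        return ((-1)**k / (y - 1j)**(k + 1)).imag

    def partial_sum(x, n):
        return sum(coeff(k) * (x - y)**k for k in range(n + 1))

    def f(x):
        return 1.0 / (1.0 + x*x)

    for x in (2.0, 2.5):            # |x - y| = 1.0 < sqrt(2) < 1.5
        sums = [round(partial_sum(x, n), 4) for n in (10, 20, 40, 80)]
        print(x, round(f(x), 4), sums)   # converges for x = 2.0, blows up for x = 2.5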

In any case, when the residues fail to converge at all, the limit function u_y(x) of Eq.(1) does not exist either and there is nothing more to discuss.

(2) A trickier case arises when all the derivatives of f(x) at y exist but are all zero. The approximating function u_y(x) of Eq.(1) then certainly exists and is identically null.

A naive student might conclude that, in this case, f(x) must be zero, too. The functions of Eq.(6) illustrate that this is not at all true: they vanish at the origin and nowhere else, and yet all their derivatives at the origin are null! Recognizing this, the student might declare that such functions are 'crazy'. In doing so, however, he would unwittingly illustrate an important social aspect of mathematical logic: a mathematician is absolutely free to propose any definition he/she pleases, but only a few such definitions, though 'valid', have any chance of being accepted by other mathematicians.

Appendix B. Derivatives of F_k(x) at the origin

First, let us prove that, for any polynomial P(z) and any k = 1, 2, 3, ...

(B1)   \lim_{z \to +\infty} P(z)\, e^{-z^k} = 0

Let n be the degree of P(z) and assume z > 0. From the power-series expansion of the exponential it follows that

(B2)   e^{z^k} = \sum_{m=0}^{\infty} \frac{z^{km}}{m!} > \sum_{m=0}^{n+1} \frac{z^{km}}{m!} \equiv Q(z)

where Q(z) is the approximating polynomial of degree k(n+1). Since the degree of Q(z) is larger than that of P(z), we have

(B3)   \lim_{z \to +\infty} \frac{P(z)}{Q(z)} = 0

Combining this with (B2), one obtains

(B4)   \left| P(z)\, e^{-z^k} \right| = \frac{|P(z)|}{e^{z^k}} < \frac{|P(z)|}{Q(z)} \;\longrightarrow\; 0 \quad (z \to +\infty)

which implies (B1).

Consider now the derivatives of the functions F_k(x) defined by Eq.(6), assuming x > 0.
We claim that they are all of the type

(B5)   F_k^{(\nu)}(x) = P_\nu(1/x)\; e^{-1/x^k} \qquad (x > 0)

where P_ν(z) is a polynomial.

The claim is certainly true for ν = 0 since then P_0(z) = 1. Let us therefore assume that it holds for some ν and proceed to prove it for ν+1:

(B6)   F_k^{(\nu+1)}(x) = \frac{d}{dx}\left[ P_\nu(1/x)\, e^{-1/x^k} \right] = \left[ \frac{k}{x^{k+1}}\, P_\nu(1/x) - \frac{1}{x^2}\, P'_\nu(1/x) \right] e^{-1/x^k}

where P'_ν(z) is the derivative of P_ν(z) which, of course, is also a polynomial in z. Substituting 1/x by z, one verifies that the above claim holds and even obtains a recursive relation for the polynomials:

(B7)   P_{\nu+1}(z) = k\, z^{k+1}\, P_\nu(z) - z^2\, P'_\nu(z)
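
The induction can also be checked mechanically for the first few orders; the following sketch (using the symbolic package sympy, with k = 2 and ν = 1..4 chosen arbitrarily) verifies that the recursion (B7), started from P_0(z) = 1, indeed reproduces the derivatives claimed in (B5):

    import sympy as sp

    # Check (B5)-(B7) for k = 2: the nu-th derivative of exp(-1/x**k), for x > 0,
    # equals P_nu(1/x) * exp(-1/x**k), with P_nu generated by the recursion
    #     P_{nu+1}(z) = k*z**(k+1)*P_nu(z) - z**2*P_nu'(z),  P_0(z) = 1.
    x, z = sp.symbols('x z', positive=True)
    k = 2
    Fk = sp.exp(-1 / x**k)

    P = sp.Integer(1)                                             # P_0(z) = 1
    for nu in range(1, 5):
        P = sp.expand(k * z**(k + 1) * P - z**2 * sp.diff(P, z))  # recursion (B7)
        lhs = sp.diff(Fk, x, nu)                                  # nu-th derivative of F_k
        rhs = P.subs(z, 1 / x) * Fk                               # the claimed form (B5)
        print(nu, sp.simplify(lhs - rhs) == 0)                    # prints True each time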

Returning to (B5), using again the substitution z = 1/x, and applying (B1), one obtains

(B8)   \lim_{x \to 0^+} F_k^{(\nu)}(x) = \lim_{z \to +\infty} P_\nu(z)\, e^{-z^k} = 0

Since the functions defined by Eq.(6) are all even, their derivatives at -x are

(B9)   F_k^{(\nu)}(-x) = (-1)^\nu\, F_k^{(\nu)}(x)

This makes it possible to extend (B8) to the limit from the left

(B10)   \lim_{x \to 0^-} F_k^{(\nu)}(x) = (-1)^\nu \lim_{x \to 0^+} F_k^{(\nu)}(x) = 0

Since the limits from the right and from the left are the same, and since the difference quotients F_k^(ν)(x)/x are again of the form covered by (B1) and thus tend to zero, an easy induction on ν shows that the derivatives at x = 0 themselves exist and vanish, which is the desired result

(B11)   F_k^{(\nu)}(0) = 0 \qquad \text{for every}\ \nu = 0, 1, 2, \ldots

It is interesting to notice that the functions defined in Eq.(6) are all continuous and have continuous derivatives of every order at x = 0. Despite the presence of the absolute value in their definition, they therefore do not exhibit any kind of singularity at the origin, apart from the fact that they are Taylor traitors there. As such, they can also serve as a nice exercise on singularities.

Copyright ©2006 Stanislav Sýkora