2.1 Infinitesimals

Actually, mathematicians have invented many different logical systems for working with infinity, and in most of them infinity does come in different sizes and flavors. Newton, as well as the German mathematician Leibniz, who invented calculus independently,[1] had a strong intuitive idea that calculus was really about numbers that were infinitely small: infinitesimals, the opposite of infinities. For instance, consider the number 1.1^{2}=1.21. That 2 in the first decimal place is the same 2 that appears in the expression 2t for the derivative of t^{2}.

[1] There is some dispute over this point. Newton and his supporters claimed that Leibniz plagiarized Newton's ideas, and merely invented a new notation for them.

Figure B shows the idea visually. The line connecting the points (1, 1) and (1.1, 1.21) is almost indistinguishable from the tangent line on this scale. Its slope is (1.21-1)/(1.1-1) = 2.1, which is very close to the tangent line's slope of 2. It was a good approximation because the points were close together, separated by only 0.1 on the t axis. If we needed a better approximation, we could try calculating 1.01^{2}=1.0201. The slope of the line connecting the points (1, 1) and (1.01, 1.0201) is 2.01, which is even closer to the slope of the tangent line.

Figure C. A geometrical interpretation of the derivative of t^{2}.

Another way of visualizing the idea is to interpret x=t^{2} as the area of a square with sides of length t, as suggested in Figure C. We increase t by an infinitesimally small number dt. The d is Leibniz's notation for a very small difference, and dt is to be read as a single symbol, “deetee,” not as a number d multiplied by a number t. The idea is that dt is smaller than any ordinary number you could imagine, but it's not zero. The area of the square is increased by dx = 2t dt + dt^{2}, which is analogous to the finite numbers 0.21 and 0.0201 we calculated earlier.
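The secant-line slopes 2.1 and 2.01 above are easy to reproduce numerically. Here is a short Python check; the helper name secant_slope is ours, introduced only for this illustration:

```python
# Approximate the slope of x = t^2 at t = 1 using a secant line through
# (t, f(t)) and (t + h, f(t + h)), for shrinking steps h.
def secant_slope(f, t, h):
    return (f(t + h) - f(t)) / h

f = lambda t: t**2
for h in [0.1, 0.01, 0.001]:
    # The printed slopes are approximately 2.1, 2.01, 2.001, closing in
    # on the tangent slope 2.
    print(h, secant_slope(f, 1.0, h))
```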
Where before we divided by a finite change in t such as 0.1 or 0.01, now we divide by dt, producing \begin{align} \frac{dx}{dt} &= \frac{2t\,dt + dt^2}{dt} \\ &= 2t + dt \end{align}
for the derivative. On a graph like Figure B, dx/dt is the slope of the tangent line: the change in x divided by the change in t. But adding an infinitesimal number dt onto 2t doesn't really change it by any amount that's even theoretically measurable in the real world, so the answer is really 2t. Evaluating it at t=1 gives the exact result, 2, that the earlier approximate results, 2.1 and 2.01, were getting closer and closer to.

Example 9
To show the power of infinitesimals and the Leibniz notation, let's prove that the derivative of t^{3} is 3t^{2}: \begin{align} \frac{dx}{dt} &= \frac{(t+dt)^3 - t^3}{dt} \\ &= \frac{3t^2\,dt + 3t\,dt^2 + dt^3}{dt} \\ &= 3t^2 + ... , \end{align}
where the dots indicate infinitesimal terms that we can neglect. This result required significant sweat and ingenuity when proved on page 140 by the methods of chapter 1, and not only that, but the old method would have required a completely different method of proof for a function that wasn't a polynomial, whereas the new one can be applied more generally, as we'll see presently in examples 10-13.

It's easy to get the mistaken impression that infinitesimals exist in some remote fairyland where we can never
touch them. This may be true in the same artsy-fartsy sense that we can never truly understand \(\sqrt{2}\), because
its decimal expansion goes on forever, and we therefore can never compute it exactly. But in practical work,
that doesn't stop us from working with \(\sqrt{2}\). We just approximate it as, e.g., 1.41. Infinitesimals
are no more or less mysterious than irrational numbers, and in particular we can represent them concretely on a computer.
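As a concrete illustration of representing an infinitesimal on a computer, here is a minimal sketch in Python. This is not how the Inf calculator actually works (Inf keeps higher powers of d as well); it is a cruder "dual number" that keeps only the first power of d and simply drops d^2:

```python
# A crude sketch of arithmetic with an infinitesimal d: a "dual number"
# a + b*d, where any d^2 term is treated as negligibly small and dropped.
# (The Inf calculator is more sophisticated and keeps higher powers of d.)
class Dual:
    def __init__(self, a, b=0.0):
        self.a = a   # ordinary part
        self.b = b   # coefficient of the infinitesimal d

    def __add__(self, other):
        return Dual(self.a + other.a, self.b + other.b)

    def __mul__(self, other):
        # (a1 + b1 d)(a2 + b2 d) = a1 a2 + (a1 b2 + a2 b1) d + b1 b2 d^2;
        # the d^2 term is discarded.
        return Dual(self.a * other.a, self.a * other.b + self.b * other.a)

d = Dual(0.0, 1.0)   # the infinitesimal itself

# Increase t = 3 by d and square it: the coefficient of d in (t + d)^2
# is 2t, the derivative of t^2.
t = Dual(3.0)
x = (t + d) * (t + d)
print(x.a, x.b)   # prints: 9.0 6.0
```

The same idea, carried to higher powers of d, is what lets a program hand back answers like 2t + dt with the infinitesimal part kept explicit.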
Let's use Inf to verify that the derivative of t^{3}, evaluated at t=1, is equal to 3, as found by plugging in to the result of Example 9. The input (after Inf's : prompt) and output look like this:

: ((1+d)^3-1)/d
3+3d+d^2

As claimed, the result is 3, or close enough to 3 that the infinitesimal error doesn't matter in real life.
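For comparison, here is a floating-point version of the same check in Python, with a small but finite h standing in for d. Algebraically, ((1+h)^3 - 1)/h = 3 + 3h + h^2, which mirrors Inf's answer 3 + 3d + d^2:

```python
# Floating-point analogue of Inf's ((1+d)^3-1)/d = 3+3d+d^2: the
# quotient exceeds the exact derivative 3 by 3h + h^2, an error that
# shrinks as h does.
def cube_slope(h):
    return ((1.0 + h)**3 - 1.0) / h

for h in [0.1, 0.001, 1e-6]:
    print(h, cube_slope(h))   # approaches 3 as h shrinks
```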
It might look like Inf did this example by using algebra to simplify the expression, but in fact Inf doesn't
know anything about algebra. One way to see this is to use Inf to compare d with some ordinary numbers:

: d<1
true
: d<0.01
true
: d<0.0000001
true
: d<0
false

If d were an ordinary number, it couldn't be less than every positive number we tried and yet still greater than zero; this is exactly the behavior we demanded of an infinitesimal.

Example 10
In example 5, we made a rough numerical check to see if the differentiation rule t^{k} → kt^{k-1}, which was proved on p. 140 of the pdf version for k=1, 2, 3, ..., was also valid for k=-1, i.e., for the function x=1/t. Let's look for an actual proof. To find a natural method of attack, let's first redo the numerical check in a slightly more suggestive form. Again approximating the derivative at t=3, we have\[ \frac{dx}{dt} \approx \left(\frac{1}{3.01}-\frac{1}{3}\right)\left(\frac{1}{0.01}\right) . \]
Let's apply the grade-school technique for subtracting fractions, in which we first get them over the same denominator: \[ \frac{1}{3.01}-\frac{1}{3} = \frac{3-3.01}{3\times3.01} . \]
The result is \begin{align} \frac{dx}{dt} &\approx \left(\frac{-0.01}{3\times3.01}\right)\left(\frac{1}{0.01}\right) \\ &= -\frac{1}{3\times3.01} . \end{align}
Replacing 3 with t and 0.01 with dt, this becomes \begin{align} \frac{dx}{dt} &= -\frac{1}{t(t+dt)} \\ &= -t^{-2}+... \end{align}
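As a plain floating-point sanity check on this result, the secant slope of x = 1/t at t = 3, using the same step 0.01 as in the text, comes out equal to -1/(3 × 3.01) and close to -1/9:

```python
# Finite-difference check of Example 10: the derivative of x = 1/t at
# t = 3 should be -t^(-2) = -1/9 ≈ -0.111.
def secant(f, t, h):
    return (f(t + h) - f(t)) / h

f = lambda t: 1.0 / t
print(secant(f, 3.0, 0.01))   # -1/(3*3.01), close to -1/9
print(-1.0 / 9.0)
```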
Figure D. Graphs of sin t, and its derivative cos t.

Example 11
The derivative of x = sin t, with t in units of radians, is \[ \frac{dx}{dt} = \frac{\sin(t+dt)-\sin t}{dt} , \]
and with the trig identity sin(α+β) = sin α cos β + cos α sin β, this becomes \[ = \frac{\sin t \: \cos dt + \cos t \: \sin dt - \sin t}{dt} . \]
Applying the small-angle approximations sin u ≈ u and cos u ≈ 1, we have \begin{align} \frac{dx}{dt} &= \frac{\cos t \: dt}{dt} + ... \\ &= \cos t + ... , \end{align}
where “...” represents the error caused by the small-angle approximations. This is essentially all there is to the computation of the derivative, except for the remaining technical point that we haven't proved that the small-angle approximations are good enough. In Example 9, when we calculated the derivative of t^{3}, the resulting expression for the quotient dx/dt came out in a form in which we could inspect the “...” terms and verify before discarding them that they were infinitesimal. The issue is less trivial in the present example. This point is addressed more rigorously on page 141.

Figure D shows the graphs of the function and its derivative. Note how the two graphs correspond. At t=0, the slope of sin t is at its largest, and is positive; this is where the derivative, cos t, attains its maximum positive value of 1. At t=π/2, sin t has reached a maximum, and has a slope of zero; cos t is zero here. At t=π, in the middle of the graph, sin t has its maximum negative slope, and cos t is at its most negative extreme of -1. Physically, sin t could represent the position of a pendulum as it moves back and forth from left to right, and cos t would then be the pendulum's velocity.

Example 12
What about the derivative of the cosine? The cosine and the sine are really the same function, shifted to the left or right by π/2. If the derivative of the sine is the sine itself shifted to the left by π/2, then the derivative of the cosine must be the cosine shifted to the left by π/2: \begin{align} \frac{d\cos t}{dt} &= \cos(t+\pi/2) \\ &= -\sin t . \end{align}
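Examples 11 and 12 can also be checked numerically. With a small finite step standing in for dt, the secant slopes of sin and cos track cos t and -sin t at the points discussed above:

```python
import math

# Finite-difference check that (sin t)' = cos t and (cos t)' = -sin t,
# with t in radians and a small but finite step h standing in for dt.
def secant(f, t, h):
    return (f(t + h) - f(t)) / h

h = 1e-6
for t in [0.0, math.pi / 2, math.pi]:
    print(secant(math.sin, t, h), math.cos(t))    # nearly equal
    print(secant(math.cos, t, h), -math.sin(t))   # nearly equal
```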
The next example will require a little trickery. By the end of this chapter you'll learn general techniques for cranking out any derivative cookbook-style, without having to come up with any tricks.

Figure E. The function x = 1/(1-t).

Example 13
◊ Find the derivative of 1/(1-t), evaluated at t=0.

◊ The graph shows what the function looks like. It blows up to infinity at t=1, but it's well behaved at t=0, where it has a positive slope. For insight, let's calculate some points on the curve. The point at which we're differentiating is (0, 1). If we put in a small, positive value of t, we can observe how much the result increases relative to 1, and this will give us an approximation to the derivative. For example, we find that at t=0.001, the function has the value 1.001001001001, and so the derivative is approximately (1.001-1)/(.001-0), or about 1. We can therefore conjecture that the derivative is exactly 1, but that's not the same as proving it.

But let's take another look at that number 1.001001001001. It's clearly a repeating decimal. In other words, it appears that \[ \frac{1}{1-1/1000} = 1+\frac{1}{1000}+\left(\frac{1}{1000}\right)^2+... , \]
and we can easily verify this by multiplying both sides of the equation by 1-1/1000 and collecting like powers. This is a special case of the geometric series \[ \frac{1}{1-t} = 1+t+t^2+... , \]
which can be derived [2] by doing synthetic division (the equivalent of long division for polynomials), or simply verified, after forming the conjecture based on the numerical example above, by multiplying both sides by 1-t.

[2] As a technical aside, it's not necessary for our present purposes to go into the issue of how to make the most general possible definition of what is meant by a sum like this one, which has an infinite number of terms; the only fact we'll need here is that the error in the finite sum obtained by leaving out the “...” contains only higher powers of t. This is taken up in more detail in ch. 7.

Note that the series only gives the right answer for |t|<1. E.g., for t=1, it equals 1+1+1+…, which, if it means anything, clearly means something infinite.

As we'll see in Section 2.2, and have been implicitly assuming so far, infinitesimals obey all the same elementary laws of algebra as the real numbers, so the above derivation also holds for an infinitesimal value of t. We can verify the result using Inf:

: 1/(1-d)
1+d+d^2+d^3+d^4

Notice, however, that the series is truncated after the first five terms. This is similar to the truncation that happens when you ask your calculator to find \(\sqrt{2}\) as a decimal.

The result for the derivative is \begin{align} \frac{dx}{dt} &= \frac{\left(1+dt+{dt}^2+...\right)-1}{dt} \\ &= 1+... . \end{align}
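The geometric-series argument can also be checked with ordinary numbers: a five-term partial sum (mirroring Inf's truncation) already matches 1/(1-t) very closely at t = 0.001, and the secant slope at t = 0 is close to 1. The helper name geometric_partial is ours, introduced only for this check:

```python
# Check the identity 1/(1 - t) = 1 + t + t^2 + ... at t = 0.001, using
# a five-term partial sum, then estimate the derivative of 1/(1 - t)
# at t = 0 with a secant line.
def geometric_partial(t, n):
    return sum(t**k for k in range(n))   # 1 + t + ... + t^(n-1)

t = 0.001
print(1.0 / (1.0 - t))           # 1.001001001...
print(geometric_partial(t, 5))   # agrees to about 15 decimal places

h = 0.001
print((1.0 / (1.0 - h) - 1.0) / h)   # close to the exact derivative, 1
```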
