A notebook for this material: ipynb
The concept of a limit is what makes calculus possible. Limits are used to define the two main concepts of calculus: derivatives and integrals. The formal definition of a limit is a bit difficult to appreciate and grasp. It involves an “\(\epsilon-\delta\)” formulation:
The limit of \(f(x)\) as \(x\) approaches \(c\) is \(L\) if for every \(\epsilon > 0\) there exists a \(\delta > 0\) such that whenever \(0 < |x - c| < \delta\) then \(|f(x) - L| < \epsilon\).
When a limit exists, we write:
\[ \lim_{x \rightarrow c} f(x) = L. \]
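For instance, take \(f(x) = 2x\), \(c = 1\), and \(L = 2\): given any \(\epsilon > 0\), the choice \(\delta = \epsilon/2\) works, since \(0 < |x - 1| < \delta\) implies \(|2x - 2| = 2|x - 1| < 2\delta = \epsilon\).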
However, the intuition of limits is more accessible. This intuition was known as early as the Greeks: Archimedes figured out the area under a parabola over 2000 years ago by the method of exhaustion, a limiting process. Fermat in 1629 essentially took a limit to find the slope of a tangent line to a polynomial curve. Newton, in the late 1600s, exploited the idea in his development of calculus (as did Leibniz). Yet it wasn’t until the 1800s that Bolzano, Cauchy, and Weierstrass put the idea on a firm footing, as above.
To get the intuition behind a limit we replace the absolute value inequalities in the definition with “close” and read as follows: as \(x\) gets “close” to \(c\) (but not equal), then \(f(x)\) should get “close” to \(L\).
Before beginning, it should be noted that for most values of \(c\), the answer is simply \(L=f(c)\). This is because almost all the functions encountered will be continuous, which basically means that for any \(c\) the limit is given by \(L=f(c)\).
For example, let \(f(x) = \sin(x)/x\). For any \(c\) except \(0\), \(f(x)\) is continuous and the limit exists and is simply \(f(c)\). In particular, at \(c=1\) we have
\[ \lim_{x \rightarrow 1} \frac{\sin(x)}{x} = \frac{\sin(1)}{1}. \]
However, at \(c=0\) we can’t say \(f(x)\) is continuous – it isn’t even defined there. When \(c\) is non-zero, the function \(f(x)\) is simply the ratio of two continuous functions. Such a ratio will be continuous except when dividing by \(0\), as is the case when \(c=0\).
When discussing limits of combinations of continuous functions it is generally true that the limit is found by evaluating the function at \(c\) unless this yields an indeterminate form which is of the type: \(0/0\), \(\infty/\infty\), \(0 \cdot \infty\), \(\infty - \infty\), \(0^0\), \(1^\infty\), and \(\infty^0\). Such forms can have limits of many different values, depending on the functions involved.
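For instance, each of the following ratios has the indeterminate form \(0/0\) at \(0\), yet each behaves differently for \(x\) near \(0\) (a quick illustration of our own, not one of the worked examples):

x = 1e-4              # a small value standing in for "x near 0"
x/x, x^2/x, x/x^2     # approximately 1, 0, and a huge value – three different 0/0 limits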
For this particular problem, what does Julia return when we try to evaluate \(f(c)\)?
f(x) = sin(x)/x
c = 0
f(c)
NaN
The value NaN arises when a floating-point computation is indeterminate.
So, does \(\sin(x)/x\) have a limit at \(c=0\)? If it does, it isn’t simply \(f(0)\).
NaN values

Operations involving NaN will also return NaN. In this example, the ratio defining \(f(x)\) is like \(0/0\) when \(c=0\) – an indeterminate form. The value NaN is returned by most of the indeterminate forms listed above, but not all. For example, 0^0 is 1 to Julia:
It proved more useful to adhere to the simply stated rule “anything raised to the \(0\) power gives \(1\)” than to special-case \(0^0\). Some comments by the “father of floating point” can be read near the end of this.
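We can check both claims directly:

0/0, Inf - Inf, 0^0   # the first two are NaN; 0^0 is 1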
A graphical approach to this problem can show if a function is likely to have a limit. The basic idea is to simply make a graph around \(c\) and look to see if \(f(x)\) gets close to some value as \(x\) gets close to \(c\):
using MTH229
using Plots
plotly()
Plots.PlotlyBackend()
f(x) = sin(x)/x
c, delta = 0, pi/2
plot(f, c - delta, c + delta)
From the graph it looks like the function value should be \(1\) at \(0\), which is indeed the case, as was understood as early as 1735 by Euler.
Recall that the graph is made by choosing a large number of points between the limits and then evaluating the function at each of these points. The values \((x,f(x))\) are then connected like a dot-to-dot figure. If a value is NaN, the line will break to indicate this. In the above graph, it appears that \(0\) is not one of the points sampled, as no break is indicated. The graph gives the appearance that \(f(x)\) is continuous, though we know that isn’t the case, as \(f(x)\) is undefined at \(x=0\).
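We can see how \(0\) plausibly gets skipped. A sketch (the actual points chosen by plot may differ):

xs = range(-pi/2, stop=pi/2, length=100)   # 100 evenly spaced sample points
any(iszero, xs)                            # false – 0 is not sampled, so no NaN and no break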
Consider now the following limit
\[ \lim_{x \rightarrow 2} \frac{x^2 - 5x + 6}{x^2 +x - 6} \]
Noting that this is a ratio of continuous functions, we first check whether there is anything to do:
f(x) = (x^2 - 5x + 6) / (x^2 + x - 6)
c = 2
f(c)
NaN
The NaN indicates that this function is indeterminate at \(c=2\). A quick plot gives us an idea that the limit exists and is roughly \(-0.2\):
c, delta = 2, 1
plot(f, c - delta, c + delta)
The graph looks continuous. In fact, the value \(c=2\) is termed a removable discontinuity as redefining \(f(x)\) to be \(-0.2\) when \(x=2\) results in a continuous function.
As an aside, you can redefine f using the “ternary operator”:
f(x) = x == 2.0 ? -0.2 : (x^2 - 5x + 6) / (x^2 + x - 6)
f (generic function with 1 method)
This particular case is a textbook example: one can easily factor \(f(x)\) to get:
\[ f(x) = \frac{(x-2)(x-3)}{(x-2)(x+3)} \]
Written in this form, we clearly see that this is the same function as \(g(x) = (x-3)/(x+3)\) when \(x \neq 2\). The function \(g(x)\) is continuous at \(x=2\). So were one to redefine \(f(x)\) at \(x=2\) to be \(g(2) = (2-3)/(2+3) = -0.2\) it would be made continuous, hence the term removable singularity.
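Checking the value with g as just defined:

g(x) = (x-3)/(x+3)
g(2)   # -0.2, the value that makes f continuous at x = 2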
By graphing near \(1\), find the limit:
\[ L = \lim_{x \rightarrow 1} \frac{x^2−3x+2}{x^2−6x+5} \]
Graphically look at the following limit
\[ L = \lim_{x \rightarrow -2} \frac{x}{x+1} \frac{x^2}{x^2 + 4} \]
What is the value?
Graphically investigate the limit
\[ L = \lim_{x \rightarrow 0} \frac{e^x - 1}{x}. \]
What is the value of \(L\)?
Graphically investigate the limit
\[ \lim_{x \rightarrow 0} \frac{\cos(x) - 1}{x}. \]
The limit exists, what is the value?
The following limit is commonly used:
\[ L = \lim_{h \rightarrow 0} \frac{e^{x + h} - e^x}{h}. \]
Factoring out \(e^x\) from the top and using rules of limits this becomes,
\[ L = e^x \lim_{h \rightarrow 0} \frac{e^h - 1}{h}. \]
What is \(L\)?
The following limit is commonly used:
\[ \lim_{h \rightarrow 0} \frac{\sin(x + h) - \sin(x)}{h} = L. \]
The answer should depend on \(x\), though it is possible it is a constant. Using the angle-addition formula for sine and the rules of limits, this can be written as:
\[ L = \cos(x) \lim_{h \rightarrow 0}\frac{\sin(h)}{h} + \sin(x) \lim_{h \rightarrow 0}\frac{\cos(h)-1}{h}. \]
Using the last result, what is the value of \(L\)?
The function \(f(x) = (x^2 - 4)/(x-2)\) has a removable singularity at \(x=2\). What value would you redefine \(f(2)\) to be, to make \(f\) a continuous function?
Let’s look at the function \(f(x) = x \sin(1/x)\). A graph around \(0\) can be made with:
f(x) = x == 0 ? NaN : x * sin(1/x)
c, delta = 0, 1/4
plot(f, c - delta, c + delta)
g(x) = abs(x); h(x) = - abs(x)
plot!(g)
plot!(h)
This graph clearly oscillates near \(0\). To the graph of \(f\), we added graphs of both \(g(x) = |x|\) and \(h(x) = - |x|\). From this graph it is easy to see by the “squeeze theorem” that the limit at \(x=0\) is \(0\). Why?
The highly oscillatory function
\[ f(x) = x^2 (\cos(1/x) - 1) \]
has a removable singularity at \(x=0\). What value would you redefine \(f(0)\) to be, to make \(f\) a continuous function?
Some functions do not have a limit. Make a graph of \(\sin(1/x)\) from \(0.0001\) to \(1\) and look at the output. Why does a limit not exist?
Consider again the limit of \(f(x) = \sin(x)/x\), whose answer is not obvious from the formula, but from the graph we could see that \(f(x)\) goes to \(L=1\) as \(x\) goes to \(0\).
We can further illustrate how the function gets close to the limit of \(1\) by restricting the graph to values near \(c\):
f(x) = sin(x) / x
c, delta = 0, 1e-1
plot(f, c - delta, c + delta)
We see a similar picture, with an answer of \(1\). A smaller \(\delta\) yields a similar picture:
c, delta = 0, 1e-3
plot(f, c - delta, c + delta)
The graphs have a similar shape – but different scales. A closer look at the \(y\) axis shows that for delta = 1e-1 (or \(1/10\)) the range of \(y\) values is about \(1/1000\), and for delta = 1e-3 it is about \(1/10,000,000\).
We can be more precise. The following function estimates the length of the range of the plotted values:
function epsilon(f, c, delta)
    xs = range(c - delta, stop=c + delta, length=100)
    ys = f.(xs)            # like drawing a plot
    m, M = extrema(ys)     # minimum and maximum
    M - m
end
(epsilon(f, 0, 1e-1), epsilon(f, 0, 1e-3))
(0.0016656634810520154, 1.6664965329926673e-7)
Numerically we see that as \(x\) gets close to \(0\) (delta gets small), \(f(x)\) gets close to a limit (epsilon gets small).
In fact for this problem we can be a bit more precise, as one can infer that \(\epsilon/(1/6 \cdot\delta^2)\) is basically 1.
We can empirically verify this for one value of \(\delta\) with
delta = 1e-2
epsilon(f, 0, delta) / (1/6 * delta^2)
0.9998929696086734
The ratio is basically \(1\), as advertised.
Of course, using a comprehension we can do this comparison for different sized values of \(\delta\) at once:
deltas = [1/10, 1/10^2, 1/10^3, 1/10^4, 1/10^5]
[epsilon(f, 0, delta) / (1/6 * delta^2) for delta in deltas]
5-element Vector{Float64}:
0.999398088631209
0.9998929696086734
0.9998979197956005
0.999897964426566
0.9999068240063025
This isn’t quite what needs to be shown in the proof of a limit: we are essentially finding an epsilon for a given delta rather than a delta for a given epsilon. It does suggest that were we to attempt a formal proof, we should look at \(\delta\) which is basically \(\sqrt{6 \epsilon}\).
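As a rough sanity check of this suggestion (an extra step; the names eps0 and delta0 are ours):

eps0 = 1e-6                    # a target epsilon
delta0 = sqrt(6 * eps0)        # the suggested delta
epsilon(f, 0, delta0) / eps0   # roughly 1, consistent with the suggestion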
Consider the limit
\[ L = \lim_{h \rightarrow 0} \frac{(1 + h)^2 - 1^2}{h} \]
Guess the relationship between epsilon and delta.
A table can be used to investigate limits of functions. The basic idea is that if
\[ \lim_{x \rightarrow c} f(x) = L \]
then for values of \(x\) close to \(c\), we should have that the values of \(f\) for these \(x\) are close to \(L\). For example, let’s look at this limit again:
\[ \lim_{x \rightarrow 2} \frac{(x-2)(x-3)}{(x-2)(x+3)} \]
which we know is simply \(-1/5\). To approach this problem using a table we would first need to produce some values of \(x\) getting close to \(2\). Here we get values approaching 2 from above:
hs = [1/10, 1/100, 1/1000, 1/10000, 1/100000]  # or [1/10^i for i in 1:5]
xs = 2 .+ hs
5-element Vector{Float64}:
2.1
2.01
2.001
2.0001
2.00001
The corresponding \(y\) values are found by applying \(f\) to each:
f(x) = ((x-2)*(x-3)) / ((x-2)*(x+3))
ys = f.(xs)
5-element Vector{Float64}:
-0.17647058823529413
-0.1976047904191617
-0.19976004799040195
-0.19997600047999037
-0.19999760000480002
The ys are clearly getting closer to \(-0.2\), as expected.
The pairs of values xs and ys can be more naturally displayed with a table; the square-bracket notation is useful here to put the values into two columns:
[xs ys]
5×2 Matrix{Float64}:
2.1 -0.176471
2.01 -0.197605
2.001 -0.19976
2.0001 -0.199976
2.00001 -0.199998
The above investigates the right limit, as the values chosen for xs are always more than \(2\) but getting closer. The left limit might have used xs defined with:
xs = [2 - 1/10, 2 - 1/100, 2 - 1/1000, 2 - 1/10000, 2 - 1/100000]
ys = f.(xs)
[xs ys]
5×2 Matrix{Float64}:
1.9 -0.22449
1.99 -0.202405
1.999 -0.20024
1.9999 -0.200024
1.99999 -0.200002
We see the same phenomenon – \(f(x)\) gets close to \(-0.2\) as \(x\) gets close to \(c=2\) from the left or the right.
The three steps above are a bit tedious to type for each problem, so for convenience we encapsulate them into a function (available in the MTH229 package) called lim:
lim(f, 2)
2.1        -0.17647058823529413
2.01       -0.1976047904191617
2.001      -0.19976004799040195
2.0001     -0.19997600047999037
2.00001    -0.19999760000480002
2.000001   -0.19999976000004796
⋮          ⋮
c          L?
⋮          ⋮
1.999999   -0.20000024000004799
1.99999    -0.20000240000480002
1.9999     -0.2000240004800096
1.999      -0.20024004800960188
1.99       -0.20240480961923846
1.9        -0.22448979591836735
The lim function has keywords n and m to adjust how close x gets to c. More importantly, it has the keyword dir to adjust the direction. Above, the default is used, which shows function values on both the left and right. The numbers are always shown in decreasing order with c in the middle, which is not typically shown unless there is rounding. The keyword argument dir="+" only shows values with x > c, and for dir="-" only values with x < c are shown. (For dir="-" the table is read bottom to top.)
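For instance, to see just the left limit of the last example:

lim(f, 2; dir="-")   # only values with x < 2; read the table from the bottom up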
Now consider the limit of \(x^x\) as \(x\) goes to \(0\) from the right. Though Julia – following a standard – defines this function at \(0\), it is of indeterminate form, so should be investigated.
f(x) = x^x
c = 0
f(c)
1
And we see that the output from lim agrees with an answer of \(1\) for the right limit:
lim(f, c; dir="+")
0.1      0.7943282347242815
0.01     0.954992586021436
0.001    0.9931160484209338
0.0001   0.9990793899844618
1.0e-5   0.9998848773724686
1.0e-6   0.9999861845848758
⋮        ⋮
c        L?
For our next example, we compute numerically (a tedious problem to do algebraically)
\[ \lim_{x \rightarrow 25} \frac{\sqrt{x} - 5}{\sqrt{x-16} - 3} \]
f(x) = (sqrt(x) - 5) / (sqrt(x-16) - 3)
c = 25
lim(f, c)
25.1        0.6010616008415922
25.01       0.6001066157341047
25.001      0.6000106661569936
25.0001     0.6000010666430725
25.00001    0.6000001065281493
25.000001   0.6000000122568625
⋮           ⋮
c           L?
⋮           ⋮
24.999999   0.5999999893418593
24.99999    0.5999998930988751
24.9999     0.5999989333095628
24.999      0.5999893328251538
24.99       0.5998932823395853
24.9        0.5989282061387314
A quick investigation of the table demonstrates the limit should be \(0.6\).
A very important limit in calculus is the derivative formula, written here to emphasize the secant line aspect:
\[ \lim_{x \rightarrow c} \frac{f(x) - f( c)}{x-c}. \]
Let’s take \(c = 1\) and \(f(x) = x^x\) and compute the limit above:
f(x) = x^x
c = 1
g(x) = (f(x) - f(c)) / (x - c)
lim(g, c)
1.1        1.1053424105457568
1.01       1.010050334174161
1.001      1.00100050033347
1.0001     1.0001000049998368
1.00001    1.0000100000452363
1.000001   1.0000010000889006
⋮          ⋮
c          L?
⋮          ⋮
0.999999   0.9999990000221217
0.99999    0.9999900000546837
0.9999     0.999900004999942
0.999      0.9990004996667237
0.99       0.9900496674924866
0.9        0.9046742391703779
The left limit tells a similar tale. We take this as strong evidence that the limit is \(1\).
Find the limit as \(x\) goes to \(2\) of
\[ f(x) = \frac{3x^2 - x -10}{x^2 - 4} \]
Find the limit as \(x\) goes to \(-2\) of
\[ f(x) = \frac{\frac{1}{x} + \frac{1}{2}}{x^3 + 8} \]
Find the limit as \(x\) goes to \(27\) of
\[ f(x) = \frac{x - 27}{x^{1/3} - 3} \]
Find the limit
\[ L = \lim_{x \rightarrow 0}(1+x)^{1/x}. \]
Find the limit
\[ L = \lim_{x \rightarrow \pi/2} \frac{\tan (2x)}{x - \pi/2} \]
There are several properties of limits that allow one to break down more complicated problems into smaller subproblems. For example,
\[ \lim (f(x) + g(x)) = \lim f(x) + \lim g(x) \]
is notation to indicate that one can take the limit of the sum of two functions, or take the limit of each first and then add, and the answer will be unchanged, provided all the limits in question exist.
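A quick numeric illustration of the sum rule using lim (the functions u and v are our choices; lim simply takes a function, so an anonymous one works):

u(x) = sin(x); v(x) = cos(x)
lim(x -> u(x) + v(x), 0)   # compare with lim(u, 0) and lim(v, 0): those limits are 0 and 1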
Use one or the other to find the limit of \(f(x) = \sin(x) + \tan(x) + \cos(x)\) as \(x\) goes to \(0\).
Look at the figure of a sector of a circle of radius 1 and the subtended section.
Let \(f(\theta)\) be the area of the triangle and \(g(\theta)\) the shaded region. What is the limit
\[ \lim_{\theta \rightarrow 0+} \frac{f(\theta)}{g(\theta)}? \]
Does this function have a limit as \(h\) goes to \(0\) from the right (that is, assume \(h>0\))?
\[ \frac{h^h - 1}{h} \]
Is the form \(0^0\) really indeterminate?
Consider this limit:
\[ \lim_{x \rightarrow 0+} x^{1/\log_k(x)} = L. \]
In Julia, \(\log_k(x)\) is found with log(k,x). The default, log(x), takes \(k=e\), so gives the natural log. So, we would define f, for a given k, with
k = 10   # say. Replace with actual value
f(x) = x^(1/log(k, x))
f (generic function with 1 method)
Consider different values of \(k\) to see if the limit depends on \(k\) or not. What is \(L\)?
Next, consider this limit:
\[ \lim_{x \rightarrow 0+} x^{k\cdot x} = L. \]
Consider different values of \(k\) to see if this limit depends on \(k\) or not. What is \(L\)?
The concept of a limit can be extended; for example, to the limit as \(n\) goes to infinity of a sequence of values parameterized by \(n\).
Let’s compute \(\pi\) as the circumference of a circle of radius 1 by approximating the circle by an inscribed regular polygon with \(n\) sides. The length, \(k\), of a given side is
\[ k = 2 \sin(\frac{2\pi}{2n}) \]
As can be seen by taking the isosceles triangle with central angle \(2\pi/n\) and dropping a perpendicular bisector: the resulting right triangle has angle \(2\pi/(2n)\), and its opposite side is \(1/2\) the length of the entire side.
Thus the total length is
\[ l_n = n \cdot 2 \sin(\frac{2\pi}{2n}) \]
As \(n\) goes to \(\infty\) this should go to the circumference of the circle of radius 1 or \(2\pi\). (This was used as early as the Egyptians with an octagon to approximate \(\pi\).)
Let’s see.
n_to_infinity = [10^i for i in 1:15]
l(n) = n * 2sin( (2pi)/(2n) )
[l(n) for n in n_to_infinity]
15-element Vector{Float64}:
6.180339887498948
6.282151815625658
6.283174971759127
6.28318520382533
6.283185306146043
6.283185307169251
6.283185307179482
6.2831853071795845
6.283185307179586
6.283185307179585
6.283185307179586
6.283185307179586
6.283185307179586
6.283185307179587
6.283185307179586
To compare to \(2\pi\) we can divide instead:
[l(n)/(2pi) for n in n_to_infinity]
15-element Vector{Float64}:
0.983631643083466
0.9998355147105487
0.9999983550667451
0.9999999835506593
0.9999999998355065
0.9999999999983552
0.9999999999999835
0.9999999999999997
1.0
0.9999999999999999
1.0
1.0
1.0
1.0000000000000002
1.0
As the ratio has a limit of \(1\) we conclude that \(l(n)\) goes to \(2\pi\).
There isn’t much difference between the above and what we did before, except we take increasingly larger values for \(n\), not values getting close to \(0\) for \(x\).
Use an inscribed octagon to approximate \(\pi\) (e.g., take \(n=8\) and look at \(l(n)/2\), with \(l\) defined above). What do you get?
Archimedes used interior \(96\)-gons and exterior ones to estimate \(\pi\) from above and below. The circumference of the exterior polygon is:
L(n) = n * 2 * tan((2*pi) / (2*n))
L (generic function with 1 method)
What is the difference between \(L(96)/2\) and \(l(96)/2\)?
Jacob Bernoulli looked at the limit
\[ \lim_{x \rightarrow \infty} (1 + \frac{1}{x})^x \]
What value did he find?
Euler looked at \(\sin(x)/x\) in his solution to the “Basel” problem, that is finding the sum of:
\[ 1 + \frac{1}{2^2} + \frac{1}{3^2} + \frac{1}{4^2} + \frac{1}{5^2} + \cdots = \lim_{n \rightarrow \infty} \sum_{i=1}^n \frac{1}{i^2}. \]
Euler rewrote a series expansion for \(\sin(x)/x\) to get his famous answer of \(\pi^2/6\). Using this function
basel(n) = sum( [1/i^2 for i in 1:n] )
basel (generic function with 1 method)
how big must \(n\) be so that pi^2/6 - basel(n) < 1e-3?
The sum \(1 + 1/2 + 1/3 + 1/4 + \cdots\) does not converge. In fact, the sum of the first \(n\) terms gets closer and closer to \(\log(n)\) plus a constant. That is, this function does have a limit as \(n\) goes to \(\infty\):
euler_mascheroni(n) = sum([1/i for i in 1:n]) - log(n)
euler_mascheroni (generic function with 1 method)
Use this:
[euler_mascheroni(i) for i in (10.0).^(1:7)]
to find an answer to \(6\) decimal points.
A limit related to \(\sin(x)/x \rightarrow 1\) (as \(x \rightarrow 0\)) is:
\[ \lim_{x \rightarrow 0} \frac{1-\cos(x)}{x^2} = \frac{1}{2}. \]
Related in that they are used to approximate related functions near \(0\): \(\sin(x) \approx x\) and \(1 - \cos(x) \approx (1/2) x^2\). A graphic shows the latter approximation:
plot([x -> 1 - cos(x), x -> x^2/2], -pi, pi)
Note in the figure how the parabola tracks the shape of the transformed cosine function very well near \(x=0\) but not necessarily far from \(0\).
Numerically, we have a different story. We see that there are limitations to our approach to finding limits that show up in analyzing this.
Here is a first attempt:
f(x) = (1 - cos(x))/x^2
c = 0
lim(f, c; n=10, dir="+")
0.1       0.49958347219741783
0.01      0.4999958333473664
0.001     0.49999995832550326
0.0001    0.4999999969612645
1.0e-5    0.5000000413701854
1.0e-6    0.5000444502911705
1.0e-7    0.4996003610813205
1.0e-8    0.0
1.0e-9    0.0
1.0e-10   0.0
⋮         ⋮
c         L?
We notice something odd – the values ultimately become \(0\) when we just said they should become \(1/2\). At least for most of the output things look okay, but then something goes terribly wrong.
The culprit? Floating point approximation involves round off errors.
Let’s look at the two pieces. First the denominator:
denom(x) = x^2
lim(denom, c; n = 10, dir="+")
0.1       0.010000000000000002
0.01      0.0001
0.001     1.0e-6
0.0001    1.0e-8
1.0e-5    1.0000000000000002e-10
1.0e-6    1.0e-12
1.0e-7    9.999999999999998e-15
1.0e-8    1.0000000000000001e-16
1.0e-9    1.0e-18
1.0e-10   1.0000000000000001e-20
⋮         ⋮
c         L?
There is nothing here to speak of, save for some rounding. Julia’s Float64 type follows the IEEE 754 floating point standard. Of the 64 bits, 1 is used for the sign (plus or minus) and 11 are used to store the exponent. See the informative blog post Anatomy of a floating-point number for more. As \(2^{11} = 2048\), roughly half are used for negative exponents, the other half for positive exponents; the exponent runs from \(2^{-1022}\) to \(2^{1023}\), or about 1e-308 to 1e308. We aren’t even close to the lower end of that range with 1e-20.
Now, let’s look at the numerator. The issue is the difference between \(\cos(x)\) and \(1\). Let’s look at it with the small values printed:
numer(x) = 1 - cos(x)
lim(numer, c; n = 10, dir="+")
0.1       0.0049958347219741794
0.01      4.999958333473664e-5
0.001     4.999999583255033e-7
0.0001    4.999999969612645e-9
1.0e-5    5.000000413701855e-11
1.0e-6    5.000444502911705e-13
1.0e-7    4.9960036108132044e-15
1.0e-8    0.0
1.0e-9    0.0
1.0e-10   0.0
⋮         ⋮
c         L?
Instead of giving a value that is roughly \(5 \cdot 10^{-(2n+1)}\), the value becomes \(0\) – not just numbers close to \(0\). Hence, when the numerator is divided by even the smallest of numbers, the answer is simply \(0\).
In general, we add to our few rules of thumb for computation with floating-point numbers:
If we subtract two like-sized quantities our answer may have dramatically reduced precision.
In this specific case, by the time we get to \(10^{-8}\), the difference between \(\cos(x)\) and \(1\) is looking to be around 5e-17. However, in floating point representation there are fundamental limits to how close different things can be. Of the 64 bits representing a number, 52 are used for the precision. (A number is represented as \(s \cdot p \cdot 2^e\): a sign, the precision, and an exponent.) This puts a restriction on what can be represented and ultimately gives a granularity if one looks too closely – without working harder. In this particular case, the floating point approximation for \(1\) and that for \(\cos(x)\) are eventually the same value – even if they are different mathematically.
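The granularity is easy to exhibit (a small demonstration of the rule of thumb):

(1.0 + 1e-16) - 1.0, 1.0 - cos(1e-8)   # both are 0.0, though neither difference is 0 mathematically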
The value returned by eps() measures how much larger the next representable number after \(1.0\) is from \(1.0\). (Of course, this has no answer in the real numbers, but floating point is a discrete approximation.)

eps()

2.220446049250313e-16
What has happened with \(1-\cos(x)\) is that the mathematical value of \(\cos(x)\) gets too close to \(1\) when \(x = 10^{-8}\), and so the difference is treated as \(0\), as the two numbers have the same representation. Since \(0\) divided by any non-zero number is zero, the at-first unexpected behavior has a reasonable explanation.
So be careful, we can get too close when looking “close.”
Julia has some functions for working with floating point numbers. Some of you might be thinking: since eps is the difference to the next representable number larger than \(1.0\), what is the difference to the next representable number less than one? The prevfloat function gives this. Here we see the issue between \(10^{-7}\) and \(10^{-8}\):
prevfloat(1.0) < cos(1e-7), prevfloat(1.0) < cos(1e-8)
(false, true)
Floating point approximations differ depending on the location. Look at the difference between 1.0 - prevfloat(1.0) and nextfloat(1.0) - 1. Then look at how small nextfloat(0.0) is. Is eps() == nextfloat(1.0) - 1?
The bitstring function prints the bit representation of a number. For real numbers, the first bit is the sign, the next 11 the exponent, and the last 52 are for the precision. Let’s look at the values for a few:
bitstring(cos(1e-7))
bitstring(cos(1e-8))
bitstring(1.0)
"0011111111110000000000000000000000000000000000000000000000000000"
We see here how two different real numbers have the same floating point representation.
For fun, what is the difference between bitstring(-1.0) and bitstring(1.0)?
What is the difference between bitstring(NaN) and bitstring(Inf)? (These two are coded specially in floating point.)
The SymPy package for Julia provides a means for Julia users to interact with the SymPy add-on for the Python programming language. The SymPy package provides symbolic math features. One such feature is the ability to symbolically compute the limit of \(f(x)\) as \(x\) approaches \(c\).
This package is loaded by the MTH229 package.
The limit function accesses these features. Its basic use is straightforward: just pass a symbolic expression and indicate the variable going to c. Symbolic expressions can be created by evaluating a function on a symbolic variable; the latter are created with the @syms macro:
@syms x
f(x) = sin(x)/x
c = 0
limit(f(x), x => c)
A limit at infinity can be computed using oo for \(\infty\):
f(x) = (1 + 1/x)^x
c = oo   # oo is symbolic infinity. Can also use Inf.
limit(f(x), x => c)
The latter shows the results are not quite machine numbers. Rather, they are symbolic values. We aren’t discussing these here, but the values they represent are readily apparent.
The command @syms x creates x as a symbolic variable. The call f(x) returns a symbolic expression. Symbolic expressions can also be created directly, as with sin(x)/x.
The limit function has one named argument, dir, used to adjust whether a left or right (the default) limit is sought. For example, this function has different left and right limits at \(0\):
f(x) = sign(x)
@syms x
limit(f(x), x=>0, dir="-"), limit(f(x), x=>0, dir="+")
(-1, 1)
The algorithm implemented in SymPy for symbolic limits is quite powerful. It does not suffer from the floating point issues described before and gives exact values (though some coercion to floating point is possible). The following example shows this:
This function is pretty devious:
f(x) = 1/ x^(log(log(log(log(1/x)))) - 1)
f (generic function with 1 method)
It has a right limit at \(c=0\), but not the one a table would suggest, which appears to be \(0\):
lim(f, 0; m=2, n=8, dir="+") # need to get close to 0 to start
0.01 0.0001910870074860089
0.001 5.6027494777652786e-5
0.0001 1.2464663061530677e-5
1.0e-5 2.732144717815532e-6
1.0e-6 6.146316238971239e-7
1.0e-7 1.4298053954169988e-7
1.0e-8 3.4385814272678773e-8
⋮ ⋮
c L?
But in fact the limit is quite different from \(0\):
limit(f(x), x => 0, dir="+")
Symbolic math can still be thwarted by inadvertent conversion to floating point values. Consider this limit:
@syms x
limit(cos(x) / (pi/2 - x), x => pi/2)
The astute student will see this as related to the limit of \(\sin(x)/x\) at \(0\), which is of course \(1\). This is due to pi/2 being converted to floating point. This can be avoided by using PI, from SymPy:
@syms x
limit(cos(x) / (PI/2 - x), x => PI/2)
PI is a symbolic variable to represent \(\pi\); E is a symbolic variable to represent \(e\); and oo is a symbolic variable to represent \(\infty\).
Another example of inadvertent conversion might be when an exact rational number is needed but a floating point value is used (e.g., 1/3 instead of 1//3, which promotes to an exact value).
Consider the limit
\[ L = \lim_{x \rightarrow 0} \frac{b^x - 1}{x}. \]
Its answer depends on the value of \(b\). How would we approach this with SymPy? The interface described above, where functions are used, is not the only one SymPy knows of, and indeed is not typical of how one works with SymPy. Typically, symbolic values are defined and then symbolic expressions are used.
Here is how we define two symbolic values:
@syms x b
(x, b)
We can take a limit as before, but here we see why it is important to indicate the variable in the \(x \rightarrow c\) part of the limit (x => 0).
limit((b^x - 1) / x, x=>0)
For this limit a \(\log(b)\) value “magically” appears.
What value is symbolically computed for
\[ \lim_{x \rightarrow 0} \frac{1 - \cos(x)}{x^2}? \]
What value is symbolically computed for
\[ \lim_{x \rightarrow 0+} \frac{x^x - 1}{x}? \]
What value is symbolically computed for
\[ \lim_{h \rightarrow 0} \frac{\ln(1 + h)}{h}? \]
What value is symbolically computed for
\[ \lim_{x \rightarrow 0+} \sqrt{\sqrt{x} + x^3} \cdot \cos(\pi/x)? \]
Julia version when generated:
VERSION
v"1.10.4"
Packages and versions:
using Pkg
Pkg.status()
Status `~/work/mth229.github.io/mth229.github.io/Project.toml`
[a2e0e22d] CalculusWithJulia v0.2.6
[7073ff75] IJulia v1.25.0
[b964fa9f] LaTeXStrings v1.3.1
[ebaf19f5] MTH229 v0.3.2
[91a5bcdd] Plots v1.40.4
[f27b6e38] Polynomials v4.0.11
[438e738f] PyCall v1.96.4
[612c44de] QuizQuestions v0.3.23
[295af30f] Revise v3.5.14
[f2b01f46] Roots v2.1.5
[24249f21] SymPy v2.1.1
[56ddb016] Logging