A notebook for this material: ipynb
The concept of a limit is what makes calculus possible. Limits are used to define the two main concepts of calculus: derivatives and integrals. The formal definition of a limit is a bit difficult to appreciate and grasp. It involves an "epsilon-delta" formulation:

The limit of $f(x)$ as $x$ approaches $c$ is $L$ if for every $\epsilon > 0$ there exists a $\delta > 0$ such that whenever $0 < |x - c| < \delta$ then $|f(x) - L| < \epsilon$.

When a limit exists, we write:

$\lim_{x \to c} f(x) = L.$
However, the intuition of limits is more accessible. This intuition was known as early as the Greeks: Archimedes figured out the area under a parabola over 2000 years ago by the method of exhaustion, a limiting process. Fermat in 1629 essentially took a limit to find the slope of a tangent line to a polynomial curve. Newton in the late 1600s exploited the idea in his development of calculus (as did Leibniz). Yet it wasn't until the 1800s that Bolzano, Cauchy, and Weierstrass put the idea on a firm footing, as above.
To get the intuition behind a limit we replace the absolute value inequalities in the definition with "close" and read as follows: as $x$ gets close to $c$ (but is not equal to $c$), $f(x)$ gets close to $L$.
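The definition can also be probed numerically. Here is a small sketch (the names eps0 and del0 and the particular values are our own choices for illustration): for $f(x) = \sin(x)/x$, $c = 0$, and $L = 1$, the choice $\delta = 0.01$ works for $\epsilon = 10^{-4}$.

```julia
# Numerically probing the epsilon-delta definition for f(x) = sin(x)/x,
# c = 0, L = 1.  The choices eps0 = 1e-4 and del0 = 1e-2 are illustrative.
f(x) = sin(x)/x
eps0, del0 = 1e-4, 1e-2
xs = filter(!iszero, range(-del0, del0, length=1001))  # x values with 0 < |x - c| <= delta
all(x -> abs(f(x) - 1) < eps0, xs)   # true: each such x satisfies |f(x) - L| < epsilon
```

Since $|\sin(x)/x - 1| \approx x^2/6$ for small $x$, any $\delta$ with $\delta^2/6 < \epsilon$ should work; here $\delta^2/6 \approx 1.7 \cdot 10^{-5} < 10^{-4}$.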
Before beginning, it should be noted that for most values of $x$ near $c$ there is no problem evaluating $f(x)$; the only possible issue is at $c$ itself.

For example, let $f(x) = \sin(x)/x$. This is easily evaluated for any $x \neq 0$. However, at $x = 0$ the expression takes the indeterminate form $0/0$.

When discussing limits of combinations of continuous functions it is generally true that the limit is found by evaluating the function at $c$, when possible.

For this particular problem, what does Julia return when we try to evaluate $f$ at $0$?
f(x) = sin(x)/x
c = 0
f(c)
NaN
The value NaN arises when a floating-point computation is indeterminate. So, does $f(0)$ make sense? No: the ratio $0/0$ cannot be assigned a value, and the computation signals this with NaN values. Operations involving NaN will also return NaN. In this example, the ratio defining $f(0)$ is $0/0$; NaN is returned by most of the indeterminate forms, but not all. For example, 0^0 is 1 to Julia:
“It proved more useful to adhere to the simply stated rule *anything raised to the 0 power is 1*.”
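We can verify these claims about NaN and 0^0 directly in base Julia:

```julia
# Most indeterminate forms produce NaN in floating point...
0/0          # NaN
Inf - Inf    # NaN
0 * Inf      # NaN
NaN + 1      # NaN -- operations involving NaN also return NaN
# ...but 0^0 follows the "anything raised to the 0 power is 1" rule:
0^0          # 1
```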
A graphical approach to this problem can show if a function is likely to have a limit. The basic idea is simply to make a graph around $c$ and look to see if the function values appear to approach some fixed value as $x$ gets close to $c$.
using MTH229
using Plots
plotly()
Plots.PlotlyBackend()
f(x) = sin(x)/x
c, delta = 0, pi/2
plot(f, c - delta, c + delta)
From the graph it looks like the function value should be $1$ at $x = 0$. Recall that the graph is made by choosing a large number of points between the limits and then evaluating the function at each of these points. Values that are NaN are not connected; the line will break to indicate this. In the above graph, it appears that there is no such break, so $x = 0$ was likely not among the sampled points.
Consider now the following limit: $\lim_{x \to 2} (x^2 - 5x + 6)/(x^2 + x - 6)$.

Noting that this is a ratio of continuous functions, we first check whether there is anything to do:
f(x) = (x^2 - 5x + 6) / (x^2 + x - 6)
c = 2
f(c)
NaN
The NaN indicates that this function is indeterminate at $c = 2$. A graph gives a clue as to the limit:
c, delta = 2, 1
plot(f, c - delta, c + delta)
The graph looks continuous. In fact, the value $-1/5 = -0.2$ would fill in the hole at $x = 2$, suggesting this is the limit.
As an aside, you can redefine f to have this value at $2$ using the "ternary operator":
f(x) = x == 2.0 ? -0.2 : (x^2 - 5x + 6) / (x^2 + x - 6)
f (generic function with 1 method)
This particular case is a textbook example: one can easily factor both numerator and denominator, as $(x^2 - 5x + 6)/(x^2 + x - 6) = ((x-2)(x-3))/((x-2)(x+3))$. Written in this form, we clearly see that away from $x = 2$ this is the same function as $g(x) = (x-3)/(x+3)$, which is continuous at $2$ with value $-1/5$. By graphing near $2$, we are essentially graphing this simplified version.
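We can check the factored form in code (the name g for the simplified version is ours):

```julia
# f is the original ratio; g is the factored/cancelled version.
f(x) = (x^2 - 5x + 6) / (x^2 + x - 6)
g(x) = (x - 3) / (x + 3)
g(2)                 # -0.2, the limiting value
f(1.9) ≈ g(1.9)      # true: the two agree away from x = 2
```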
Graphically look at the following limit
What is the value?
Graphically investigate the limit
What is the value of
Graphically investigate the limit
The limit exists, what is the value?
The following limit is commonly used:
Factoring out
What is
The following limit is commonly used:
The answer should depend on
Using the last result, what is the value of
The function $f(x) = x \sin(1/x)$ is not defined at $0$. Does it have a limit at $0$?

Let's look at the function near $0$:
f(x) = x == 0 ? NaN : x * sin(1/x)
c, delta = 0, 1/4
plot(f, c - delta, c + delta)
g(x) = abs(x); h(x) = - abs(x)
plot!(g)
plot!(h)
This graph clearly oscillates near $0$; however, the oscillations are squeezed between $|x|$ and $-|x|$, so the function values get close to $0$: the limit exists and is $0$.
The highly oscillatory function $x \sin(1/x)$ thus has a removable singularity at $0$.
Some functions do not have a limit. Make a graph of $\sin(1/x)$ near $0$ to investigate one such function.
Consider again the limit of $f(x) = \sin(x)/x$ at $0$. We can further illustrate how the function gets close to the limit of $1$ by zooming in on the graph near $0$:
f(x) = sin(x) / x
c, delta = 0, 1e-1
plot(f, c-delta, c+delta)
We see a similar picture, with an answer of $1$ suggested at $x = 0$. Zooming in even further:
c, delta = 0, 1e-3
plot(f, c-delta, c+delta)
The graphs have a similar shape – but different scales. A closer look at the $y$ axis shows that for delta = 1e-1 the spread of plotted values is about $1.7 \cdot 10^{-3}$, while for delta = 1e-3 it is about $1.7 \cdot 10^{-7}$.
We can be more precise. The following function estimates the length of the range of the plotted values:
function epsilon(f, c, delta)
    xs = range(c - delta, stop=c + delta, length=100)
    ys = f.(xs)          # like drawing a plot
    m, M = extrema(ys)   # minimum and maximum
    M - m
end
(epsilon(f, 0, 1e-1), epsilon(f, 0, 1e-3))
(0.0016656634810520154, 1.6664965329926673e-7)
Numerically we see that as delta gets small, epsilon also gets small.
In fact for this problem we can be a bit more precise: as $\sin(x)/x \approx 1 - x^2/6$ near $0$, one can infer that $\epsilon \approx (1/6)\delta^2$. We can empirically verify this for one value of delta:
delta = 1e-2
epsilon(f, 0, delta) / (1/6*delta^2)
0.9998929696086734
The ratio is basically $1$, confirming the relationship.
Of course, using a comprehension we can do this comparison for different sized values of delta:
deltas = [1/10, 1/10^2, 1/10^3, 1/10^4, 1/10^5]
[epsilon(f, 0, delta)/(1/6*delta^2) for delta in deltas]
5-element Vector{Float64}:
0.999398088631209
0.9998929696086734
0.9998979197956005
0.999897964426566
0.9999068240063025
This isn't quite what needs to be shown in the proof of a limit: we are essentially finding an epsilon for a given delta rather than a delta for a given epsilon. It does suggest that were we to attempt a formal proof, we should look at $\delta \approx \sqrt{6\epsilon}$.
Consider the limit
Guess the relationship between epsilon
and delta
A table can be used to investigate limits of functions. The basic idea is that if $\lim_{x \to c} f(x) = L$, then for values of $x$ close to $c$ the values $f(x)$ should be close to $L$. We illustrate with a limit at $2$ of a ratio, which we know is simply the value of $(x-3)/(x+3)$ at $2$, or $-1/5$.
hs = [1/10, 1/100, 1/1000, 1/10000, 1/100000] # or [1/10^i for i in 1:5]
xs = 2 .+ hs
5-element Vector{Float64}:
2.1
2.01
2.001
2.0001
2.00001
The corresponding function values are found by broadcasting:
f(x) = ((x+2)*(x-3)) / ((x+2)*(x+3))
ys = f.(xs)
5-element Vector{Float64}:
-0.17647058823529413
-0.1976047904191617
-0.19976004799040195
-0.19997600047999037
-0.19999760000480002
The ys are clearly getting closer to $-0.2$.
The pairs of values xs and ys can be more naturally displayed with a table; the square-bracket notation is useful here to put the values into two columns:
[xs ys]
5×2 Matrix{Float64}:
2.1 -0.176471
2.01 -0.197605
2.001 -0.19976
2.0001 -0.199976
2.00001 -0.199998
The above investigates the right limit, as the values chosen for xs
are always more than 2 but getting closer. The left limit might have used xs
defined with:
xs = [2 - 1/10, 2 - 1/100, 2 - 1/1000, 2 - 1/10000, 2 - 1/100000]
ys = f.(xs)
[xs ys]
5×2 Matrix{Float64}:
1.9 -0.22449
1.99 -0.202405
1.999 -0.20024
1.9999 -0.200024
1.99999 -0.200002
We see the same phenomenon – the values approach $-0.2$ from the left as well.
The three steps above are a bit tedious to type for each problem, so for convenience we encapsulate them into a function (available in the MTH229 package) called lim:
lim(f, 2)
2.100000   -0.17647058823529413
2.010000   -0.1976047904191617
2.001000   -0.19976004799040195
2.000100   -0.19997600047999037
2.000010   -0.19999760000480002
2.000001   -0.19999976000004796
    ⋮            ⋮
    c            L?
    ⋮            ⋮
1.999999   -0.20000024000004799
1.999990   -0.20000240000480002
1.999900   -0.2000240004800096
1.999000   -0.20024004800960188
1.990000   -0.20240480961923846
1.900000   -0.22448979591836735
The lim
function has keywords n
and m
to adjust how close x
gets to c
. More importantly it has the keyword dir
to adjust the direction. The default above is used, which shows function values on both the left and the right. The numbers are always shown in decreasing order with c in the middle, not typically shown unless there is rounding. The keyword argument dir="+" only shows values with x > c, and for dir="-" only values with x < c are shown. (For dir="-" the table is read bottom to top.)
Now consider the limit of $x^x$ at $0$. Julia – following a standard – defines this function at $0$:
f(x) = x^x
c = 0
f(c)
1
And we see that the output from lim agrees with an answer of $1$ for the right limit:
lim(f, c; dir="+")
0.100000   0.7943282347242815
0.010000   0.954992586021436
0.001000   0.9931160484209338
0.000100   0.9990793899844618
0.000010   0.9998848773724686
0.000001   0.9999861845848758
    ⋮            ⋮
    c            L?
For our next example, we compute numerically (a tedious problem to do algebraically) the limit at $25$ of $(\sqrt{x} - 5)/(\sqrt{x - 16} - 3)$:
f(x) = (sqrt(x) - 5) / (sqrt(x-16) - 3)
c = 25
lim(f, c)
25.100000   0.6010616008415922
25.010000   0.6001066157341047
25.001000   0.6000106661569936
25.000100   0.6000010666430725
25.000010   0.6000001065281493
25.000001   0.6000000122568625
     ⋮            ⋮
     c            L?
     ⋮            ⋮
24.999999   0.5999999893418593
24.999990   0.5999998930988751
24.999900   0.5999989333095628
24.999000   0.5999893328251538
24.990000   0.5998932823395853
24.900000   0.5989282061387314
A quick investigation of the table demonstrates the limit should be $0.6$.
A very important limit in calculus is the derivative formula, written here to emphasize the secant line aspect: $\lim_{x \to c} (f(x) - f(c))/(x - c)$.

Let's take $f(x) = x^x$ at $c = 1$:
f(x) = x^x
c = 1
g(x) = (f(x) - f(c)) / (x - c)
lim(g, c)
1.100000   1.1053424105457568
1.010000   1.010050334174161
1.001000   1.00100050033347
1.000100   1.0001000049998368
1.000010   1.0000100000452363
1.000001   1.0000010000889006
    ⋮            ⋮
    c            L?
    ⋮            ⋮
0.999999   0.9999990000221217
0.999990   0.9999900000546837
0.999900   0.999900004999942
0.999000   0.9990004996667237
0.990000   0.9900496674924866
0.900000   0.9046742391703779
The left limit tells a similar tale. We take this as strong evidence that the limit is $1$.
Find the limit as
Find the limit as
Find the limit as
Find the limit
Find the limit
There are several properties of limits that allow one to break down more complicated problems into smaller subproblems. For example, $\lim_{x \to c}(f(x) + g(x)) = \lim_{x \to c} f(x) + \lim_{x \to c} g(x)$ is notation to indicate that one can take the limit of the sum of two functions, or take the limit of each first and then add, and the answer will be unchanged, provided all the limits in question exist.
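As a quick numeric sanity check of the sum rule (a sketch, not a proof), take $f(x) = \sin(x)/x$ and $g(x) = \cos(x)$; each has limit $1$ at $0$, so the sum should be near $2$ for small $x$:

```julia
# Both limits at 0 are 1, so the limit of the sum should be 1 + 1 = 2.
f(x) = sin(x)/x
g(x) = cos(x)
x = 1e-6
f(x) + g(x)    # very close to 2
```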
Use one or the other to find the limit of
Look at the figure of a sector of a circle of radius 1 and the subtended section.
Let
Does this function have a limit as
Is the form
Consider this limit:
In Julia, log(k, x) computes the logarithm of x base k. The default, log(x), uses base $e$. We can define f, for a given k, with:
k = 10 # say. Replace with actual value
f(x) = x^(1/log(k, x))
f (generic function with 1 method)
Consider different values of
Next, consider this limit:
Consider different values of
The concept of a limit can be extended, for example, to a limit as $n$ goes to infinity through integer values.

Let's compute the circumference of a circle of radius $1$ as the limit of the perimeters of inscribed regular $n$-gons. As can be seen by taking the isosceles triangle with angle $2\pi/n$ and two sides of length $1$, each of the $n$ sides has length $2\sin(2\pi/(2n))$. Thus the total length is $n \cdot 2\sin(2\pi/(2n))$. As $n$ goes to infinity, this should converge to the circumference $2\pi$.
Let’s see.
n_to_infinity = [10^i for i in 1:15]
l(n) = n * 2sin( (2pi)/(2n) )
[l(n) for n in n_to_infinity]
15-element Vector{Float64}:
6.180339887498948
6.282151815625658
6.283174971759127
6.28318520382533
6.283185306146043
6.283185307169251
6.283185307179482
6.2831853071795845
6.283185307179586
6.283185307179585
6.283185307179586
6.283185307179586
6.283185307179586
6.283185307179587
6.283185307179586
To compare to $2\pi$, we look at the ratio:
[l(n)/(2pi) for n in n_to_infinity]
15-element Vector{Float64}:
0.983631643083466
0.9998355147105487
0.9999983550667451
0.9999999835506593
0.9999999998355065
0.9999999999983552
0.9999999999999835
0.9999999999999997
1.0
0.9999999999999999
1.0
1.0
1.0
1.0000000000000002
1.0
As expected, the ratio has a limit of $1$. There isn't much difference in the above from what we did before, except we take increasingly larger values for $n$ rather than values closer and closer to some finite $c$.
Use an inscribed octagon to approximate
Archimedes used interior and exterior polygons to estimate the circumference. The perimeter of the exterior (circumscribed) $n$-gon is computed by:
L(n) = n * 2 * tan((2*pi) / (2*n))
L (generic function with 1 method)
What is the difference between L(n) and l(n) for large $n$?
Jacob Bernoulli looked at the limit
What value did he find?
Euler looked at
Euler rewrote a series expansion for
basel(n) = sum( [1/i^2 for i in 1:n] )
basel (generic function with 1 method)
How big must n be so that pi^2/6 - basel(n) < 1e-3?
The sum $1 + 1/2 + 1/3 + \cdots + 1/n - \log(n)$ has a limit, the Euler–Mascheroni constant:
euler_mascheroni(n) = sum([1/i for i in 1:n]) - log(n)
euler_mascheroni (generic function with 1 method)
Use this:
[euler_mascheroni(i) for i in (10.0).^(1:7)]
to find an answer to
A related limit to $\lim_{x\to 0} \sin(x)/x = 1$ is $\lim_{x\to 0} (1 - \cos(x))/x^2 = 1/2$. Related in that they are used to approximate related functions near $0$: $\sin(x) \approx x$ and $1 - \cos(x) \approx x^2/2$.
plot([x -> 1 - cos(x), x -> x^2/2], -pi, pi)
Note in the figure how the parabola tracks the shape of the transformed cosine function very well near
Numerically, we have a different story. We see that there are limitations to our approach to finding limits that show up in analyzing this.
Here is a first attempt
f(x) = (1 - cos(x))/x^2
c = 0
lim(f, c; n=10, dir="+")
0.100000   0.49958347219741783
0.010000   0.4999958333473664
0.001000   0.49999995832550326
0.000100   0.4999999969612645
0.000010   0.5000000413701854
0.000001   0.5000444502911705
0.000000   0.4996003610813205
0.000000   0.0
0.000000   0.0
0.000000   0.0
    ⋮            ⋮
    c            L?
We notice something odd – the values ultimately become $0$, not $1/2$.
The culprit? Floating point approximation involves round off errors.
Let’s look at the two pieces. First the denominator:
bottom(x) = x^2
lim(bottom, c; n = 10, dir="+")
0.100000 0.010000000000000002 0.010000 0.0001 0.001000 1.0e-6 0.000100 1.0e-8 0.000010 1.0000000000000002e-10 0.000001 1.0e-12 0.000000 9.999999999999998e-15 0.000000 1.0000000000000001e-16 0.000000 1.0e-18 0.000000 1.0000000000000001e-20 ⋮ ⋮ c L?
There is nothing here to speak of, save for some rounding. Julia's Float64 type follows the IEEE 754 floating point standard. Of the 64 bits, 1 is used for the sign (plus or minus) and 11 are used to store the exponent. See this informative blog post for more: Anatomy of a floating-point number. As the exponents (powers of $2$) range from $-1022$ to $1023$, representable magnitudes run from roughly 1e-308 to 1e308. We aren't even close to the lower range with 1e-20.
Now, let's look at the numerator. The issue is the difference between $1$ and $\cos(x)$ for $x$ close to $0$:
top(x) = 1 - cos(x)
lim(top, c; n = 10, dir="+")
0.100000   0.0049958347219741794
0.010000   4.999958333473664e-5
0.001000   4.999999583255033e-7
0.000100   4.999999969612645e-9
0.000010   5.000000413701855e-11
0.000001   5.000444502911705e-13
0.000000   4.9960036108132044e-15
0.000000   0.0
0.000000   0.0
0.000000   0.0
    ⋮            ⋮
    c            L?
Instead of giving a value that is roughly $x^2/2$, for small enough $x$ the difference is computed as exactly $0$.
In general, we add to our few rules of thumb for computation with floating-point numbers:
If we subtract two like-sized quantities our answer may have dramatically reduced precision.
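For example, the identity $1 - \cos(x) = 2\sin^2(x/2)$ rewrites the subtraction so that no cancellation occurs:

```julia
# For tiny x the subtraction loses everything...
x = 1e-8
1 - cos(x)        # 0.0: cos(1e-8) rounds to exactly 1.0
# ...while the algebraically identical form keeps full precision:
2 * sin(x/2)^2    # about 5.0e-17, the true value of 1 - cos(x)
```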
In this specific case, by the time we get to $x = 10^{-8}$ the true value of $1 - \cos(x)$ is about 5e-17. However, in floating point representation there are fundamental limits to how close different numbers can be. Of the 64 bits representing a number, 52 are used for the precision.
The value
eps()
2.220446049250313e-16
measures how much larger the next representable floating point number after $1.0$ is. What has happened with our subtraction is that $\cos(10^{-8})$ is closer to $1$ than this gap, so it is rounded to exactly $1.0$, and the difference is $0$. So be careful – we can get too close when looking "close."
Julia has some functions for working with floating point numbers. Some of you might be thinking: since eps gives the gap to the next representable number larger than $1$, what is the gap to the next representable number less than $1$? The prevfloat function gives this value. Here we see the issue between cos(1e-7) and cos(1e-8):
prevfloat(1.0) < cos(1e-7), prevfloat(1.0) < cos(1e-8)
(false, true)
Floating point approximations differ depending on the location. Look at the difference between 1.0- prevfloat(1.0)
and nextfloat(1.0) - 1
. Then look at how small nextfloat(0.0)
is.
Is eps() == nextfloat(1.0) - 1?
The bitstring
function prints the bit representation of a number. For real numbers, the first bit is the sign, the next 11 the exponent and the last 52 are for the precision. Let’s look at the values for a few:
bitstring(cos(1e-7))
bitstring(cos(1e-8))
bitstring(1.0)
"0011111111110000000000000000000000000000000000000000000000000000"
We see here how two different real numbers have the same floating point representation.
For fun, what is the difference between bitstring(-1.0)
and bitstring(1.0)
?
What is the difference between bitstring(NaN)
and bitstring(Inf)
? (These two are coded specially in floating point.)
The SymPy package for Julia provides a means for Julia users to interact with the SymPy add-on for the Python programming language. The SymPy package provides symbolic math features. One such feature is the ability to compute the limit of a function symbolically.
This package is loaded by the MTH229
package.
The limit function accesses these features. Its basic use is straightforward: just pass a symbolic expression and indicate the variable going to c. Symbolic expressions can be created by evaluating a function on a symbolic variable; the latter are created with the @syms macro:
@syms x
f(x) = sin(x)/x
c = 0
limit(f(x), x=>c)
A limit at infinity can be computed using oo for $\infty$:
f(x) = (1 + 1/x)^x
c = oo # oo is symbolic infinity. Can also use Inf.
limit(f(x), x=>c)
The latter shows the results are not quite machine numbers. Rather, they are symbolic values. We aren’t discussing these here, but the values they represent are readily apparent.
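If a machine number is wanted, SymPy's N function converts a symbolic result to a numeric one. A sketch (this assumes the SymPy setup above; the MTH229 package loads SymPy):

```julia
using SymPy   # loaded automatically by the MTH229 package

@syms x
val = limit((1 + 1/x)^x, x => oo)   # the symbolic value E
N(val)                               # a floating point approximation of e
```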
The command @syms x
creates x
as a symbolic variable. The call f(x)
returns a symbolic expression. Symbolic expressions can also be created directly, as with sin(x)/x
.
The limit function has one named argument, dir, used to adjust whether a left or right (the default) limit is sought. For example, this function has different left and right limits at 0:
f(x) = sign(x)
@syms x
limit(f(x), x=>0, dir="-"), limit(f(x), x=>0, dir="+")
(-1, 1)
The algorithm implemented in SymPy
for symbolic limits is quite powerful. It does not suffer from the floating point issues described before and gives exact values (though some coercion to floating point is possible). The following example shows this:
This function is pretty devious:
f(x) = 1/ x^(log(log(log(log(1/x)))) - 1)
f (generic function with 1 method)
It has a right limit at $0$, which we first try to find numerically:
lim(f, 0; m=2, n=8, dir="+") # need to get close to 0 to start
0.010000 0.0001910870074860089
0.001000 5.6027494777652786e-5
0.000100 1.2464663061530677e-5
0.000010 2.732144717815532e-6
0.000001 6.146316238971239e-7
0.000000 1.4298053954169988e-7
0.000000 3.4385814272678773e-8
⋮ ⋮
c L?
But in fact the limit is quite different from what the table suggests:
limit(f(x), x => 0, dir="+")
Symbolic math can still be thwarted by inadvertent conversion to floating point values. Consider this limit:
@syms x
limit(cos(x) / (pi/2 - x), x => pi/2)
The astute student will see this as related to the limit of $\sin(x)/x$, so the answer should be $1$; the computation above is spoiled by pi/2 being converted to floating point. This can be avoided by using PI, from SymPy:
@syms x
limit(cos(x) / (PI/2 - x), x => PI/2)
PI is a symbolic value representing $\pi$; similarly, E is a symbolic value representing $e$, and oo is a symbolic value representing $\infty$.
Another example of inadvertent conversion might be when an exact rational number is needed but a floating point value is used (e.g., 1/3
instead of 1//3
, which promotes to an exact value.)
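The distinction already shows up in base Julia arithmetic:

```julia
# 1//3 is an exact rational; 1/3 is a rounded Float64.
1//3 + 1//3 + 1//3 == 1      # true: exact arithmetic
1//3 == 1/3                  # false: the float is not exactly one third
```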
Consider the limit of $(b^x - 1)/x$ as $x \to 0$. Its answer depends on the value of $b$. How can this be computed with SymPy? The interface described above, where functions are used, is not the only one SymPy knows of, and indeed is not typical of how one works with SymPy. Typically, symbolic values are defined and then symbolic expressions are manipulated.
Here is how we define two symbolic values:
@syms x b
(x, b)
We can take a limit as before, but here we see why it is important to indicate the variable the limit is taken over (the x => 0):
limit((b^x - 1) / x, x=>0)
For this limit the answer, log(b), depends on $b$, as expected.
What value is symbolically computed for
What value is symbolically computed for
What value is symbolically computed for
What value is symbolically computed for
Details on the Julia version and packages used when this page was generated.

Julia version:
VERSION
v"1.11.4"
Packages and versions:
using Pkg
Pkg.status()
Status `~/work/mth229.github.io/mth229.github.io/Project.toml`
[a2e0e22d] CalculusWithJulia v0.2.8
[7073ff75] IJulia v1.26.0
[b964fa9f] LaTeXStrings v1.4.0
[ebaf19f5] MTH229 v0.3.2
[91a5bcdd] Plots v1.40.11
[f27b6e38] Polynomials v4.0.19
[438e738f] PyCall v1.96.4
[612c44de] QuizQuestions v0.3.26
[295af30f] Revise v3.7.3
[f2b01f46] Roots v2.2.6
[24249f21] SymPy v2.3.3
[56ddb016] Logging v1.11.0