```
using MTH229
using Plots
plotly()
```

`Plots.PlotlyBackend()`

A notebook for this material: ipynb

A *zero* of the function \(f\) is a value \(x\) with \(f(x) = 0\).

Solving for a zero of a function is a mathematical skill taught early on. In some cases, such as with linear equations, solving for zeros can be done directly using algebra. Similarly, in the case of factorable polynomials, we are taught to factor and then set each term to 0 to find the possible solutions, utilizing the fact that for the real numbers the product of two numbers is \(0\) only if one or both of the numbers is as well.

However, in general, the problem of finding one (or all) solutions to the equation

\[ f(x) = 0 \]

for an arbitrary \(f\) has no well-defined process.

A related problem is to find one (or all) solutions to an equation of this type:

\[ f(x) = g(x) \]

Conceptually this is identical to the above, as we just set \(eqn(x) = f(x) - g(x)\) and solve for when \(eqn(x)\) is \(0\).

Here we discuss a few different elementary means to find zeros with `Julia`, leaving some others for a later time.

We will use the add-on package `Roots`, which provides implementations of a few zero- and root-finding algorithms. This package is loaded when the `MTH229` package is loaded.


Univariate polynomials are algebraic expressions involving an indeterminate. Polynomial functions are functions whose body evaluates a polynomial expression. These are special functions, in that their relatively simple form allows for many explicit things to be known. A famous example is the **quadratic formula** which for polynomials of degree 2 gives an explicit formula for the *roots*:

\[ \frac{-b \pm \sqrt{b^2 - 4ac}}{2a}. \]

Alert

A “root” of a polynomial is just a polynomial-specific name for a zero of the polynomial viewed as a function.

For example, if we have the quadratic polynomial \(2x^2 + 3x - 2\) we can solve for the roots with:

```
a = 2; b = 3; c = -2
discr = b^2 - 4*a*c
(-b + sqrt(discr))/(2a), (-b - sqrt(discr))/(2a)
```

`(0.5, -2.0)`

If you wanted to write a function to do this, it would be straightforward, save the detail of needing to make a negative number complex in order to take its square root:

```
## find roots of ax^2 + bx + c
function quadratic(a, b, c)
    discr = b^2 - 4*a*c
    sq = (discr > 0) ? sqrt(discr) : sqrt(discr + 0im)
    [(-b - sq)/(2a), (-b + sq)/(2a)]
end
```

`quadratic (generic function with 1 method)`
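To see the complex branch in action, we can try a quadratic with a negative discriminant (the definition is repeated so this snippet stands alone):

```julia
## find roots of ax^2 + bx + c (as above)
function quadratic(a, b, c)
    discr = b^2 - 4*a*c
    sq = (discr > 0) ? sqrt(discr) : sqrt(discr + 0im)
    [(-b - sq)/(2a), (-b + sq)/(2a)]
end

quadratic(1, 0, 1)  # roots of x^2 + 1 are ±im
```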

Alert

This is an example where the function is not *type-stable*, as it returns either real-valued answers or complex-valued answers depending on the *values* of the input variables. In general, `Julia` functions avoid this behaviour, as it leads to less performant code.

To find the roots of \(x^2 + x - 1\) we could simply use:

`quadratic(1, 1, -1)`

```
2-element Vector{Float64}:
-1.618033988749895
0.6180339887498949
```

There are also such formulas for third- and fourth-degree polynomials. The Abel-Ruffini theorem states there is no solution in radicals to general polynomial equations of degree five or higher with arbitrary coefficients. Galois – at the tender age of 20 – developed the theory showing, for example, that \(x^5-x-1\) cannot be solved in this manner. There are still *many* facts known about such polynomials. For example, the *Fundamental theorem of algebra* states that every polynomial of degree \(n\) has \(n\) roots, where we count complex roots and multiplicities.

More generally, the equation \(f(x) = 0\) may not have any special form leading to a known solution. In this case, there are various techniques to find zeros. Here we mention graphing, such as is done with a graphing calculator. In the next section, we discuss the bisection algorithm for root finding.

The flight of an arrow can be modeled using various functions, depending on assumptions. Suppose an arrow is launched in the air from a height of 0 feet above the ground at an angle of \(\theta = \pi/4\). With a suitable choice for the initial velocity, a model without wind resistance for the height of the arrow at a distance \(x\) units away may be:

\[ j(x) = \tan(\theta) x - (1/2) \cdot g(\frac{x}{v_0 \cos\theta})^2. \]

In `Julia` we have, taking \(v_0=200\):

`j(x; theta=pi/4, g=32, v0=200) = tan(theta)*x - (1/2)*g*(x/(v0*cos(theta)))^2`

`j (generic function with 1 method)`

With a velocity-dependent wind resistance given by \(\gamma\), again with some units, a similar equation can be constructed. It takes a different form:

\[ y(x) = (\frac{g}{\gamma v_0 \cos(\theta)} + \tan(\theta)) \cdot x + \frac{g}{\gamma^2}\log(\frac{v_0\cos(\theta) - \gamma x}{v_0\cos(\theta)}) \]

Again, \(v_0\) is the initial velocity and is taken to be \(200\) and \(\gamma\) a resistance, which we take to be \(1\). With this, we have the following `Julia` definition (with a slight reworking of \(\gamma\)):

```
function y(x; theta=pi/4, g=32, v0=200, gamma=1)
    a = gamma * v0 * cos(theta)
    (g/a + tan(theta)) * x + g/gamma^2 * log((a - gamma^2 * x)/a)
end
```

`y (generic function with 1 method)`

For each model, we wish to find the value of \(x\) after launching where the height is modeled to be 0. That is, how far will the arrow travel before touching the ground?

For the model without wind resistance, we can graph the function easily enough. Let’s guess the distance is no more than 500 feet:

`plot(j, 0, 500)`

Well, we haven’t even seen the peak yet. Plotting over a wider interval will show that \(1250\) is the largest root. So we plot over this domain to visualize the flight:

`plot(j, 0, 1250)`

As for the model with wind resistance, a quick plot over the same interval, \([0, 1250]\) yields:

`plot(y, 0, 1250)`

Oh, “Domain Error.” Of course, when the argument to the logarithm is negative we will have issues.

We solve for when \(a-\gamma^2 x\) is \(0\):

```
gamma = 1
a = 200 * cos(pi/4)
b = a/gamma^2
```

`141.4213562373095`

We try on the reduced interval, avoiding the obvious *asymptote* at `b` by subtracting \(1\):

`plot(y, 0, b - 1)`

Now we can see the zero is around 140. We re-plot:

`plot(y, 135, 141)`

```
plot(y, 140, 141)
plot!(zero)
```

The answer is approximately \(140.7\).

Finally, we plot both graphs at once to see that it was a very windy day indeed.

```
b = 140.7
plot(j, 0, 1250)
plot!(y, 0, b)
```

Sometimes, the equation \(f(x)=0\) is actually presented as \(f(x)= g(x)\). This form can be rewritten as \(eqn(x) = f(x) - g(x) = 0\), or if working graphically we can just look for crossing points of the graphs of \(f(x)\) and \(g(x)\). Here we shall do that.

We wish to compare two trash collection plans:

- Plan 1: You pay 47.49 plus 0.77 per bag.
- Plan 2: You pay 30.00 plus 2.00 per bag.

There are some cases where plan 1 is cheaper and some where plan 2 is. Categorize them.

Both plans are *linear models* and may be expressed in *slope-intercept* form:

```
plan1(x) = 47.49 + 0.77x
plan2(x) = 30.00 + 2.00x
```

`plan2 (generic function with 1 method)`

Assuming this is a realistic problem and an average American household might produce 10-20 bags of trash a month (yes, that seems too much!) we plot in that range:

```
plot(plan1, 10, 20)
plot!(plan2)
```

We can see the intersection point is around 14 and that if a family generates between 0-14 bags of trash per month that plan 2 would be cheaper, otherwise they should opt for plan 1.
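Since both models are linear, the crossing point can also be pinned down algebraically; this is a quick check of the graphical estimate:

```julia
plan1(x) = 47.49 + 0.77x
plan2(x) = 30.00 + 2.00x
# solve 47.49 + 0.77x = 30.00 + 2.00x for x
x = (47.49 - 30.00) / (2.00 - 0.77)  # just over 14 bags
```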

Graphically estimate the one zero of \(f(x) = e^x - x^3\) over the interval \([0,4]\).

Solving equations of the type \(f(x)=g(x)\) for \(x\) can also be done graphically. One method is to plot *both* functions and look for crossing points. Use this approach to graphically estimate all solutions to \(\cos(x) = x^2\) over the interval \((-\pi/2, \pi/2)\). What are they?

In an analysis of rainbows, Airy developed a special function implemented as `airyai` in `Julia`’s `SpecialFunctions` package, which is loaded with the `MTH229` package. The zeros of this function are all negative. The first one is between \(-3\) and \(-1\). Find it graphically.

The polynomial \(f(x) = x^5 - 6x^3 - 6x^2 -7x - 6\) has three real roots. Which of the following values is not one of them? Try to solve this graphically.

Let \(y(x)\) be defined as above to model the flight of an arrow. If \(\gamma=1/2\) when will the arrow strike the ground after launch?

Let \(y(x)\) be defined as above to model the flight of an arrow. Suppose a hill is in the path of the arrow. The hill is given by this function:

`hill(x) = x > 100 ? 2.0 * (x-100) : 0.0`

`hill (generic function with 1 method)`

By solving `y(x) = hill(x)`, solve for how far the arrow will fly before hitting the hill.

The last example had us graphically “zoom” in on a zero, and led us to an estimate to \(1\) or \(2\) decimal points. Trying to get more accuracy than that graphically is at best tedious. Here we discuss a method to get as much accuracy as is numerically possible based on the intermediate value theorem:

**The intermediate value theorem**: If \(f(x)\) is a continuous function on \([a,b]\) then at some point in the interval \(f(x)\) takes on any value between \(f(a)\) and \(f(b)\).

In particular if \(f(x)\) is continuous with \(f(a)\) and \(f(b)\) having *different* signs then there must be a point \(c\) in \([a,b]\) where \(f(c) = 0\). (When \(f(a)\) and \(f(b)\) have different signs, we say \(a\) and \(b\) *bracket* a root.) This observation is due to Bolzano.
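For instance, the sign condition can be checked directly; here the zero of \(\sin(x)\) at \(\pi\) lies in \([3,4]\):

```julia
f(x) = sin(x)
a, b = 3, 4
bracketing = f(a) * f(b) < 0   # opposite signs, so a zero lies in [a, b]
```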

The bisection algorithm finds a zero *between* two bracketing values, as illustrated in this graphic:

```
plot(sin, 1, 5)
plot!(zero)
vline!([3,4])
```

The zero of the sine function at \(\pi\) is clearly *between* \(3\) and \(4\). The bisection algorithm is a method to refine the bracketing interval so that it is as small as possible.

The bisection algorithm utilizes Bolzano’s observation. It is a simple *iterative* procedure for finding such a value \(c\) when we have a continuous function and a bracketing interval.

Alert

The bisection method does not work when the function does not **cross** the \(x\) axis at the root. For example, the zero at \(0\) of \(f(x) = x^2 e^x\) would not be found with this method. This is why a *bracketing* interval is needed to start the algorithm.

Mathematically the basic idea is simple.

Starting with \([a,b]\), the midpoint \(M = (a + b)/2\) is tested for its function value. If \(f(M) = 0\), great, we are done. If it has opposite sign of \(f(a)\), then a root must be in the interval \([a,M]\), so the problem is reduced to a smaller interval. Otherwise, it has opposite sign of \(f(b)\) and the problem is reduced to \([M,b]\). Either way, the algorithm is repeated for the smaller interval where a root is known. As each step halves the interval length, it must eventually converge to an answer.

Graphically, we could do this. For example, Let’s consider \(f(x) = x^2 - 2\) with the bracketing interval \([1,2]\). We first plot:

```
f(x) = x^2 - 2
a, b = 1, 2
plot(f, 0, 3; legend=false)
plot!(zero)
vline!([a, (a+b)/2, b]) # add vertical lines at a, b and midway
```

We can see that \(c = (a + b)/2 = 3/2\) will have \(f(c) > 0\), so the new bracket is \([a,c]\):

```
a, b = a, (a + b)/2
plot(f, a, b; legend=false)
plot!(zero)
vline!([(a+b)/2])
```

Now the function value at the midpoint is negative, so we modify `a`:

```
a, b = (a + b)/2, b
plot(f, a, b; legend=false)
plot!(zero)
vline!([(a+b)/2])
```

And again, this has a function value at the midpoint in the negative territory, so again we modify `a`:

```
a, b = (a + b)/2, b
plot(f, a, b; legend=false)
plot!(zero)
vline!([(a+b)/2])
```

And now, as the midpoint is in positive territory we would modify \(b\) …

This gets tedious to do graphically. But it can be easily programmed. The main step might look something like this:

```
f(x) = x^2 - 2
a, b = 1, 2
c = (a + b)/2
if f(a) * f(c) < 0
    a, b = a, c
else
    a, b = c, b
end
a, b
```

`(1, 1.5)`

Though some check of whether `f(c) == 0` is also needed.

Here \(c=1.5\) and the new interval is \([1.0, 1.5]\), as we had graphically. We just need to repeat the above.

It seems as though we could be here all day. Indeed, if doing this by hand it might take up quite a bit of time. We should automate this. Before automating this, we need to think: *when would we stop?*

Mathematically we can keep taking halves using the concept of a limit. See for example Zeno’s paradox. On a computer we don’t have such a luxury. In fact, for floating point numbers we couldn’t keep taking halves – even if we wanted – as ultimately we should expect to get `a` and `b` being floating point values that are next to each other – and hence there is no midpoint. (Well, there are some cases that suggest a more careful numeric approach to the above.)
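A quick illustration: for two adjacent floating point values, the computed “midpoint” is just one of the endpoints (`nextfloat` returns the next representable value):

```julia
a = 1.0
b = nextfloat(a)     # the very next Float64 after 1.0
c = (a + b) / 2      # the "midpoint" of two adjacent floats
no_midpoint = (c == a) || (c == b)
```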

So even though this doesn’t make mathematical sense we can try stopping when the following condition is no longer true:

`a < c < b`

`false`

A `while` loop is used to repeat the central step until the above (or some variant) is `false`.
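Put together, the iteration might be sketched as follows. This is a minimal sketch; `bisect` is our own name for it, not a function from a package:

```julia
# repeat the central step while a midpoint still exists between a and b
function bisect(f, a, b)
    f(a) * f(b) < 0 || error("f(a) and f(b) must have opposite signs")
    c = (a + b) / 2
    while a < c < b
        if f(c) == 0
            break               # lucky: an exact zero was found
        elseif f(a) * f(c) < 0
            b = c               # root is in [a, c]
        else
            a = c               # root is in [c, b]
        end
        c = (a + b) / 2
    end
    c
end

bisect(x -> x^2 - 2, 1, 2)  # ≈ sqrt(2)
```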

The `MTH229` package has a `bisection` method implemented for this which also outputs a graphical indication of the first few steps taken.

Okay, let’s look at the function \(f(x) = -16x^2 + 32x\). We know that \(0\) and \(2\) are roots. Let’s see if our algorithm finds them:

```
f(x) = -16x^2 + 32x
bisection(f, -1, 1) ## should find 0
```

```
a###############################################################b
... exact answer found ...
```

`0.0`

`bisection(f, 1, 3) ## should find 2`

```
a###############################################################b
... exact answer found ...
```

`2.0`

Okay, it seems to work. Let’s try it on a less trivial problem. We know \(\sin(x)\) and \(\cos(x)\) cross in the interval \([0, \pi/2]\). If we are too tired to remember where, we can simply ask:

```
f(x) = cos(x) - sin(x)
x = bisection(f, 0, pi/2)
```

```
a###############################################################b
................................a###############################b
................................a###############b................
................................a#######b........................
................................a###b............................
................................a#b..............................
................................ab...............................
⋮⋮
```

`0.7853981633974483`

Is `x` really a zero?

`x, f(x)`

`(0.7853981633974483, 1.1102230246251565e-16)`

Hmm, the answer is `1.1102230246251565e-16`. So technically this is not a zero. But *computationally* it is a zero! First, it should be clear that it is *really close* to zero. We will see it is as close as computationally possible to a zero.

The `nextfloat` and `prevfloat` functions find the floating point values just bigger than `x` and just smaller. In this case we have that `f(x)` and `f(nextfloat(x))` have different signs:

`f(x), f(nextfloat(x))`

`(1.1102230246251565e-16, -1.1102230246251565e-16)`

So \(f\) is crossing \(0\) between the value we found, `x`, and the floating point value just a bit bigger. We can’t realistically expect to get any closer than that, as there are no machine numbers in between these two.

Plot `airyai` over `[-3,3]`. Which of the following is a *bracketing* interval?

Let \(f(x) = e^x - x^4\). Which of these intervals is a *bracketing interval* for a zero of \(f\)?

Let \(f(x) = 1/2 + \sin(3x) + \sin(7x)\). Which of these is a *bracketing interval*?

The bisection method, while easy to describe and understand, can be made a bit more efficient. The `find_zero` function from the `Roots` package does so. This package is loaded when `MTH229` is. This function uses a tuple to specify the bracketing interval.

For example, to find a zero of \(f(x) = 2x \cdot \exp(-20) - 2 \cdot \exp(-20x) + 1\) in the *bracketing* interval \([0,1]\) we have:

```
f(x) = 2x * exp(-20) - 2 * exp(-20x) + 1
find_zero(f, (0, 1))
```

`0.03465735902085385`

The `find_zero` function is actually an interface to various root-finding algorithms. When called as above – with two initial starting points – it uses a bracketing approach as discussed here, though with a different notion of the midpoint.
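A particular algorithm can also be requested by passing a method object; here `Bisection()` from `Roots` asks for the classic bisection approach explicitly (we load `Roots` directly so the snippet stands alone):

```julia
using Roots

f(x) = 2x * exp(-20) - 2 * exp(-20x) + 1
x_default = find_zero(f, (0, 1))               # default bracketing method
x_bisect  = find_zero(f, (0, 1), Bisection())  # classic bisection, requested explicitly
```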

A slightly different interface is given by the `fzero` function, where the above would be:

`fzero(f, 0, 1)`

`0.03465735902085385`

One needs to know where to look in order to use the bisection method. The basic “one-two punch” is:

- graph the function to identify bracketing values for which the zero is *between* \(a\) and \(b\). These need not be very precise, indeed can even be infinite.
- use the bisection method to find the zero to many decimal points.

Here we illustrate with the problem of finding all intersection points of \(e^x = x^4\) over the interval \([0,10]\).

Recall, solving for \(f(x) = g(x)\) is identical to the problem of solving \(eqn(x) = 0\), where we define \(eqn(x) = f(x) - g(x)\). So our problem is to find solutions to \(e^x - x^4 = 0\).

A quick plot shows that the function has such a wide range that looking over the entire domain at once will be problematic:

```
f(x) = exp(x) - x^4
plot(f, 0, 10)
```

Instead, we look over \([0,3]\) and \([8,9]\). A quick confirmation shows these are good choices to use. For example, between \(8\) and \(9\) we have:

`plot(f, 8, 9)`

So we find the value of the zero in the bracketed region \([8,9]\):

`find_zero(f, (8, 9))`

`8.6131694564414`

The root within \([0, 3]\) is found with:

`find_zero(f, (0, 3))`

`1.4296118247255556`

There are two reasonable means to solve \(f(x) = c\) for \(x\) when \(c\) is non-zero. As above, we can define a new function \(eqn(x) = f(x) - c\). Then when \(eqn(x)\) is \(0\) we will have \(f(x)\) equal to \(c\).

Alternatively, we can pass in \(c\) as a parameter by defining a two-argument version of \(f.\)

Suppose we aim to find when \(\tan(x) = c\) for \(c > 0\). For any \(c > 0\), say \(c = 1.23\), we can see from the shape of the graph of \(\tan(x)\) that there is one intersection point between \(0\) and \(\pi/2\).

The easiest way is to create a simple function:

```
eqn(x) = tan(x) - 1.23
find_zero(eqn, (0, pi/2))
```

`0.8881737743776796`

We can be more formal (as it is a more general pattern) and use two functions to define the problem:

```
f(x) = tan(x)
g(x) = 1.23
eqn(x) = f(x) - g(x)
find_zero(eqn, (0, pi/2))
```

`0.8881737743776796`

We compare with the arc tangent, as there is a built-in function to do this particular problem:

`atan(1.23)`

`0.8881737743776796`

An alternate approach is to define our function with a *parameter*:

`f(x, p) = tan(x) - p`

`f (generic function with 2 methods)`

We can then pass values for `p` to `find_zero` to solve:^{1}

`find_zero(f, (0, pi/2); p=1.23)`

`0.8881737743776796`

For this problem, the two approaches are about equal in effort. The first style lends itself to more complicated expressions on both sides of the equals sign, the second is useful when more than one value for \(p\) is to be sought.
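For instance, with the parameter style a comprehension handles several values of \(p\) at once. A minimal sketch, loading `Roots` so the snippet stands alone, and comparing against `atan`, which solves this particular equation exactly:

```julia
using Roots

f(x, p) = tan(x) - p
ps = (0.5, 1.23, 2.0)
xs = [find_zero(f, (0, pi/2); p=p) for p in ps]  # one solution per parameter value
```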

The `find_zero` function finds a value solving \(f(x) = 0\). When the question involves two functions, e.g. \(f(x) = g(x)\), it can’t be immediately used. However, forming an auxiliary function \(eqn(x) = f(x) - g(x)\) works, as solutions to \(eqn(x) = 0\) satisfy \(f(x) = g(x)\).

For example, identifying when \(\cos(x) = 2x\) may be of interest. By simple geometry, the line through zero with positive slope will intersect the cosine somewhere in \([0, \pi/2]\).

We can do this with the following approach:

```
f(x) = cos(x)
g(x) = 2x
eqn(x) = f(x) - g(x)
find_zero(eqn, (0, pi/2))
```

`0.45018361129487355`

More generally, we might be interested in solving \(\cos(x) = px\) for different \(p > 0\). The parameter approach can be readily used:

```
f(x, p) = cos(x) - p * x
find_zero(f, (0, pi/2); p = 2)
```

`0.45018361129487355`

The `secant(f, a, b)` function returns a *function* representing the secant line between \((a, f(a))\) and \((b, f(b))\). In the following we see where the secant line between \(1\) and \(2\) intersects that between \(-2\) and \(-1\) for \(f(x) = e^x\).

Graphically, we have

```
f(x) = exp(x)
plot(f, -2.1, 2.1; legend=false, ylims=(0, 10))
plot!(secant(f, 1, 2))
plot!(secant(f, -2, -1))
```

The intersection point near \(0.5\) is clear. In the following we show how an infinite bracketing interval is possible. As two non-parallel lines intersect in one point, any open interval containing the intersection point will be a bracketing interval. The bisection algorithm of `find_zero` handles without fuss any signed value, even infinite ones (though it does stop if it encounters a `NaN` value).

```
f(x) = exp(x)
sl_12 = secant(f, 1, 2)
sl_21 = secant(f, -2, -1)
eqn(x) = sl_12(x) - sl_21(x)
find_zero(eqn, (-Inf, Inf))
```

`0.5752103826044416`

The above follows the pattern of using `eqn(x) = f(x) - g(x)` to solve \(f(x) = g(x)\). The intermediate names, `sl_12` and `sl_21`, are used for the two functions here that we are finding the intersection of.

In the bisection method algorithm we checked that the value of \(f\) at \(a\) and \(b\) had opposite signs by looking at \(f(a)\cdot f(b)\). Why did this work?

There is another root in the interval \([-10, 0]\) for the function \(f(x) = e^x - x^4\). Find its value numerically:

Let \(f(x) = x^2 - 10 \cdot x \cdot \log(x)\). This function has two zeros on the positive \(x\) axis. You are asked to find the largest (graph and bracket…):

The `airyai` function has infinitely many negative roots, as the function oscillates when \(x < 0\). In a previous problem we graphically found the largest root. Now find the *second largest root* using the graph to bracket the answer, and then solving.

In 1638, according to Amir D. Aczel, an experiment was performed in the French countryside. A monk, Marin Mersenne, launched a cannonball straight up into the air in an attempt to help Descartes prove facts about the rotation of the earth. Though the experiment was not successful, Mersenne later observed that the time for the cannonball to go up was greater than the time to come down, as recounted in “Vertical Projection in a Resisting Medium: Reflections on Observations of Mersenne”.

This isn’t the case for simple ballistic motion, where the time to go up is equal to the time to come down. We can “prove” this numerically. For simple ballistic motion, \(f(t) = -(1/2)\cdot 32 t^2 + v_0t\). The times to go up and come down are found from the two zeros of this function. The peak time is related to a zero of a function given by `D(f)`, which for now we’ll take as a mystery function, but later will be known as the derivative.

Let \(v_0= 390\). The three times in question can be found from the zeros of `f` and `f'`. What are they?

For simple ballistic motion you find that the time to go up is the time to come down. For motion within a resistant medium, such as air, this isn’t the case. Suppose a model for the height as a function of time is given by

\[ h(t) = (\frac{g}{\gamma^2} + \frac{v_0}{\gamma})(1 - e^{-\gamma t}) - \frac{gt}{\gamma} \]

(From “On the trajectories of projectiles depicted in early ballistic Woodcuts”)

Here \(g=32\), again we take \(v_0=390\), and \(\gamma\) is a drag coefficient that we will take to be \(1\). This is valid when \(h(t) \geq 0\). In `Julia`, rather than hard-code the parameter values, for added flexibility we can pass them in as keyword arguments:

`h(t; g=32, v0=390, gamma=1) = (g/gamma^2 + v0/gamma)*(1 - exp(-gamma*t)) - g*t/gamma`

`h (generic function with 1 method)`

Now find the three times: \(t_0\), the starting time; \(t_a\), the time at the apex of the flight; and \(t_f\), the time the object returns to the ground.

For polynomials of higher degree, there are some specific methods that can be used to identify the roots. We will demonstrate the methods from the `SymPy` package. These work on symbolic expressions and will be described in more detail later, but for now, we have to make a *symbolic* variable, `x`, to proceed:

`@syms x`

`(x,)`

The two functions we discuss are `sympy.roots` and `sympy.real_roots`.

First consider the quadratic equation below. We can identify the real roots of algebraic type with:

```
f(x) = x^2 + x - 1
sympy.real_roots(f(x))
```

\(\left[\begin{smallmatrix}- \frac{\sqrt{5}}{2} - \frac{1}{2}\\- \frac{1}{2} + \frac{\sqrt{5}}{2}\end{smallmatrix}\right]\)

That was so easy, we’ll do it again. What are the roots of the polynomial \(f(x) = -16x^2 + 32x + 6\)?

```
f(x) = -16x^2 + 32x + 6
sympy.real_roots(f(x))
```

\(\left[\begin{smallmatrix}1 - \frac{\sqrt{22}}{4}\\1 + \frac{\sqrt{22}}{4}\end{smallmatrix}\right]\)

As can be seen, \(f\) has two real roots. This next polynomial has none:

```
f(x) = x^2 + x + 1
sympy.real_roots(f(x))
```

`Any[]`

These are examples of the general template **action(function_object, args…)** for performing some action on a function. In this case, the action is to find the roots of a polynomial function, and no additional *args…* are necessary.

For some problems only the possible real roots are desired.

The following polynomial has both real roots and complex roots. The real ones are:

```
f(x) = (x^2 + x + 1) * (x^2 + x - 1)
sympy.real_roots(f(x))
```

Compare to

`sympy.roots(f(x))`

\[\begin{equation*}\begin{cases}- \frac{1}{2} - \frac{\sqrt{3} i}{2} & \text{=>} &1\\- \frac{1}{2} + \frac{\sqrt{5}}{2} & \text{=>} &1\\- \frac{\sqrt{5}}{2} - \frac{1}{2} & \text{=>} &1\\- \frac{1}{2} + \frac{\sqrt{3} i}{2} & \text{=>} &1\\\end{cases}\end{equation*}\]

The word “algebraic” was used, as some problems have answers, but not readily expressible ones. For example, `x^5 - x - 1`:

```
f(x) = x^5 - x - 1
sympy.roots(f(x))
```

`Dict{Any, Any}()`

However, the `solve` function (which solves `f(x) = 0`) does hint at answers:

`solve(f(x))`

\(\left[\begin{smallmatrix}\operatorname{CRootOf} {\left(x^{5} - x - 1, 0\right)}\\\operatorname{CRootOf} {\left(x^{5} - x - 1, 1\right)}\\\operatorname{CRootOf} {\left(x^{5} - x - 1, 2\right)}\\\operatorname{CRootOf} {\left(x^{5} - x - 1, 3\right)}\\\operatorname{CRootOf} {\left(x^{5} - x - 1, 4\right)}\end{smallmatrix}\right]\)

These can be revealed by converting them to numeric values with `N`:

`N.(solve(f(x)))`

```
5-element Vector{Number}:
1.1673039782614187
-0.7648844336005847 - 0.35247154603172626im
-0.7648844336005847 + 0.35247154603172626im
0.18123244446987538 - 1.0839541013177107im
0.18123244446987538 + 1.0839541013177107im
```

Find all roots of the function \(f(x) = x^4 - 4x^2 -4x + 2\). Are they all real numbers?

Find the largest real root of the polynomial \(x^2 + x - 5\)

Find the largest real root of the polynomial \(x^3 - x - 17\)

The rule of signs of Descartes is a simple means to give an upper bound on the number of positive real roots a polynomial has. One counts the number of sign changes amongst the polynomial’s coefficients. Suppose this is \(k\); then the number of *positive* real roots (counting multiplicities) is one of \(k\), \(k-2\), \(k-4\), … . In particular, if \(k\) is odd, there must be at least one positive real root.

For example, the polynomial \(x^3 -x^2 -x - 1\) has signs `+ - - -`, so there is just one sign change. This implies there must be exactly one *positive* real root, which is identified with:

```
f(x) = x^3 -x^2 -x - 1
N.(sympy.real_roots(f(x)))
```

```
1-element Vector{Float64}:
1.8392867552141612
```
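Counting the sign changes can itself be sketched in a few lines; `sign_changes` is our own helper, not a function from any package:

```julia
# count sign changes among the nonzero coefficients, highest degree first
function sign_changes(coeffs)
    s = sign.(filter(!iszero, coeffs))
    count(i -> s[i] != s[i+1], 1:length(s)-1)
end

sign_changes([1, -1, -1, -1])  # x^3 - x^2 - x - 1 has signs + - - -: one change
```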

The polynomial \(f(x) = x^5 -x + 1\) has potentially \(2\) *positive*, real roots. Are there \(0\) or \(2\) positive, real roots?

The number of possible *negative*, real roots can also be found from Descartes’ rule. Instead of looking at the sign changes of \(f(x)\), one must look at the sign changes of \(g(x) = f(-x)\).

If \(f(x) = x^5 - x +1\) we have \(g(x) = -x^5 +x + 1\) (just change the signs of the coefficients of the odd powers). Then \(g(x)\) has one sign change. This means there is one *negative* real root. What is it?

Environment of `Julia` when generated

`Julia` version:

`VERSION`

`v"1.10.4"`

Packages and versions:

```
using Pkg
Pkg.status()
```

```
Status `~/work/mth229.github.io/mth229.github.io/Project.toml`
[a2e0e22d] CalculusWithJulia v0.2.6
[7073ff75] IJulia v1.25.0
[b964fa9f] LaTeXStrings v1.3.1
[ebaf19f5] MTH229 v0.3.2
[91a5bcdd] Plots v1.40.4
[f27b6e38] Polynomials v4.0.11
[438e738f] PyCall v1.96.4
[612c44de] QuizQuestions v0.3.23
[295af30f] Revise v3.5.14
[f2b01f46] Roots v2.1.5
[24249f21] SymPy v2.1.1
[56ddb016] Logging
```

We use a keyword argument in the example code; the last positional argument can also be used, which is useful for broadcasting were we interested in finding solutions for multiple values of the parameter.↩︎