ROOT-FINDING USING JAVA

This is a detailed examination of four root-finding methods: how the form of an equation affects each method's efficiency, and how much error each method produces. Although some calculation was done by hand, the vast majority of our conclusions were reached by running a computer program for each method. The four methods examined were bisection, false position, secant, and Newton's method. Each method turned out to be most efficient for a particular type of equation or desired outcome.

Bisection Method

The bisection method arrives at the root by constraining the interval in which a root lies and repeatedly shrinking that interval, leaving the user a very small range in which the root must lie. One of the best features of the bisection method is that it is possible to predict exactly how many steps it will take to find the root to any given accuracy, because the method halves the length of the interval at each step.
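For example, halving an interval of length b - a a total of k times leaves an interval of length (b - a)/2^k, so reaching an accuracy e takes about log2((b - a)/e) steps. For the interval [1, 2] and e = 0.00001 this gives log2(10^5), about 16.6, or 17 steps, which is exactly the count that appears in Tables A and B.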

To find a root of y = f(x) via the bisection method, one examines an interval [a, b] on which f(a) and f(b) have opposite signs: if f(a) < 0 then f(b) > 0, or if f(a) > 0 then f(b) < 0. (Of course, if f(a) = 0 or f(b) = 0, you have been fortunate enough to stumble upon the root and need not proceed further.) Having chosen the initial interval, find its midpoint m = (a + b)/2. f(m) will have the same sign as either f(a) or f(b), and m replaces whichever endpoint that is, so the new interval is either [a, m] or [m, b], depending on the behavior of the function. One repeats this process until the interval is as small as or smaller than the acceptable error.
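A minimal Java sketch of this procedure, written for f(x) = x^2 - 2 on [1, 2] as in Table A (the class and variable names here are ours for illustration, not those of the program that produced the tables):

    class Bisection {
        // The test function from Table A.
        static double f(double x) { return x * x - 2; }

        public static void main(String[] args) {
            double a = 1, b = 2, tol = 0.00001;
            while (b - a > tol) {
                double m = (a + b) / 2;                 // midpoint of [a, b]
                if (f(m) == 0) { a = m; b = m; break; } // stumbled exactly on the root
                if (f(a) * f(m) < 0) b = m;             // f(m) has f(b)'s sign: root in [a, m]
                else                 a = m;             // otherwise the root is in [m, b]
            }
            System.out.println("root is near " + (a + b) / 2);
        }
    }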

One interesting discovery about the bisection method is that not every guess is closer to the root than the previous guess. For the equation in Table A, four of the 17 guesses (nearly a quarter) were actually further from the root than the guess before them. Although the interval known to contain the root shrinks at every step, nothing about bisection depends on the nature of the function itself. The guesses approach the root on the whole, but because each guess is nothing more than the midpoint of an interval, there is no reason to expect every guess to be a better approximation than the last, and in practice some are not.

False Position

Another method for estimating roots of equations is the false position method. It starts with two endpoints a and b. If neither f(a) nor f(b) is zero, then f(a) times f(b) must be negative, since the two values must lie on opposite sides of zero, and a negative value times a positive value is negative. After the endpoints are chosen, false position finds the zero of the secant line joining (a, f(a)) and (b, f(b)) and makes that point its new endpoint. This is repeated as the endpoints close in on the root. An examination of the computer code for the false position method shows that it is very similar to the bisection method, except that it approaches the root through the zero of the secant line rather than the midpoint of the interval.
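The zero of the secant line through (a, f(a)) and (b, f(b)) is c = b - f(b)(b - a)/(f(b) - f(a)). A sketch of the method in the same style as the bisection program above (again our own illustrative code):

    class FalsePosition {
        // The test function from Table A.
        static double f(double x) { return x * x - 2; }

        public static void main(String[] args) {
            double a = 1, b = 2, tol = 0.00001;
            double c = a;
            while (Math.abs(f(c)) > tol) {
                // Zero of the secant line joining (a, f(a)) and (b, f(b)).
                c = b - f(b) * (b - a) / (f(b) - f(a));
                if (f(a) * f(c) < 0) b = c;   // f(c) has f(b)'s sign: root in [a, c]
                else                 a = c;   // otherwise the root is in [c, b]
            }
            System.out.println("root is near " + c);
        }
    }

Note that, unlike bisection, the bracketing interval need not shrink below the tolerance, so the loop stops when |f(c)| is small instead of when the interval is small.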

The false position method is often significantly faster than the bisection method for equations such as y = x^n - 2 with a small value of n. When n becomes large, however, false position often becomes the slowest method. In the data section of this paper, Table B shows the number of repetitions each method needed for the equation y = x^n - 2 at each value of n.

Secant

One of the faster methods, compared to bisection and false position, is the secant method. Unlike the other two methods, it does not require that f(a) and f(b) at the two initial values lie on opposite sides of zero; no sign change is needed. The secant method works by always making the old b value the new a value, while the new b value becomes the zero of the secant line.

One of the problems with this method is that the slope of the secant line can become very small, which sends the next guess very far from the current points. The method fails on an equation whose slope is very low near the root, because the shallow secant line makes the guesses jump by large amounts. Another problem is that, like the false position method, this method has no natural stopping point: it must be iterated until f of the current guess is very small.
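A sketch of the secant method in the same style (our illustrative code; note the stopping test on |f(b)| and the division that misbehaves when the secant slope is near zero):

    class Secant {
        // The test function from Table A.
        static double f(double x) { return x * x - 2; }

        public static void main(String[] args) {
            double a = 1, b = 2, tol = 0.00001;
            // No bracketing interval to test, so stop when f at the
            // current guess is very small.
            while (Math.abs(f(b)) > tol) {
                // Zero of the secant line; if f(b) is close to f(a), the
                // slope is tiny and this step jumps far from the root.
                double next = b - f(b) * (b - a) / (f(b) - f(a));
                a = b;      // the old b becomes the new a
                b = next;   // the zero of the secant line becomes the new b
            }
            System.out.println("root is near " + b);
        }
    }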

Newton’s Method

Newton’s method is generally introduced in calculus as a way to approximate roots. One main difference between Newton’s method and the others is that it requires only one initial guess instead of two. If the guess is relatively close to a root, Newton’s method converges remarkably fast; if it is not, the method can work slowly and, depending on the function, may be impossible to use.

One similarity between Newton’s method and the bisection method is that both approach the root at a consistent rate, which lets the user predict how long it will take to arrive at the root, provided the function and the original guess are sufficiently "nice" for Newton’s method. Newton’s method roughly doubles the number of correct digits with each step. This is why Newton requires the same number of steps in Table B, regardless of the value of n in the equation.

Newton’s method uses not only the function itself but also its derivative to approximate the root. One chooses a point a. Throughout the course of the method, a is replaced by the x-intercept of the tangent line to f at a. The formula for the new point is a - f(a)/f'(a); this becomes the new a, and one repeats the process until the root is found to sufficient precision.
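A sketch in the same style as the programs above (our illustrative code; the derivative is supplied by hand):

    class Newton {
        static double f(double x)      { return x * x - 2; } // test function from Table A
        static double fprime(double x) { return 2 * x; }     // its derivative

        public static void main(String[] args) {
            double a = 1.5, tol = 0.00001;   // a single initial guess
            while (Math.abs(f(a)) > tol) {
                a = a - f(a) / fprime(a);    // x-intercept of the tangent line at a
            }
            System.out.println("root is near " + a);
        }
    }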

Another interesting feature of Newton’s method is that its error can be calculated directly.

Error calculations simply show the difference between the value arrived at by some method and the actual value. Let R be the root and x the result of the most recent Newton iteration, so that

Error = x - R

One Newton step replaces x with x - f(x)/f'(x), so

New Error = x - f(x)/f'(x) - R

where f(x) and f'(x) are expanded out as power series about R: f(x) = f'(R)(x - R) + (f''(R)/2)(x - R)^2 + ... and f'(x) = f'(R) + f''(R)(x - R) + ...

With Newton's method, we make the assumption that x is close enough to R that the first nonzero term of each series carries almost all of the value. Substituting these into the expression above, the (x - R) terms cancel, leaving

New Error = (f''(R)/(2 f'(R))) (x - R)^2, approximately.

That is, the new error is proportional to the square of the old error, so the number of correct digits roughly doubles with every step; this is the doubling of precision noted above.
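One can watch this squaring happen numerically. The short program below (our own demonstration, not part of the original experiment) prints the error of successive Newton iterates for f(x) = x^2 - 2; the exponent of the error roughly doubles on each line until the limit of double precision is reached:

    class NewtonErrorDemo {
        public static void main(String[] args) {
            double R = Math.sqrt(2);   // the true root
            double x = 1.5;            // initial guess
            for (int i = 0; i < 6; i++) {
                System.out.println("error = " + (x - R));
                x = x - (x * x - 2) / (2 * x);   // one Newton step
            }
        }
    }

This rapid shrinkage is also why the secant and Newton programs reached the limit of the computer's precision after only six iterations (see Table C).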

Data

Table A: f(x) = x^2 - 2, a = 1, b = 2, accuracy within 0.00001

Method            Number of steps
Bisection         17
False Position    8
Secant            5
Newton            4

Table B: f(x) = x^n - 2, a = 1, b = 2, n = {3, 4, ..., 10}, accuracy within 0.0001

Number of steps:

Method            n=3   n=4   n=5   n=6   n=7   n=8   n=9   n=10
Bisection         17    17    17    17    17    17    17    17
False Position    13    20    31    49    76    120   189   296
Secant            6     7     7     7     8     8     8     8
Newton            4     4     4     4     4     4     4     4

Table C: f(x) = x^2 - 2, a = 1, b = 2, error after 25 iterations of each method

Method            Error
Bisection         9.501 x 10^-8
False Position    -2.420323 x 10^-8
Secant            -2.420323 x 10^-8
Newton            -2.420323 x 10^-8

For the secant and Newton methods, the program was not able to complete 25 iterations; after 6 iterations, the memory overflowed.

Analysis of Data

The above data was produced with "nice" endpoints and functions that are close to ideal. Even so, there was considerable variation in the speed, accuracy, and reliability of the methods. Table A shows the variation in speed among all four; it is clear that in this case Newton’s method would be a good choice for finding the root.

In Table B, one can see the consistency with which the bisection and Newton methods handle nice functions. Bisection would show this same consistency on any function, but Newton requires a relatively well-behaved function in order to be helpful. One can also see that one method’s definition of a nice function is not the same as another’s. Newton works well on any non-oscillating, non-asymptotic function whose slope is not very close to zero near the root; nearly any polynomial fits this description. The false position method’s effectiveness, however, falls off sharply as the degree of the polynomial rises, so its definition of a nice function is something more on the order of a low-degree function.

Table C shows the accuracy that can be achieved in 25 iterations of each method. The errors for all the methods are the same except for bisection, which once again emphasizes how fast the other three are on functions they consider nice. The main problem encountered in this exercise was, in fact, the speed of these methods: within six steps the secant and Newton’s methods had produced results as accurate as the computer’s memory could hold, and after six iterations the computer would have a memory overflow. We corrected this by reducing the number of iterations.


Conclusion

This analysis showed first-hand that a method’s speed tends to be inversely related to its consistency in handling non-nice functions. Newton’s method originally seemed to be an exception to this rule, because it was about five times as fast as bisection yet just as consistent on x^n - 2 for every n. Only after trying functions of other forms does one find that the functions which are nice for Newton’s method are different from those for the false position and secant methods. Non-nice functions are easy to find for Newton’s method, which makes it inconsistent and, on such functions, unable to solve efficiently for the root.

For nice functions and guesses relatively close to the root, each of the methods used requires relatively few iterations before coming very close to that root. By examining the type of function, one can determine which method would be most efficient in finding its root.



*Some contents courtesy of Emily Hutchins.