
Roots of Equations

• Direct Search, Bisection Methods

• Regula Falsi, Secant Methods

• Newton-Raphson Method

• Zeros of Polynomials (Horner's, Muller's methods)

• Eigenvalue Analysis


Motivation

• Solution to many scientific and engineering problems

• Equations may be non-linear

• Equations expressible in the form f(x) = 0

• Values of x that satisfy the above equation are termed "roots"

• Example: Quadratic equation ax^2 + bx + c = 0 with roots

  x_1 = (-b + sqrt(b^2 - 4ac)) / (2a),   x_2 = (-b - sqrt(b^2 - 4ac)) / (2a)


Example Functions


Direct Search Method

Trial and error approach: evaluate f(x) over some range of x at multiple points

• Specify a range within which the root is assumed to occur

• Subdivide the range into smaller, uniformly spaced intervals

• Search through all subintervals to locate the root.


Direct Search: Searching an Interval

• Evaluate f(x) at the end points of each subinterval, and check whether f(x) has opposite polarity (signs) at the two ends; a minimal sketch follows below.

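A minimal Python sketch of this scan (the function name, subinterval count, and reporting style are illustrative assumptions, not from the slides):

```python
def direct_search(f, a, b, n=1000):
    """Scan [a, b] in n uniform subintervals; report those where f changes sign."""
    h = (b - a) / n
    brackets = []
    for i in range(n):
        lo, hi = a + i * h, a + (i + 1) * h
        if f(lo) * f(hi) < 0:          # opposite polarity => a root lies inside
            brackets.append((lo, hi))
    return brackets

# Example: f(x) = x^3 - 2 has a single root near 1.26 in [1, 2]
print(direct_search(lambda x: x**3 - 2, 1.0, 2.0, 20))
```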

Direct Search: Disadvantages

⇒ Computationally expensive (linear search)

⇒ Intervals have to be very small for high precision

⇒ If intervals are too large, roots can be missed.

⇒ Multiple roots cannot be handled, since f(x) need not change sign at such a root:

f(x) = x^2 - 2x + 1 = (x - 1)^2   (double root at x = 1)


Bisection Method

• Assumes a single root within the interval of interest.

• Similar to binary search: evaluates the function at the middle of the interval and moves to the sub-interval containing the root.

• Requires fewer calculations than the direct search method.


Bisection Method: Algorithm
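The algorithm itself appears as a figure in the original slides; the following is a minimal Python sketch under the usual assumptions (f continuous and f(a)·f(b) < 0; the tolerance and iteration cap are illustrative):

```python
def bisection(f, a, b, tol=1e-8, max_iter=100):
    """Bisection method: halve the bracketing interval until it is shorter than tol."""
    if f(a) * f(b) >= 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    for _ in range(max_iter):
        m = 0.5 * (a + b)               # midpoint
        if f(a) * f(m) < 0:             # root lies in [a, m]
            b = m
        else:                           # root lies in [m, b]
            a = m
        if b - a < tol:
            break
    return 0.5 * (a + b)

# Example: cube root of 2 is a root of f(x) = x^3 - 2
print(bisection(lambda x: x**3 - 2, 1.0, 2.0))   # ~1.259921
```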


Bisection Method: Example


Error Analysis

Error measures can be an absolute value,

d = |x_{m,i+1} - x_{m,i}|

or a percent (relative) error,

r = (x_{m,i+1} - x_{m,i}) / x_{m,i+1} * 100

True accuracy is known only if the true solution (root x_t) is known:

r = (x_t - x_{m,i}) / x_t * 100
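A small Python illustration of these error measures, using two successive bisection midpoints for the cube root of 2 as illustrative values:

```python
x_prev, x_curr = 1.265625, 1.2578125      # successive midpoints (illustrative values)
x_true = 2 ** (1.0 / 3.0)                 # true root, known only for test problems

d = abs(x_curr - x_prev)                          # absolute error estimate
r_approx = (x_curr - x_prev) / x_curr * 100       # percent error from successive iterates
r_true = (x_true - x_curr) / x_true * 100         # percent error from the true root
print(d, r_approx, r_true)
```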


Regula Falsi, Secant Methods

• Bisection makes no use of the shape of the function to determine the root.

• Use a straight-line approximation to f(x) (in contrast to the midpoint) to find the approximate root location.

• Start with two points (a, f(a)), (b, f(b)) that satisfy f(a) · f(b) < 0.


Regula Falsi Method

• Next approximation: where the straight-line approximation crosses the x-axis:

f(b) / (b - s) = (f(b) - f(a)) / (b - a)

s = b - (b - a) / (f(b) - f(a)) * f(b)


Regula Falsi: Algorithm
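The algorithm slide is a figure in the original; here is a minimal Python sketch of regula falsi based on the update formula above (the stopping criteria are illustrative assumptions):

```python
def regula_falsi(f, a, b, tol=1e-10, max_iter=100):
    """False position: replace one bracket end with the x-intercept of the secant line."""
    fa, fb = f(a), f(b)
    if fa * fb >= 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    s = a
    for _ in range(max_iter):
        s = b - (b - a) / (fb - fa) * fb      # x-intercept of the chord
        fs = f(s)
        if abs(fs) < tol:
            break
        if fa * fs < 0:                       # root in [a, s]
            b, fb = s, fs
        else:                                 # root in [s, b]
            a, fa = s, fs
    return s

# Example from the slides: cube root of 2, i.e. f(x) = x^3 - 2
print(regula_falsi(lambda x: x**3 - 2, 1.0, 2.0))   # ~1.259921
```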


Regula Falsi: Example (Cube Root of 2)


Secant Method

• Similar to the Regula Falsi method, with a slight modification.

• New estimate x_{i+1} of the root, based on f(x_i) and f(x_{i-1})

• Especially useful when it is difficult to obtain derivatives analytically.

x_2 = x_1 - (x_1 - x_0) / (y_1 - y_0) * y_1   and

x_{i+1} = x_i - (x_i - x_{i-1}) / (y_i - y_{i-1}) * y_i


Secant Method (contd)

From similar triangles (the secant line through (x_{i-1}, f(x_{i-1})) and (x_i, f(x_i)) crossing zero at x_{i+1}),

f(x_{i-1}) / (x_{i+1} - x_{i-1}) = f(x_i) / (x_{i+1} - x_i)

Solving for x_{i+1} gives

x_{i+1} = x_i - f(x_i)(x_{i-1} - x_i) / (f(x_{i-1}) - f(x_i))
        = x_i - f(x_i) / [(f(x_{i-1}) - f(x_i)) / (x_{i-1} - x_i)]
        = x_i - f(x_i) / f'(x_i)

which is similar in form to Newton's method.

⇒ Secant method doesn't require testing for interval selection

⇒ Generally converges faster, but convergence is not guaranteed.


Secant Method: Algorithm
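Again the algorithm itself is a figure in the original; a minimal Python sketch of the secant iteration above (tolerances are illustrative):

```python
def secant(f, x0, x1, tol=1e-12, max_iter=100):
    """Secant method: x_{i+1} = x_i - f(x_i) * (x_i - x_{i-1}) / (f(x_i) - f(x_{i-1}))."""
    f0, f1 = f(x0), f(x1)
    for _ in range(max_iter):
        x2 = x1 - f1 * (x1 - x0) / (f1 - f0)
        if abs(x2 - x1) < tol:
            return x2
        x0, f0 = x1, f1
        x1, f1 = x2, f(x2)
    return x1

# Example from the slides: square root of 2, i.e. f(x) = x^2 - 2
print(secant(lambda x: x * x - 2, 1.0, 2.0))   # ~1.4142136
```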


Secant Method: Example (Square Root of 2)


Newton-Raphson Method

• Motivation: Rate of convergence of the bisection method is slow.

• Newton's iteration method uses the linear portion of the Taylor series:

f(x_1) = f(x_0) + (df/dx) Δx,   where Δx = x_1 - x_0 and df/dx is the derivative w.r.t. x.

f(x_1) = 0 = f(x_0) + (df/dx)(x_1 - x_0)   when x_1 is the root of the function.

x_1 = x_0 - f(x_0) / (df/dx) = x_0 - f(x_0) / f'(x_0)   (derivative f' evaluated at x_0)


Newton-Raphson Method (contd)

Iteratively,

x_{i+1} = x_i - f(x_i) / f'(x_i)

Accuracy

• If f(x) is linear, the first iteration produces the exact solution

• If f(x) is non-linear, accuracy depends on the importance of the truncated non-linear terms of the Taylor series

Graphical interpretation: approximate f'(x_i) over [x_i, x_{i+1}] as

f'(x_i) = (f(x_i) - 0) / (x_i - x_{i+1})

from which the iteration is derived.


Newton-Raphson Method: Algorithm
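The algorithm appears as a figure in the original slides; a minimal Python sketch of the iteration x_{i+1} = x_i - f(x_i)/f'(x_i), assuming the derivative is supplied analytically (tolerances are illustrative):

```python
def newton_raphson(f, dfdx, x0, tol=1e-12, max_iter=50):
    """Newton-Raphson: repeatedly follow the tangent line to the x-axis."""
    x = x0
    for _ in range(max_iter):
        fx, dfx = f(x), dfdx(x)
        if dfx == 0:
            raise ZeroDivisionError("f'(x) vanished; Newton step undefined")
        x_new = x - fx / dfx
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Example: square root of 2 via f(x) = x^2 - 2, f'(x) = 2x
print(newton_raphson(lambda x: x * x - 2, lambda x: 2 * x, 1.0))   # ~1.4142136
```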


Newton-Raphson Method: Example


Newton-Raphson Method: Non-Convergence

• f'(x) approaches zero, causing an exception (division by zero).

• Solution oscillates between two different values, which happens when

f(x_i) / f'(x_i) = -f(x_{i+1}) / f'(x_{i+1})
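A classic illustration of the oscillating case (this specific function is a standard textbook example, not from the slides): for f(x) = x^3 - 2x + 2 with x_0 = 0, the Newton steps bounce between 0 and 1 forever.

```python
def f(x):  return x**3 - 2 * x + 2
def df(x): return 3 * x**2 - 2

x = 0.0
for i in range(6):
    x = x - f(x) / df(x)        # Newton step
    print(i, x)                 # alternates 1.0, 0.0, 1.0, 0.0, ...
```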


Solving Non-Linear Equations

• Addresses root finding for two or more variables

• Approach: solve each equation for one of the variables and apply a Jacobi-style iteration technique (a sketch follows the example below)

Example

x^3 - 3x^2 + xy = 0

4x^2 - 4xy^2 + 3y^2 = 0

Solving for x and y,

x = (3x^2 - xy)^(1/3)

y = ((4x^2 + 3y^2) / (4x))^(1/2)

Use initial estimates for x and y to begin the iteration, using the most recent values of x and y.
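A minimal Python sketch of this iteration for the example system (the starting guess and iteration cap are illustrative assumptions; whether and where the iteration converges depends on the starting values):

```python
def iterate_system(x, y, n=20):
    """Fixed-point iteration for x^3 - 3x^2 + xy = 0 and 4x^2 - 4xy^2 + 3y^2 = 0."""
    for i in range(n):
        x = (3 * x**2 - x * y) ** (1.0 / 3.0)         # from the first equation
        y = ((4 * x**2 + 3 * y**2) / (4 * x)) ** 0.5  # from the second, using the new x
        print(i, x, y)                                # watch whether the iterates settle down
    return x, y

# Illustrative starting guess; other guesses may diverge or wander.
iterate_system(2.0, 1.0)
```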


Muller's Method

• Extension of the Secant Method; uses a quadratic approximation.

• Starts with 3 initial approximations x_0, x_1, x_2 and takes the next approximation x_3 as the intersection of the parabola through (x_0, f(x_0)), (x_1, f(x_1)), (x_2, f(x_2)) with the x-axis.

• In general, less sensitive to starting values than Newton's method.


Muller's Method (contd)

Start with

P(x) = a(x - x_2)^2 + b(x - x_2) + c

As the quadratic passes through (x_0, f(x_0)), (x_1, f(x_1)) and (x_2, f(x_2)),

f(x_0) = a(x_0 - x_2)^2 + b(x_0 - x_2) + c

f(x_1) = a(x_1 - x_2)^2 + b(x_1 - x_2) + c

f(x_2) = a·0^2 + b·0 + c = c

We can solve for a, b, c:

a = [(x_1 - x_2)(f(x_0) - f(x_2)) - (x_0 - x_2)(f(x_1) - f(x_2))] / [(x_0 - x_2)(x_1 - x_2)(x_0 - x_1)]

b = [(x_0 - x_2)^2 (f(x_1) - f(x_2)) - (x_1 - x_2)^2 (f(x_0) - f(x_2))] / [(x_0 - x_2)(x_1 - x_2)(x_0 - x_1)]

c = f(x_2)


Muller’s Method (contd.)

We can solve the quadratic P(x) for the next estimate, x_3:

P(x_3) = a(x_3 - x_2)^2 + b(x_3 - x_2) + c = 0

resulting in

x_3 - x_2 = (-b ± sqrt(b^2 - 4ac)) / (2a)

The above form can cause roundoff errors when b^2 ≫ 4ac. Instead, use

x_3 - x_2 = -2c / (b ± sqrt(b^2 - 4ac))

Muller's method chooses the sign that agrees with the sign of b, making the denominator large in magnitude and hence selecting the root closest to x_2.
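A minimal Python sketch of one way to implement this update, using complex arithmetic (cmath) so the square root is defined even when b^2 < 4ac; names and tolerances are illustrative assumptions, not from the slides:

```python
import cmath

def muller(f, x0, x1, x2, tol=1e-12, max_iter=50):
    """Muller's method: fit a parabola through three points and step to its nearest root."""
    for _ in range(max_iter):
        f0, f1, f2 = f(x0), f(x1), f(x2)
        denom = (x0 - x2) * (x1 - x2) * (x0 - x1)
        a = ((x1 - x2) * (f0 - f2) - (x0 - x2) * (f1 - f2)) / denom
        b = ((x0 - x2) ** 2 * (f1 - f2) - (x1 - x2) ** 2 * (f0 - f2)) / denom
        c = f2
        disc = cmath.sqrt(b * b - 4 * a * c)
        # pick the sign that makes |b +/- disc| largest (root closest to x2)
        d = b + disc if abs(b + disc) >= abs(b - disc) else b - disc
        x3 = x2 - 2 * c / d
        if abs(x3 - x2) < tol:
            return x3
        x0, x1, x2 = x1, x2, x3
    return x2

# Example: a real root of x^3 - 2 (result is ~1.259921+0j in complex form)
print(muller(lambda x: x**3 - 2, 0.5, 1.0, 2.0))
```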


Eigenvalue Analysis

Eigenvalues, denoted by λ, are the values for which the matrix system

[A - λI] X = 0

has a non-zero solution vector X.

⇒ A and I are n × n; I is the identity matrix.

⇒ Applications: analysis of sediment deposits in river beds, analyzing flutter in airplane wings.

Solve

|A - λI| = 0

resulting in an nth-order polynomial, to be solved for the λs.


Eigenvalue Analysis (contd.)

For a 3 × 3 system,

            | a_11 - λ    a_12        a_13     |
|A - λI| =  | a_21        a_22 - λ    a_23     |  = 0
            | a_31        a_32        a_33 - λ |

resulting in

λ^3 + b_2 λ^2 + b_1 λ + b_0 = 0
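A small Python/NumPy illustration of this route (form the characteristic polynomial, then find its roots with a polynomial root finder); the matrix is an arbitrary illustrative example:

```python
import numpy as np

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])          # illustrative symmetric 3x3 matrix

coeffs = np.poly(A)                       # coefficients of the characteristic polynomial
lams = np.roots(coeffs)                   # its roots are the eigenvalues
print(lams)
print(np.linalg.eigvals(A))               # cross-check with a direct eigenvalue routine
```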


Analysis: Root Finding Methods

• Need to understand errors

• Need to understand convergence rates

Convergence Rate

• If |e_{n+1}| approaches K |e_n|^p as n → ∞, then the method is of order p

• Typically convergence rates are valid only close to the root.

• Can study convergence rates experimentally or theoretically (see the sketch below).

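To study convergence rates experimentally, one can estimate the order p from three successive errors via p ≈ log(|e_{n+1}|/|e_n|) / log(|e_n|/|e_{n-1}|). A small Python sketch (the test problem and helper name are illustrative):

```python
import math

def estimate_order(errors):
    """Estimate the order p from successive absolute errors via
    p ~= log(e_{n+1}/e_n) / log(e_n/e_{n-1})."""
    return [math.log(e2 / e1) / math.log(e1 / e0)
            for e0, e1, e2 in zip(errors, errors[1:], errors[2:])]

# Newton iterates for f(x) = x^2 - 2 starting at x0 = 1
root, x, errs = math.sqrt(2), 1.0, []
for _ in range(4):
    x = x - (x * x - 2) / (2 * x)
    errs.append(abs(x - root))
print(estimate_order(errs))   # estimates close to 2 (quadratic convergence)
```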

Convergence Rate: Fixed Point Iteration

x_{n+1} = g(x_n)

If x = r is a solution of f(x) = 0, then f(r) = 0 and r = g(r). Thus,

x_{n+1} - r = g(x_n) - g(r) = [(g(x_n) - g(r)) / (x_n - r)] (x_n - r)

We can use the mean value theorem (assuming g(x), g'(x) are continuous):

x_{n+1} - r = g'(ξ_n) (x_n - r),   where ξ_n ∈ (x_n, r). Similarly, the error can be written as

e_{n+1} = g'(ξ_n) e_n


Convergence Rate: Fixed Point Iteration

|e_{n+1}| = |g'(ξ_n)| · |e_n|

Suppose |g'(x)| < K < 1 for all x ∈ (r - h, r + h); then if x_0 is in this interval, the iterates converge, since

|e_{n+1}| < K|e_n| < K^2|e_{n-1}| < K^3|e_{n-2}| < ... < K^{n+1}|e_0|

which implies fixed point iteration is of order 1.
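A small Python illustration of this order-1 behaviour, using the illustrative map g(x) = cos(x) (not from the slides), whose fixed point solves x - cos(x) = 0; the error ratio settles near |g'(r)| = |sin(r)| ≈ 0.67:

```python
import math

r = 0.7390851332151607        # fixed point of cos(x), i.e. root of x - cos(x) = 0
x, prev_err = 1.0, None
for n in range(12):
    x = math.cos(x)                    # fixed point iteration x_{n+1} = g(x_n)
    err = abs(x - r)
    if prev_err is not None:
        print(n, err, err / prev_err)  # ratio tends to |g'(r)| ~ 0.67 (order 1)
    prev_err = err
```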


Convergence Rate: Newton’s Method

x_{n+1} = x_n - f(x_n) / f'(x_n) = g(x_n)

◦ Has the same form as the fixed point iteration.

◦ Thus, successive iterates converge if |g'(x)| < 1. Assume a simple root at x = r; then

g'(x) = 1 - [f'(x) f'(x) - f(x) f''(x)] / [f'(x)]^2 = f(x) f''(x) / [f'(x)]^2

Thus, if

| f(x) f''(x) / [f'(x)]^2 | < 1

in an interval around the root r, the method will converge for any x_0 in that interval.


Convergence Rate: Newton’s Method

If r is a root of f(x) = 0, then r = g(r) and x_{n+1} = g(x_n), so

x_{n+1} - r = g(x_n) - g(r)

Expanding g(x_n) in a Taylor series about r, in powers of (x_n - r),

g(x_n) = g(r) + g'(r)(x_n - r) + [g''(ξ)/2](x_n - r)^2,   with ξ ∈ (x_n, r)

g'(r) = f(r) f''(r) / [f'(r)]^2 = 0,   thus

g(x_n) = g(r) + [g''(ξ)/2](x_n - r)^2


Convergence Rate: Newton’s Method

g(x_n) = g(r) + [g''(ξ)/2](x_n - r)^2

If x_n - r = e_n, we have

e_{n+1} = x_{n+1} - r = g(x_n) - g(r) = [g''(ξ)/2] e_n^2

Thus, in the limit each error is proportional to the square of the previous error, resulting in convergence of order 2, or quadratic convergence.


Convergence Rate: Bisection Method

• Method based on the intermediate value theorem: a root exists in the interval [a, b] if f(a) and f(b) are of opposite sign.

• Error at stage k is at most (b - a)/2^k (see the worked example after this list)

• Error estimates (on average) are reduced by half at each iteration:

|e_{n+1}| = 0.5 |e_n|

• Bisection is linearly convergent.

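A quick worked consequence of the (b - a)/2^k bound above (a back-of-the-envelope calculation, not from the slides): to guarantee an error of at most ε it suffices that (b - a)/2^k ≤ ε, i.e. k ≥ log_2((b - a)/ε); for b - a = 1 and ε = 10^-6 this gives k ≥ 20 bisection steps.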

Analysis: Regula Falsi, Secant Methods

• Rate of Convergence: must investigate the behaviour of

|x_k - x*| / |x_{k-1} - x*|^p

for large values of k.

• It can be shown that the errors are reduced as

e_{n+1} = [g''(ξ_1, ξ_2) / 2] e_n e_{n-1}

• Convergence rate can be shown to be about 1.62.
