Numerical methods for hypersingular integrals on the real line

Maria Carmela De Bonis$^a$ · Donatella Occorsio$^b$

Communicated by G. Milovanović

Abstract

In the present paper the authors propose two numerical methods to approximate Hadamard transforms of the type
$$H_p(f w_\beta,t)=\;=\!\!\!\!\!\int_{\mathbb{R}}\frac{f(x)}{(x-t)^{p+1}}\,w_\beta(x)\,dx,$$
where $p$ is a nonnegative integer and $w_\beta(x)=e^{-|x|^{\beta}}$, $\beta>1$, is a Freud weight. One of the procedures employed here is based on a simple tool like the “truncated” Gaussian rule, conveniently modified to remove numerical cancellation and overflow phenomena. The second approach is a process of simultaneous approximation of the functions $\{H_k(f w_\beta,t)\}_{k=0}^{p}$. This strategy can be useful in the numerical treatment of hypersingular integral equations. The methods are shown to be numerically stable and convergent, and some error estimates in suitable Zygmund-type spaces are proved. Numerical tests confirming the theoretical estimates are given. Comparisons of our methods with each other and with others available in the literature are shown.

1 Introduction

In the present paper we propose some global strategies to approximate finite part integrals (shortly FP integrals)
$$=\!\!\!\!\!\int_{\mathbb{R}}\frac{f(x)}{(x-t)^{p+1}}\,w_\beta(x)\,dx,\qquad t\in\mathbb{R},$$
where $p$ is a nonnegative integer, $w_\beta(x)=e^{-|x|^{\beta}}$, $\beta>1$, is a Freud weight and the density function $f$ is assumed to be such that the integral exists in the Hadamard sense.

Integrals of this kind are of interest, for instance, in the solution of hypersingular BIEs, which model many different physical and engineering problems (see [26] and the references therein, [8], [11], [18], [1], [31]).

The numerical treatment of FP integrals over finite ranges has been extensively treated [25], [16]; limiting ourselves to interpolatory rules based on the zeros of orthogonal polynomials, we cite among them [14], [29], [24], whenever the integrand contains Jacobi weights. For a comparison among different numerical approaches in $[-1,1]$ see also [32]. On the other hand, the use of tools introduced for finite ranges on integrals over infinite ranges can be inadequate. For instance, nonlinear transformations can produce slowly convergent results, since the density function can be significantly worsened. Moreover, the analysis of the stability of the rules and the estimates of the errors in different functional spaces can present some additional difficulties. For instance, the appealing Gaussian rule loses its high performance, as do product integration rules based on the zeros of orthogonal polynomials, and by a truncation strategy introduced in [19] (see also [7], [22]) the problem can be successfully overcome.

An additional hindrance depends on the weight $w_\beta$, since the coefficients of the three-term recurrence relation of the corresponding orthogonal polynomials are unknown in the general case. However, it is possible to calculate them with high accuracy in extended arithmetic using Wolfram Mathematica routines [2] (details are given in Section 5).

$^a$Dipartimento di Matematica ed Informatica, Università degli Studi della Basilicata, Viale dell’Ateneo Lucano 10, 85100 Potenza, Italy, GNCS member, email mariacarmela.debonis@unibas.it.

$^b$Dipartimento di Matematica ed Informatica, Università degli Studi della Basilicata, Viale dell’Ateneo Lucano 10, 85100 Potenza, Italy, GNCS member, email donatella.occorsio@unibas.it.


Following a very standard way, we start from the decomposition
$$
\begin{aligned}
H_p(f w_\beta,t):={}&=\!\!\!\!\!\int_{\mathbb{R}}\frac{f(x)}{(x-t)^{p+1}}\,w_\beta(x)\,dx\\
={}&\int_{\mathbb{R}}\frac{f(x)-\sum_{k=0}^{p}\frac{f^{(k)}(t)}{k!}(x-t)^{k}}{(x-t)^{p+1}}\,w_\beta(x)\,dx+\sum_{k=0}^{p}\frac{f^{(k)}(t)}{k!}\,=\!\!\!\!\!\int_{\mathbb{R}}\frac{w_\beta(x)}{(x-t)^{p+1-k}}\,dx\\
=:{}&F_p(f,t)+\sum_{k=0}^{p}\frac{f^{(k)}(t)}{k!}\,H_{p-k}(w_\beta,t)\\
={}&\frac{1}{p!}\,\frac{d^{p}}{dt^{p}}F_0(f,t)+\sum_{k=0}^{p}\frac{f^{(k)}(t)}{k!}\,H_{p-k}(w_\beta,t),
\end{aligned}
\qquad (1)
$$

where
$$F_0(f,t)=\int_{\mathbb{R}}\frac{f(x)-f(t)}{x-t}\,w_\beta(x)\,dx. \qquad (2)$$
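For instance, specializing the decomposition (1) to the simplest hypersingular case $p=1$ gives
$$H_1(f w_\beta,t)=\int_{\mathbb{R}}\frac{f(x)-f(t)-f'(t)(x-t)}{(x-t)^{2}}\,w_\beta(x)\,dx+f(t)\,H_1(w_\beta,t)+f'(t)\,H_0(w_\beta,t)=\frac{d}{dt}F_0(f,t)+f(t)\,H_1(w_\beta,t)+f'(t)\,H_0(w_\beta,t).$$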

We focus our attention on the first integral on the right-hand side of (1), since the remaining ones can be efficiently computed by standard routines for many choices of $\beta$ (see Section 5). In the first method, for any fixed $t$, we select a suitable subsequence of Gaussian rules based on Freud zeros, which leads to a stable and fast convergent procedure. In this way, the severe numerical cancellation arising when $t$ is “close” to a Gaussian node is avoided, without changes of variables which double the samples of $f$ and depend on the singularity $t$ (see for instance [15], [25]). In addition, since we use “truncated” Gaussian rules [20], the number of function evaluations is drastically reduced and possible overflow phenomena are overcome. To complete the framework, for $t$ “large” (large in the sense we specify later) and $f$ “sufficiently” smooth, we propose a shrewd use of the Gauss-Freud rule.

Since in some contexts the approximation of $\{H_k(f w_\beta,t)\}_{k=0}^{p}$ for a “large” number of $t$ and/or the uniform convergence of the approximation process are required, following an idea in [5] (see also [28], [21], [24], [4]), we approximate $\{F_0^{(k)}(f)\}_{k=0}^{p}$ by means of the successive derivatives of a suitable Lagrange polynomial interpolating $F_0(f)$ at Freud zeros. Since in the general case the samples of $F_0(f)$ at the interpolation knots cannot be computed exactly, we approximate them by a “truncated” Gauss-Freud rule (see [20]). For a correct error estimate in weighted uniform spaces, we determine the class of $F_0(f)$ depending on the Zygmund-type space $f$ belongs to.

Moreover, whenever the derivatives $\{f^{(k)}(t)\}_{k=1}^{p}$ are not available, this method can be performed avoiding the differentiation of the density function $f$. Indeed, in this case, we propose to approximate $f^{(k)}(t)$ by the derivatives of a suitable Lagrange polynomial interpolating $f$, without requiring additional samples of the density function $f$.

The paper is organized as follows. In Section 2 some notations and basic results needed to introduce the main results are collected. In Sections 3 and 4 we describe the numerical methods proposed to approximate $H_p(f w_\beta,t)$, getting some results about the stability and the rate of convergence. Section 5 contains some computational details useful in the implementation process, and in Section 6 we present some numerical tests, comparing the two procedures with each other and with other methods available on the same topic. As we will show, the numerical results agree with the theoretical analysis and confirm the efficiency of the proposed procedures. Finally, in Section 7 we give the proofs of the main results.

2 Notations and preliminary results

In the sequel $C$ will denote any positive constant, which can be different in different formulas. Moreover $C\ne C(a,b,\ldots)$ will be used to say that the constant $C$ is independent of $a,b,\ldots$. The notation $A\sim B$, where $A$ and $B$ are positive quantities depending on some parameters, will be used if and only if $(A/B)^{\pm1}\le C$, with $C$ a positive constant independent of the above parameters.

Throughout the paper $\theta$ will denote a fixed real number belonging to the interval $(0,1)$, which can be different in different formulas. Moreover, $\mathbb{P}_m$ is the space of all algebraic polynomials of degree at most $m$.

Let $\{p_m(w_\beta)\}_m$ be the sequence of orthonormal polynomials w.r.t. the weight $w_\beta(x)=e^{-|x|^{\beta}}$, $\beta>1$, with positive leading coefficients, i.e.
$$p_m(w_\beta,x)=\gamma_m(w_\beta)\,x^{m}+\text{terms of lower degree},\qquad \gamma_m(w_\beta)>0.$$

Denote by $x_{m,k}$, $k=1,\ldots,\lfloor\frac m2\rfloor$, the positive zeros of $p_m(w_\beta)$ and by $x_{m,-k}=-x_{m,k}$ the negative ones, setting $x_{m,0}=0$ for $m$ odd. The zeros of $p_m(w_\beta)$ satisfy
$$-a_m<x_{m,-\lfloor\frac m2\rfloor}<\cdots<x_{m,-1}<x_{m,1}<\cdots<x_{m,\lfloor\frac m2\rfloor}<a_m,$$
where $a_m:=a_m(w_\beta)$ is the Mhaskar-Rahmanov-Saff number w.r.t. $w_\beta$ (in the sequel M-R-S number).

2.1 Function spaces and best approximation estimates

Setting $u(x)=e^{-\frac{|x|^{\beta}}{2}}$, $\beta>1$, we define the function space
$$C_u=\left\{f\,:\,fu\in C^{0}(\mathbb{R}),\ \lim_{x\to\pm\infty}f(x)u(x)=0\right\},$$

equipped with the norm
$$\|f\|_{C_u}:=\|fu\|_{\infty}:=\sup_{x\in\mathbb{R}}|f(x)|u(x).$$
In the sequel we will write $\|fu\|_{E}=\sup_{x\in E}|f(x)|u(x)$ for any $E\subset\mathbb{R}$. For smoother functions, we denote by $W_s(u)$, $s\in\mathbb{Z}^{+}$, the Sobolev space

$$W_s(u)=\left\{f\in C_u\,:\,f^{(s-1)}\in AC(\mathbb{R}),\ \|f^{(s)}u\|_{\infty}<\infty\right\},$$
where $AC(\mathbb{R})$ denotes the set of the functions which are absolutely continuous on every closed subset of $\mathbb{R}$, equipped with the norm
$$\|f\|_{W_s(u)}=\|fu\|_{\infty}+\|f^{(s)}u\|_{\infty}.$$

Denoting by
$$E_m(f)_u=\inf_{P\in\mathbb{P}_m}\|(f-P)u\|_{\infty}$$
the error of best approximation in $C_u$, the following weaker version of the Jackson theorem holds:
$$E_m(f)_u\le C\int_{0}^{\frac{a_m}{m}}\frac{\Omega_r(f,t)_u}{t}\,dt,\qquad C\ne C(m,f), \qquad (3)$$
where $a_m:=a_m(u)$ denotes the M-R-S number w.r.t. $u$ and
$$\Omega_r(f,t)_u=\sup_{0<h\le t}\|(\Delta_h^{r}f)\,u\|_{I_{rh}}$$
is the main part of the $r$-th modulus of continuity (see [9]), with $I_{rh}=\left[-Arh^{\frac{1}{\beta-1}},\,Arh^{\frac{1}{\beta-1}}\right]$, $A>0$ a constant, and
$$\Delta_h f(x)=f\left(x+\frac h2\right)-f\left(x-\frac h2\right),\qquad \Delta_h^{r}=\Delta_h(\Delta_h^{r-1}).$$

Since $a_m(u)\sim a_m(w_\beta)\sim m^{\frac{1}{\beta}}$, throughout the paper we employ the same symbol $a_m$ to denote both of them.

By means of the main part of the $r$-th modulus of continuity, the following Zygmund space $Z_s(u)$, with $s\in\mathbb{R}^{+}$, can be defined (see [27]):
$$Z_s(u)=\left\{f\in C_u\,:\,\sup_{t>0}\frac{\Omega_r(f,t)_u}{t^{s}}<\infty,\ r>s\right\},$$
equipped with the norm
$$\|f\|_{Z_s(u)}=\|fu\|_{\infty}+\sup_{t>0}\frac{\Omega_r(f,t)_u}{t^{s}},\qquad r>s.$$

By (3) it is easy to deduce
$$E_m(f)_u\le C\left(\frac{a_m}{m}\right)^{s}\|f\|_{W_s(u)},\qquad \forall f\in W_s(u),\ s\in\mathbb{Z}^{+},\ C\ne C(m,f), \qquad (4)$$
$$E_m(f)_u\le C\left(\frac{a_m}{m}\right)^{s}\|f\|_{Z_s(u)},\qquad \forall f\in Z_s(u),\ s\in\mathbb{R}^{+},\ C\ne C(m,f). \qquad (5)$$

2.2 Truncated Lagrange interpolation at Freud zeros

For any fixed $0<\theta<1$, let
$$x_{m,j}:=x_{m,j(m)}=\min\left\{x_{m,k}\,:\,x_{m,k}\ge\theta a_m,\ k=1,2,\ldots,\left\lfloor\frac m2\right\rfloor\right\}. \qquad (6)$$

Let $L_{m+2}(w_\beta,g)$ be the Lagrange polynomial interpolating a given function $g$ at the zeros of $p_m(w_\beta,x)(a_m^{2}-x^{2})$ and denote by $\chi_j$ the characteristic function of the interval $(-x_{m,j},x_{m,j})$. The truncated Lagrange polynomial $L^{*}_{m+2}(w_\beta,g):=L_{m+2}(w_\beta,g\chi_j)$, introduced in [20], can take the following expression:
$$L^{*}_{m+2}(w_\beta,g,x)=\sum_{k=-j}^{j}l_{m,k}(x)\,\frac{a_m^{2}-x^{2}}{a_m^{2}-x_{m,k}^{2}}\,g(x_{m,k})=:\sum_{k=-j}^{j}\ell_{m,k}(x)\,g(x_{m,k}), \qquad (7)$$
where $l_{m,k}(x)=\dfrac{p_m(w_\beta,x)}{p_m'(w_\beta,x_{m,k})\,(x-x_{m,k})}$. The polynomial $L^{*}_{m+2}(w_\beta,g)$ belongs to $\mathbb{P}^{*}_{m+1}\subset\mathbb{P}_{m+1}$, with
$$\mathbb{P}^{*}_{m+1}=\left\{p\in\mathbb{P}_{m+1}\,:\,p(x_{m,k})=p(\pm a_m)=0,\ |k|>j\right\},$$
and the operator $L^{*}_{m+2}(w_\beta)$ projects $C_u$ onto $\mathbb{P}^{*}_{m+1}$.
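As a minimal illustration of how (7) can be evaluated, assume the zeros of $p_m(w_\beta)$ and the M-R-S number $a_m$ are already available (e.g. produced by the package mentioned in Section 5). The Python sketch below uses the product form of $l_{m,k}$ instead of the ratio $p_m(w_\beta,x)/p_m'(w_\beta,x_{m,k})$, which is adequate only for moderate $m$; all names are illustrative.

```python
import numpy as np

def truncated_lagrange(g_vals, nodes, a_m, x_eval, keep):
    """Evaluate the truncated Lagrange polynomial (7) at the points x_eval.

    nodes  : all zeros of p_m(w_beta) (1-d array),
    g_vals : samples of g at `nodes`,
    a_m    : M-R-S number,
    keep   : boolean mask selecting the retained nodes |x_{m,k}| <= x_{m,j}.
    Only the retained samples enter the sum; l_{m,k} is built from all zeros.
    """
    x_eval = np.atleast_1d(np.asarray(x_eval, dtype=float))
    result = np.zeros_like(x_eval)
    for k in np.flatnonzero(keep):
        xk = nodes[k]
        others = np.delete(nodes, k)
        l_k = np.prod((x_eval[:, None] - others) / (xk - others), axis=1)
        ell_k = l_k * (a_m**2 - x_eval**2) / (a_m**2 - xk**2)  # the ell_{m,k} of (7)
        result += ell_k * g_vals[k]
    return result
```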

About the simultaneous approximation of a function and its successive derivatives, we state the following result.

Theorem 2.1. If $f\in Z_{p+\lambda}(u)$, $p\in\mathbb{N}$, $0<\lambda<1$, we have
$$\|(f-L^{*}_{m+2}(w_\beta,f))^{(k)}u\|_{\infty}\le C\,\|f\|_{Z_{p+\lambda}(u)}\,\log m\left(\frac{a_m}{m}\right)^{p+\lambda-k},\qquad k=1,\ldots,p,\quad C\ne C(m,f).$$
In particular, if $f\in W_{p+r}(u)$, $p\in\mathbb{N}$, $r\in\mathbb{Z}^{+}$, we get
$$\|(f-L^{*}_{m+2}(w_\beta,f))^{(k)}u\|_{\infty}\le C\,\|f\|_{W_{p+r}(u)}\,\log m\left(\frac{a_m}{m}\right)^{p+r-k},\qquad k=1,\ldots,p,\quad C\ne C(m,f). \qquad (8)$$


2.3 Truncated Gauss-Freud rule

In [20] (see also [7], [23]), a truncated rule of the following type was introduced:
$$\int_{\mathbb{R}}f(x)\,w_\beta(x)\,dx=\sum_{|k|\le j}\lambda_{m,k}\,f(x_{m,k})+\rho_m(f), \qquad (9)$$
where $\lambda_{m,k}$, $k=-\lfloor\frac m2\rfloor,\ldots,\lfloor\frac m2\rfloor$, are the Christoffel numbers w.r.t. the weight $w_\beta$ and $\rho_m(f)$ is the remainder term. We recall the following result, useful in the sequel and proved in [20] in a more general context.

Theorem 2.2. For any $f\in C_u$,
$$|\rho_m(f)|\le C\left(E_M(f)_u+e^{-Am}\|fu\|_{\infty}\right),\qquad C\ne C(m,f),\ A\ne A(m,f), \qquad (10)$$
where $M=cm$, $0<c<1$ fixed.
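To fix ideas, a hedged Python sketch of the truncated rule (9) follows; the Gaussian nodes and weights for $w_\beta$ and the M-R-S number $a_m$ are assumed to be precomputed (in the paper they are obtained in extended arithmetic, see Section 5), and all names are illustrative.

```python
import numpy as np

def truncated_gauss_freud(f, nodes, weights, a_m, theta=0.7):
    """Truncated Gauss-Freud rule (9): only the nodes with |x_{m,k}| <= x_{m,j},
    x_{m,j} being the smallest positive node >= theta*a_m as in (6), are kept."""
    nodes = np.asarray(nodes, dtype=float)
    weights = np.asarray(weights, dtype=float)
    pos = np.sort(nodes[nodes > 0])
    beyond = pos[pos >= theta * a_m]
    x_mj = beyond[0] if beyond.size else pos[-1]   # x_{m,j} of (6)
    keep = np.abs(nodes) <= x_mj
    return np.sum(weights[keep] * f(nodes[keep]))
```

For a quick experiment with $\beta=2$ one can take the Gauss-Hermite nodes and weights (numpy.polynomial.hermite.hermgauss) and an $a_m$ of the order of the largest node.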

3 The first method

Starting from the decomposition (1), let $t$ be fixed in the interval $[-x_{m,j},x_{m,j}]$, where the index $j$ has been defined in (6). Since there exist efficient routines to compute $H_{p-k}(w_\beta,t)$ for some choices of $\beta$ (see Section 5), we focus our attention on the term $F_p(f,t)$, assuming $f$ such that the integral exists as an improper Riemann integral. By using the truncated Gauss-Freud rule in (9) we have

$$F_p(f,t)=\sum_{|k|\le j}\lambda_{m,k}\,\frac{f(x_{m,k})-\sum_{i=0}^{p}\frac{f^{(i)}(t)}{i!}(x_{m,k}-t)^{i}}{(x_{m,k}-t)^{p+1}}+e_{p,m}(f,t)=:F_{p,m}(f,t)+e_{p,m}(f,t).$$

However, as observed also in [25], formulae of this kind are very appealing since their coefficients have a very simple form, but they can present severe numerical cancellation when $t$ is “close” to one of the Gaussian nodes. Thus, to overcome this problem we propose to select, for any fixed $t$, a proper subsequence of the truncated Gauss-Freud sequence $\{F_{p,m}(f,t)\}_m$, in such a way that the distances $|x_{m,k}-t|$, $k=-j,\ldots,j$, are always large enough.

For $m$ sufficiently large (say $m\ge m_0\in\mathbb{N}$), there exists an index $d$ s.t. $x_{m,d}\le t<x_{m,d+1}$. Since the zeros of $p_m(w_\beta)$ interlace those of $p_{m+1}(w_\beta)$, two cases are possible:
$$(1)\ \ x_{m,d}\le t<x_{m+1,d+1},\qquad\qquad (2)\ \ x_{m+1,d+1}\le t<x_{m,d+1}.$$
In the case (1), if
$$t\in\left[x_{m,d},\,\frac{x_{m+1,d+1}+x_{m,d}}{2}\right)$$
then we approximate $F_p(f,t)$ with $F_{p,m+1}(f,t)$ and we set $m^{*}=m+1$, while if
$$t\in\left[\frac{x_{m+1,d+1}+x_{m,d}}{2},\,x_{m+1,d+1}\right)$$
then we approximate $F_p(f,t)$ with $F_{p,m}(f,t)$ and we set $m^{*}=m$. Similarly we proceed in the case (2).
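A minimal sketch of this selection step, assuming the zeros of $p_m(w_\beta)$ and $p_{m+1}(w_\beta)$ are available as arrays: since the midpoint test above amounts to choosing the rule whose nearest node is farther from $t$, the choice of $m^{*}$ can be coded as follows (illustrative names).

```python
import numpy as np

def select_m_star(t, nodes_m, nodes_m1):
    """Return the label and nodes of the rule (m or m+1) to be used at t,
    i.e. the one that does not own the node closest to t
    (equivalent to the midpoint test described above)."""
    d_m = np.min(np.abs(np.asarray(nodes_m, dtype=float) - t))
    d_m1 = np.min(np.abs(np.asarray(nodes_m1, dtype=float) - t))
    # if t is closer to a node of the m-th rule, switch to the (m+1)-th one
    return ("m+1", nodes_m1) if d_m <= d_m1 else ("m", nodes_m)
```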

In this way we have
$$F_p(f,t):=F_{p,m^{*}}(f,t)+e_{p,m^{*}}(f,t),$$
where $e_{p,m^{*}}(f,t)$ is the remainder term, and therefore
$$H_p(f w_\beta,t):=F_{p,m^{*}}(f,t)+\sum_{k=0}^{p}\frac{f^{(k)}(t)}{k!}\,H_{p-k}(w_\beta,t)+e_{p,m^{*}}(f,t). \qquad (11)$$
In order to highlight the possible instability of the ordinary truncated Gaussian rule, we propose an example.

Consider the integral
$$H_1(f w_\beta,t)=\;=\!\!\!\!\!\int_{\mathbb{R}}\frac{\sin(x+5)}{(x-t)^{2}}\,e^{-x^{2}}\,dx.$$

In Figure 1 we compare the absolute errors obtained by implementing the ordinary truncated Gauss-Freud rule $\{F_{p,m}(f,t)\}_m$ (red curve) and the modified Gauss-Freud rule $\{F_{p,m^{*}}(f,t)\}_m$ (blue curve), for increasing values of $m$.

As the graph shows, the highest errors are attained by using the ordinary truncated Gaussian rule and, for the same $t$, also for different values of $m$. Since the function $f$ is smooth, this bad behavior depends on the closeness of $t$ to some of the Gaussian abscissae. The shortcoming is successfully overcome by using the method in (11).

The next theorem deals with the stability and the convergence of the proposed quadrature rule.

Theorem 3.1. Let $p\in\mathbb{N}_0$, $0<\lambda<1$ and $0<\theta<1$ be fixed. For any fixed $t\in(-\theta a_m,\theta a_m)$, if $f\in Z_{p+\lambda}(u)$ then
$$u(t)\,|F_{p,m^{*}}(f,t)|\le C\,\|f\|_{Z_{p+\lambda}(u)}\,\log m \qquad (12)$$
and
$$u(t)\,|F_p(f,t)-F_{p,m^{*}}(f,t)|\le C\,\|f\|_{Z_{p+\lambda}(u)}\,\log m\left(\frac{a_m}{m}\right)^{\lambda}, \qquad (13)$$
where $0<C\ne C(m,f,t)$.


Figure 1: Comparison between the absolute errors of the rules $F_{p,m}(f,t)$ and $F_{p,m^{*}}(f,t)$.

For smoother functions the following result holds.

Corollary 3.2. Let $p\in\mathbb{N}_0$, $r\in\mathbb{N}$, $0<\lambda<1$ and $0<\theta<1$ be fixed. For any fixed $t\in(-\theta a_m,\theta a_m)$, if $f\in Z_{p+r+\lambda}(u)$ then
$$u(t)\,|F_p(f,t)-F_{p,m^{*}}(f,t)|\le C\,\|f\|_{Z_{p+r+\lambda}(u)}\,\log m\left(\frac{a_m}{m}\right)^{r+\lambda},\qquad 0<C\ne C(m,f,t), \qquad (14)$$
and if $f\in W_{p+r}(u)$ then
$$u(t)\,|F_p(f,t)-F_{p,m^{*}}(f,t)|\le C\,\|f\|_{W_{p+r}(u)}\,\log m\left(\frac{a_m}{m}\right)^{r},\qquad 0<C\ne C(m,f,t).$$

3.1 The case $t$ “large”

The method in (11) can be used when $t$ stays between two Gaussian nodes, i.e. for $|t|<\theta a_m$, or equivalently for $m>(|t|\theta)^{\beta}$. Thus, the “larger” $t$ is, the “larger” $m$ will be. For instance, for $t=100$, $\beta=2$ and $\theta=\frac12$, it should be chosen $m>2500$. In such cases the computation of the weights and nodes of the Gaussian rule is too expensive, and sometimes unfeasible. For all these reasons, we propose here an alternative procedure, which is essentially a shrewd application of the Gaussian rule again, in the sense we now specify. For any fixed $t$, let $m$ be such that
$$|t|>x_{m,j+1},$$
where $j$ is defined in (6). Setting $G_t(x)=\dfrac{f(x)}{(x-t)^{p+1}}$, we approximate the integral by the $m$-th truncated Gauss-Freud rule, i.e.
$$H_p(f w_\beta,t)=\sum_{|i|\le j}G_t(x_{m,i})\,\lambda_{m,i}+R_m(G_t). \qquad (15)$$

Since $f$ and $G_t$, for $|t|>x_{m,j+1}$, have the same smoothness, by (10) and (4), if $f\in W_r(u)$, $r\in\mathbb{N}$, we get
$$|R_m(G_t)|\le C\left(\|G_t^{(r)}u\|_{\infty}\left(\frac{a_m}{m}\right)^{r}+e^{-Am}\|G_t u\|_{\infty}\right), \qquad (16)$$
where $0<C\ne C(m,G_t)$ and $0<A\ne A(m,G_t)$.

We observe that this procedure can be successfully applied when $f$ is “smooth” and $t$ is “large”, say $|t|>x_{m,j+1}$, where $j$ is defined in (6). Moreover, the smoother the function $f$, the smaller the value of $m$ needed to achieve the desired accuracy. Of course, the error bound in (16) holds for a fixed $m$, and therefore the limit on $m$ of $R_m(G_t)$ has no meaning.
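For completeness, a sketch of (15) in the same style as above (all quantities assumed precomputed; x_mj denotes the truncation abscissa $x_{m,j}$ of (6), and the caller is supposed to have checked $|t|>x_{m,j+1}$):

```python
import numpy as np

def hp_large_t(f, t, p, nodes, weights, x_mj):
    """Rule (15) for |t| > x_{m,j+1}: the truncated Gauss-Freud rule applied
    directly to G_t(x) = f(x)/(x-t)^(p+1)."""
    nodes = np.asarray(nodes, dtype=float)
    weights = np.asarray(weights, dtype=float)
    keep = np.abs(nodes) <= x_mj
    g_t = f(nodes[keep]) / (nodes[keep] - t) ** (p + 1)
    return np.sum(weights[keep] * g_t)
```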

In conclusion, according to the position of $t$ we use (11) or (15), i.e.
$$H_p(f w_\beta,t)=\Psi_{p,m}(f,t)+\tau_{p,m}(f,t), \qquad (17)$$
where
$$\Psi_{p,m}(f,t)=\begin{cases}\displaystyle F_{p,m^{*}}(f,t)+\sum_{k=0}^{p}\frac{f^{(k)}(t)}{k!}\,H_{p-k}(w_\beta,t), & |t|\le\theta a_m,\\[2ex]
\displaystyle\sum_{|i|\le j}G_t(x_{m,i})\,\lambda_{m,i}+\sum_{k=0}^{p}\frac{f^{(k)}(t)}{k!}\,H_{p-k}(w_\beta,t), & |t|>x_{m,j+1},\end{cases}$$
approximating $H_p(f w_\beta,t)$ for a “wide” range of $t$.


4 The second method

Let $F_0(f)$ be defined in (2). For any fixed $\theta\in(0,1)$, denoting by $\{x_{m+1,k}\}_{k=-\lfloor\frac{m+1}{2}\rfloor}^{\lfloor\frac{m+1}{2}\rfloor}$ the zeros of $p_{m+1}(w_\beta)$, let us define the index $q:=q(m+1)$ by
$$x_{m+1,q}=\min_{k=1,2,\ldots,\lfloor\frac{m+1}{2}\rfloor}\left\{x_{m+1,k}\,:\,x_{m+1,k}\ge\theta a_{m+1}\right\},$$
and set
$$F_{0,m+1}(f,t)=\sum_{i=-q}^{q}\lambda_{m+1,i}\,\frac{f(x_{m+1,i})-f(t)}{x_{m+1,i}-t}. \qquad (18)$$
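A hedged Python sketch of (18), with the nodes and weights of the $(m+1)$-th Gauss-Freud rule assumed given and the truncation index $q$ determined as above (illustrative names):

```python
import numpy as np

def F0_m_plus_1(f, t, nodes_m1, weights_m1, a_m1, theta=0.7):
    """Truncated rule (18) for F_0(f,t): Gauss-Freud sum of the difference
    quotients (f(x_{m+1,i}) - f(t))/(x_{m+1,i} - t) over |x_{m+1,i}| <= x_{m+1,q}."""
    x = np.asarray(nodes_m1, dtype=float)
    w = np.asarray(weights_m1, dtype=float)
    pos = np.sort(x[x > 0])
    beyond = pos[pos >= theta * a_m1]
    x_q = beyond[0] if beyond.size else pos[-1]    # x_{m+1,q}
    keep = np.abs(x) <= x_q
    return np.sum(w[keep] * (f(x[keep]) - f(t)) / (x[keep] - t))
```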

By approximating $F_0(f)$ by the Lagrange polynomial $L^{*}_{m+2}(w_\beta,F_0(f))$ defined in (7), in view of (1) we have
$$F_p(f,t)=\frac{1}{p!}\,L^{*(p)}_{m+2}(w_\beta,F_0(f),t)+\eta_{p,m}(f,t).$$
However, since the samples $F_0(f,x_{m,k})$ are integrals containing the Freud weight $w_\beta$, we use formula (18) to evaluate them, i.e.
$$F_p(f,t)=\frac{1}{p!}\,L^{*(p)}_{m+2}(w_\beta,F_{0,m+1}(f),t)+\rho_{p,m}(f,t).$$

Therefore we set
$$H_p(f w_\beta,t)=\frac{1}{p!}\,L^{*(p)}_{m+2}(w_\beta,F_{0,m+1}(f),t)+\sum_{k=0}^{p}\frac{f^{(k)}(t)}{k!}\,H_{p-k}(w_\beta,t)+\rho_{p,m}(f,t)=:\Phi_{p,m}(f,t)+\rho_{p,m}(f,t), \qquad (19)$$
where $\rho_{p,m}(f,t)$ is the remainder term.

We remark that, in view of [27, Proposition 1], Gaussian knots and interpolation nodes are sufficiently far from each other. This good “distance” prevents numerical cancellation when computing $x_{m+1,i}-x_{m,k}$. About the stability and the convergence of the procedure, we are able to prove the following results.

Theorem 4.1. Let $p\in\mathbb{N}_0$ and $0<\lambda<1$. Then, for any function $f\in Z_{p+\lambda}(u)$,
$$\|\rho_{p,m}(f)\,u\|_{\infty}\le C\,\|f\|_{Z_{p+\lambda}(u)}\,\log^{2}m,\qquad 0<C\ne C(m,f). \qquad (20)$$
Moreover, if $f\in Z_{p+r+\lambda}(u)$, $r\in\mathbb{N}$, then
$$\|\rho_{p,m}(f)\,u\|_{\infty}\le C\,\|f\|_{Z_{p+r+\lambda}(u)}\,\log^{2}m\left(\frac{a_m}{m}\right)^{r},\qquad 0<C\ne C(m,f). \qquad (21)$$

In particular, for $f$ belonging to Sobolev spaces, we get

Corollary 4.2. Let $p\in\mathbb{N}_0$ and $r\in\mathbb{N}$. Then, for any function $f\in W_{p+1}(u)$,
$$\|\rho_{p,m}(f)\,u\|_{\infty}\le C\,\|f\|_{W_{p+1}(u)}\,\log^{2}m,\qquad 0<C\ne C(m,f),$$
and if $f\in W_{p+r+1}(u)$ then
$$\|\rho_{p,m}(f)\,u\|_{\infty}\le C\,\|f\|_{W_{p+r+1}(u)}\,\log^{2}m\left(\frac{a_m}{m}\right)^{r},\qquad 0<C\ne C(m,f).$$

As we have announced, it is possible to implement the method without computing the derivatives $\{f^{(k)}(t)\}_{k=1}^{p}$. Indeed, we can approximate $f^{(k)}(t)$ with $L^{*(k)}_{m+2}(w_\beta,f,t)$ for $k=0,1,2,\ldots,p$, where $L^{*}_{m+2}(w_\beta,f)$ is the truncated Lagrange polynomial defined in (7). In this way, reusing the same samples involved in the evaluation of $\Phi_{p,m}(f,t)$ and taking into account that $H_{p-k}(w_\beta,t)$ can be computed with the desired accuracy, we get

$$H_p(f w_\beta,t)=\Phi_{p,m}(f,t)+\sum_{k=0}^{p}\binom{p}{k}L^{*(k)}_{m+2}(w_\beta,f,t)\,H_{p-k}(w_\beta,t)+T_{p,m}(f,t), \qquad (22)$$
where
$$T_{p,m}(f,t)=\rho_{p,m}(f,t)+\sum_{k=0}^{p}\binom{p}{k}\tau_{k,m}(f,t)\,H_{p-k}(w_\beta,t),$$
with
$$\tau_{k,m}(f,t)=\left(f(t)-L^{*}_{m+2}(w_\beta,f,t)\right)^{(k)}.$$
The next theorem deals with the pointwise estimate of the error $T_{p,m}(f,t)$.

Theorem 4.3. Let $p\ge0$. Then, for any function $f\in Z_{p+r+\lambda}(u)$ with $0<\lambda<1$, $r\ge1$, we have
$$|T_{p,m}(f,t)\,u(t)|\le\frac{C}{|t|^{p}}\,\|f\|_{Z_{p+r+\lambda}(u)}\left(\frac{a_m}{m}\right)^{r}\log^{2}m,\qquad 0<C\ne C(m,f).$$
Remark 1. For $|t|\ge\mathcal{C}$, with $\mathcal{C}$ an arbitrary positive constant, we get
$$|T_{p,m}(f,t)\,u(t)|\le C\,\|f\|_{Z_{p+r+\lambda}(u)}\left(\frac{a_m}{m}\right)^{r}\log^{2}m,\qquad 0<C\ne C(m,f).$$


5 Computational aspects

First of all, we point out that since $w_\beta(x)=e^{-|x|^{\beta}}$, $\beta>1$, is not a classical weight, the coefficients of the three-term recurrence relation of the corresponding orthonormal polynomials are unknown in the general case. However, it is possible to calculate them, for instance, by the Chebyshev algorithm, which requires the moments
$$\mu_k=\int_{\mathbb{R}}x^{k}\,w_\beta(x)\,dx,\qquad k=0,1,\ldots.$$

Even if the algorithm suffers from ill conditioning, we have successfully implemented it in the Wolfram Mathematica language, in extended arithmetic with high accuracy, by using the functions aChebyshevAlgorithm and aGaussianNodesWeights of the software package OrthogonalPolynomials by A.S. Cvetković and G.V. Milovanović (see [2]).
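As a reference, the classical Chebyshev algorithm can be sketched as follows (Python, double precision, purely for illustration: as said above, the map from moments to recurrence coefficients is badly conditioned, which is why the actual computation is carried out in extended arithmetic with the OrthogonalPolynomials package). For $w_\beta$ the odd moments vanish and the even ones reduce to $\mu_k=\frac{2}{\beta}\,\Gamma\!\left(\frac{k+1}{\beta}\right)$.

```python
import numpy as np
from math import gamma

def chebyshev_algorithm(mu):
    """Classical Chebyshev algorithm: moments mu[0..2n-1] -> recurrence
    coefficients (alpha, beta) of the monic orthogonal polynomials."""
    mu = np.asarray(mu, dtype=float)
    n = len(mu) // 2
    alpha, beta = np.zeros(n), np.zeros(n)
    sigma_prev = np.zeros(2 * n)              # sigma_{-1,l} = 0
    sigma_cur = mu.copy()                     # sigma_{0,l}  = mu_l
    alpha[0], beta[0] = mu[1] / mu[0], mu[0]
    for k in range(1, n):
        sigma_new = np.zeros(2 * n)
        for l in range(k, 2 * n - k):
            sigma_new[l] = (sigma_cur[l + 1] - alpha[k - 1] * sigma_cur[l]
                            - beta[k - 1] * sigma_prev[l])
        alpha[k] = sigma_new[k + 1] / sigma_new[k] - sigma_cur[k] / sigma_cur[k - 1]
        beta[k] = sigma_new[k] / sigma_cur[k - 1]
        sigma_prev, sigma_cur = sigma_cur, sigma_new
    return alpha, beta

# moments of w_beta(x) = exp(-|x|^beta): odd ones vanish, even ones are 2*Gamma((k+1)/beta)/beta
b = 3.0                                        # example value of the Freud exponent beta
mu = [0.0 if k % 2 else 2.0 / b * gamma((k + 1) / b) for k in range(2 * 10)]
alpha, beta_coef = chebyshev_algorithm(mu)
```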

Now we give some details about the computation of the Hadamard transforms of the weight $w_\beta(x)=e^{-|x|^{\beta}}$. In order to use the relation
$$H_{p-k}(w_\beta,t):=\frac{1}{(p-k)!}\,\frac{d^{p-k}}{dt^{p-k}}\int_{\mathbb{R}}\frac{w_\beta(x)}{x-t}\,dx,\qquad k=0,1,\ldots,p, \qquad (23)$$
we first deduce the expression of $H_0(w_\beta,t)$ for some choices of the parameter $\beta>1$. Since $H_0(w_\beta,t)=-H_0(w_\beta,-t)$, we assume $t>0$. Then, we have
$$H_0(w_\beta,t)=\int_{0}^{+\infty}\frac{e^{-x^{\beta}}}{x-t}\,dx-\int_{0}^{+\infty}\frac{e^{-x^{\beta}}}{x+t}\,dx=\frac{1}{\beta}\left\{\int_{0}^{+\infty}\frac{e^{-x}x^{\frac{1}{\beta}-1}}{x^{\frac{1}{\beta}}-t}\,dx-\int_{0}^{+\infty}\frac{e^{-x}x^{\frac{1}{\beta}-1}}{x^{\frac{1}{\beta}}+t}\,dx\right\}.$$
In the case $\beta=p/q$, $p,q\in\mathbb{N}$, $p>q$, we have for $p$ even

$$H_0(w_\beta,t)=\frac{2q}{p}\sum_{j=1}^{\frac p2}t^{2j-1}\int_{0}^{+\infty}\frac{x^{\frac qp(p-2j+1)-1}}{x^{q}-t^{p}}\,e^{-x}\,dx \qquad (24)$$
and for $p$ odd
$$H_0(w_\beta,t)=\frac{q}{p}\sum_{j=1}^{p}t^{j-1}\left\{\int_{0}^{+\infty}\frac{x^{\frac qp(p-j+1)-1}}{x^{q}-t^{p}}\,e^{-x}\,dx+(-1)^{j}\int_{0}^{+\infty}\frac{x^{\frac qp(p-j+1)-1}}{x^{q}+t^{p}}\,e^{-x}\,dx\right\}. \qquad (25)$$

Integrals in (24), (25) can be found in a closed form for some values of $q$. In particular for $q=1$, taking into account [30, p. 325, n. 16]
$$\int_{0}^{+\infty}\frac{e^{-x}x^{\alpha}}{x-t}\,dx=-\pi t^{\alpha}e^{-t}\cot((1+\alpha)\pi)+\Gamma(\alpha)\,{}_1F_1(1,1-\alpha,-t),\qquad \alpha\ne0,$$
$$\int_{0}^{+\infty}\frac{e^{-x}}{x-t}\,dx=-e^{-t}\,\mathrm{Ei}(t),$$
and [10, p. 217, n. (17)]
$$\int_{0}^{+\infty}\frac{e^{-x}x^{\alpha}}{x+t}\,dx=\Gamma(1+\alpha)\,t^{\alpha}e^{t}\,\Gamma(-\alpha,t),$$
where ${}_1F_1(a,b,x)$ is the confluent hypergeometric function, $\mathrm{Ei}(t)$ is the exponential integral function, and $\Gamma(a,b)$ is the incomplete Gamma function, we get for $p$ even

$$H_0(w_\beta,t)=\frac{2}{p}\sum_{j=1}^{\frac p2}t^{2j-1}\int_{0}^{+\infty}\frac{e^{-x}x^{\frac{1-2j}{p}}}{x-t^{p}}\,dx=\frac{2}{p}\sum_{j=1}^{\frac p2}\left[-\pi e^{-t^{p}}\cot\left(\left(1+\frac{1-2j}{p}\right)\pi\right)+t^{2j-1}\,\Gamma\left(\frac{1-2j}{p}\right){}_1F_1\left(1,1-\frac{1-2j}{p},-t^{p}\right)\right]$$
and for $p$ odd
$$
\begin{aligned}
H_0(w_\beta,t)={}&\frac1p\sum_{j=1}^{p}t^{j-1}\left\{\int_{0}^{+\infty}\frac{e^{-x}x^{\frac{1-j}{p}}}{x-t^{p}}\,dx+(-1)^{j}\int_{0}^{+\infty}\frac{e^{-x}x^{\frac{1-j}{p}}}{x+t^{p}}\,dx\right\}\\
={}&-\frac1p\,e^{-t^{p}}\mathrm{Ei}(t^{p})+\frac1p\sum_{j=2}^{p}\left[-\pi e^{-t^{p}}\cot\left(\left(1+\frac{1-j}{p}\right)\pi\right)+t^{j-1}\,\Gamma\left(\frac{1-j}{p}\right){}_1F_1\left(1,1-\frac{1-j}{p},-t^{p}\right)\right]\\
&+\frac1p\sum_{j=1}^{p}(-1)^{j}\,\Gamma\left(1+\frac{1-j}{p}\right)e^{t^{p}}\,\Gamma\left(-\frac{1-j}{p},\,t^{p}\right).
\end{aligned}
$$


Thus, taking into account [13, p. 1086, 9.213]
$$\frac{d}{dt}\,\mathrm{Ei}(t)=-\frac{d}{dt}\int_{-t}^{+\infty}\frac{e^{-x}}{x}\,dx=\frac{e^{t}}{t},\qquad \frac{d}{dt}\,{}_1F_1(a,b;t)=\frac{a}{b}\,{}_1F_1(a+1,b+1,t),$$
by (23) the functions $\{H_{p-k}(w_\beta,t)\}_{k=0}^{p}$ are completely determined. We point out that the computation of the functions $\mathrm{Ei}$ and ${}_1F_1$ has been performed by the Wolfram Mathematica routines ExpIntegralEi and Hypergeometric1F1, respectively.

In the cases $q=2$ and $q=4$, the function $H_0(w_\beta)$ can be obtained by combining integrals in [13, 3.354.1, p. 359 and 3.358, p. 361].

Finally, we give some details about the computation of the derivatives of the fundamental Lagrange polynomials $\{\ell^{(i)}_{m,k}(t)\}_{k=-j}^{j}$, $i=1,2,\ldots,p$. Setting
$$S_{r,k}(t)=\sum_{\substack{|h|\le\lfloor\frac m2\rfloor\\ h\ne k}}\frac{r!}{\left[(t-x_{m,h})(t^{2}-a_m^{2})\right]^{r+1}},\qquad t\ne\pm a_m,\quad t\notin\{x_{m,h}\}_{h=-\lfloor\frac m2\rfloor}^{\lfloor\frac m2\rfloor},$$
it is not hard to prove that
$$\ell^{(i)}_{m,k}(t)=\sum_{r=0}^{i-1}\binom{i-1}{r}(-1)^{r}\,\ell^{(i-1-r)}_{m,k}(t)\,S_{r,k}(t),$$
for $-j\le k\le j$ and $i=1,2,\ldots,p$.

6 Numerical Tests

In this section we propose some numerical experiments obtained by implementing the methods introduced above. More precisely, we show the numerical results obtained by approximating $H_p(f w_\beta,t)$ by the sequence $\{\Psi_{p,m}(f,t)\}_m$ of the first method (17) and by $\{\Phi_{p,m}(f,t)\}_m$ of the second method (19). Since the exact values of the integrals are unknown, we take as exact the values computed with $m=1000$, setting
$$\bar\tau_{p,m}(f,t)=u(t)\,|\Psi_{p,m}(f,t)-\Psi_{p,1000}(f,t)|,\qquad \bar\rho_{p,m}(f,t)=u(t)\,|\Phi_{p,m}(f,t)-\Phi_{p,1000}(f,t)|,$$
as weighted absolute errors.

The “truncation intervals”, depending on a fixed $\theta$, have been empirically detected. In particular, denoting by eps the machine precision, in the first method we set
$$j:=j_1-j_2$$
with
$$j_1:=\begin{cases}\displaystyle\max_{k=0,\ldots,\lfloor\frac{m^{*}}{2}\rfloor}\left\{k\,:\,\left|\lambda_{m,k}\,\frac{f(x_{m,k})-\sum_{i=0}^{p}\frac{f^{(i)}(t)}{i!}(x_{m,k}-t)^{i}}{(x_{m,k}-t)^{p+1}}\right|\ge \mathrm{eps}\right\} & \text{in (11)},\\[2ex]
\displaystyle\max_{k=0,\ldots,\lfloor\frac{m^{*}}{2}\rfloor}\left\{k\,:\,|G_t(x_{m,k})|\,\lambda_{m,k}\ge \mathrm{eps}\right\} & \text{in (15)},\end{cases}$$
$$j_2:=\begin{cases}\displaystyle\min_{k=-\lfloor\frac{m^{*}}{2}\rfloor,\ldots,-1}\left\{k\,:\,\left|\lambda_{m,k}\,\frac{f(x_{m,k})-\sum_{i=0}^{p}\frac{f^{(i)}(t)}{i!}(x_{m,k}-t)^{i}}{(x_{m,k}-t)^{p+1}}\right|\ge \mathrm{eps}\right\} & \text{in (11)},\\[2ex]
\displaystyle\min_{k=-\lfloor\frac{m^{*}}{2}\rfloor,\ldots,-1}\left\{k\,:\,|G_t(x_{m,k})|\,\lambda_{m,k}\ge \mathrm{eps}\right\} & \text{in (15)},\end{cases}$$

and for the second method in (19), setting
$$\bar q_k:=\max_{i=0,\ldots,\lfloor\frac{m+1}{2}\rfloor}\left\{i\,:\,\left|\lambda_{m+1,i}\,\frac{f(x_{m+1,i})-f(x_{m,k})}{x_{m+1,i}-x_{m,k}}\right|\ge \mathrm{eps}\right\},\qquad
\hat q_k:=\min_{i=-\lfloor\frac{m+1}{2}\rfloor,\ldots,-1}\left\{i\,:\,\left|\lambda_{m+1,i}\,\frac{f(x_{m+1,i})-f(x_{m,k})}{x_{m+1,i}-x_{m,k}}\right|\ge \mathrm{eps}\right\},$$
$$\bar q:=\max_{k=j_2,\ldots,j_1}\bar q_k,\qquad \hat q:=\min_{k=j_2,\ldots,j_1}\hat q_k,$$
$$\hat\imath_1:=\max_{k=0,\ldots,\lfloor\frac m2\rfloor}\left\{k\,:\,\left|\ell^{(p)}_{m,k}(t)\sum_{i=\hat q_k}^{\bar q_k}\lambda_{m+1,i}\,\frac{f(x_{m+1,i})-f(x_{m,k})}{x_{m+1,i}-x_{m,k}}\right|\ge \mathrm{eps}\right\},$$
$$\hat\imath_2:=\min_{k=-\lfloor\frac m2\rfloor,\ldots,-1}\left\{k\,:\,\left|\ell^{(p)}_{m,k}(t)\sum_{i=\hat q_k}^{\bar q_k}\lambda_{m+1,i}\,\frac{f(x_{m+1,i})-f(x_{m,k})}{x_{m+1,i}-x_{m,k}}\right|\ge \mathrm{eps}\right\},$$
we let
$$\hat\imath:=\hat\imath_1-\hat\imath_2,\qquad q:=\bar q-\hat q.$$

We point out that for the simultaneous approximation of $H_0(f w_\beta,t),\ldots,H_p(f w_\beta,t)$ at $s$ values of $t$, the first method requires $j+(p+1)s$ samples of the density function $f$, while the second needs $q+\hat\imath+(p+1)s$ evaluations of $f$. In both cases the quadrature effort does not depend on $s$.
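In practice these indices can be detected by a single pass over the weighted summands; a small Python sketch of the criterion (illustrative, with eps the double precision machine epsilon):

```python
import numpy as np

def significant_range(terms, eps=2.220446049250313e-16):
    """Return the first and last index of the summands whose magnitude is at
    least eps; the discarded outer terms cannot affect the double precision
    result, which is the empirical criterion used above for j_1, j_2, q_k."""
    mags = np.abs(np.asarray(terms, dtype=float))
    idx = np.flatnonzero(mags >= eps)
    if idx.size == 0:
        return None
    return idx[0], idx[-1]
```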

In the numerical tests proposed below, we compare our results with each other and with those achieved by a method discussed in [25, p. 14-15] for finite part integrals on bounded intervals. Thus we have properly adapted this procedure, in the sense we now specify. Starting from the decomposition
$$H_p(f w_\beta,t)=F_p(f,t)+\sum_{k=0}^{p}\frac{f^{(k)}(t)}{k!}\,H_{p-k}(w_\beta,t),$$
by an idea in [15] (see also [25]), to approximate $F_p(f,t)$ the interval $(-\infty,+\infty)$ is broken up into $(-\infty,t)$ and $(t,+\infty)$:
$$F_p(f,t)=\int_{0}^{+\infty}\frac{f(t-y)-\sum_{k=0}^{p}\frac{f^{(k)}(t)}{k!}(-y)^{k}}{(-y)^{p+1}}\,e^{-|t-y|^{\beta}+y}\,e^{-y}\,dy+\int_{0}^{+\infty}\frac{f(y+t)-\sum_{k=0}^{p}\frac{f^{(k)}(t)}{k!}\,y^{k}}{y^{p+1}}\,e^{-|y+t|^{\beta}+y}\,e^{-y}\,dy,$$
and using the $m$-th truncated Gauss-Laguerre rule w.r.t. the weight $e^{-y}$ [19], we obtain

$$
\begin{aligned}
H_p(f w_\beta,t)={}&\sum_{i=1}^{k_1}\frac{f(t-y_{m,i})-\sum_{k=0}^{p}\frac{f^{(k)}(t)}{k!}(-y_{m,i})^{k}}{(-y_{m,i})^{p+1}}\,e^{-|t-y_{m,i}|^{\beta}+y_{m,i}}\,\bar\lambda_{m,i}+\sum_{i=1}^{k_2}\frac{f(y_{m,i}+t)-\sum_{k=0}^{p}\frac{f^{(k)}(t)}{k!}\,y_{m,i}^{k}}{y_{m,i}^{p+1}}\,e^{-|y_{m,i}+t|^{\beta}+y_{m,i}}\,\bar\lambda_{m,i}\\
&+\sum_{k=0}^{p}\frac{f^{(k)}(t)}{k!}\,H_{p-k}(w_\beta,t)+\sigma_{p,m}(f,t)\\
=:{}&J_{p,m}(f,t)+\sigma_{p,m}(f,t),
\end{aligned}
\qquad (26)
$$
where $y_{m,i}$ and $\bar\lambda_{m,i}$, $i=1,\ldots,m$, are the knots and the weights of the $m$-th Gauss-Laguerre rule and the parameters $k_1$ and $k_2$ are empirically detected with criteria similar to those used for the rule (11). We will set
$$\hat\sigma_{p,m}(f,t)=|J_{p,m}(f,t)-J_{p,1000}(f,t)|.$$

We point out that, by implementing formula (26), for any value of $t$ two changes of variables depending on $t$ are required and none of the samples of the function $f$ can be saved. This means that the computation of $\{H_i(f w_\beta,t)\}_{i=0}^{p}$ at $s$ values of $t$ requires almost $s(k_1+k_2+p+1)$ samples of $f$, which is far greater than the number of samples required by our methods.

All the computations have been performed in double machine precision (eps $\approx 2.22044\mathrm{e}{-16}$), and in all the tables the symbol “-” means that the machine accuracy has been achieved.

Example 6.1.
$$H_p(f w_2,t)=\;=\!\!\!\!\!\int_{\mathbb{R}}\frac{\sin\left(\frac x2\right)\cos(xe)}{(x-t)^{p+1}}\,e^{-x^{2}}\,dx,\qquad p=0,1,2.$$
Since the function $f(x)=\sin\left(\frac x2\right)\cos(xe)$ is very smooth, we expect a fast convergence. As the test shows, the sequence $\{J_{p,m}(f,t)\}_m$ appears to converge more slowly than both the sequences $\{\Psi_{p,m}(f,t)\}_m$ and $\{\Phi_{p,m}(f,t)\}_m$. Indeed, looking at Tables 1-2, $H_0(f;t)$, $H_1(f;t)$, $H_2(f;t)$ are computed for different values of $t$, attaining the machine precision with $m=30$ in the first method and with $m=40$ in the second one. On the contrary, as Table 3 shows, by the method (26) the machine precision is not always achieved for $m=160$.

In Figure 2 we show the graphs of the approximating functions $\Phi_{p,40}(f,t)$, $p=0,1,2$.

m    j     τ̄0,m(f,−3)   τ̄0,m(f,−0.5)   τ̄0,m(f,4)   τ̄0,m(f,10)
10   8     5.11e−7      3.67e−6        1.07e−12    −
20   18    8.50e−16     1.55e−14       −           −
30   27    −            −              −           −

m    j     τ̄1,m(f,−3)   τ̄1,m(f,−0.5)   τ̄1,m(f,4)   τ̄1,m(f,10)
10   8     9.71e−6      1.01e−6        5.87e−9     −
20   18    8.47e−16     7.90e−15       −           −
30   27    −            −              −           −

m    j     τ̄2,m(f,−3)   τ̄2,m(f,−0.5)   τ̄2,m(f,4)   τ̄2,m(f,10)
10   8     5.85e−5      8.35e−7        1.51e−6     −
20   18    −            4.77e−15       −           −
30   27    −            −              −           −

Table 1: Example 6.1: $\bar\tau_{p,m}(f,t)$, $p=0,1,2$, with $t=-3,-0.5,4,10$.
