
Two Generalizations of the Projected Gradient Method for Convexly Constrained Inverse Problems: Hybrid steepest descent method, Adaptive projected subgradient method

山田功 (Isao Yamada)† 小倉信彦 (Nobuhiko Ogura)‡

† Dept. of Communications & Integrated Systems, Tokyo Institute of Technology
‡ Precision and Intelligence Laboratory, Tokyo Institute of Technology

This work was supported in part by JSPS Grants-in-Aid (C15500129).

Abstract: In this paper, we present a brief review of the central results of two generalizations of a classical convex optimization technique named the projected gradient method [1, 2]. The 1st generalization has been made by extending the convex projection operator, used in the projected gradient method, to the (quasi-)nonexpansive mapping in a real Hilbert space. By this generalization, we deduce the hybrid steepest descent method [3-10] (see also [11]), which can minimize a convex cost function over the fixed point set of a nonexpansive mapping [3-9, 11] (these results can also be interpreted as generalizations of fixed point iterations found for example in [12-15]) or, more generally, over the fixed point set of a quasi-nonexpansive mapping [10]. Since (i) the solution set of a wide range of convexly constrained inverse problems, for example in signal processing and image reconstruction, can be characterized as the fixed point set of a certain nonexpansive mapping [5, 6, 9, 16-18], and (ii) the subgradient projection operator and its variations are typical examples of quasi-nonexpansive mappings [10, 19], the hybrid steepest descent method has rich applications in a broad range of the mathematical sciences and engineering. The 2nd generalization has been made for Polyak's subgradient algorithm [20], which was originally developed as a version of the projected gradient method for the nonsmooth convex optimization problem with a fixed target value. By extending Polyak's subgradient algorithm to the case where the convex cost function itself keeps changing during the whole process, we deduce the adaptive projected subgradient method [21-23], which can asymptotically minimize a sequence of nonsmooth nonnegative convex cost functions. The adaptive projected subgradient method can serve as a unified guiding principle for a wide range of set theoretic adaptive filtering schemes [24-30] for nonstationary random processes. The great flexibility in the choice of (quasi-)nonexpansive mappings as well as nonsmooth convex cost functions in the proposed methods naturally yields inherently parallel structures (in the sense of [31]).

1 Preliminaries

Let $\mathcal{H}$ be a real Hilbert space equipped with an inner product $\langle \cdot, \cdot \rangle$ and its induced norm $\|\cdot\|$. For a continuous convex function $\Phi : \mathcal{H} \to \mathbb{R}$, the subdifferential of $\Phi$ at $\forall y \in \mathcal{H}$, i.e., the set of all subgradients of $\Phi$ at $y$:
$$\partial\Phi(y) := \{g \in \mathcal{H} \mid \langle x - y, g\rangle + \Phi(y) \leq \Phi(x), \ \forall x \in \mathcal{H}\},$$
is nonempty. The convex function $\Phi : \mathcal{H} \to \mathbb{R}$ has a unique subgradient at $y \in \mathcal{H}$ if $\Phi$ is Gâteaux differentiable at $y$. This unique subgradient is nothing but the Gâteaux derivative $\Phi'(y)$.

A fixed point of a mapping $T : \mathcal{H} \to \mathcal{H}$ is a point $x \in \mathcal{H}$ such that $T(x) = x$; $\mathrm{Fix}(T) := \{x \in \mathcal{H} \mid T(x) = x\}$ denotes the fixed point set of $T$. A mapping $T : \mathcal{H} \to \mathcal{H}$ is called (i) strictly contractive if $\|T(x) - T(y)\| \leq \kappa\|x - y\|$ for some $\kappa \in (0, 1)$ and all $x, y \in \mathcal{H}$ [the Banach-Picard fixed point theorem guarantees the unique existence of the fixed point, say $x^* \in \mathrm{Fix}(T)$, of $T$ and the strong convergence of $(T^n(x_0))_{n\geq 0}$ to $x^*$ for any $x_0 \in \mathcal{H}$]; (ii) nonexpansive if $\|T(x) - T(y)\| \leq \|x - y\|$, $\forall x, y \in \mathcal{H}$; (iii) firmly nonexpansive if $\|T(x) - T(y)\|^2 \leq \langle x - y, T(x) - T(y)\rangle$, $\forall x, y \in \mathcal{H}$ [32]; and (iv) attracting nonexpansive if $T$ is nonexpansive with $\mathrm{Fix}(T) \neq \emptyset$ and $\|T(x) - f\| < \|x - f\|$, $\forall f \in \mathrm{Fix}(T)$ and $\forall x \notin \mathrm{Fix}(T)$ [32]. Given a nonempty closed convex set $C \subset \mathcal{H}$, the mapping that assigns every point in $\mathcal{H}$ to its unique nearest point in $C$ is called the metric projection (or convex projection) onto $C$ and is denoted by $P_C$; i.e., $\|x - P_C(x)\| = d(x, C)$, where $d(x, C) := \inf_{y \in C}\|x - y\|$. $P_C$ is firmly nonexpansive with $\mathrm{Fix}(P_C) = C$ [32]. A mapping $T : \mathcal{H} \to \mathcal{H}$ is called quasi-nonexpansive if $\|T(x) - f\| \leq \|x - f\|$, $\forall(x, f) \in \mathcal{H} \times \mathrm{Fix}(T)$.

In this paper, for simplicity, a mapping $T : \mathcal{H} \to \mathcal{H}$ is called attracting quasi-nonexpansive if $\mathrm{Fix}(T) \neq \emptyset$ and $\|T(x) - f\| < \|x - f\|$, $\forall(x, f) \in \mathrm{Fix}(T)^c \times \mathrm{Fix}(T)$. Moreover, a mapping $T : \mathcal{H} \to \mathcal{H}$ is called $\alpha$-averaged quasi-nonexpansive if there exist $\alpha \in (0, 1)$ and a quasi-nonexpansive mapping $N : \mathcal{H} \to \mathcal{H}$ such that $T = (1 - \alpha)I + \alpha N$ (note: $\mathrm{Fix}(T) = \mathrm{Fix}(N)$ holds automatically). In particular, a $1/2$-averaged quasi-nonexpansive mapping is specially called a firmly quasi-nonexpansive mapping. Suppose that a continuous convex function $\Phi : \mathcal{H} \to \mathbb{R}$ satisfies $\mathrm{lev}_{\leq 0}\Phi := \{x \in \mathcal{H} \mid \Phi(x) \leq 0\} \neq \emptyset$. Then the mapping $T_{sp(\Phi)} : \mathcal{H} \to \mathcal{H}$ defined by
$$T_{sp(\Phi)} : x \mapsto \begin{cases} x - \dfrac{\Phi(x)}{\|g(x)\|^2}\,g(x) & \text{if } \Phi(x) > 0, \\ x & \text{if } \Phi(x) \leq 0, \end{cases} \qquad (1)$$
where $g$ is a selection of the subdifferential $\partial\Phi$, is called a subgradient projection relative to $\Phi$ [32]. The mapping $T_{sp(\Phi)} : \mathcal{H} \to \mathcal{H}$ is firmly quasi-nonexpansive and satisfies $\mathrm{Fix}(T_{sp(\Phi)}) = \mathrm{lev}_{\leq 0}\Phi$ (see for example [19]).

Figure 1: Subgradient projection relative to $\Phi$
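To make (1) concrete, here is a minimal NumPy sketch (not from the paper): the operator takes a user-supplied `phi` and a subgradient selection `g`, and the choice $\Phi(x) = \|x - c\| - r$, whose zero-level set is the closed ball of radius $r$ around $c$, is purely illustrative.

```python
import numpy as np

def subgradient_projection(phi, g, x):
    """Subgradient projection T_sp(Phi) of Eq. (1): if Phi(x) > 0, step from x
    onto the zero level of the linearization of Phi at x."""
    val = phi(x)
    if val <= 0.0:                      # x is already in lev_{<=0} Phi: identity
        return x
    gx = g(x)                           # some g(x) in dPhi(x); nonzero when Phi(x) > 0
    return x - (val / np.dot(gx, gx)) * gx

# Illustrative example: Phi(x) = ||x - c|| - r.
c, r = np.array([1.0, 2.0]), 0.5
phi = lambda x: np.linalg.norm(x - c) - r
g = lambda x: (x - c) / np.linalg.norm(x - c)   # subgradient selection for x != c
print(subgradient_projection(phi, g, np.array([3.0, 2.0])))   # -> [1.5, 2.0]
```

For this particular $\Phi$, $T_{sp(\Phi)}$ happens to coincide with the metric projection onto the ball; in general $T_{sp(\Phi)}$ only reaches a halfspace containing $\mathrm{lev}_{\leq 0}\Phi$, which is what makes it cheap when $P_{\mathrm{lev}_{\leq 0}\Phi}$ itself has no closed form.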

A mapping $\mathcal{F} : \mathcal{H} \to \mathcal{H}$ is called (i) monotone over $S \subset \mathcal{H}$ if $\langle\mathcal{F}(u) - \mathcal{F}(v), u - v\rangle \geq 0$, $\forall u, v \in S$. In particular, a mapping $\mathcal{F}$ which is monotone over $S \subset \mathcal{H}$ is called (ii) paramonotone over $S$ if $\langle\mathcal{F}(u) - \mathcal{F}(v), u - v\rangle = 0 \Leftrightarrow \mathcal{F}(u) = \mathcal{F}(v)$, $\forall u, v \in S$ [34]; (iii) uniformly monotone over $S$ if there exists a strictly monotone increasing continuous function $a : [0, \infty) \to [0, \infty)$, with $a(0) = 0$ and $a(t) \to \infty$ as $t \to \infty$, satisfying $\langle\mathcal{F}(u) - \mathcal{F}(v), u - v\rangle \geq a(\|u - v\|)\|u - v\|$ for all $u, v \in S$ [38]; and (iv) $\eta$-strongly monotone over $S$ if there exists $\eta > 0$ such that $\langle\mathcal{F}(u) - \mathcal{F}(v), u - v\rangle \geq \eta\|u - v\|^2$ for all $u, v \in S$ [38].

The variational inequality problem $VIP(\mathcal{F}, C)$ is defined as follows: given $\mathcal{F} : \mathcal{H} \to \mathcal{H}$ which is monotone over a nonempty closed convex set $C \subset \mathcal{H}$, find $u^* \in C$ such that $\langle v - u^*, \mathcal{F}(u^*)\rangle \geq 0$ for all $v \in C$. If a continuous function $\Theta : \mathcal{H} \to \mathbb{R}$ is convex over a nonempty closed convex set $C$ and Gâteaux differentiable, with derivative $\Theta'$, over an open set $U \supset C$, then $\Theta'$ is paramonotone over $C$. For such a $\Theta$, the set $\Gamma := \{u \in C \mid \Theta(u) = \inf\Theta(C)\}$ is nothing but the solution set of $VIP(\Theta', C)$ [33]. Given $\mathcal{F} : \mathcal{H} \to \mathcal{H}$ which is monotone over a nonempty closed convex set $C$, $u^* \in C$ is a solution of $VIP(\mathcal{F}, C)$ if and only if $u^* \in \mathrm{Fix}(P_C(I - \mu\mathcal{F}))$ for an arbitrarily fixed $\mu > 0$ (for related mathematical discussions in this section, the readers should consult, e.g., [6, 9, 19, 31-38]).
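The classical projected gradient method [1, 2] computes such a point simply by Picard iteration of $P_C(I - \mu\mathcal{F})$. Below is a minimal NumPy sketch, under the illustrative assumptions (not from the paper) that $\mathcal{F}$ is the gradient of the smooth convex cost $\frac{1}{2}\|Au - b\|^2$ and that $C$ is the nonnegative orthant.

```python
import numpy as np

def projected_gradient(F, proj_C, u0, mu, n_iter=500):
    """Picard iteration of P_C(I - mu*F); its fixed points are
    exactly the solutions of VIP(F, C)."""
    u = u0
    for _ in range(n_iter):
        u = proj_C(u - mu * F(u))
    return u

rng = np.random.default_rng(0)
A, b = rng.standard_normal((20, 5)), rng.standard_normal(20)
F = lambda x: A.T @ (A @ x - b)              # gradient of (1/2)||Ax - b||^2
proj_C = lambda x: np.maximum(x, 0.0)        # metric projection onto C = {x >= 0}
mu = 1.0 / np.linalg.norm(A, 2) ** 2         # a step in (0, 2/kappa), kappa = ||A||_2^2
print(projected_gradient(F, proj_C, np.zeros(5), mu))
```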

2 Hybrid Steepest Descent Method

Theorem 1 (Strong convergence for nonexpansive mapping [6, 9]) Let $T : \mathcal{H} \to \mathcal{H}$ be a nonexpansive mapping with $\mathrm{Fix}(T) \neq \emptyset$. Suppose that a mapping $\mathcal{F} : \mathcal{H} \to \mathcal{H}$ is $\kappa$-Lipschitzian and $\eta$-strongly monotone over $T(\mathcal{H})$. Then, by using any sequence $(\lambda_n)_{n\geq 1} \subset [0, \infty)$ satisfying (W1) $\lim_{n\to+\infty}\lambda_n = 0$, (W2) $\sum_{n\geq 1}\lambda_n = +\infty$, (W3) $\sum_{n\geq 1}|\lambda_n - \lambda_{n+1}| < +\infty$ [or $(\lambda_n)_{n\geq 1} \subset (0, \infty)$ satisfying (W1) $\lim_{n\to+\infty}\lambda_n = 0$, (W2) $\sum_{n\geq 1}\lambda_n = +\infty$, (L3) $\lim_{n\to\infty}(\lambda_n - \lambda_{n+1})\lambda_{n+1}^{-2} = 0$], the sequence $(u_n)_{n\geq 0}$ generated, with arbitrary $u_0 \in \mathcal{H}$, by
$$u_{n+1} := T(u_n) - \lambda_{n+1}\mathcal{F}(T(u_n)) \qquad (2)$$
converges strongly to the uniquely existing solution of the VIP: find $u^* \in \mathrm{Fix}(T)$ such that $\langle v - u^*, \mathcal{F}(u^*)\rangle \geq 0$, $\forall v \in \mathrm{Fix}(T)$. (Note: The condition (L3) was relaxed recently to $\lim_{n\to\infty}\lambda_n/\lambda_{n+1} = 1$ [11].) $\square$
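As an illustration of Theorem 1, the following is a minimal NumPy sketch of iteration (2) with the step sequence $\lambda_n = 1/n$, which satisfies (W1)-(W3). The concrete $T$ and $\mathcal{F}$ are assumptions made for this sketch only: $T = P_{C_2}P_{C_1}$ is nonexpansive with $\mathrm{Fix}(T) = C_1 \cap C_2$ (a standard fact for metric projections when $C_1 \cap C_2 \neq \emptyset$, cf. [15, 32]), and $\mathcal{F}(u) = u - p$ is $1$-Lipschitzian and $1$-strongly monotone, so the iterate should approach the point of $C_1 \cap C_2$ nearest to $p$.

```python
import numpy as np

def hybrid_steepest_descent(T, F, u0, n_iter=5000):
    """Iteration (2): u_{n+1} = T(u_n) - lambda_{n+1} * F(T(u_n)), lambda_n = 1/n."""
    u = u0
    for n in range(1, n_iter + 1):
        v = T(u)
        u = v - (1.0 / n) * F(v)
    return u

def halfspace_projection(a, beta):
    """Metric projection onto the halfspace {x : <a, x> <= beta}."""
    return lambda x: x - (max(0.0, a @ x - beta) / (a @ a)) * a

P1 = halfspace_projection(np.array([1.0, 0.0]), 1.0)   # C1: x1 <= 1
P2 = halfspace_projection(np.array([0.0, 1.0]), 1.0)   # C2: x2 <= 1
T = lambda x: P2(P1(x))                                # nonexpansive, Fix(T) = C1 ∩ C2
p = np.array([3.0, 0.5])
F = lambda u: u - p                                    # gradient of (1/2)||u - p||^2
print(hybrid_steepest_descent(T, F, np.zeros(2)))      # -> approx [1.0, 0.5]
```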

Theorem 1 is a generalization of a fixed point iteration [12-15], so-called the anchor method:
$$u_{n+1} := \lambda_{n+1}a + (1 - \lambda_{n+1})T(u_n),$$
which converges strongly to $P_{\mathrm{Fix}(T)}(a)$.
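In code, the anchor method is just the special case $\mathcal{F} = I - a$ of iteration (2), since $T(u_n) - \lambda_{n+1}(T(u_n) - a) = \lambda_{n+1}a + (1 - \lambda_{n+1})T(u_n)$; a short sketch with the illustrative choice $\lambda_n = 1/n$:

```python
def anchor_method(T, a, u0, n_iter=5000):
    """Halpern-type anchor iteration converging strongly to P_{Fix(T)}(a)."""
    u = u0
    for n in range(1, n_iter + 1):
        lam = 1.0 / n
        u = lam * a + (1.0 - lam) * T(u)
    return u
```

For instance, `anchor_method(T, p, np.zeros(2))` with `T` and `p` from the previous sketch should again return approximately $[1.0, 0.5]$.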

The hybrid steepest descent method (2) can be applied to more general monotone operators [7, 8] if $\dim(\mathcal{H}) < \infty$. Moreover, by the use of a slowly changing sequence of nonexpansive mappings having the same fixed point sets, a variation of the hybrid steepest descent method is gifted with notable robustness to the numerical errors possibly unavoidable in the iterative computations [9].

The next theorem shows that the hybrid steepest descent method can also be applied to the variational inequality problem over the fixed point set of quasi-nonexpansive mappings.

Definition 2 (Quasi-shrinking mapping [10]) Suppose that $T : \mathcal{H} \to \mathcal{H}$ is quasi-nonexpansive with $\mathrm{Fix}(T) \cap C \neq \emptyset$ for some closed convex set $C \subset \mathcal{H}$. Then $T : \mathcal{H} \to \mathcal{H}$ is called quasi-shrinking on $C(\subset \mathcal{H})$ if
$$D : r \in [0, \infty) \mapsto \begin{cases} \displaystyle\inf_{u \in \triangleright(\mathrm{Fix}(T), r) \cap C}\left\{d(u, \mathrm{Fix}(T)) - d(T(u), \mathrm{Fix}(T))\right\} & \text{if } \triangleright(\mathrm{Fix}(T), r) \cap C \neq \emptyset, \\ \infty & \text{otherwise} \end{cases}$$
satisfies $D(r) = 0 \Leftrightarrow r = 0$, where $\triangleright(\mathrm{Fix}(T), r) := \{x \in \mathcal{H} \mid d(x, \mathrm{Fix}(T)) \geq r\}$. $\square$

Proposition 3 [10] Suppose that a continuous convex function $\Phi : \mathcal{H} \to \mathbb{R}$ has $\mathrm{lev}_{\leq 0}\Phi \neq \emptyset$ and bounded subdifferential $\partial\Phi : \mathcal{H} \to 2^{\mathcal{H}}$, i.e., $\partial\Phi$ maps bounded sets to bounded sets. Define $T_\alpha := (1 - \alpha)I + \alpha T_{sp(\Phi)}$ for $\alpha \in (0, 2)$ [hence $\mathrm{Fix}(T_\alpha) = \mathrm{lev}_{\leq 0}\Phi$; see (1) for the definition of $T_{sp(\Phi)}$]. Then we have the following:

(a) If a selection of subgradient of $\Phi$, say $\Phi' : \mathcal{H} \to \mathcal{H}$, is uniformly monotone over $\mathcal{H}$, then $T_\alpha$ is quasi-shrinking on any nonempty bounded closed convex set $C$ satisfying $C \cap \mathrm{lev}_{\leq 0}\Phi \neq \emptyset$.

(b) Assume $\dim(\mathcal{H}) < \infty$. Then $T_\alpha$ is quasi-shrinking on any nonempty bounded closed convex set $C(\subset \mathcal{H})$ satisfying $C \cap \mathrm{lev}_{\leq 0}\Phi \neq \emptyset$. $\square$

Theorem 4 (Strong convergence for quasi-shrinking mapping [10]) Suppose that $T : \mathcal{H} \to \mathcal{H}$ is a quasi-nonexpansive mapping with $\mathrm{Fix}(T) \neq \emptyset$. Let $\mathcal{F} : \mathcal{H} \to \mathcal{H}$ be $\kappa$-Lipschitzian and $\eta$-strongly monotone over $T(\mathcal{H})$ [hence $VIP(\mathcal{F}, \mathrm{Fix}(T))$ has its unique solution $u^* \in \mathrm{Fix}(T)$]. Suppose also that there exists some $(f, u_0) \in \mathrm{Fix}(T) \times \mathcal{H}$ for which $T$ is quasi-shrinking on
$$C_f(u_0) := \left\{x \in \mathcal{H} \;\middle|\; \|x - f\| \leq R_f := \max\left(\|u_0 - f\|, \frac{\|\mu\mathcal{F}(f)\|}{1 - \sqrt{1 - \mu(2\eta - \mu\kappa^2)}}\right)\right\}.$$
Then for any $\mu \in (0, \frac{2\eta}{\kappa^2})$ and any $(\lambda_n)_{n\geq 1} \subset [0, 1]$ satisfying (H1) $\lim_{n\to\infty}\lambda_n = 0$ and (H2) $\sum_{n\geq 1}\lambda_n = \infty$, the sequence $(u_n)_{n\geq 0}$, generated by
$$u_{n+1} := T(u_n) - \lambda_{n+1}\mu\mathcal{F}(T(u_n)),$$
converges strongly to $u^*$. $\square$
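A minimal sketch of Theorem 4 combined with Proposition 3, under illustrative assumptions only: $T = T_\alpha$ is built from the subgradient projection for $\Phi(x) = \|x - c\| - r$, so that $\mathrm{Fix}(T_\alpha) = \mathrm{lev}_{\leq 0}\Phi$ is the closed ball $B(c, r)$ and, by Proposition 3(b), $T_\alpha$ is quasi-shrinking on suitable bounded sets; $\mathcal{F}(u) = u - p$ has $\kappa = \eta = 1$, so any $\mu \in (0, 2)$ is admissible, and the limit should be the point of $B(c, r)$ nearest to $p$.

```python
import numpy as np

def hsd_quasi(T, F, u0, mu=1.0, n_iter=5000):
    """Theorem 4: u_{n+1} = T(u_n) - lambda_{n+1} * mu * F(T(u_n)),
    with lambda_n = 1/n, which satisfies (H1) and (H2)."""
    u = u0
    for n in range(1, n_iter + 1):
        v = T(u)
        u = v - (mu / n) * F(v)
    return u

c, r, alpha = np.array([0.0, 0.0]), 1.0, 1.0
def T_sp(x):                          # subgradient projection for Phi(x) = ||x - c|| - r
    d = np.linalg.norm(x - c)
    return x if d <= r else c + (r / d) * (x - c)
T = lambda x: (1.0 - alpha) * x + alpha * T_sp(x)      # T_alpha of Proposition 3
p = np.array([2.0, 2.0])
F = lambda u: u - p
print(hsd_quasi(T, F, np.array([5.0, -3.0])))          # -> approx [0.707, 0.707]
```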

If $\dim(\mathcal{H}) < \infty$, in a way similar to the discussions in [7-9], we can generalize Theorem 4 for application to more general monotone operators [10].

3 Adaptive Projected Subgradient Method

Theorem 5 (Adaptive Projected Subgradient Method [21, 22]) Let $\Theta_n : \mathcal{H} \to [0, \infty)$ ($n \in \mathbb{N}$) be a sequence of continuous convex functions and $K \subset \mathcal{H}$ a nonempty closed convex set. For an arbitrarily given $u_0 \in K$, the adaptive projected subgradient method produces a sequence $(u_n)_{n\in\mathbb{N}} \subset K$ by
$$u_{n+1} := \begin{cases} P_K\left(u_n - \lambda_n\dfrac{\Theta_n(u_n)}{\|\Theta_n'(u_n)\|^2}\,\Theta_n'(u_n)\right) & \text{if } \Theta_n'(u_n) \neq 0, \\ u_n & \text{otherwise}, \end{cases}$$
where $\Theta_n'(u_n) \in \partial\Theta_n(u_n)$ and $0 \leq \lambda_n \leq 2$. Then the sequence $(u_n)_{n\in\mathbb{N}}$ satisfies the following.

(a) (Monotone approximation) Suppose that
$$u_n \notin \Omega_n := \{u \in K \mid \Theta_n(u) = \Theta_n^*\} \neq \emptyset,$$
where $\Theta_n^* := \inf_{u \in K}\Theta_n(u)$. Then, by using $\forall\lambda_n \in \left(0, 2\left(1 - \frac{\Theta_n^*}{\Theta_n(u_n)}\right)\right)$, we have $\forall u^{*(n)} \in \Omega_n$, $\|u_{n+1} - u^{*(n)}\| < \|u_n - u^{*(n)}\|$.

(b) (Boundedness, Asymptotic optimality) Suppose
$$\exists N_0 \in \mathbb{N} \text{ s.t. } \Theta_n^* = 0, \ \forall n \geq N_0, \ \text{ and } \ \Omega := \bigcap_{n\geq N_0}\Omega_n \neq \emptyset. \qquad (3)$$
Then $(u_n)_{n\in\mathbb{N}}$ is bounded. Moreover, if we specially use $\forall\lambda_n \in [\epsilon_1, 2 - \epsilon_2] \subset (0, 2)$, we have $\lim_{n\to\infty}\Theta_n(u_n) = 0$ provided that $(\Theta_n'(u_n))_{n\in\mathbb{N}}$ is bounded.


(c) (Strong convergence) Assume (3) and that $\Omega$ has some relative interior w.r.t. a hyperplane $\Pi(\subset \mathcal{H})$, i.e., there exist $\tilde{u} \in \Pi \cap \Omega$ and $\epsilon > 0$ satisfying $\{v \in \Pi \mid \|v - \tilde{u}\| \leq \epsilon\} \subset \Omega$. Then, by using $\forall\lambda_n \in [\epsilon_1, 2 - \epsilon_2] \subset (0, 2)$, $(u_n)_{n\in\mathbb{N}}$ converges strongly to some $\hat{u} \in K$, i.e., $\lim_{n\to\infty}\|u_n - \hat{u}\| = 0$. Moreover, $\lim_{n\to\infty}\Theta_n(\hat{u}) = 0$ if (i) $(\Theta_n'(u_n))_{n\in\mathbb{N}}$ is bounded and (ii) there exists a bounded $(\Theta_n'(\hat{u}))_{n\in\mathbb{N}}$, where $\Theta_n'(\hat{u}) \in \partial\Theta_n(\hat{u})$, $\forall n \in \mathbb{N}$.

(d) (A characterization of $\hat{u}$) Assume the existence of some relative interior point $\tilde{u}$ of $\Omega$ (w.r.t. $\Pi$), i.e., there exists $\rho > 0$ satisfying $\{v \in \Pi \mid \|v - \tilde{u}\| \leq \rho\} \subset \Omega$. In addition to the conditions (i) and (ii) in (c), assume that there exists $\delta > 0$ satisfying
$$\forall n \geq N_0, \ \forall u \in \Gamma\setminus(\mathrm{lev}_{\leq 0}\Theta_n), \ \exists\Theta_n'(u) \in \partial\Theta_n(u) \text{ with } \|\Theta_n'(u)\| \geq \delta,$$
where $\Gamma := \{(1 - s)\tilde{u} + s\hat{u} \in \mathcal{H} \mid s \in (0, 1)\}$. Then, by using $\forall\lambda_n \in [\epsilon_1, 2 - \epsilon_2] \subset (0, 2)$,
$$\lim_{n\to\infty}u_n =: \hat{u} \in \overline{\liminf_{n\to\infty}\Omega_n},$$
where $\overline{\liminf_{n\to\infty}\Omega_n}$ stands for the closure of $\liminf_{n\to\infty}\Omega_n := \bigcup_{n\geq 0}\bigcap_{k\geq n}\Omega_k$. $\square$
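Below is a minimal NumPy sketch of the update of Theorem 5, followed by an illustrative run (our own toy example, not from the paper) in which $\Theta_n(h) := |\langle a_n, h\rangle - d_n|/\|a_n\|$ is the distance from the current filter $h$ to the hyperplane of filters consistent with the $n$-th datum $(a_n, d_n)$; with $K = \mathcal{H}$ and $\lambda_n = 1$, the step reduces to an NLMS-type learning identification update in the spirit of [24].

```python
import numpy as np

def apsm_step(u, theta_val, g, proj_K, lam):
    """One step of the adaptive projected subgradient method: a relaxed
    subgradient-projection step toward lev_{<=0} Theta_n, then P_K."""
    gg = np.dot(g, g)
    if gg == 0.0:                                    # Theta_n'(u_n) = 0: keep u_n
        return u
    return proj_K(u - lam * (theta_val / gg) * g)

rng = np.random.default_rng(1)
h_true, h = rng.standard_normal(4), np.zeros(4)
for n in range(3000):
    a = rng.standard_normal(4)
    d = a @ h_true                                   # noiseless datum, so Theta_n^* = 0
    e = a @ h - d
    theta = abs(e) / np.linalg.norm(a)               # Theta_n(h): distance to hyperplane
    g = np.sign(e) * a / np.linalg.norm(a)           # a subgradient of Theta_n at h
    h = apsm_step(h, theta, g, lambda x: x, 1.0)     # K = H, lambda_n = 1
print(np.linalg.norm(h - h_true))                    # -> close to 0
```

In this noiseless example every $\Theta_n^* = 0$ and $\Omega$ contains `h_true`, so the monotone approximation and asymptotic optimality of Theorem 5 show up directly as $\|h - h_{\mathrm{true}}\| \to 0$.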

4 Concluding Remarks

In this paper, we briefly present central results on the hybrid steepest descent method and the adaptive projected subgradient method recently developed by our research group. For detailed mathematical discussions of the methods and their applications to inverse problems and signal processing problems, see [3-10, 16, 17, 21-23, 30] and references therein.

References

[1] A.A. Goldstein, Convex programming in Hilbert space, Bull. Amer. Math. Soc. 70 (1964) 709-710.

[2] E.S. Levitin and B.T. Polyak, Constrained minimization methods, USSR Computational Mathematics and Mathematical Physics 6 (1966) 1-50.

[3] I. Yamada, N. Ogura, Y. Yamashita and K. Sakaniwa, Quadratic optimization of fixed points of nonexpansive mappings in Hilbert space, Numer. Funct. Anal. Optim. 19 (1998) 165-190 (see also Technical Report of IEICE, DSP96-106 (1996) 63-70).

[4] F. Deutsch and I. Yamada, Minimizing certain convex functions over the intersection of the fixed point sets of nonexpansive mappings, Numer. Funct. Anal. Optim. 19 (1998) 33-56.

[5] I. Yamada, Convex projection algorithm from POCS to Hybrid steepest descent method (in Japanese), Journal of the IEICE 83 (2000) 616-623.

[6] I. Yamada, The hybrid steepest descent method for the variational inequality problem over the intersection of fixed point sets of nonexpansive mappings, in Inherently Parallel Algorithms in Feasibility and Optimization and Their Applications (D. Butnariu, Y. Censor, and S. Reich, Eds.), Elsevier, (2001) 473-504.

[7] N. Ogura and I. Yamada, Non-strictly convex minimization over the fixed point set of the asymptotically shrinking nonexpansive mapping, Numer. Funct. Anal. Optim. 23 (2002) 113-137.

[8] N. Ogura and I. Yamada, Non-strictly convex minimization over the bounded fixed point set of a nonexpansive mapping, Numer. Funct. Anal. Optim. 24 (2003) 129-135.

[9] I. Yamada, N. Ogura and N. Shirakawa, A numerically robust hybrid steepest descent method for the convexly constrained generalized inverse problems, in Inverse Problems, Image Analysis, and Medical Imaging (Z. Nashed and O. Scherzer, Eds.), Contemporary Mathematics 313, Amer. Math. Soc., (2002) 269-305.

[10] I. Yamada and N. Ogura, Hybrid steepest descent method for the variational inequality problem over the fixed point sets of certain quasi-nonexpansive mappings, Victoria International Conference 2004, Wellington, (to be presented) Feb., (2004).

[11] H.K. Xu and T.H. Kim, Convergence of hybrid steepest descent methods for variational inequalities, Journal of Optimization Theory and Applications 119, no.1, (2003) 185-201.

[12] B. Halpern, Fixed points of nonexpanding maps, Bull. Amer. Math. Soc. 73 (1967) 957-961.

[13] P.L. Lions, Approximation de points fixes de contractions, C. R. Acad. Sci. Paris Série A-B 284 (1977) 1357-1359.

[14] R. Wittmann, Approximation of fixed points of nonexpansive mappings, Arch. Math. 58 (1992) 486-491.

[15] H.H. Bauschke, The approximation of fixed points of compositions of nonexpansive mappings in Hilbert space, J. Math. Anal. Appl. 202 (1996) 150-159.

[16] I. Yamada, Approximation of convexly constrained pseudoinverse by Hybrid Steepest Descent Method, Proc. of IEEE ISCAS '99, Florida, May (1999).

[17] K. Slavakis, I. Yamada and K. Sakaniwa, Computation of symmetric positive definite Toeplitz matrices by the Hybrid Steepest Descent Method, Signal Processing 83 (2003) 1135-1140.

[18] C. Byrne, A unified treatment of some iterative algorithms in signal processing and image reconstruction, Inverse Problems 20 (2004) 103-120.

[19] H.H. Bauschke and P.L. Combettes, A weak-to-strong convergence principle for Fejér-monotone methods in Hilbert spaces, Math. Oper. Res. 26 (2001) 248-264.

[20] B.T. Polyak, Minimization of unsmooth functionals, USSR Comput. Math. Math. Phys. 9 (1969) 14-29.

[21] I. Yamada, Adaptive projected subgradient method: A unified view of projection based adaptive algorithms (in Japanese), Journal of the IEICE 86 (2003) 654-658.

[22] I. Yamada and N. Ogura, Adaptive projected subgradient method and its applications to set theoretic adaptive filtering, Proc. of the 37th Asilomar Conference on Signals, Systems and Computers, California, Nov., (2003).

[23] I. Yamada, N. Ogura and M. Yukawa, Adaptive projected subgradient method and its acceleration techniques, Proc. of IFAC Workshop on Adaptation and Learning in Control and Signal Processing (ALCOSP 04), Yokohama, (to be presented) August, (2004).

[24] J. Nagumo and A. Noda, A learning method for system identification, IEEE Trans. Automatic Control AC-12, no.3, (1967) 282-287.

[25] A.E. Albert and L.S. Gardner, Jr., Stochastic Approximation and Nonlinear Regression (MIT Press, 1967).

[26] T. Hinamoto and S. Maekawa, Extended theory of learning identification (in Japanese), Trans. IEE Japan 95-C, no.10, (1975) 227-234.

[27] K. Ozeki and T. Umeda, An adaptive filtering algorithm using an orthogonal projection to an affine subspace and its properties (in Japanese), IEICE Trans. 67-A, no.5, (1984) 126-132.

[28] S. Gollamudi, S. Nagaraj, S. Kapoor and Y.F. Huang, Set-membership filtering and a set-membership normalized LMS algorithm with an adaptive step size, IEEE Signal Processing Lett. 5, no.5, (1998) 111-114.

[29] L. Guo, A. Ekpenyong and Y.F. Huang, Frequency-domain adaptive filtering: A set-membership approach, Proc. of the 37th Asilomar Conference on Signals, Systems and Computers, California, Nov., (2003).

[30] I. Yamada, K. Slavakis and K. Yamada, An efficient robust adaptive filtering algorithm based on parallel subgradient projection techniques, IEEE Trans. Signal Processing 50, no.5, (2002) 1091-1101.

[31] Y. Censor and S.A. Zenios, Parallel Optimization: Theory, Algorithms, and Applications (Oxford University Press, 1997).

[32] H.H. Bauschke and J.M. Borwein, On projection algorithms for solving convex feasibility problems, SIAM Review 38 (1996) 367-426.

[33] I. Ekeland and R. Temam, Convex Analysis and Variational Problems, Classics in Applied Mathematics 28 (SIAM, 1999).

[34] Y. Censor, A.N. Iusem and S.A. Zenios, An interior point method with Bregman functions for the variational inequality problem with paramonotone operators, Math. Programming 81 (1998) 373-400.

[35] D. Butnariu and A.N. Iusem, Totally Convex Functions for Fixed Point Computation and Infinite Dimensional Optimization (Kluwer Academic Publishers, 2000).

[36] K. Goebel and S. Reich, Uniform Convexity, Hyperbolic Geometry, and Nonexpansive Mappings (Dekker, New York and Basel, 1984).

[37] W. Takahashi, Nonlinear Functional Analysis: Fixed Point Theory and its Applications (Yokohama Publishers, 2000).

[38] E. Zeidler, Nonlinear Functional Analysis and its Applications, II/B: Nonlinear Monotone Operators (Springer-Verlag, 1990).
