Volume 2009, Article ID 957407, 47 pages, doi:10.1155/2009/957407
Review Article
Super-Relaxed (η)-Proximal Point Algorithms, Relaxed (η)-Proximal Point Algorithms, Linear Convergence Analysis, and Nonlinear
Variational Inclusions
Ravi P. Agarwal^{1,2} and Ram U. Verma^{1,3}

^1 Department of Mathematical Sciences, Florida Institute of Technology, Melbourne, FL 32901, USA
^2 Department of Mathematics and Statistics, King Fahd University of Petroleum and Minerals, Dhahran 31261, Saudi Arabia
^3 International Publications (USA), 12085 Lake Cypress Circle, Suite I109, Orlando, FL 32828, USA
Correspondence should be addressed to Ravi P. Agarwal, agarwal@fit.edu

Received 26 June 2009; Accepted 30 August 2009
Recommended by Lai Jiu Lin
We glance at recent advances in the general theory of maximal set-valued monotone mappings and their role in examining convex programming and the closely related field of nonlinear variational inequalities. We focus mostly on applications of the super-relaxed (η)-proximal point algorithm to solving a class of nonlinear variational inclusion problems, based on the notion of maximal (η)-monotonicity. The investigations highlighted in this communication are greatly influenced by the celebrated work of Rockafellar (1976), while others have played a significant part as well in generalizing the proximal point algorithm considered by Rockafellar (1976) to the case of the relaxed proximal point algorithm of Eckstein and Bertsekas (1992). Even for the linear convergence analysis of the over-relaxed (or super-relaxed) (η)-proximal point algorithm, the fundamental model for Rockafellar's case does the job. Furthermore, we attempt to explore possibilities of generalizing the Yosida regularization/approximation in light of maximal (η)-monotonicity, and then apply it to first-order evolution equations/inclusions.
Copyright © 2009 R. P. Agarwal and R. U. Verma. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
1. Introduction and Preliminaries
We begin with a real Hilbert space $X$ with the norm $\|\cdot\|$ and the inner product $\langle\cdot,\cdot\rangle$. We consider the general variational inclusion problem of the following form: find a solution to
$$0 \in M(x), \tag{1.1}$$
where $M : X \to 2^X$ is a set-valued mapping on $X$.
In the first part, Rockafellar [1] introduced the proximal point algorithm and examined its general convergence and rate of convergence while solving (1.1): he showed that when $M$ is maximal monotone, the sequence $\{x^k\}$ generated from an initial point $x^0$ by
$$x^{k+1} \approx P_k\left(x^k\right) \tag{1.2}$$
converges weakly to a solution of (1.1), provided that the approximation is made sufficiently accurate as the iteration proceeds, where $P_k = (I + c_k M)^{-1}$ for a sequence $\{c_k\}$ of positive real numbers bounded away from zero. In the second part, using the first part and further amending the proximal point algorithm, he succeeded in achieving linear convergence. It follows from (1.2) that $x^{k+1}$ is an approximate solution to the inclusion problem
$$0 \in M(x) + c_k^{-1}\left(x - x^k\right). \tag{1.3}$$
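As a toy illustration (ours, not from the paper), the exact proximal point iteration (1.2) can be sketched for the single-valued linear choice $M(x) = Ax$ with $A$ positive semidefinite, where the resolvent $(I + c_k M)^{-1}$ is just a linear solve; the function name and the test operator below are our own assumptions.

```python
import numpy as np

def proximal_point(A, x0, c=1.0, iters=60):
    """Exact proximal point iteration x^{k+1} = (I + c M)^{-1} x^k
    for the monotone operator M(x) = A x with A positive semidefinite."""
    n = len(x0)
    P = np.linalg.inv(np.eye(n) + c * A)  # resolvent of M = A with constant c
    x = np.array(x0, dtype=float)
    for _ in range(iters):
        x = P @ x  # x^{k+1} solves 0 in M(x) + c^{-1}(x - x^k) exactly
    return x

A = np.array([[2.0, 0.0], [0.0, 1.0]])    # symmetric PSD, hence monotone
x = proximal_point(A, [1.0, -1.0])
print(np.linalg.norm(x))                  # approaches 0, the unique zero of M
```

Each step contracts the error by the factor $1/(1 + c\lambda)$ along each eigendirection of $A$, which is the linear-rate behavior discussed below.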
As a matter of fact, Rockafellar demonstrated the weak convergence and the strong convergence separately in two theorems, but for the strong convergence a further imposition of the Lipschitz continuity of $M^{-1}$ at 0 plays the crucial part. Let us recall these results.
Theorem 1.1 (see [1]). Let $X$ be a real Hilbert space. Let $M : X \to 2^X$ be maximal monotone, and let $x^*$ be a zero of $M$. Let the sequence $\{x^k\}$ be generated by the iterative procedure
$$x^{k+1} \approx J_{c_k}^M\left(x^k\right) \tag{1.4}$$
such that
$$\left\|x^{k+1} - J_{c_k}^M\left(x^k\right)\right\| \le \epsilon_k, \tag{1.5}$$
where $J_{c_k}^M = (I + c_k M)^{-1}$, $\sum_{k=0}^{\infty} \epsilon_k < \infty$, and $\{c_k\}$ is bounded away from zero. Suppose that the sequence $\{x^k\}$ is bounded in the sense that there exists at least one solution to $0 \in M(x)$.

Then the sequence $\{x^k\}$ converges weakly to $x^*$ with $0 \in M(x^*)$ and
$$\lim_{k \to \infty} Q_k\left(x^k\right) = 0 \quad \text{for } Q_k = I - J_{c_k}^M. \tag{1.6}$$

Remark 1.2. Note that Rockafellar [1] pointed out by a counterexample that in Theorem 1.1 the condition
$$\sum_{k=0}^{\infty} \epsilon_k < \infty \tag{1.7}$$
is crucial; otherwise we may end up with a nonconvergent sequence even with just $\epsilon_k \to 0$ and $X$ one dimensional. Consider any maximal monotone mapping $M$ such that the set $M^{-1}(0) = \{x : 0 \in M(x)\}$, which is known always to be convex, contains more than one element. Then it turns out that $M^{-1}(0)$ contains a nonconvergent sequence $\{x^k\}$ such that
$$\left\|x^{k+1} - x^k\right\| \longrightarrow 0, \tag{1.8}$$
while
$$\sum_{k=0}^{\infty}\left\|x^{k+1} - x^k\right\| = \infty. \tag{1.9}$$
This situation arises when $M = \partial f$ and the convex function $f$ attains its minimum non-uniquely.
Next we look, unlike Theorem 1.1, at [1, Theorem 2], in which Rockafellar achieved linear convergence of the sequence by assuming instead the Lipschitz continuity of $M^{-1}$ at 0.
Theorem 1.3 (see [1]). Let $X$ be a real Hilbert space. Let $M : X \to 2^X$ be maximal monotone, and let $x^*$ be a zero of $M$. Let the sequence $\{x^k\}$ be generated by the iterative procedure
$$x^{k+1} \approx J_{c_k}^M\left(x^k\right) \tag{1.10}$$
such that
$$\left\|x^{k+1} - J_{c_k}^M\left(x^k\right)\right\| \le \delta_k\left\|x^{k+1} - x^k\right\|, \tag{1.11}$$
where $J_{c_k}^M = (I + c_k M)^{-1}$, $\sum_{k=0}^{\infty} \delta_k < \infty$, and $\{c_k\}$ is bounded away from zero. Suppose that the sequence $\{x^k\}$ is bounded in the sense that there exists at least one solution to $0 \in M(x)$. In addition, let $M^{-1}$ be Lipschitz continuous at 0 with modulus $a$, and
$$\mu_k = \frac{a}{\left(a^2 + c_k^2\right)^{1/2}} < 1. \tag{1.12}$$
Then the sequence $\{x^k\}$ converges linearly to $x^*$ with $0 \in M(x^*)$ and
$$\left\|x^{k+1} - x^*\right\| \le \theta_k\left\|x^k - x^*\right\| \quad \forall k \ge \bar{k}, \tag{1.13}$$
where
$$0 \le \theta_k = \frac{\mu_k + \delta_k}{1 - \delta_k} < 1. \tag{1.14}$$
Later on, Rockafellar [1] applied Theorem 1.1 to a minimization problem for a function $f : X \to (-\infty, \infty]$, where $f$ is proper, lower semicontinuous, and convex, by taking $M = \partial f$. It is well known that in this situation $\partial f$ is maximal monotone, and further
$$w \in \partial f(x) \Longleftrightarrow f\left(x'\right) \ge f(x) + \left\langle x' - x, w\right\rangle \quad \forall x' \tag{1.15}$$
or
$$\Longleftrightarrow x \in \arg\min\left(f - \langle\cdot, w\rangle\right). \tag{1.16}$$
As a specialization, we have
$$0 \in \partial f(x) \Longleftrightarrow x \in \arg\min f. \tag{1.17}$$
That means the proximal point algorithm for $M = \partial f$ is a minimizing method for $f$.
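To make the minimization interpretation concrete, here is a small sketch of our own (not from the paper): for $f(x) = |x|$ the resolvent $(I + c\,\partial f)^{-1}$ has the closed form of soft thresholding, so the proximal point algorithm visibly drives the iterate to the minimizer.

```python
import numpy as np

def prox_abs(x, c):
    """Resolvent (I + c * subdifferential of |.|)^{-1}: soft thresholding."""
    return np.sign(x) * np.maximum(np.abs(x) - c, 0.0)

# Proximal point algorithm with M = subdifferential of f minimizes f:
x = 5.0
for k in range(10):
    x = prox_abs(x, c=1.0)  # exact proximal step, shrinks |x| by c until 0
print(x)  # reaches 0, the unique minimizer of f(x) = |x|
```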
There is an abundance of literature on proximal point algorithms with applications, mostly following the work of Rockafellar [1], but we focus chiefly on the work of Eckstein and Bertsekas [2], who relaxed the proximal point algorithm in the following form and applied it to the Douglas-Rachford splitting method. Let us now have a look at the relaxed proximal point algorithm introduced and studied in [2].
Algorithm 1.4. Let $M : X \to 2^X$ be a set-valued maximal monotone mapping on $X$ with $0 \in \mathrm{range}(M)$, and let the sequence $\{x^k\}$ be generated by the iterative procedure
$$x^{k+1} = (1 - \alpha_k)x^k + \alpha_k w^k \quad \forall k \ge 0, \tag{1.18}$$
where $w^k$ is such that
$$\left\|w^k - (I + c_k M)^{-1}\left(x^k\right)\right\| \le \epsilon_k \quad \forall k \ge 0, \tag{1.19}$$
and $\{\epsilon_k\}, \{\alpha_k\}, \{c_k\} \subseteq (0, \infty)$ are scalar sequences.
As a matter of fact, Eckstein and Bertsekas [2] applied Algorithm 1.4 to approximate a weak solution to (1.1). In other words, they established Theorem 1.1 using the relaxed proximal point algorithm instead.
Theorem 1.5 (see [2, Theorem 3]). Let $M : X \to 2^X$ be a set-valued maximal monotone mapping on $X$ with $0 \in \mathrm{range}(M)$, and let the sequence $\{x^k\}$ be generated by Algorithm 1.4. If the scalar sequences $\{\epsilon_k\}$, $\{\alpha_k\}$, and $\{c_k\}$ satisfy
$$E_1 = \sum_{k=0}^{\infty} \epsilon_k < \infty, \quad \Delta_1 = \inf_k \alpha_k > 0, \quad \Delta_2 = \sup_k \alpha_k < 2, \quad \bar{c} = \inf_k c_k > 0, \tag{1.20}$$
then the sequence $\{x^k\}$ converges weakly to a zero of $M$.
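The relaxed update (1.18) interpolates, or with $\alpha_k > 1$ extrapolates, between the current iterate and the resolvent output. A minimal sketch under our own assumptions (linear monotone $M(x) = Ax$, exact resolvent steps so $\epsilon_k = 0$, constant $\alpha_k = 1.5 \in (0, 2)$):

```python
import numpy as np

def relaxed_ppa(A, x0, alpha=1.5, c=1.0, iters=80):
    """Relaxed proximal point algorithm (Eckstein-Bertsekas form) for M(x) = A x:
    x^{k+1} = (1 - alpha) x^k + alpha (I + c M)^{-1} x^k, with 0 < alpha < 2."""
    n = len(x0)
    P = np.linalg.inv(np.eye(n) + c * A)   # resolvent (I + c M)^{-1}
    x = np.array(x0, dtype=float)
    for _ in range(iters):
        w = P @ x                          # exact resolvent step (epsilon_k = 0)
        x = (1 - alpha) * x + alpha * w    # relaxation step (1.18)
    return x

A = np.array([[3.0, 0.0], [0.0, 1.0]])
x = relaxed_ppa(A, [2.0, 2.0])
print(np.linalg.norm(x))  # converges to the zero of M, as Theorem 1.5 predicts
```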
Convergence analysis for Algorithm 1.4 is achieved using the firm nonexpansiveness of the resolvent operator $(I + c_k M)^{-1}$. Somehow, they did not consider applying Algorithm 1.4 to Theorem 1.3, that is, to the case of linear convergence.

The mere nonexpansiveness of the resolvent operator $(I + c_k M)^{-1}$ poses the prime difficulty for algorithmic convergence, and this may well have been what steered Rockafellar to the Lipschitz continuity of $M^{-1}$ instead. That is why the Yosida approximation turned out to be more effective in this scenario, because the Yosida approximation
$$M_{c_k} = c_k^{-1}\left(I - (I + c_k M)^{-1}\right) \tag{1.21}$$
takes care of the Lipschitz continuity issue.
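The Yosida approximation (1.21) can be computed explicitly for a linear monotone operator, which makes its Lipschitz behavior easy to inspect. The following check is our own construction (symmetric positive definite test matrix, random probe pairs), verifying the $1/c$ Lipschitz bound that appears in (1.25) below:

```python
import numpy as np

rng = np.random.default_rng(0)
A = np.array([[2.0, 1.0], [1.0, 2.0]])   # symmetric positive definite, hence monotone
c = 0.5
R = np.linalg.inv(np.eye(2) + c * A)      # resolvent (I + c M)^{-1}
Y = (np.eye(2) - R) / c                   # Yosida approximation M_c = c^{-1}(I - R)

# M_c is single valued and Lipschitz even though it regularizes M;
# check the 1/c Lipschitz bound on random pairs:
for _ in range(100):
    x, y = rng.normal(size=2), rng.normal(size=2)
    assert np.linalg.norm(Y @ (x - y)) <= (1 / c) * np.linalg.norm(x - y) + 1e-12
print("Lipschitz bound 1/c verified")
```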
As we look back into the literature, general maximal monotonicity has played a major role in the study of convex programming as well as variational inequalities and inclusions. Later it turned out that one of the most fundamental algorithms applied to solve these problems was the proximal point algorithm. In [2], Eckstein and Bertsekas have shown that much of the theory of the relaxed proximal point algorithm and related algorithms can be carried over to the Douglas-Rachford splitting method and its specializations, for instance, the alternating direction method of multipliers.

Just recently, Verma [3] generalized the relaxed proximal point algorithm and applied it to the approximation solvability of variational inclusion problems of the form (1.1). Recently, a great deal of research on the solvability of inclusion problems has been carried out using resolvent operator techniques, which have applications to other problems such as equilibrium problems in economics, optimization and control theory, operations research, and mathematical programming.

In this survey, we first discuss in detail the history of proximal point algorithms with their applications to general nonlinear variational inclusion problems, and then we recall some significant developments, especially the relaxation of proximal point algorithms with applications to the Douglas-Rachford splitting method. At the second stage, we turn our attention to over-relaxed proximal point algorithms and their contribution to the linear convergence. We start with some introductory material on the over-relaxed $\eta$-proximal point algorithm based on the notion of maximal $\eta$-monotonicity, and recall some investigations on the approximation solvability of a general class of nonlinear inclusion problems involving maximal $\eta$-monotone mappings in a Hilbert space setting. As a matter of fact, we examine the convergence analysis of the over-relaxed $\eta$-proximal point algorithm for solving a class of nonlinear inclusions. Also, several results on the generalized firm nonexpansiveness and the generalized resolvent mapping are given. Furthermore, we explore the real impact of recently obtained results on the celebrated work of Rockafellar, most importantly in the case of over-relaxed (or super-relaxed) proximal point algorithms. For more details, we refer the reader to [1-55].
We note that the solution set of (1.1) turns out to be the same as that of the Yosida inclusion
$$0 \in M_\rho(x), \tag{1.22}$$
where $M_\rho = M(I + \rho M)^{-1}$ is the Yosida regularization of $M$, while there is an equivalent form $\rho^{-1}(I - (I + \rho M)^{-1})$, which is characterized as the Yosida approximation of $M$ with parameter $\rho > 0$. It seems in certain ways that it is easier to solve the Yosida inclusion than (1.1). In other words, $M_\rho$ provides better solvability conditions under the right choice of $\rho$ than $M$ itself. To prove this assertion, let us recall the following existence theorem.
Theorem 1.6. Let $M : X \to 2^X$ be a set-valued maximal monotone mapping on $X$. Then the following statements are equivalent:

(i) an element $u \in X$ is a solution to $0 \in M_\rho(u)$;

(ii) $u = (I + \rho M)^{-1}(u)$.

Proof. Assume that $u$ is a solution to $0 \in M_\rho(u) = M(I + \rho M)^{-1}(u)$. Then we have
$$\begin{aligned}
0 \in M\left((I + \rho M)^{-1}(u)\right) &\Longrightarrow 0 \in \rho M\left((I + \rho M)^{-1}(u)\right)\\
&\Longrightarrow (I + \rho M)^{-1}(u) \in (I + \rho M)\left((I + \rho M)^{-1}(u)\right)\\
&\Longrightarrow u = (I + \rho M)^{-1}(u).
\end{aligned} \tag{1.23}$$
On the other hand, $M_\rho$ has also been applied to first-order evolution equations/inclusions in Hilbert space as well as in Banach space settings. Since, in our present situation, the resolvent operator $(I + \rho M)^{-1}$ is empowered by $\eta$-maximal monotonicity, the Yosida approximation can be generalized to the context of solving first-order evolution equations/inclusions. In Zeidler [52, Lemma 31.7], it is shown that the Yosida approximation $M_\rho$ is $2\rho^{-1}$-Lipschitz continuous, that is,
$$\left\|M_\rho(x) - M_\rho(y)\right\| \le \frac{2}{\rho}\left\|x - y\right\| \quad \forall x, y \in D(M), \tag{1.24}$$
where this inequality is based on the nonexpansiveness of the resolvent operator $R_\rho^M = (I + \rho M)^{-1}$, though the result does not seem to be much application oriented. If instead we apply the firm nonexpansiveness of the resolvent operator $R_\rho^M = (I + \rho M)^{-1}$, we can achieve, as applied in [5], more application-oriented results:
$$\left\langle x - y, M_\rho(x) - M_\rho(y)\right\rangle \ge \rho\left\|M_\rho(x) - M_\rho(y)\right\|^2, \qquad \left\|M_\rho(x) - M_\rho(y)\right\| \le \frac{1}{\rho}\left\|x - y\right\| \quad \forall x, y \in D(M), \tag{1.25}$$
where the Lipschitz constant is $1/\rho$.
Proof. For any $x, y \in D(M)$, we have
$$x - y = \rho\left(M_\rho(x) - M_\rho(y)\right) + \left(R_\rho^M(x) - R_\rho^M(y)\right). \tag{1.26}$$
Based on this equality and the firm nonexpansiveness of $R_\rho^M$, we derive
$$\begin{aligned}
\left\langle x - y, M_\rho(x) - M_\rho(y)\right\rangle
&= \left\langle \rho\left(M_\rho(x) - M_\rho(y)\right) + \left(R_\rho^M(x) - R_\rho^M(y)\right), M_\rho(x) - M_\rho(y)\right\rangle\\
&= \rho\left\|M_\rho(x) - M_\rho(y)\right\|^2 + \left\langle R_\rho^M(x) - R_\rho^M(y), M_\rho(x) - M_\rho(y)\right\rangle\\
&= \rho\left\|M_\rho(x) - M_\rho(y)\right\|^2 + \frac{1}{\rho}\left\langle R_\rho^M(x) - R_\rho^M(y), (x - y) - \left(R_\rho^M(x) - R_\rho^M(y)\right)\right\rangle\\
&\ge \rho\left\|M_\rho(x) - M_\rho(y)\right\|^2 - \frac{1}{\rho}\left\|R_\rho^M(x) - R_\rho^M(y)\right\|^2 + \frac{1}{\rho}\left\|R_\rho^M(x) - R_\rho^M(y)\right\|^2\\
&= \rho\left\|M_\rho(x) - M_\rho(y)\right\|^2. \tag{1.27}
\end{aligned}$$
Thus, we have
$$\left\langle x - y, M_\rho(x) - M_\rho(y)\right\rangle \ge \rho\left\|M_\rho(x) - M_\rho(y)\right\|^2. \tag{1.28}$$
This completes the proof.
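The cocoercivity inequality (1.28) can be probed numerically. The sketch below is our own construction: we take a nonsymmetric but monotone linear $M$ (its symmetric part is the identity), form the Yosida approximation, and check $\langle x - y, M_\rho(x) - M_\rho(y)\rangle \ge \rho\|M_\rho(x) - M_\rho(y)\|^2$ on random difference vectors.

```python
import numpy as np

rng = np.random.default_rng(1)
A = np.array([[1.0, -2.0], [2.0, 1.0]])  # monotone: symmetric part is I (PSD)
rho = 0.7
R = np.linalg.inv(np.eye(2) + rho * A)   # firmly nonexpansive resolvent
Y = (np.eye(2) - R) / rho                # Yosida approximation M_rho

# rho-cocoercivity (1.28): <x - y, M_rho x - M_rho y> >= rho ||M_rho x - M_rho y||^2
ok = True
for _ in range(200):
    d = rng.normal(size=2)               # d plays the role of x - y
    lhs = d @ (Y @ d)
    rhs = rho * np.linalg.norm(Y @ d) ** 2
    ok = ok and lhs >= rhs - 1e-10
print(ok)
```

Note the skew part of $A$ makes plain strong monotonicity of $M_\rho$ weaker than one might guess, yet the cocoercivity bound still holds, which is exactly why (1.28) is the more useful form.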
We note that from the applications' point of view, the result
$$\left\langle x - y, M_\rho(x) - M_\rho(y)\right\rangle \ge \rho\left\|M_\rho(x) - M_\rho(y)\right\|^2, \tag{1.29}$$
that is, $M_\rho$ is $\rho$-cocoercive, is relatively more useful than the nonexpansive form
$$\left\|M_\rho(x) - M_\rho(y)\right\| \le \frac{1}{\rho}\left\|x - y\right\| \quad \forall x, y \in D(M). \tag{1.30}$$
It is well known that when $M$ is maximal monotone, the resolvent operator $R_\rho^M = (I + \rho M)^{-1}$ is single valued and globally Lipschitz continuous (in fact, nonexpansive), while the Yosida approximation $M_\rho$ is globally Lipschitz continuous with the best constant $\rho^{-1}$. Furthermore, the inverse resolvent identity is satisfied:
$$I - (I + \rho M)^{-1} = \left(I + (\rho M)^{-1}\right)^{-1}. \tag{1.31}$$
Indeed, the Yosida approximation $M_\rho = \rho^{-1}(I - (I + \rho M)^{-1})$ and its equivalent form $M(I + \rho M)^{-1}$ are related to this identity. Let us consider
$$I - (I + \rho M)^{-1} = \left(I + (\rho M)^{-1}\right)^{-1}. \tag{1.32}$$
Suppose that $u \in (I - (I + \rho M)^{-1})(w)$; then we have
$$\begin{aligned}
u \in \left(I - (I + \rho M)^{-1}\right)(w)
&\Longleftrightarrow u \in w - (I + \rho M)^{-1}(w)\\
&\Longleftrightarrow w - u \in (I + \rho M)^{-1}(w)\\
&\Longleftrightarrow w \in (w - u) + \rho M(w - u)\\
&\Longleftrightarrow u \in \rho M(w - u)\\
&\Longleftrightarrow w - u \in (\rho M)^{-1}(u)\\
&\Longleftrightarrow w \in \left(I + (\rho M)^{-1}\right)(u)\\
&\Longleftrightarrow u \in \left(I + (\rho M)^{-1}\right)^{-1}(w).
\end{aligned} \tag{1.33}$$
On the other hand, we have the inverse resolvent identity that lays the foundation of the Yosida approximation.

Lemma 1.7 (see [26, Lemma 12.14]). All mappings $M : X \to 2^X$ satisfy
$$\left(\rho I + M^{-1}\right)^{-1} = \rho^{-1}\left(I - (I + \rho M)^{-1}\right) \quad \text{for } \rho > 0. \tag{1.34}$$

Proof. We include the proof, though it is similar to that of the above identity. Assume that $u \in \rho^{-1}(I - (I + \rho M)^{-1})(w)$; then we have
$$\begin{aligned}
\rho u \in \left(I - (I + \rho M)^{-1}\right)(w)
&\Longleftrightarrow \rho u \in w - (I + \rho M)^{-1}(w)\\
&\Longleftrightarrow w - \rho u \in (I + \rho M)^{-1}(w)\\
&\Longleftrightarrow w \in (w - \rho u) + \rho M(w - \rho u)\\
&\Longleftrightarrow u \in M(w - \rho u)\\
&\Longleftrightarrow w - \rho u \in M^{-1}(u)\\
&\Longleftrightarrow w \in \left(\rho I + M^{-1}\right)(u)\\
&\Longleftrightarrow u \in \left(\rho I + M^{-1}\right)^{-1}(w),
\end{aligned} \tag{1.35}$$
which is the required assertion.
Note that when $M : X \to 2^X$ is maximal monotone, the mappings
$$I - (I + \rho M)^{-1}, \qquad \left(I + (\rho M)^{-1}\right)^{-1} \tag{1.36}$$
are single valued, in fact maximal monotone and nonexpansive.
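Since Lemma 1.7 holds for all mappings, it can be checked directly for an invertible linear $M$, where every operator in (1.34) is an explicit matrix. This verification sketch is our own construction:

```python
import numpy as np

A = np.array([[2.0, 0.5], [0.5, 1.0]])   # invertible monotone linear M(x) = A x
rho = 0.3
I = np.eye(2)

lhs = (I - np.linalg.inv(I + rho * A)) / rho      # rho^{-1}(I - (I + rho M)^{-1})
rhs = np.linalg.inv(rho * I + np.linalg.inv(A))   # (rho I + M^{-1})^{-1}
print(np.allclose(lhs, rhs))  # True: the identity of Lemma 1.7 for linear M
```

Algebraically, both sides reduce to $(I + \rho A)^{-1}A$, which is the Yosida approximation itself.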
The contents of the paper are organized as follows. Section 1 deals with a general historical development of the relaxed proximal point algorithm and its variants in conjunction with maximal $\eta$-monotonicity, and with the approximation solvability of a class of nonlinear inclusion problems using the convergence analysis for the proximal point algorithm as well as for the relaxed proximal point algorithm. Section 2 introduces and derives some results unifying maximal $\eta$-monotonicity and the generalized firm nonexpansiveness of the generalized resolvent operator. In Section 3, the role of the over-relaxed $\eta$-proximal point algorithm is examined in detail in terms of its applications to approximating the solution of the inclusion problem (1.1). Finally, Section 4 deals with some important specializations that connect the results on general maximal monotonicity, especially to several aspects of linear convergence.
2. General Maximal η-Monotonicity
In this section we discuss some results based on basic properties of maximal $\eta$-monotonicity, and then we derive some results involving $\eta$-monotonicity and the generalized firm nonexpansiveness. Let $X$ denote a real Hilbert space with the norm $\|\cdot\|$ and inner product $\langle\cdot,\cdot\rangle$. Let $M : X \to 2^X$ be a multivalued mapping on $X$. We will denote both the map $M$ and its graph by $M$, that is, the set $\{(x, y) : y \in M(x)\}$. This is equivalent to stating that a mapping is any subset $M$ of $X \times X$, with $M(x) = \{y : (x, y) \in M\}$. If $M$ is single valued, we will still use $M(x)$ to represent the unique $y$ such that $(x, y) \in M$ rather than the singleton set $\{y\}$; the interpretation will depend on the context. The domain of a map $M$ is defined as its projection onto the first argument:
$$\mathrm{dom}(M) = \left\{x \in X : \exists\, y \in X : (x, y) \in M\right\} = \left\{x \in X : M(x) \neq \emptyset\right\}. \tag{2.1}$$
$\mathrm{dom}(M) = X$ will denote the full domain of $M$, and the range of $M$ is defined by
$$\mathrm{range}(M) = \left\{y \in X : \exists\, x \in X : (x, y) \in M\right\}. \tag{2.2}$$
The inverse $M^{-1}$ of $M$ is $\{(y, x) : (x, y) \in M\}$. For a real number $\rho$ and a mapping $M$, let $\rho M = \{(x, \rho y) : (x, y) \in M\}$. If $L$ and $M$ are any mappings, we define
$$L + M = \left\{(x, y + z) : (x, y) \in L,\ (x, z) \in M\right\}. \tag{2.3}$$
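The graph calculus (2.1)-(2.3) can be mirrored directly with finite toy graphs, which is a handy sanity check; the specific pairs below are our own example, not from the paper:

```python
# A set-valued mapping is modeled by its graph: a set of (x, y) pairs.
M = {(0, 1), (0, 2), (1, 3)}
L = {(0, 5), (1, 7)}

dom_M = {x for (x, y) in M}                     # projection onto first argument (2.1)
range_M = {y for (x, y) in M}                   # projection onto second argument (2.2)
M_inv = {(y, x) for (x, y) in M}                # inverse: swap the pairs
L_plus_M = {(x, yl + ym)                        # pointwise sum of graphs (2.3)
            for (x, yl) in L for (xm, ym) in M if x == xm}

print(dom_M, range_M)     # {0, 1} {1, 2, 3}
print(L_plus_M)           # {(0, 6), (0, 7), (1, 10)}
```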
Definition 2.1. Let $M : X \to 2^X$ be a multivalued mapping on $X$. The map $M$ is said to be

(i) monotone if
$$\left\langle u^* - v^*, u - v\right\rangle \ge 0 \quad \forall (u, u^*), (v, v^*) \in \mathrm{graph}(M), \tag{2.4}$$

(ii) $r$-strongly monotone if there exists a positive constant $r$ such that
$$\left\langle u^* - v^*, u - v\right\rangle \ge r\left\|u - v\right\|^2 \quad \forall (u, u^*), (v, v^*) \in \mathrm{graph}(M), \tag{2.5}$$

(iii) strongly monotone if
$$\left\langle u^* - v^*, u - v\right\rangle \ge \left\|u - v\right\|^2 \quad \forall (u, u^*), (v, v^*) \in \mathrm{graph}(M), \tag{2.6}$$

(iv) $r$-strongly pseudomonotone if
$$\left\langle v^*, u - v\right\rangle \ge 0 \tag{2.7}$$
implies
$$\left\langle u^*, u - v\right\rangle \ge r\left\|u - v\right\|^2 \quad \forall (u, u^*), (v, v^*) \in \mathrm{graph}(M), \tag{2.8}$$

(v) pseudomonotone if
$$\left\langle v^*, u - v\right\rangle \ge 0 \tag{2.9}$$
implies
$$\left\langle u^*, u - v\right\rangle \ge 0 \quad \forall (u, u^*), (v, v^*) \in \mathrm{graph}(M), \tag{2.10}$$

(vi) $m$-relaxed monotone if there exists a positive constant $m$ such that
$$\left\langle u^* - v^*, u - v\right\rangle \ge -m\left\|u - v\right\|^2 \quad \forall (u, u^*), (v, v^*) \in \mathrm{graph}(M), \tag{2.11}$$

(vii) cocoercive if
$$\left\langle u^* - v^*, u - v\right\rangle \ge \left\|u^* - v^*\right\|^2 \quad \forall (u, u^*), (v, v^*) \in \mathrm{graph}(M), \tag{2.12}$$

(viii) $c$-cocoercive if there is a positive constant $c$ such that
$$\left\langle u^* - v^*, u - v\right\rangle \ge c\left\|u^* - v^*\right\|^2 \quad \forall (u, u^*), (v, v^*) \in \mathrm{graph}(M). \tag{2.13}$$
Definition 2.2. Let $M : X \to 2^X$ be a mapping on $X$. The map $M$ is said to be

(i) nonexpansive if
$$\left\|u^* - v^*\right\| \le \left\|u - v\right\| \quad \forall (u, u^*), (v, v^*) \in \mathrm{graph}(M), \tag{2.14}$$

(ii) firmly nonexpansive if
$$\left\|u^* - v^*\right\|^2 \le \left\langle u^* - v^*, u - v\right\rangle \quad \forall (u, u^*), (v, v^*) \in \mathrm{graph}(M), \tag{2.15}$$

(iii) $c$-firmly nonexpansive if there exists a constant $c > 0$ such that
$$\left\|u^* - v^*\right\|^2 \le c\left\langle u^* - v^*, u - v\right\rangle \quad \forall (u, u^*), (v, v^*) \in \mathrm{graph}(M). \tag{2.16}$$

In light of Definitions 2.1(vii) and 2.2(ii), the notions of cocoerciveness and firm nonexpansiveness coincide, but they differ in applications depending on the context.
Definition 2.3. A map $\eta : X \times X \to X$ is said to be

(i) monotone if
$$\left\langle x - y, \eta(x, y)\right\rangle \ge 0 \quad \forall x, y \in X, \tag{2.17}$$

(ii) $t$-strongly monotone if there exists a positive constant $t$ such that
$$\left\langle x - y, \eta(x, y)\right\rangle \ge t\left\|x - y\right\|^2 \quad \forall x, y \in X, \tag{2.18}$$

(iii) strongly monotone if
$$\left\langle x - y, \eta(x, y)\right\rangle \ge \left\|x - y\right\|^2 \quad \forall x, y \in X, \tag{2.19}$$

(iv) $\tau$-Lipschitz continuous if there exists a positive constant $\tau$ such that
$$\left\|\eta(x, y)\right\| \le \tau\left\|x - y\right\|. \tag{2.20}$$
Definition 2.4. Let $M : X \to 2^X$ be a multivalued mapping on $X$, and let $\eta : X \times X \to X$ be another mapping. The map $M$ is said to be

(i) $\eta$-monotone if
$$\left\langle u^* - v^*, \eta(u, v)\right\rangle \ge 0 \quad \forall (u, u^*), (v, v^*) \in \mathrm{graph}(M), \tag{2.21}$$

(ii) $(r, \eta)$-strongly monotone if there exists a positive constant $r$ such that
$$\left\langle u^* - v^*, \eta(u, v)\right\rangle \ge r\left\|u - v\right\|^2 \quad \forall (u, u^*), (v, v^*) \in \mathrm{graph}(M), \tag{2.22}$$

(iii) $\eta$-strongly monotone if
$$\left\langle u^* - v^*, \eta(u, v)\right\rangle \ge \left\|u - v\right\|^2 \quad \forall (u, u^*), (v, v^*) \in \mathrm{graph}(M), \tag{2.23}$$

(iv) $(r, \eta)$-strongly pseudomonotone if
$$\left\langle v^*, \eta(u, v)\right\rangle \ge 0 \tag{2.24}$$
implies
$$\left\langle u^*, \eta(u, v)\right\rangle \ge r\left\|u - v\right\|^2 \quad \forall (u, u^*), (v, v^*) \in \mathrm{graph}(M), \tag{2.25}$$

(v) $\eta$-pseudomonotone if
$$\left\langle v^*, \eta(u, v)\right\rangle \ge 0 \tag{2.26}$$
implies
$$\left\langle u^*, \eta(u, v)\right\rangle \ge 0 \quad \forall (u, u^*), (v, v^*) \in \mathrm{graph}(M), \tag{2.27}$$

(vi) $(m, \eta)$-relaxed monotone if there exists a positive constant $m$ such that
$$\left\langle u^* - v^*, \eta(u, v)\right\rangle \ge -m\left\|u - v\right\|^2 \quad \forall (u, u^*), (v, v^*) \in \mathrm{graph}(M), \tag{2.28}$$

(vii) $(c, \eta)$-cocoercive if there is a positive constant $c$ such that
$$\left\langle u^* - v^*, \eta(u, v)\right\rangle \ge c\left\|u^* - v^*\right\|^2 \quad \forall (u, u^*), (v, v^*) \in \mathrm{graph}(M). \tag{2.29}$$
Definition 2.5. A map $M : X \to 2^X$ is said to be maximal $\eta$-monotone if

(1) $M$ is $\eta$-monotone,

(2) $R(I + cM) = X$ for $c > 0$.

Proposition 2.6. Let $\eta : X \times X \to X$ be a $t$-strongly monotone mapping, and let $M : X \to 2^X$ be a maximal $\eta$-monotone mapping. Then $I + cM$ is maximal $\eta$-monotone for $c > 0$, where $I$ is the identity mapping.

Proof. The proof follows on applying Definition 2.5.
Proposition 2.7 (see [4]). Let $\eta : X \times X \to X$ be $t$-strongly monotone, and let $M : X \to 2^X$ be maximal $\eta$-monotone. Then the generalized resolvent operator $(I + cM)^{-1}$ is single valued, where $I$ is the identity mapping.

Proof. For a given $u \in X$, consider $x, y \in (I + cM)^{-1}(u)$ for $c > 0$. Since $M$ is maximal $\eta$-monotone, we have
$$\frac{1}{c}(-x + u) \in M(x), \qquad \frac{1}{c}(-y + u) \in M(y). \tag{2.30}$$
Now using the $\eta$-monotonicity of $M$, it follows that
$$\left\langle \frac{1}{c}\left[(-x + u) - (-y + u)\right], \eta(x, y)\right\rangle = \frac{1}{c}\left\langle y - x, \eta(x, y)\right\rangle \ge 0. \tag{2.31}$$
Since $\eta$ is $t$-strongly monotone, this implies $x = y$. Thus, $(I + cM)^{-1}$ is single valued.
Definition 2.8. Let $\eta : X \times X \to X$ be $t$-strongly monotone, and let $M : X \to 2^X$ be maximal $\eta$-monotone. Then the generalized resolvent operator $J_c^{M,\eta} : X \to X$ is defined by
$$J_c^{M,\eta}(u) = (I + cM)^{-1}(u) \quad \text{for } c > 0. \tag{2.32}$$

Proposition 2.9 (see [4]). Let $X$ be a real Hilbert space, let $M : X \to 2^X$ be maximal $\eta$-monotone, and let $\eta : X \times X \to X$ be $t$-strongly monotone. Then the resolvent operator associated with $M$ and defined by
$$J_\rho^{M,\eta}(u) = (I + \rho M)^{-1}(u) \quad \forall u \in X \tag{2.33}$$
satisfies
$$\left\langle u - v, \eta\left(J_\rho^{M,\eta}(u), J_\rho^{M,\eta}(v)\right)\right\rangle \ge t\left\|J_\rho^{M,\eta}(u) - J_\rho^{M,\eta}(v)\right\|^2. \tag{2.34}$$

Proof. For any $u, v \in X$, it follows from the definition of the resolvent operator $J_\rho^{M,\eta}$ that
$$\frac{1}{\rho}\left(u - J_\rho^{M,\eta}(u)\right) \in M\left(J_\rho^{M,\eta}(u)\right), \qquad \frac{1}{\rho}\left(v - J_\rho^{M,\eta}(v)\right) \in M\left(J_\rho^{M,\eta}(v)\right). \tag{2.35}$$
Since $M$ is $\eta$-monotone, we have
$$\frac{1}{\rho}\left\langle (u - v) - \left(J_\rho^{M,\eta}(u) - J_\rho^{M,\eta}(v)\right), \eta\left(J_\rho^{M,\eta}(u), J_\rho^{M,\eta}(v)\right)\right\rangle \ge 0. \tag{2.36}$$
In light of (2.36), we have
$$\left\langle u - v, \eta\left(J_\rho^{M,\eta}(u), J_\rho^{M,\eta}(v)\right)\right\rangle \ge \left\langle J_\rho^{M,\eta}(u) - J_\rho^{M,\eta}(v), \eta\left(J_\rho^{M,\eta}(u), J_\rho^{M,\eta}(v)\right)\right\rangle \ge t\left\|J_\rho^{M,\eta}(u) - J_\rho^{M,\eta}(v)\right\|^2. \tag{2.37}$$
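For the standard choice $\eta(x, y) = x - y$ (so $t = 1$), inequality (2.34) reduces to the firm nonexpansiveness of the ordinary resolvent, which can be checked numerically. The test operator and probe scheme below are our own sketch:

```python
import numpy as np

rng = np.random.default_rng(2)
A = np.array([[2.0, 0.0], [0.0, 3.0]])   # monotone linear M(x) = A x
rho = 1.0
R = np.linalg.inv(np.eye(2) + rho * A)   # resolvent; for eta(x, y) = x - y this
                                         # is the generalized resolvent with t = 1

# (2.34) with eta(x, y) = x - y reads <u - v, J u - J v> >= ||J u - J v||^2,
# i.e., firm nonexpansiveness of the resolvent.
ok = True
for _ in range(200):
    d = rng.normal(size=2)               # d plays the role of u - v
    Jd = R @ d
    ok = ok and d @ Jd >= np.linalg.norm(Jd) ** 2 - 1e-12
print(ok)
```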
Proposition 2.10 (see [4]). Let $X$ be a real Hilbert space, let $M : X \to 2^X$ be maximal $\eta$-monotone, and let $\eta : X \times X \to X$ be $t$-strongly monotone.

If, in addition (for $\gamma > 0$),
$$\left\langle u - v, J_\rho^{M,\eta}(u) - J_\rho^{M,\eta}(v)\right\rangle \ge \gamma\left\langle u - v, \eta\left(J_\rho^{M,\eta}(u), J_\rho^{M,\eta}(v)\right)\right\rangle \quad \forall u, v \in X, \tag{2.38}$$
then, for $J_k^* = I - J_\rho^{M,\eta}$, one has (for $t \ge 1$)
$$\left\langle u - v, J_k^*(u) - J_k^*(v)\right\rangle \ge \frac{\gamma t - 1}{2\gamma t - 1}\left\|u - v\right\|^2 + \frac{\gamma t}{2\gamma t - 1}\left\|J_k^*(u) - J_k^*(v)\right\|^2, \tag{2.39}$$
where
$$J_\rho^{M,\eta}(u) = (I + \rho M)^{-1}(u) \quad \forall u \in X. \tag{2.40}$$

Proof. We include the proof for the sake of completeness. To prove (2.39), we apply (2.38) to Proposition 2.9 and get
$$\left\langle u - v, J_\rho^{M,\eta}(u) - J_\rho^{M,\eta}(v)\right\rangle \ge \gamma t\left\|J_\rho^{M,\eta}(u) - J_\rho^{M,\eta}(v)\right\|^2. \tag{2.41}$$
It further follows that
$$\left\langle u - v, (u - v) - \left(J_k^*(u) - J_k^*(v)\right)\right\rangle \ge \gamma t\left[\left\|J_k^*(u) - J_k^*(v)\right\|^2 + \left\|u - v\right\|^2 - 2\left\langle J_k^*(u) - J_k^*(v), u - v\right\rangle\right], \tag{2.42}$$
and rearranging the terms yields (2.39).
When $\gamma = 1$ and $t > 1$ in Proposition 2.10, we have the following.

Proposition 2.11. Let $X$ be a real Hilbert space, let $M : X \to 2^X$ be maximal $\eta$-monotone, and let $\eta : X \times X \to X$ be $t$-strongly monotone.

If, in addition, one supposes that
$$\left\langle u - v, J_\rho^{M,\eta}(u) - J_\rho^{M,\eta}(v)\right\rangle \ge \left\langle u - v, \eta\left(J_\rho^{M,\eta}(u), J_\rho^{M,\eta}(v)\right)\right\rangle \quad \forall u, v \in X, \tag{2.43}$$
then, for $J_k^* = I - J_\rho^{M,\eta}$, one has (for $t > 1$)
$$\left\langle u - v, J_k^*(u) - J_k^*(v)\right\rangle \ge \frac{t - 1}{2t - 1}\left\|u - v\right\|^2 + \frac{t}{2t - 1}\left\|J_k^*(u) - J_k^*(v)\right\|^2, \tag{2.44}$$
where
$$J_\rho^{M,\eta}(u) = (I + \rho M)^{-1}(u) \quad \forall u \in X. \tag{2.45}$$

For $t = 1$ and $\gamma > 1$ in Proposition 2.10, we find a result of interest as follows.
Proposition 2.12. Let $X$ be a real Hilbert space, let $M : X \to 2^X$ be maximal $\eta$-monotone, and let $\eta : X \times X \to X$ be strongly monotone.

If, in addition, one supposes (for $\gamma > 1$) that
$$\left\langle u - v, J_\rho^{M,\eta}(u) - J_\rho^{M,\eta}(v)\right\rangle \ge \gamma\left\langle u - v, \eta\left(J_\rho^{M,\eta}(u), J_\rho^{M,\eta}(v)\right)\right\rangle \quad \forall u, v \in X, \tag{2.46}$$
then, for $J_k^* = I - J_\rho^{M,\eta}$, one has
$$\left\langle u - v, J_k^*(u) - J_k^*(v)\right\rangle \ge \frac{\gamma - 1}{2\gamma - 1}\left\|u - v\right\|^2 + \frac{\gamma}{2\gamma - 1}\left\|J_k^*(u) - J_k^*(v)\right\|^2, \tag{2.47}$$
where
$$J_\rho^{M,\eta}(u) = (I + \rho M)^{-1}(u) \quad \forall u \in X. \tag{2.48}$$

For $\gamma t = 1$ in Proposition 2.10, we have the following result.
Proposition 2.13. Let $X$ be a real Hilbert space, let $M : X \to 2^X$ be maximal $\eta$-monotone, and let $\eta : X \times X \to X$ be strongly monotone.

If, in addition, one assumes that
$$\left\langle u - v, J_\rho^{M,\eta}(u) - J_\rho^{M,\eta}(v)\right\rangle \ge \left\langle u - v, \eta\left(J_\rho^{M,\eta}(u), J_\rho^{M,\eta}(v)\right)\right\rangle \quad \forall u, v \in X, \tag{2.49}$$
then, for $J_k^* = I - J_\rho^{M,\eta}$, one has
$$\left\langle u - v, J_k^*(u) - J_k^*(v)\right\rangle \ge \left\|J_k^*(u) - J_k^*(v)\right\|^2, \tag{2.50}$$
where
$$J_\rho^{M,\eta}(u) = (I + \rho M)^{-1}(u) \quad \forall u \in X. \tag{2.51}$$
3. The Over-Relaxed (η)-Proximal Point Algorithm
This section deals with the over-relaxed $\eta$-proximal point algorithm and its application to the approximation solvability of the inclusion problem (1.1) based on maximal $\eta$-monotonicity. Furthermore, some results connecting the $\eta$-monotonicity and the corresponding resolvent operator are established, generalizing the results on the firm nonexpansiveness [2], while auxiliary results on maximal $\eta$-monotonicity and general maximal monotonicity are obtained.
Theorem 3.1. Let $X$ be a real Hilbert space, and let $M : X \to 2^X$ be maximal $\eta$-monotone. Then the following statements are mutually equivalent:

(i) an element $u \in X$ is a solution to (1.1);

(ii) for $u \in X$, one has
$$u = J_c^{M,\eta}(u) \quad \text{for } c > 0, \tag{3.1}$$
where
$$J_c^{M,\eta}(u) = (I + cM)^{-1}(u). \tag{3.2}$$

Proof. The assertion follows from the definition of the generalized resolvent operator corresponding to $M$.

Note that Theorem 3.1 generalizes [2, Lemma 2] to the case of a maximal $\eta$-monotone mapping.

Next, we present a generalization of the relaxed proximal point algorithm [3] based on maximal $\eta$-monotonicity.
Algorithm 3.2 (see [4]). Let $M : X \to 2^X$ be a set-valued maximal $\eta$-monotone mapping on $X$ with $0 \in \mathrm{range}(M)$, and let the sequence $\{x^k\}$ be generated by the iterative procedure
$$x^{k+1} = (1 - \alpha_k)x^k + \alpha_k y^k \quad \forall k \ge 0, \tag{3.3}$$
where $y^k$ satisfies
$$\left\|y^k - J_{c_k}^{M,\eta}\left(x^k\right)\right\| \le \delta_k\left\|y^k - x^k\right\|, \tag{3.4}$$
with $J_{c_k}^{M,\eta} = (I + c_k M)^{-1}$, $\delta_k \to 0$, and
$$y^{k+1} = (1 - \alpha_k)x^k + \alpha_k J_{c_k}^{M,\eta}\left(x^k\right) \quad \forall k \ge 0. \tag{3.5}$$
Here
$$\{\delta_k\}, \{\alpha_k\}, \{c_k\} \subseteq (0, \infty) \tag{3.6}$$
are scalar sequences such that $\sum_{k=0}^{\infty} \delta_k < \infty$.
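A minimal sketch of this scheme, under our own simplifying assumptions: $\eta(x, y) = x - y$ so that $J_{c}^{M,\eta}$ is the ordinary resolvent, a linear monotone $M(x) = Ax$, exact inner steps $y^k = J_{c_k}^{M,\eta}(x^k)$ (so (3.4) holds with $\delta_k = 0$), and a constant over-relaxation factor:

```python
import numpy as np

def super_relaxed_ppa(A, x0, alphas, c=1.0):
    """Sketch of the over-relaxed scheme with eta(x, y) = x - y, so that
    J_c^{M,eta} = (I + c M)^{-1}; y^k is taken exact (delta_k = 0)."""
    n = len(x0)
    J = np.linalg.inv(np.eye(n) + c * A)  # generalized resolvent for this eta
    x = np.array(x0, dtype=float)
    for a in alphas:
        y = J @ x                         # y^k, satisfying (3.4) trivially
        x = (1 - a) * x + a * y           # update (3.3); a > 1 over-relaxes
    return x

A = np.diag([4.0, 2.0])
x = super_relaxed_ppa(A, [1.0, 1.0], alphas=[1.3] * 60)
print(np.linalg.norm(x))  # tends to the zero of M
```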
Algorithm 3.3. Let $M : X \to 2^X$ be a set-valued maximal $\eta$-monotone mapping on $X$ with $0 \in \mathrm{range}(M)$, and let the sequence $\{x^k\}$ be generated by the iterative procedure
$$x^{k+1} = \left(1 - \alpha_k - \beta_k\right)x^k + \alpha_k y^k \quad \forall k \ge 0, \tag{3.7}$$
where $y^k$ satisfies
$$\left\|y^k - J_{c_k}^{M,\eta}\left(x^k\right)\right\| \le \epsilon_k, \tag{3.8}$$
with $J_{c_k}^{M,\eta} = (I + c_k M)^{-1}$, and
$$\{\epsilon_k\}, \{\alpha_k\}, \{\beta_k\}, \{c_k\} \subseteq (0, \infty) \tag{3.9}$$
are scalar sequences such that $\sum_{k=0}^{\infty} \epsilon_k < \infty$.
For $\delta_k = 1/k^2$ in Algorithm 3.2, we have the following.

Algorithm 3.4. Let $M : X \to 2^X$ be a set-valued maximal $\eta$-monotone mapping on $X$ with $0 \in \mathrm{range}(M)$, and let the sequence $\{x^k\}$ be generated by the iterative procedure
$$x^{k+1} = (1 - \alpha_k)x^k + \alpha_k y^k \quad \forall k \ge 0, \tag{3.10}$$
where $y^k$ satisfies
$$\left\|y^k - J_{c_k}^{M,\eta}\left(x^k\right)\right\| \le \frac{1}{k^2}\left\|y^k - x^k\right\|, \tag{3.11}$$
with $J_{c_k}^{M,\eta} = (I + c_k M)^{-1}$, and
$$y^{k+1} = (1 - \alpha_k)x^k + \alpha_k J_{c_k}^{M,\eta}\left(x^k\right) \quad \forall k \ge 0. \tag{3.12}$$
Here
$$\{\alpha_k\}, \{c_k\} \subseteq (0, \infty) \tag{3.13}$$
are scalar sequences.
In the following result [4], we observe that Theorems 1.1 and 1.3 are unified and generalized to the case of $\eta$-maximal monotonicity and the super-relaxed proximal point algorithm. We also notice that this result in certain respects demonstrates the importance of the firm nonexpansiveness rather than of the nonexpansiveness.
Theorem 3.5 (see [4]). Let $X$ be a real Hilbert space. Let $M : X \to 2^X$ be maximal $\eta$-monotone, and let $x^*$ be a zero of $M$. Let $\eta : X \times X \to X$ be $t$-strongly monotone. Furthermore, assume (for $\gamma > 0$)
$$\left\langle u - v, J_{c_k}^{M,\eta}(u) - J_{c_k}^{M,\eta}(v)\right\rangle \ge \gamma\left\langle u - v, \eta\left(J_{c_k}^{M,\eta}(u), J_{c_k}^{M,\eta}(v)\right)\right\rangle \quad \forall u, v \in X. \tag{3.14}$$
Let the sequence $\{x^k\}$ be generated by the iterative procedure
$$x^{k+1} = (1 - \alpha_k)x^k + \alpha_k y^k \quad \forall k \ge 0, \tag{3.15}$$
where $y^k$ satisfies
$$\left\|y^k - J_{c_k}^{M,\eta}\left(x^k\right)\right\| \le e_k, \tag{3.16}$$
with $J_{c_k}^{M,\eta} = (I + c_k M)^{-1}$, $\sum_{k=0}^{\infty} e_k < \infty$, $\{\alpha_k\}, \{e_k\}, \{c_k\} \subseteq (0, \infty)$, $c_k \uparrow c^* \le \infty$, $\inf_{k \ge 0} \alpha_k > 0$, and $\sup_{k \ge 0} \alpha_k < 2\gamma t/(2\gamma t - 1)$.

Suppose that the sequence $\{x^k\}$ is bounded in the sense that there exists at least one solution to $0 \in M(x)$.

Then one has (for $t \ge 1$)
$$(2\gamma t - 1)\left\|J_{c_k}^{M,\eta}\left(x^k\right) - x^*\right\|^2 \le \left\|x^k - x^*\right\|^2 - \left\|J_k^*\left(x^k\right)\right\|^2, \tag{3.17}$$
where $\gamma t > 1$ and
$$J_k^* = I - J_{c_k}^{M,\eta}. \tag{3.18}$$

In addition, suppose that the sequence $\{x^k\}$ is generated by Algorithm 3.2 as well, and that $M^{-1}$ is $a$-Lipschitz continuous at 0, that is, there exists a unique solution $z^*$ to $0 \in M(z)$ (equivalently, $M^{-1}(0) = \{z^*\}$), and for constants $a \ge 0$ and $b > 0$, one has
$$\left\|z - z^*\right\| \le a\left\|w\right\| \quad \text{whenever } z \in M^{-1}(w),\ \left\|w\right\| \le b. \tag{3.19}$$
Here
$$\{\delta_k\}, \{\alpha_k\}, \{c_k\} \subseteq (0, \infty) \tag{3.20}$$
are scalar sequences such that $\delta_k \to 0$ and $\sum_{k=0}^{\infty} \delta_k < \infty$.

Then the sequence $\{x^k\}$ converges linearly to the unique solution $x^*$ with rate
$$\left(1 - \alpha^*\left[2\left(1 - \gamma t\, d^2\right) - \alpha^*\left(1 - (2\gamma t - 1)d^2\right)\right]\right)^{1/2} < 1 \quad \text{for } t \ge 1, \tag{3.21}$$
where $d^2 = a^2/\left(c^{*2} + (2\gamma t - 1)a^2\right)$, $\alpha^* = \limsup_{k \to \infty} \alpha_k$, and the sequences $\{\alpha_k\}$ and $\{c_k\}$ satisfy $\alpha_k \ge 1$, $c_k \uparrow c^* \le \infty$, $\inf_{k \ge 0} \alpha_k > 0$, and $\sup_{k \ge 0} \alpha_k < 2\gamma t/(2\gamma t - 1)$.
Proof. Suppose that $x^*$ is a zero of $M$. For all $k \ge 0$, we set
$$J_k^* = I - J_{c_k}^{M,\eta}. \tag{3.22}$$
Therefore, $J_k^*(x^*) = 0$. Then, in light of Theorem 3.1, any solution to (1.1) is a fixed point of $J_{c_k}^{M,\eta}$, and hence a zero of $J_k^*$.

Next, the proof of (3.17) follows from a routine manipulation and the equality
$$\left\|u - v\right\|^2 = \left\|J_{c_k}^{M,\eta}(u) - J_{c_k}^{M,\eta}(v) + J_k^*(u) - J_k^*(v)\right\|^2 \quad \forall u, v \in X. \tag{3.23}$$
Before we start establishing the linear convergence of the sequence $\{x^k\}$, we express $y^{k+1}$ in light of Algorithm 3.2 as
$$y^{k+1} = (1 - \alpha_k)x^k + \alpha_k J_{c_k}^{M,\eta}\left(x^k\right) = \left(I - \alpha_k J_k^*\right)\left(x^k\right). \tag{3.24}$$
Now we begin by verifying the boundedness of the sequence $\{x^k\}$, leading to $x^k - J_{c_k}^{M,\eta}(x^k) \to 0$.
Next, we estimate, using Proposition 2.10 for $t \ge 1$,
$$\begin{aligned}
\left\|y^{k+1} - x^*\right\|^2 &= \left\|(1 - \alpha_k)x^k + \alpha_k J_{c_k}^{M,\eta}\left(x^k\right) - x^*\right\|^2 = \left\|x^k - x^* - \alpha_k J_k^*\left(x^k\right)\right\|^2\\
&= \left\|x^k - x^*\right\|^2 - 2\alpha_k\left\langle x^k - x^*, J_k^*\left(x^k\right) - J_k^*(x^*)\right\rangle + \alpha_k^2\left\|J_k^*\left(x^k\right)\right\|^2\\
&\le \left\|x^k - x^*\right\|^2 - \frac{2(\gamma t - 1)\alpha_k}{2\gamma t - 1}\left\|x^k - x^*\right\|^2 - \frac{2\gamma t}{2\gamma t - 1}\alpha_k\left\|J_k^*\left(x^k\right)\right\|^2 + \alpha_k^2\left\|J_k^*\left(x^k\right)\right\|^2\\
&= \left(1 - \frac{2(\gamma t - 1)\alpha_k}{2\gamma t - 1}\right)\left\|x^k - x^*\right\|^2 - \alpha_k\left(\frac{2\gamma t}{2\gamma t - 1} - \alpha_k\right)\left\|J_k^*\left(x^k\right)\right\|^2. \tag{3.25}
\end{aligned}$$
Since, under the assumptions, $\alpha_k(2\gamma t/(2\gamma t - 1) - \alpha_k) > 0$, it follows that
$$\left\|y^{k+1} - x^*\right\| \le \Delta\left\|x^k - x^*\right\| \le \left\|x^k - x^*\right\|, \tag{3.26}$$
where $\Delta = \left(1 - 2(\gamma t - 1)\alpha_k/(2\gamma t - 1)\right)^{1/2} < 1$.
Moreover,
$$\left\|x^{k+1} - y^{k+1}\right\| = \left\|(1 - \alpha_k)x^k + \alpha_k y^k - \left[(1 - \alpha_k)x^k + \alpha_k J_{c_k}^{M,\eta}\left(x^k\right)\right]\right\| = \alpha_k\left\|y^k - J_{c_k}^{M,\eta}\left(x^k\right)\right\| \le \alpha_k e_k. \tag{3.27}$$
Now we find the estimate leading to the boundedness of the sequence $\{x^k\}$:
$$\begin{aligned}
\left\|x^{k+1} - x^*\right\| &\le \left\|y^{k+1} - x^*\right\| + \left\|x^{k+1} - y^{k+1}\right\|\\
&\le \left\|x^k - x^*\right\| + \alpha_k e_k\\
&\le \left\|x^0 - x^*\right\| + \sum_{j=0}^{k} \alpha_j e_j\\
&\le \left\|x^0 - x^*\right\| + \frac{2\gamma t}{2\gamma t - 1}\sum_{k=0}^{\infty} e_k. \tag{3.28}
\end{aligned}$$
Thus, the sequence $\{x^k\}$ is bounded.
We further examine the estimate
$$\begin{aligned}
\left\|x^{k+1} - x^*\right\|^2 &= \left\|y^{k+1} - x^* + x^{k+1} - y^{k+1}\right\|^2\\
&= \left\|y^{k+1} - x^*\right\|^2 + 2\left\langle y^{k+1} - x^*, x^{k+1} - y^{k+1}\right\rangle + \left\|x^{k+1} - y^{k+1}\right\|^2\\
&\le \left\|y^{k+1} - x^*\right\|^2 + 2\left\|y^{k+1} - x^*\right\|\left\|x^{k+1} - y^{k+1}\right\| + \left\|x^{k+1} - y^{k+1}\right\|^2\\
&\le \left\|x^k - x^*\right\|^2 - \alpha_k\left(\frac{2\gamma t}{2\gamma t - 1} - \alpha_k\right)\left\|J_k^*\left(x^k\right)\right\|^2\\
&\quad + 2\left(\left\|x^{k+1} - x^*\right\| + \left\|x^{k+1} - y^{k+1}\right\|\right)\left\|x^{k+1} - y^{k+1}\right\| + \left\|x^{k+1} - y^{k+1}\right\|^2\\
&\le \left\|x^k - x^*\right\|^2 - \alpha_k\left(\frac{2\gamma t}{2\gamma t - 1} - \alpha_k\right)\left\|J_k^*\left(x^k\right)\right\|^2\\
&\quad + 2\left(\left\|x^0 - x^*\right\| + \frac{4\gamma t}{2\gamma t - 1}\sum_{k=0}^{\infty} e_k\right)\frac{2\gamma t}{2\gamma t - 1}e_k + \left(\frac{2\gamma t}{2\gamma t - 1}\right)^2 e_k^2, \tag{3.29}
\end{aligned}$$
where $\alpha_k(2\gamma t/(2\gamma t - 1) - \alpha_k) > 0$.

Since $\{e_k\}$ is summable, so is $\{e_k^2\}$, and hence $\sum_{k=0}^{\infty} e_k^2 < \infty$. As $k \to \infty$, we have that
$$\sum_{j=0}^{k}\left\|J_j^*\left(x^j\right)\right\|^2 < \infty \Longrightarrow \lim_{k \to \infty} J_k^*\left(x^k\right) = 0, \tag{3.30}$$
that is, $x^k - J_{c_k}^{M,\eta}(x^k) \to 0$.
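To make the linear convergence asserted above concrete, here is a toy run of our own construction (scalar $M(x) = mx$, $\eta(x, y) = x - y$ so $\gamma = t = 1$, exact resolvent steps, constant over-relaxation $\alpha$): the per-step error ratio is exactly constant, which is the signature of linear convergence.

```python
m, c, alpha = 0.5, 4.0, 1.2              # M(x) = m x; over-relaxation alpha > 1
q = 1 - alpha + alpha / (1 + c * m)      # exact per-step factor of the iteration map

x, ratios = 1.0, []
for _ in range(20):
    y = x / (1 + c * m)                  # exact resolvent step y^k
    x_new = (1 - alpha) * x + alpha * y  # over-relaxed update
    ratios.append(abs(x_new) / abs(x))   # observed contraction per step
    x = x_new

print(abs(q), max(ratios), min(ratios))  # all (numerically) equal: linear rate
```

Larger $c$ shrinks the resolvent factor $1/(1 + cm)$ and hence the rate, mirroring the role of $c^*$ in the quantity $d$ of the theorem.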