'''Subgradient methods''' are [[convex optimization]] methods which use [[Subderivative|subderivatives]]. Originally developed by [[Naum Z. Shor]] and others in the 1960s and 1970s, subgradient methods are convergent even when applied to a non-differentiable objective function. When the objective function is differentiable, subgradient methods for unconstrained problems use the same search direction as the method of [[gradient descent|steepest descent]].


Subgradient methods are slower than Newton's method when applied to minimize twice continuously differentiable convex functions. However, Newton's method fails to converge on problems that have non-differentiable kinks.


In recent years, some [[interior-point methods]] have been suggested for convex minimization problems, but subgradient projection methods and related bundle methods of descent remain competitive. For convex minimization problems with a very large number of dimensions, subgradient-projection methods are suitable, because they require little storage.


Subgradient projection methods are often applied to large-scale problems with decomposition techniques. Such decomposition methods often allow a simple distributed method for a problem.
==Classical subgradient rules==


Let <math>f : \Reals^n \to \Reals</math> be a [[convex function]] with domain <math>\Reals^n.</math>
A classical subgradient method iterates
<math display=block>x^{(k+1)} = x^{(k)} - \alpha_k g^{(k)}</math>
where <math>g^{(k)}</math> denotes ''any'' [[subgradient]] of <math>f</math> at <math>x^{(k)},</math> and <math>x^{(k)}</math> is the <math>k^{th}</math> iterate of <math>x.</math>
If <math>f</math> is differentiable, then its only subgradient is the gradient vector <math>\nabla f</math> itself.
It may happen that <math>-g^{(k)}</math> is not a descent direction for <math>f</math> at <math>x^{(k)}.</math> We therefore maintain a list <math>f_{\rm{best}}</math> that keeps track of the lowest objective function value found so far, i.e.
<math display=block>f_{\rm{best}}^{(k)} = \min\{f_{\rm{best}}^{(k-1)} , f(x^{(k)}) \}.</math>
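
As an illustration, the iteration and the <math>f_{\rm{best}}</math> bookkeeping can be written in a few lines of code. The following Python sketch is illustrative only: the test objective <math>f(x) = \lVert x - c \rVert_1,</math> its subgradient <math>\operatorname{sign}(x - c),</math> and the step-size rule <math>\alpha_k = 1/(k+1)</math> are assumed examples, not part of the method's definition.

<syntaxhighlight lang="python">
import numpy as np

def subgradient_method(f, subgrad_f, x0, alpha, iterations=1000):
    """Classical subgradient method with the f_best bookkeeping above.

    alpha(k) returns the step size for iteration k, chosen off-line."""
    x = x0
    x_best, f_best = x0, f(x0)
    for k in range(iterations):
        x = x - alpha(k) * subgrad_f(x)   # x^(k+1) = x^(k) - alpha_k g^(k)
        if f(x) < f_best:                 # track the lowest value found so far
            x_best, f_best = x, f(x)
    return x_best, f_best

# Illustrative problem: minimize the nondifferentiable f(x) = ||x - c||_1.
# sign(x - c) is a valid subgradient everywhere (at a kink, 0 lies in the
# subdifferential [-1, 1], and np.sign returns 0 there).
c = np.array([1.0, -2.0, 3.0])
f = lambda x: np.abs(x - c).sum()
g = lambda x: np.sign(x - c)
x_best, f_best = subgradient_method(f, g, np.zeros(3), alpha=lambda k: 1.0 / (k + 1))
print(x_best, f_best)   # x_best approaches c, so f_best approaches 0
</syntaxhighlight>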


===Step size rules===
Many different types of step-size rules are used by subgradient methods. This article notes five classical step-size rules for which convergence proofs are known:


*Constant step size, <math>\alpha_k = \alpha.</math>
*Constant step length, <math>\alpha_k = \gamma/\lVert g^{(k)} \rVert_2,</math> which gives <math>\lVert x^{(k+1)} - x^{(k)} \rVert_2 = \gamma.</math>
*Square summable but not summable step size, i.e. any step sizes satisfying <math display="block">\alpha_k\geq0,\qquad\sum_{k=1}^\infty \alpha_k^2 < \infty,\qquad \sum_{k=1}^\infty \alpha_k = \infty.</math>
*Nonsummable diminishing, i.e. any step sizes satisfying <math display="block">\alpha_k \geq 0,\qquad \lim_{k\to\infty} \alpha_k = 0,\qquad \sum_{k=1}^\infty \alpha_k = \infty.</math>
*Nonsummable diminishing step lengths, i.e. <math>\alpha_k = \gamma_k/\lVert g^{(k)} \rVert_2,</math> where <math display="block">\gamma_k \geq 0,\qquad \lim_{k\to\infty} \gamma_k = 0,\qquad \sum_{k=1}^\infty \gamma_k = \infty.</math>
For all five rules, the step-sizes are determined "off-line", before the method is iterated; the step-sizes do not depend on preceding iterations. This "off-line" property of subgradient methods differs from the "on-line" step-size rules used for descent methods for differentiable functions: many methods for minimizing differentiable functions satisfy Wolfe's sufficient conditions for convergence, where step-sizes typically depend on the current point and the current search-direction. An extensive discussion of stepsize rules for subgradient methods, including incremental versions, is given in the books by Bertsekas<ref>{{cite book
| last = Bertsekas
| first = Dimitri P.
| author-link = Dimitri P. Bertsekas
| title = Convex Optimization Algorithms
| edition = Second
| publisher = Athena Scientific
| year = 2015
| location = Belmont, MA.
| isbn = 978-1-886529-28-1
}}</ref> and by Bertsekas, Nedic, and Ozdaglar.<ref>{{cite book
|last1=Bertsekas
|first1=Dimitri P.
|last2=Nedic
|first2=Angelia
|last3 = Ozdaglar
|first3 = Asuman
| title = Convex Analysis and Optimization
| edition = Second
| publisher = Athena Scientific
| year = 2003
| location = Belmont, MA.
| isbn = 1-886529-45-0 }}</ref>
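
In code, each of the five off-line schedules is simply a precomputed function of the iteration counter <math>k</math> (and, for the step-length rules, of the current subgradient). The Python sketch below is illustrative; the constants are arbitrary choices, not prescribed by the method.

<syntaxhighlight lang="python">
import numpy as np

# Each rule maps the iteration counter k (and, for step-length rules, the
# current subgradient g) to a step size fixed before the method runs.
# All constants below are arbitrary illustrative choices.
def constant_step_size(k, g):
    return 0.1                              # alpha_k = alpha

def constant_step_length(k, g):
    return 0.1 / np.linalg.norm(g)          # gives ||x^(k+1) - x^(k)||_2 = 0.1

def square_summable_not_summable(k, g):
    return 1.0 / (k + 1)                    # sum alpha_k^2 < inf, sum alpha_k = inf

def nonsummable_diminishing(k, g):
    return 0.1 / np.sqrt(k + 1)             # alpha_k -> 0, sum alpha_k = inf

def nonsummable_diminishing_step_length(k, g):
    return (0.1 / np.sqrt(k + 1)) / np.linalg.norm(g)   # gamma_k / ||g^(k)||_2
</syntaxhighlight>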


===Convergence results===
For constant step-length and scaled subgradients having Euclidean norm equal to one, the subgradient method converges to an arbitrarily close approximation to the minimum value, that is
<math display=block>\lim_{k\to\infty} f_{\rm{best}}^{(k)} - f_{\min} < \epsilon</math>
by a result of Shor.<ref>
The approximate convergence of the constant step-size (scaled) subgradient method is stated as Exercise 6.3.14(a) in Bertsekas (page 636):
{{cite book
| last = Bertsekas
| first = Dimitri P.
| author-link = Dimitri P. Bertsekas
| title = Nonlinear Programming
| edition = Second
| publisher = Athena Scientific
| year = 1999
| location = Cambridge, MA.
| isbn = 1-886529-00-0
}}
On page 636, Bertsekas attributes this result to Shor:
{{cite book
| last = Shor
| first = Naum Z.
| author-link = Naum Z. Shor
| title = Minimization Methods for Non-differentiable Functions
| publisher = [[Springer-Verlag]]
| year = 1985
| isbn = 0-387-12763-1
}}
</ref>
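
This approximate-convergence behaviour is easy to observe numerically. The following Python sketch is illustrative only: it runs the constant step-length rule on the assumed objective <math>f(x) = |x|</math> and shows <math>f_{\rm{best}}^{(k)}</math> stalling at a positive level of order <math>\gamma</math> rather than reaching the minimum.

<syntaxhighlight lang="python">
import numpy as np

# Constant step length gamma on the illustrative objective f(x) = |x|,
# whose subgradients have unit norm.  The iterates eventually oscillate
# around the minimizer, so f_best stalls at a positive level of order
# gamma (here at most gamma/2) instead of converging to the minimum 0.
gamma, x = 0.1, 3.03
f_best = abs(x)
for k in range(200):
    g = np.sign(x) if x != 0 else 1.0   # a subgradient of |x| with norm 1
    x = x - gamma * g                   # step of constant length gamma
    f_best = min(f_best, abs(x))
print(f_best)                           # about 0.03, bounded away from 0
</syntaxhighlight>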
These classical subgradient methods have poor performance and are no longer recommended for general use.<ref name="Lem"/><ref name="KLL"/> However, they are still used widely in specialized applications because they are simple and they can be easily adapted to take advantage of the special structure of the problem at hand.


==Subgradient-projection and bundle methods==
During the 1970s, [[Claude Lemaréchal]] and Phil Wolfe proposed "bundle methods" of descent for problems of convex minimization.<ref>
{{cite book
| last = Bertsekas
| first = Dimitri P.
| author-link = Dimitri P. Bertsekas
| title = Nonlinear Programming
| edition = Second
| publisher = Athena Scientific
| year = 1999
| location = Cambridge, MA.
| isbn = 1-886529-00-0
}}
</ref> The meaning of the term "bundle methods" has changed significantly since that time. Modern versions and full convergence analysis were provided by Kiwiel.<ref>
{{cite book|last=Kiwiel|first=Krzysztof|title=Methods of Descent for Nondifferentiable Optimization|publisher=[[Springer Verlag]]|location=Berlin|year=1985|pages=362|isbn=978-3540156420 |mr=0797754}}
</ref> Contemporary bundle methods often use "[[level set|level]] control" rules for choosing step-sizes, developing techniques from the "subgradient-projection" method of Boris T. Polyak (1969). However, there are problems on which bundle methods offer little advantage over subgradient-projection methods.<ref name="Lem">
{{cite book| last=Lemaréchal|first=Claude|author-link=Claude Lemaréchal|chapter=Lagrangian relaxation|pages=112–156|title=Computational combinatorial optimization: Papers from the Spring School held in Schloß Dagstuhl, May 15–19, 2000|editor=Michael Jünger and Denis Naddef|series=Lecture Notes in Computer Science|volume=2241|publisher=Springer-Verlag| location=Berlin|year=2001|isbn=3-540-42877-1|mr=1900016|doi=10.1007/3-540-45586-8_4|s2cid=9048698 }}</ref><ref name="KLL">
{{cite journal|last1=Kiwiel|first1=Krzysztof&nbsp;C.|last2=Larsson |first2=Torbjörn|last3=Lindberg|first3=P.&nbsp;O.|title=Lagrangian relaxation via ballstep subgradient methods|url=http://mor.journal.informs.org/cgi/content/abstract/32/3/669 |journal=[[Mathematics of Operations Research]]|volume=32|date=August 2007|number=3|pages=669–686|mr=2348241|doi=10.1287/moor.1070.0261}}
</ref>


==Constrained optimization==
===Projected subgradient===
One extension of the subgradient method is the '''projected subgradient method''', which solves the constrained [[Mathematical optimization|optimization]] problem
:minimize <math>f(x)</math> subject to <math display=block>x \in \mathcal{C}</math>
where <math>\mathcal{C}</math> is a [[convex set]].
The projected subgradient method uses the iteration
<math display=block>x^{(k+1)} = P \left(x^{(k)} - \alpha_k g^{(k)}\right)</math>
where <math>P</math> is projection onto <math>\mathcal{C}</math> and <math>g^{(k)}</math> is any subgradient of <math>f</math> at <math>x^{(k)}.</math>
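
As an illustration, the following Python sketch applies the projected iteration with Euclidean projection onto the unit ball; the objective, the set <math>\mathcal{C},</math> and the step sizes are assumed examples.

<syntaxhighlight lang="python">
import numpy as np

def projected_subgradient(subgrad_f, project, x0, alpha, iterations=1000):
    """Projected subgradient iteration x <- P(x - alpha_k g)."""
    x = project(x0)
    for k in range(iterations):
        x = project(x - alpha(k) * subgrad_f(x))
    return x

# Illustrative set C: the unit Euclidean ball.  Projection simply
# rescales any point lying outside the ball back onto its boundary.
def project_unit_ball(x):
    n = np.linalg.norm(x)
    return x if n <= 1.0 else x / n

# Illustrative problem: minimize f(x) = ||x - c||_1 over the unit ball,
# with c outside the ball; the solution is the feasible point (1, 0).
c = np.array([2.0, 0.0])
x = projected_subgradient(lambda x: np.sign(x - c), project_unit_ball,
                          np.zeros(2), alpha=lambda k: 1.0 / (k + 1))
print(x)   # approaches (1, 0)
</syntaxhighlight>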


===General constraints===
The subgradient method can be extended to solve the inequality constrained problem


:minimize <math>f_0(x)</math> subject to <math display=block>f_i (x) \leq 0,\quad i = 1,\ldots,m</math>
where <math>f_i</math> are convex. The algorithm takes the same form as the unconstrained case
<math display=block>x^{(k+1)} = x^{(k)} - \alpha_k g^{(k)}</math>
where <math>\alpha_k > 0</math> is a step size, and <math>g^{(k)}</math> is a subgradient of the objective or one of the constraint functions at <math>x^{(k)}.</math> Take
<math display=block>g^{(k)} =
\begin{cases}
\partial f_0 (x) & \text{ if } f_i(x) \leq 0 \; \forall i = 1 \dots m \\
\partial f_j (x) & \text{ for some } j \text{ such that } f_j(x) > 0
\end{cases}</math>
where <math>\partial f</math> denotes the [[subdifferential]] of <math>f.</math> If the current point is feasible, the algorithm uses an objective subgradient; if the current point is infeasible, the algorithm chooses a subgradient of any violated constraint.
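
A short Python sketch of this rule follows; the particular objective, constraint, and step sizes are illustrative assumptions.

<syntaxhighlight lang="python">
import numpy as np

def constrained_subgradient(f0, g0, constraints, x0, alpha, iterations=2000):
    """Subgradient method for: minimize f0(x) subject to fi(x) <= 0.

    constraints is a list of (fi, gi) pairs, gi(x) being a subgradient of
    fi at x.  A feasible iterate steps along an objective subgradient; an
    infeasible one steps along a subgradient of a violated constraint."""
    x, x_best, f_best = x0, None, np.inf
    for k in range(iterations):
        g = g0(x)                         # default: objective subgradient
        for fi, gi in constraints:
            if fi(x) > 0:                 # violated constraint found
                g = gi(x)
                break
        x = x - alpha(k) * g
        if f0(x) < f_best and all(fi(x) <= 0 for fi, _ in constraints):
            x_best, f_best = x, f0(x)     # best feasible point so far
    return x_best, f_best

# Illustrative problem: minimize |x1| + |x2| subject to x1 + x2 >= 1,
# rewritten as f1(x) = 1 - x1 - x2 <= 0.  Every point on the segment
# from (1, 0) to (0, 1) is optimal, with value 1.
f0 = lambda x: np.abs(x).sum()
g0 = lambda x: np.sign(x)
f1 = lambda x: 1.0 - x[0] - x[1]
g1 = lambda x: np.array([-1.0, -1.0])
x_best, f_best = constrained_subgradient(f0, g0, [(f1, g1)],
                                         np.zeros(2), lambda k: 1.0 / (k + 1))
print(x_best, f_best)   # f_best approaches the optimal value 1
</syntaxhighlight>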


==See also==

* {{annotated link|Stochastic gradient descent}}


==References==
{{reflist}}

==Further reading==


* {{cite book
| last = Bertsekas
| first = Dimitri P.
| title = Nonlinear Programming
| publisher = Athena Scientific
| year = 1999
| location = Belmont, MA.
| isbn = 1-886529-00-0
}}
* {{cite book
|last1=Bertsekas
|first1=Dimitri P.
|last2=Nedic
|first2=Angelia
|last3 = Ozdaglar
|first3 = Asuman
| title = Convex Analysis and Optimization
| edition = Second
| publisher = Athena Scientific
| year = 2003
| location = Belmont, MA.
| isbn = 1-886529-45-0
}}
* {{cite book
| last = Bertsekas
| first = Dimitri P.
| title = Convex Optimization Algorithms
| publisher = Athena Scientific
| year = 2015
| location = Belmont, MA.
| isbn = 978-1-886529-28-1
}}
* {{cite book
| last = Shor
| first = Naum Z.
| title = Minimization Methods for Non-differentiable Functions
| publisher = [[Springer-Verlag]]
| year = 1985
| isbn = 0-387-12763-1
}}
* {{cite book|last=Ruszczyński|author-link=Andrzej Piotr Ruszczyński|first=Andrzej|title=Nonlinear Optimization|publisher=[[Princeton University Press]]|location=Princeton, NJ|year=2006|pages=xii+454|isbn=978-0691119151 |mr=2199043}}


==External links==

* [http://www.stanford.edu/class/ee364a/ EE364A] and [http://www.stanford.edu/class/ee364b/ EE364B], Stanford's convex optimization course sequence.

{{Convex analysis and variational analysis}}
{{optimization algorithms|convex}}


[[Category:Convex analysis]]
[[Category:Convex optimization]]
[[Category:Mathematical optimization]]
[[Category:Optimization algorithms and methods]]
