SwePub
Record ID: swepub:oai:gup.ub.gu.se/123577

Ergodic convergence in subgradient optimization - with application to simplicial decomposition of convex programs

Larsson, Torbjörn (author)
Linköpings universitet, Optimeringslära (Optimization), Tekniska högskolan (Institute of Technology)
Patriksson, Michael, 1964 (author)
Linköpings universitet, Matematiska institutionen, Tekniska högskolan; Chalmers University of Technology and University of Gothenburg, Department of Mathematical Sciences, Mathematics
Strömberg, Ann-Brith, 1961 (author)
Chalmers University of Technology and University of Gothenburg, Department of Mathematical Sciences, Mathematics, Göteborg, Sweden
Providence, Rhode Island : American Mathematical Society, 2012
English.
In: Contemporary Mathematics. - Providence, Rhode Island : American Mathematical Society. - ISSN 0271-4132, E-ISSN 1098-3627. ; 568, pp. 159-186
  • Journal article (peer-reviewed)
Abstract
  • When non-smooth, convex minimization problems are solved by subgradient optimization methods, the subgradients used will in general not accumulate to subgradients that verify the optimality of a solution obtained in the limit. It is therefore not a straightforward task to monitor the progress of subgradient methods in terms of the approximate fulfilment of optimality conditions. Further, certain supplementary information, such as convergent estimates of Lagrange multipliers and convergent lower bounds on the optimal objective value, is not directly available in subgradient schemes. As a means of overcoming these weaknesses of subgradient methods, we introduced in LPS96b, LPS96c, and LPS98 the computation of an ergodic (averaged) sequence of subgradients. Specifically, we considered a non-smooth, convex program solved by a conditional subgradient optimization scheme with divergent series step lengths, and showed that the elements of the ergodic sequence of subgradients in the limit fulfil the optimality conditions at the optimal solution, to which the sequence of iterates converges. This result has three important implications. The first is the finite identification of active constraints at the solution obtained in the limit. The second is the establishment of the convergence of ergodic sequences of Lagrange multipliers; this result enables sensitivity analyses for solutions obtained by subgradient methods. The third is the convergence of a lower bounding procedure based on an ergodic sequence of affine underestimates of the objective function; this procedure also provides a proper termination criterion for subgradient optimization methods. This article first contributes an overview of the results and applications in LPS96b, LPS96c, and LPS98 pertaining to ergodic sequences of subgradients generated within a subgradient scheme. It then applies these results to construct the first instance of a simplicial decomposition algorithm for convex and non-smooth optimization problems.
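The central device in the abstract is the ergodic, step-length weighted average of the subgradients produced along the iterations. The following is a minimal sketch, not the authors' implementation: a plain projected subgradient method with divergent series step lengths (here alpha_t = 1/(t+1)) that maintains such an ergodic average. The test problem, the step-length rule, and the names subgradient_with_ergodic_average, f_subgrad and project are illustrative assumptions, and the simple projection step stands in for the conditional subgradient scheme of the paper.

import numpy as np

# Minimal illustrative sketch (not the authors' implementation): a projected
# subgradient method with divergent series step lengths that also maintains an
# ergodic (step-length weighted) average of the subgradients it generates.
def subgradient_with_ergodic_average(f_subgrad, project, x0, n_iter=100000):
    x = np.asarray(x0, dtype=float).copy()
    step_sum = 0.0
    g_avg = np.zeros_like(x)              # ergodic average of subgradients
    for t in range(n_iter):
        g = f_subgrad(x)                  # any subgradient of f at x
        alpha = 1.0 / (t + 1)             # divergent series: alpha_t -> 0, sum alpha_t = inf
        # update the step-length weighted average of all subgradients seen so far
        g_avg = (step_sum * g_avg + alpha * g) / (step_sum + alpha)
        step_sum += alpha
        x = project(x - alpha * g)        # projected subgradient step
    return x, g_avg

# Illustrative test problem (assumed, not from the paper):
# minimize f(x) = ||x||_1 over the box [-1, 2]^n; the minimizer is x = 0.
f_subgrad = lambda x: np.sign(x)            # a subgradient of the 1-norm
project = lambda x: np.clip(x, -1.0, 2.0)   # projection onto the box
x_final, g_avg = subgradient_with_ergodic_average(f_subgrad, project, 1.7 * np.ones(5))
print(x_final)   # very close to the minimizer 0
print(g_avg)     # far smaller in magnitude than the raw subgradients, which keep
                 # oscillating between +1 and -1 near the minimizer; g_avg tends
                 # to 0 (a subgradient verifying optimality) as n_iter grows

In the spirit of the results summarized above, it is the averaged vector g_avg, not the raw final subgradient, that tends toward a subgradient verifying optimality of the limit point; analogous averaging of multiplier estimates underlies the Lagrange multiplier convergence and lower bounding results mentioned in the abstract.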

Subject headings

NATURAL SCIENCES -- Mathematics -- Computational Mathematics (hsv//eng)

Keywords

Non-smooth minimization
conditional subgradient optimization
ergodic convergence
simplicial decomposition
MATHEMATICS

Publication and content type

ref (peer-reviewed)
art (journal article)
