Ergodic mirror descent
Abstract
We generalize stochastic subgradient methods to situations in which we do not receive independent samples from the distribution over which we optimize, but instead receive samples that are coupled over time. We show that as long as the source of randomness is suitably ergodic, meaning that it converges quickly enough to a stationary distribution, the method enjoys strong convergence guarantees, both in expectation and with high probability. This result has implications for high-dimensional stochastic optimization, peer-to-peer distributed optimization schemes, and stochastic optimization problems over combinatorial spaces.
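The abstract's core idea, specialized to the Euclidean mirror map, can be sketched as a projected stochastic subgradient method whose samples come from an ergodic Markov chain rather than being drawn i.i.d. The toy problem below (minimizing E_pi[(x - s)^2] with s in {0, 1} driven by a two-state chain) and all names in it are illustrative assumptions, not the paper's actual experiments; the averaged iterate should approach the stationary mean E_pi[s] = 1/3.

```python
import random

def ergodic_subgradient_descent(steps=50000, seed=0):
    """Sketch of stochastic subgradient descent with Markovian
    (time-coupled) samples, assuming a Euclidean mirror map.
    Hypothetical toy problem: minimize E_pi[(x - s)^2] over [0, 1],
    where s in {0, 1} is drawn from an ergodic two-state chain."""
    rng = random.Random(seed)
    # Transition probabilities: row s gives P(next = 0), P(next = 1).
    # Stationary distribution is pi = (2/3, 1/3), so E_pi[s] = 1/3.
    P = {0: (0.9, 0.1), 1: (0.2, 0.8)}
    s, x = 0, 0.0
    running_sum, n = 0.0, 0
    for t in range(1, steps + 1):
        # Draw the next sample from the chain: samples are coupled
        # over time, not independent, but the chain mixes quickly.
        s = 0 if rng.random() < P[s][0] else 1
        g = 2.0 * (x - s)                    # subgradient of (x - s)^2
        eta = 0.5 / t ** 0.5                 # diminishing step size
        x = min(1.0, max(0.0, x - eta * g))  # projection onto [0, 1]
        running_sum += x
        n += 1
    return running_sum / n                   # averaged iterate
```

Because the chain is ergodic, the averaged iterate concentrates near the minimizer of the loss under the stationary distribution, which is the kind of guarantee the abstract describes.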
Subject headings
- NATURAL SCIENCES -- Mathematics -- Computational Mathematics (hsv//eng)
Keywords
- Distributed optimization
- Ergodics
- High probability
- High-dimensional
- Peer to peer
- Stationary distribution
- Stochastic optimization problems
- Stochastic optimizations
- Strong convergence
- Subgradient methods
- Communication
- Probability distributions
- Stochastic systems
- Optimization
Publication and content type
- ref (subject category)
- kon (subject category)