Search: id:"swepub:oai:DiVA.org:kth-203812"
Asynchronous Algorithms for Large-Scale Optimization : Analysis and Implementation
- Aytekin, Arda, 1986- (author)
- KTH, Automatic Control
- Johansson, Mikael, Professor (supervisor)
- KTH, Automatic Control
- Patrinos, Panagiotis K., Assistant Professor (opponent)
- Department of Electrical Engineering, KU Leuven, Belgium
- ISBN 9789177293286
- Stockholm : KTH Royal Institute of Technology, 2017
- English, 100 pp.
- Related links:
- https://kth.diva-por... (primary) (Raw object)
- https://urn.kb.se/re...
Abstract
- This thesis proposes and analyzes several first-order methods for convex optimization, designed for parallel implementation in shared and distributed memory architectures. The theoretical focus is on designing algorithms that can run asynchronously, allowing computing nodes to execute their tasks with stale information without jeopardizing convergence to the optimal solution.

The first part of the thesis focuses on shared memory architectures. We propose and analyze a family of algorithms to solve an unconstrained, smooth optimization problem consisting of a large number of component functions. Specifically, we investigate the effect of information delay, inherent in asynchronous implementations, on the convergence properties of the incremental prox-gradient descent method. Contrary to related proposals in the literature, we establish delay-insensitive convergence results: the proposed algorithms converge under any bounded information delay, and their constant step size can be selected independently of the delay bound.

Then, we shift focus to solving constrained, possibly non-smooth, optimization problems in a distributed memory architecture. This time, we propose and analyze two important families of gradient descent algorithms: asynchronous mini-batching and incremental aggregated gradient descent. In particular, for asynchronous mini-batching, we show that, by suitably choosing the algorithm parameters, one can recover the best-known convergence rates established for delay-free implementations, and expect a near-linear speedup with the number of computing nodes. Similarly, for incremental aggregated gradient descent, we establish global linear convergence rates for any bounded information delay.

Extensive simulations and actual implementations of the algorithms on different platforms, applied to representative real-world problems, validate our theoretical results.
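To make the abstract's central idea concrete, the following is a minimal, single-threaded sketch of an incremental prox-gradient step computed from stale iterates with a bounded delay. Everything here is an illustrative assumption, not the thesis's actual algorithm or experiments: the toy problem (a sum of scalar quadratics plus an l1 term), the function names, the delay model (uniform reads from the last tau iterates), and all parameter values.

```python
import random

# Toy problem (an assumption, not from the thesis): minimize
#   F(x) = sum_i 0.5 * (a_i * x - b_i)^2 + lam * |x|
# over a scalar x, where each component gradient is evaluated at a
# *stale* iterate whose age is bounded by tau, mimicking asynchrony.

def soft_threshold(z, t):
    """Prox operator of t * |x| (soft-thresholding)."""
    if z > t:
        return z - t
    if z < -t:
        return z + t
    return 0.0

def delayed_incremental_prox_grad(a, b, lam=0.1, step=0.01, tau=5,
                                  iters=20000, seed=0):
    rng = random.Random(seed)
    x = 0.0
    history = [x] * (tau + 1)  # recent iterates; stale reads come from here
    for _ in range(iters):
        i = rng.randrange(len(a))                   # pick one component
        x_stale = history[rng.randrange(tau + 1)]   # read with bounded delay
        g = a[i] * (a[i] * x_stale - b[i])          # component gradient at stale point
        x = soft_threshold(x - step * g, step * lam)
        history.pop(0)                              # slide the delay window
        history.append(x)
    return x

# Components whose smooth part is minimized at x = 2; the l1 term pulls
# the iterates slightly toward zero, so the result hovers just below 2.
a = [1.0, 2.0, 3.0]
b = [2.0, 4.0, 6.0]
print(delayed_incremental_prox_grad(a, b))
```

Note that the constant step size is chosen without reference to the delay bound tau, which is the delay-insensitivity property the abstract highlights; the sketch only simulates staleness, whereas the thesis analyzes genuinely parallel shared-memory executions.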
Subject headings
- TEKNIK OCH TEKNOLOGIER -- Elektroteknik och elektronik -- Reglerteknik (hsv//swe)
- ENGINEERING AND TECHNOLOGY -- Electrical Engineering, Electronic Engineering, Information Engineering -- Control Engineering (hsv//eng)
Keywords
- convex optimization
- optimization
- asynchronous algorithms
- algorithms
- parallel algorithms
- large-scale
- big data
- Electrical Engineering
- Elektro- och systemteknik
Publication and content type
- vet (subject category)
- lic (subject category)
Find via library
To the institution's database