SwePub
Search the SwePub database

Search: WFRF:(Wu Xuyang)

  • Result 1-12 of 12
1.
  • Berglund, Erik, et al. (author)
  • Revisiting the Curvature-aided IAG: Improved Theory and Reduced Complexity
  • 2023
  • In: IFAC-PapersOnLine. - : Elsevier BV. - 2405-8963. ; 56:2, s. 5221-5226
  • Journal article (peer-reviewed), abstract:
    • The curvature-aided IAG (CIAG) algorithm is an efficient asynchronous optimization method that accelerates IAG using a delay compensation technique. However, existing step-size rules for CIAG are conservative and hard to implement, and the Hessian computation in CIAG is often computationally expensive. To alleviate these issues, we first provide an easy-to-implement and less conservative step-size rule for CIAG. Next, we propose a modified CIAG algorithm that reduces the computational complexity by approximating the Hessian with a constant matrix. Convergence results are derived for each algorithm on both convex and strongly convex problems, and numerical experiments on logistic regression demonstrate their effectiveness in practice.
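For orientation, the IAG baseline that CIAG accelerates can be sketched in a few lines. This is an illustrative sketch only (the function name and setup are mine, not from the paper); CIAG additionally corrects each stored gradient with curvature (Hessian) information, which is exactly the step the paper makes cheaper.

```python
import numpy as np

def iag(grads, x0, step, iters, seed=0):
    """Minimal sketch of the incremental aggregated gradient (IAG) method.

    grads: list of per-component gradient functions g_i(x) for f = sum_i f_i.
    A table keeps the most recent gradient of each component; every round
    one entry is refreshed and a step is taken along the (stale) average.
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    table = [g(x) for g in grads]          # last evaluated gradient of each f_i
    agg = sum(table)                       # running sum of the table
    for _ in range(iters):
        i = int(rng.integers(len(grads)))  # component refreshed this round
        new_gi = grads[i](x)
        agg = agg + new_gi - table[i]      # incremental aggregate update
        table[i] = new_gi
        x = x - step * agg / len(grads)
    return x
```

Because the aggregate mixes gradients evaluated at different (stale) iterates, step-size rules for IAG-type methods are delicate, which is what the paper's improved analysis addresses.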
2.
  • Liu, Changxin, et al. (author)
  • Rate analysis of dual averaging for nonconvex distributed optimization
  • 2023
  • In: IFAC-PapersOnLine. - : Elsevier BV. ; , s. 5209-5214
  • Conference paper (peer-reviewed), abstract:
    • This work studies nonconvex distributed constrained optimization over stochastic communication networks. We revisit the distributed dual averaging algorithm, which is known to converge for convex problems. We start from the centralized case, for which the change between two consecutive updates is taken as the suboptimality measure. We validate this measure by showing that it is closely related to stationarity, which equips us with a handle to study the convergence of dual averaging in nonconvex optimization. We prove that the squared norm of this suboptimality measure converges at rate O(1/t). Then, for the distributed setup, we show convergence to a stationary point at rate O(1/t). Finally, a numerical example illustrates our theoretical results.
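The centralized dual averaging iteration the analysis starts from can be sketched as follows. This is an illustrative centralized version (names, the 1/sqrt(t) scaling, and the quadratic prox are my assumptions, not the paper's distributed algorithm); it also records the squared change between consecutive iterates, the suboptimality measure the abstract describes.

```python
import numpy as np

def dual_averaging(grad, proj, x0, iters, beta=1.0):
    """Centralized dual averaging sketch.  Gradients are accumulated in a
    dual variable z and mapped back to the feasible set via a projection/prox
    step; the squared distance between consecutive iterates serves as the
    stationarity surrogate."""
    x = np.asarray(x0, dtype=float)
    z = np.zeros_like(x)
    measures = []
    for t in range(1, iters + 1):
        z = z + grad(x)                         # accumulate (sub)gradients
        x_new = proj(-z / (beta * np.sqrt(t)))  # prox step, 1/sqrt(t) scaling
        measures.append(float(np.sum((x_new - x) ** 2)))
        x = x_new
    return x, measures
```

On a constrained quadratic, the recorded measure shrinks as the iterates settle, which is the behavior the rate analysis quantifies.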
3.
  • Wei, Hejie, et al. (author)
  • Decentralized Approximate Newton Methods for Convex Optimization on Networked Systems
  • 2021
  • In: IEEE Transactions on Control of Network Systems. - : Institute of Electrical and Electronics Engineers (IEEE). - 2325-5870. ; 8:3, s. 1489-1500
  • Journal article (peer-reviewed), abstract:
    • In this article, a class of decentralized approximate Newton (DEAN) methods for addressing convex optimization on a networked system is developed, where nodes in the networked system seek a consensus that minimizes the sum of their individual objective functions through local interactions only. The proposed DEAN algorithms allow each node to repeatedly perform a local approximate Newton update, which leverages tracking the global Newton direction and dissipating the discrepancies among the nodes. Under less restrictive problem assumptions in comparison with most existing second-order methods, the DEAN algorithms enable the nodes to reach a consensus that can be arbitrarily close to the optimum. Moreover, for a particular DEAN algorithm, the nodes linearly converge to a common suboptimal solution with an explicit error bound. Finally, simulations demonstrate the competitive performance of DEAN in convergence speed, accuracy, and efficiency.
4.
  • Wu, Xuyang, et al. (author)
  • A Fenchel dual gradient method enabling regularization for nonsmooth distributed optimization over time-varying networks
  • 2023
  • In: Optimization Methods and Software. - : Informa UK Limited. - 1055-6788 .- 1029-4937. ; 38:4, s. 813-836
  • Journal article (peer-reviewed), abstract:
    • In this paper, we develop a regularized Fenchel dual gradient method (RFDGM), which allows nodes in a time-varying undirected network to find a common decision, in a fully distributed fashion, for minimizing the sum of their local objective functions subject to their local constraints. Different from most existing distributed optimization algorithms that also cope with time-varying networks, RFDGM is able to handle problems with general convex objective functions and distinct local constraints, and still has non-asymptotic convergence results. Specifically, under a standard network connectivity condition, we show that RFDGM is guaranteed to reach ϵ-accuracy in both optimality and feasibility within (Formula presented.) iterations. Such iteration complexity can be improved to (Formula presented.) if the local objective functions are strongly convex but not necessarily differentiable. Finally, simulation results demonstrate the competence of RFDGM in practice.
5.
  • Wu, Xuyang, et al. (author)
  • A New Family of Feasible Methods for Distributed Resource Allocation
  • 2021
  • In: 2021 60TH IEEE CONFERENCE ON DECISION AND CONTROL (CDC). - : Institute of Electrical and Electronics Engineers (IEEE). - 9781665436595 ; , s. 3355-3360
  • Conference paper (peer-reviewed), abstract:
    • Distributed resource allocation is a central task in network systems such as smart grids, water distribution networks, and urban transportation systems. When solving such problems in practice it is often important to have non-asymptotic feasibility guarantees for the iterates, since over-allocation of resources easily causes systems to break down. In this paper, we develop a distributed resource reallocation algorithm where every iteration produces a feasible allocation. The algorithm is fully distributed in the sense that nodes communicate only with neighbors over a given communication network. We prove that under mild conditions the algorithm converges to a point arbitrarily close to the optimal resource allocation. Numerical experiments demonstrate the competitive practical performance of the algorithm.
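The feasibility property the abstract emphasizes can be illustrated with a toy edge-wise reallocation scheme (this is my illustration of the idea, not the paper's algorithm): neighbors shift resource along each edge in proportion to their marginal-cost difference, so the total allocation is conserved exactly at every iterate.

```python
import numpy as np

def feasible_reallocation(grads, x, edges, step=0.1, iters=200):
    """Toy sketch of a feasibility-preserving reallocation scheme.  Each edge
    shifts resource between its endpoints in proportion to their
    marginal-cost difference, so sum(x) is conserved at every iterate."""
    x = np.asarray(x, dtype=float).copy()
    for _ in range(iters):
        for i, j in edges:
            delta = step * (grads[j](x[j]) - grads[i](x[i]))
            x[i] += delta          # resource moves toward the cheaper node...
            x[j] -= delta          # ...while x[i] + x[j] stays constant
    return x
```

At a fixed point the marginal costs across each edge are equal, which is the optimality condition for allocating a fixed total resource.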
6.
  • Wu, Xuyang, et al. (author)
  • A Unifying Approximate Method of Multipliers for Distributed Composite Optimization
  • 2022
  • In: IEEE Transactions on Automatic Control. - : Institute of Electrical and Electronics Engineers (IEEE). - 0018-9286 .- 1558-2523. ; , s. 1-1
  • Journal article (peer-reviewed), abstract:
    • This paper investigates convex composite optimization on an undirected network, where each node, privately endowed with a smooth component function and a nonsmooth one, is required to minimize the sum of all component functions throughout the network. To address such problems, we develop a general Approximate Method of Multipliers (AMM), which approximates the Method of Multipliers by means of a surrogate function with numerous options. We then design the possibly nonseparable, time-varying surrogate function in various ways, leading to different distributed realizations of AMM. We demonstrate that AMM generalizes more than ten state-of-the-art distributed optimization algorithms, and certain designs of its surrogate function yield a variety of algorithms that are new to the literature. Furthermore, we show that AMM achieves an $O(1/k)$ rate of convergence to optimality, and the convergence rate becomes linear when the problem is locally restricted strongly convex and smooth. These rates provide new or stronger convergence results for many prior methods that can be viewed as specializations of AMM.
7.
  • Wu, Xuyang, et al. (author)
  • Delay-adaptive step-sizes for asynchronous learning
  • 2022
  • In: Proceedings of the 39th International Conference on Machine Learning. - : ML Research Press. ; , s. 24093-24113
  • Conference paper (peer-reviewed), abstract:
    • In scalable machine learning systems, model training is often parallelized over multiple nodes that run without tight synchronization. Most analysis results for the related asynchronous algorithms use an upper bound on the information delays in the system to determine learning rates. Not only are such bounds hard to obtain in advance, but they also result in unnecessarily slow convergence. In this paper, we show that it is possible to use learning rates that depend on the actual time-varying delays in the system. We develop general convergence results for delay-adaptive asynchronous iterations and specialize these to proximal incremental gradient descent and block coordinate descent algorithms. For each of these methods, we demonstrate how delays can be measured on-line, present delay-adaptive step-size policies, and illustrate their theoretical and practical advantages over the state-of-the-art.
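The core idea, shrinking the learning rate with the delay actually observed rather than a worst-case bound, can be sketched as follows. This is illustrative only (the 1/(1+tau) rule and names are my assumptions; the paper's policies for proximal incremental gradient and block-coordinate descent are more refined).

```python
import numpy as np

def delay_adaptive_gd(grad, x0, iters, delays, c=0.4):
    """Sketch of a delay-adaptive step-size rule.  The gradient applied at
    step t was computed tau_t iterations ago; instead of tuning the step for
    an assumed upper bound on delays, the step shrinks with the measured
    staleness of each gradient."""
    xs = [np.asarray(x0, dtype=float)]
    for t in range(iters):
        tau = min(delays[t], t)     # measured staleness of this gradient
        g = grad(xs[t - tau])       # stale iterate, as in async training
        step = c / (1.0 + tau)      # delay-adaptive learning rate
        xs.append(xs[-1] - step * g)
    return xs[-1]
```

When delays happen to be small, the step stays large; a worst-case rule would have to use the smallest step throughout, which is the slow convergence the abstract points out.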
8.
  • Wu, Xuyang, et al. (author)
  • Delay-agnostic Asynchronous Coordinate Update Algorithm
  • 2023
  • In: ICML'23. ; , s. 37582-37606
  • Conference paper (peer-reviewed), abstract:
    • We propose a delay-agnostic asynchronous coordinate update algorithm (DEGAS) for computing operator fixed points, with applications to asynchronous optimization. DEGAS includes novel asynchronous variants of ADMM and block-coordinate descent as special cases. We prove that DEGAS converges with both bounded and unbounded delays under delay-free parameter conditions. We also validate by theory and experiments that DEGAS adapts well to the actual delays. The effectiveness of DEGAS is demonstrated by numerical experiments on classification problems.
9.
  • Wu, Xuyang, et al. (author)
  • Delay-agnostic Asynchronous Distributed Optimization
  • 2023
  • In: 2023 62nd IEEE Conference on Decision and Control (CDC). - : Institute of Electrical and Electronics Engineers (IEEE). - 9798350301243 ; , s. 1082-1087
  • Conference paper (peer-reviewed), abstract:
    • Existing asynchronous distributed optimization algorithms often use diminishing step-sizes that cause slow practical convergence, or fixed step-sizes that depend on an assumed upper bound of delays. Not only is such a delay bound hard to obtain in advance, but it is also large and therefore results in unnecessarily slow convergence. This paper develops asynchronous versions of two distributed algorithms, DGD and DGD-ATC, for solving consensus optimization problems over undirected networks. In contrast to alternatives, our algorithms can converge to the fixed point set of their synchronous counterparts using step-sizes that are independent of the delays. We establish convergence guarantees under both partial and total asynchrony. The practical performance of our algorithms is demonstrated by numerical experiments.
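The synchronous DGD iteration, whose fixed-point set the paper's delay-agnostic asynchronous variant converges to, can be sketched as follows (the setup and names are illustrative; the paper concerns the asynchronous case).

```python
import numpy as np

def dgd(grads, W, x0, step, iters):
    """Synchronous DGD sketch for consensus optimization.  Each node mixes
    its neighbors' iterates through the doubly stochastic matrix W, then
    takes a local gradient step."""
    x = np.asarray(x0, dtype=float)     # entry i = node i's current iterate
    for _ in range(iters):
        local = np.array([g(v) for g, v in zip(grads, x)])
        x = W @ x - step * local        # mix with neighbors, then descend
    return x
```

Note that with a constant step, the DGD fixed point sits near, but not exactly at, the consensus optimum; that is why the convergence guarantee is stated with respect to the fixed-point set of the synchronous method rather than the optimum itself.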
10.
  • Wu, Xuyang, et al. (author)
  • Distributed Optimization with Coupling Constraints
  • 2022
  • In: IEEE Transactions on Automatic Control. - : Institute of Electrical and Electronics Engineers (IEEE). - 0018-9286 .- 1558-2523. ; , s. 1-1
  • Journal article (peer-reviewed), abstract:
    • In this paper, we investigate distributed convex optimization with both inequality and equality constraints, where the objective function can be a general nonsmooth convex function and all the constraints can be both sparsely and densely coupling. By strategically integrating ideas from primal-dual, proximal, and virtual-queue optimization methods, we develop a novel distributed algorithm, referred to as IPLUX, to address the problem over a connected, undirected graph. We show that IPLUX achieves an $O(1/k)$ rate of convergence in terms of optimality and feasibility, which is stronger than the convergence results of the alternative methods and eliminates the standard assumption on the compactness of the feasible region. Finally, IPLUX exhibits faster convergence and higher efficiency than several state-of-the-art methods in the simulation. 
11.
  • Wu, Xuyang, et al. (author)
  • Distributed safe resource allocation using barrier functions
  • 2023
  • In: Automatica. - : Elsevier BV. - 0005-1098 .- 1873-2836. ; 153
  • Journal article (peer-reviewed), abstract:
    • Resource allocation plays a central role in networked systems such as smart grids, communication networks, and urban transportation systems. In these systems, many constraints have physical meaning and infeasible allocations can result in a system breakdown. Hence, algorithms with only asymptotic feasibility guarantees can be insufficient, since they cannot ensure that an implementable solution is found in finite time. This paper proposes a distributed feasible method (DFM) for resource allocation based on barrier functions. In DFM, every iterate is feasible and safe to implement since it does not violate the physical constraints. We prove that, under mild conditions, DFM converges to an arbitrarily small neighborhood of the optimum. Numerical experiments demonstrate the competitive performance of DFM.
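How a barrier function keeps every iterate feasible can be illustrated with a toy centralized step (my sketch of the idea, not the paper's distributed algorithm; the capacity constraint, barrier weight mu, and backtracking rule are assumptions). The log-barrier gradient grows without bound near the boundary, and the step is halved until the new point stays strictly interior.

```python
import numpy as np

def barrier_feasible_step(grad_f, x, cap, mu=0.1, step=0.01):
    """One iteration of a barrier-based feasible method.  The barrier term
    -mu*log(cap - x) keeps iterates strictly below the capacities cap; the
    step is halved until the new point remains interior."""
    g = grad_f(x) + mu / (cap - x)   # gradient of f plus barrier gradient
    s = step
    x_new = x - s * g
    while np.any(x_new >= cap):      # backtrack: never cross the boundary
        s *= 0.5
        x_new = x - s * g
    return x_new
```

The iterates converge to the minimizer of the barrier-augmented problem, which sits within a mu-dependent neighborhood of the constrained optimum; shrinking mu tightens that neighborhood, matching the "arbitrarily small neighborhood" guarantee in the abstract.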
12.
  • Wu, Xuyang, et al. (author)
  • Optimal convergence rates of totally asynchronous optimization
  • 2022
  • In: 2022 IEEE 61st Conference on Decision and Control (CDC), Cancun, Mexico, 2022. - : Institute of Electrical and Electronics Engineers (IEEE). - 9781665467612 ; , s. 6484-6490
  • Conference paper (peer-reviewed), abstract:
    • Asynchronous optimization algorithms are at the core of modern machine learning and resource allocation systems. However, most convergence results consider bounded information delays and several important algorithms lack guarantees when they operate under total asynchrony. In this paper, we derive explicit convergence rates for the proximal incremental aggregated gradient (PIAG) and the asynchronous block-coordinate descent (Async-BCD) methods under a specific model of total asynchrony, and show that the derived rates are order-optimal. The convergence bounds provide an insightful understanding of how the growth rate of the delays deteriorates the convergence times of the algorithms. Our theoretical findings are demonstrated by a numerical example.