SwePub
Search the SwePub database


Result list for search "WFRF:(Yi Xinlei) srt2:(2020)"

Search: WFRF:(Yi Xinlei) > (2020)

  • Results 1-6 of 6
1.
  • Beal, Jacob, et al. (author)
  • Robust estimation of bacterial cell count from optical density
  • 2020
  • In: Communications Biology. - : Springer Science and Business Media LLC. - 2399-3642. ; 3:1
  • Journal article (peer-reviewed), abstract:
    • Optical density (OD) is widely used to estimate the density of cells in liquid culture, but cannot be compared between instruments without a standardized calibration protocol and is challenging to relate to actual cell count. We address this with an interlaboratory study comparing three simple, low-cost, and highly accessible OD calibration protocols across 244 laboratories, applied to eight strains of constitutive GFP-expressing E. coli. Based on our results, we recommend calibrating OD to estimated cell count using serial dilution of silica microspheres, which produces highly precise calibration (95.5% of residuals <1.2-fold), is easily assessed for quality control, also assesses instrument effective linear range, and can be combined with fluorescence calibration to obtain units of Molecules of Equivalent Fluorescein (MEFL) per cell, allowing direct comparison and data fusion with flow cytometry measurements: in our study, fluorescence per cell measurements showed only a 1.07-fold mean difference between plate reader and flow cytometry data.
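
For readers who want to try the recommended conversion, a minimal Python sketch is given below: it fits a particles-per-OD calibration factor from a microsphere dilution series and applies it to a sample reading. The numbers and the simple through-the-origin fit are illustrative assumptions, not the calibration protocol published in the article.

    import numpy as np

    # Assumed example data: particle counts per well from a serial dilution of
    # silica microspheres, and the OD measured for each dilution.
    sphere_counts = np.array([5.0e8, 2.5e8, 1.25e8, 6.25e7, 3.125e7])
    od_readings   = np.array([0.520, 0.262, 0.130, 0.066, 0.034])

    # Calibration factor (particles per OD unit), least-squares fit through the origin.
    particles_per_od = np.sum(sphere_counts * od_readings) / np.sum(od_readings ** 2)

    # Convert an OD reading of a bacterial culture to an estimated cell count.
    sample_od = 0.25
    estimated_cells = sample_od * particles_per_od
    print(f"~{estimated_cells:.2e} estimated cells (valid only within the instrument's linear range)")
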
2.
  • Yang, T., et al. (author)
  • Distributed least squares solver for network linear equations
  • 2020
  • In: Automatica. - : Elsevier Ltd. - 0005-1098 .- 1873-2836. ; 113
  • Journal article (peer-reviewed), abstract:
    • In this paper, we study the problem of finding the least squares solutions of over-determined linear algebraic equations over networks in a distributed manner. Each node has access to one of the linear equations and holds a dynamic state. We first propose a distributed least squares solver over connected undirected interaction graphs and establish a necessary and sufficient condition on the step-size under which the algorithm exponentially converges to the least squares solution. Next, we develop a distributed least squares solver over strongly connected directed graphs and show that the proposed algorithm exponentially converges to the least squares solution provided the step-size is sufficiently small. Moreover, we develop a finite-time least squares solver by equipping the proposed algorithms with a finite-time decentralized computation mechanism. The theoretical findings are validated and illustrated by numerical simulation examples.
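
As a rough illustration of the setting described above (each node holding one equation and exchanging states with its neighbors), here is a generic consensus-plus-gradient sketch in Python. It is not the solver proposed in the paper: the step-size is an arbitrary assumption, and with a constant step the states only reach a neighborhood of the least squares solution.

    import numpy as np

    rng = np.random.default_rng(0)
    n, d = 6, 3                          # 6 nodes, unknown x in R^3 (over-determined system)
    A = rng.standard_normal((n, d))      # node i holds the single equation A[i] @ x = b[i]
    b = rng.standard_normal(n)

    # Doubly stochastic averaging weights on an undirected ring graph.
    W = np.zeros((n, n))
    for i in range(n):
        W[i, i] = 1 / 3
        W[i, (i - 1) % n] = W[i, (i + 1) % n] = 1 / 3

    x = np.zeros((n, d))                 # one local state per node
    alpha = 0.05                         # constant step-size (assumed, not the paper's condition)
    for _ in range(5000):
        residual = np.einsum('ij,ij->i', A, x) - b        # local residuals A[i] @ x[i] - b[i]
        x = W @ x - alpha * 2 * residual[:, None] * A     # consensus step plus local gradient step

    x_star, *_ = np.linalg.lstsq(A, b, rcond=None)
    print(np.max(np.linalg.norm(x - x_star, axis=1)))     # states end up near the LS solution (exact only as alpha -> 0)
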
3.
  • Yi, Xinlei, et al. (author)
  • A Distributed Primal-Dual Algorithm for Bandit Online Convex Optimization with Time-Varying Coupled Inequality Constraints
  • 2020
  • In: Proceedings 2020 American Control Conference, ACC 2020. - : Institute of Electrical and Electronics Engineers (IEEE). ; pp. 327-332
  • Conference paper (peer-reviewed), abstract:
    • This paper considers distributed bandit online optimization with time-varying coupled inequality constraints. The global cost and the coupled constraint functions are the summations of local convex cost and constraint functions, respectively. The local cost and constraint functions are held privately; only at the end of each period are the constraint functions fully revealed, while only the values of the cost functions at the queried points are revealed, i.e., in a so-called bandit manner. A distributed bandit online primal-dual algorithm with two queries for the cost functions per period is proposed. The performance of the algorithm is evaluated using its expected regret, i.e., the expected difference between the outcome of the algorithm and the optimal choice in hindsight, as well as its constraint violation. We show that O(T^c) expected regret and O(T^(1-c/2)) constraint violation are achieved by the proposed algorithm, where T is the total number of iterations and c ∈ [0.5, 1) is a user-defined trade-off parameter. Assuming Slater's condition, we show that O(√T) expected regret and O(√T) constraint violation are achieved. The theoretical results are illustrated by numerical simulations.
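
The "two queries per period" bandit feedback can be illustrated with the standard two-point gradient estimator sketched below (Python, toy single-agent cost). This shows only the estimator, not the distributed primal-dual algorithm or its constraint handling, and the step-size and smoothing radius are assumptions.

    import numpy as np

    def two_point_gradient(f, x, delta, rng):
        """Estimate grad f(x) from two function evaluations (bandit feedback)."""
        d = x.size
        u = rng.standard_normal(d)
        u /= np.linalg.norm(u)                           # random direction on the unit sphere
        return (d / (2.0 * delta)) * (f(x + delta * u) - f(x - delta * u)) * u

    rng = np.random.default_rng(1)
    f = lambda z: np.sum((z - 1.0) ** 2)                 # toy convex cost with minimizer at 1
    x = np.zeros(5)
    for _ in range(2000):                                # gradient descent using bandit feedback only
        x -= 0.01 * two_point_gradient(f, x, delta=1e-3, rng=rng)
    print(np.round(x, 3))                                # close to the minimizer [1, 1, 1, 1, 1]
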
4.
  • Yi, Xinlei, et al. (author)
  • Distributed Online Convex Optimization With Time-Varying Coupled Inequality Constraints
  • 2020
  • In: IEEE Transactions on Signal Processing. - : Institute of Electrical and Electronics Engineers (IEEE). - 1053-587X .- 1941-0476. ; 68, pp. 731-746
  • Journal article (peer-reviewed), abstract:
    • This paper considers distributed online optimization with time-varying coupled inequality constraints. The global objective function is composed of local convex cost and regularization functions and the coupled constraint function is the sum of local convex functions. A distributed online primal-dual dynamic mirror descent algorithm is proposed to solve this problem, where the local cost, regularization, and constraint functions are held privately and revealed only after each time slot. Without assuming Slater's condition, we first derive regret and constraint violation bounds for the algorithm and show how they depend on the stepsize sequences, the accumulated dynamic variation of the comparator sequence, the number of agents, and the network connectivity. As a result, under some natural decreasing stepsize sequences, we prove that the algorithm achieves sublinear dynamic regret and constraint violation if the accumulated dynamic variation of the optimal sequence also grows sublinearly. We also prove that the algorithm achieves sublinear static regret and constraint violation under mild conditions. Assuming Slater's condition, we show that the algorithm achieves smaller bounds on the constraint violation. In addition, smaller bounds on the static regret are achieved when the objective function is strongly convex. Finally, numerical simulations are provided to illustrate the effectiveness of the theoretical results.
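
To make the regret and constraint-violation bookkeeping concrete, below is a heavily simplified single-agent online primal-dual sketch (Python) with a drifting cost and a time-varying constraint. The paper's algorithm is distributed and uses dynamic mirror descent, so the update rules, step-sizes, and comparator here are illustrative assumptions only.

    import numpy as np

    T, alpha, gamma = 1000, 0.05, 0.05
    x, q = 0.0, 0.0                                   # primal iterate and dual multiplier
    regret, violation = 0.0, 0.0

    for t in range(1, T + 1):
        theta = np.sin(2 * np.pi * t / T)             # drifting minimizer of the cost
        c_t = 0.5 + 0.2 * np.cos(2 * np.pi * t / T)   # time-varying constraint bound
        f = lambda z: (z - theta) ** 2                # cost f_t, revealed after x is played
        g = lambda z: z - c_t                         # constraint g_t(z) <= 0

        regret += f(x) - f(min(theta, c_t))           # compare with the per-round feasible minimizer
        violation += max(0.0, g(x))

        df, dg = 2 * (x - theta), 1.0
        x = float(np.clip(x - alpha * (df + q * dg), -1.0, 1.0))   # projected primal step
        q = max(0.0, q + gamma * g(x))                             # dual ascent, kept nonnegative

    print(regret, violation)                          # cumulative dynamic regret and constraint violation
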
5.
  • Yi, Xinlei, 1990- (author)
  • Distributed Optimization and Control: Primal-Dual, Online, and Event-Triggered Algorithms
  • 2020
  • Doctoral thesis (other academic/artistic), abstract:
    • In distributed optimization and control, each network node performs local computation based on its own information and information received from its neighbors through a communication network to achieve a global objective. Although many distributed optimization and control algorithms have been proposed, core theoretical problems with important practical relevance remain. For example, what convergence properties can be obtained for nonconvex problems? How to tackle time-varying cost and constraint functions? Can these algorithms work under limited communication resources? This thesis contributes to answering these questions by providing new algorithms with better convergence rates under less information exchange than existing algorithms. It consists of three parts. In the first part, we consider distributed nonconvex optimization problems. It is hard to solve these problems and often only stationary points can be found. We propose distributed primal-dual optimization algorithms under different information feedback settings. Specifically, when full-information feedback or deterministic zeroth-order oracle feedback is available, we show that the proposed algorithms converge sublinearly to a stationary point if each local cost function is smooth. They converge linearly to a global optimum if the global cost function also satisfies the Polyak-Lojasiewicz condition. This condition is weaker than strong convexity, which is a standard condition in the literature for proving linear convergence of distributed optimization algorithms. When stochastic gradient feedback or stochastic zeroth-order oracle feedback is available, we show that the proposed algorithms achieve linear speedup convergence rates, meaning that the convergence rates decrease linearly with the number of computing nodes. In the second part, distributed online convex optimization problems are considered. For such problems, the cost and constraint functions are revealed at the end of each time slot. We focus on time-varying coupled inequality constraints and time-varying directed communication networks. We propose one primal-dual dynamic mirror descent algorithm and two bandit primal-dual algorithms. It is shown that these distributed algorithms achieve the same sublinear regret and constraint violation bounds as existing centralized algorithms. In the third and final part, in order to achieve a common control objective for a networked system, we propose distributed event-triggered algorithms to reduce the amount of information exchanged. Specifically, we propose dynamic event-triggered control algorithms to solve the average consensus problem for first-order systems, the global consensus problem for systems with input saturation, and the formation control problem with connectivity preservation for first- and second-order systems. We show that these algorithms do not exhibit Zeno behavior and that they achieve exponential convergence rates.
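
As a small illustration of the event-triggered idea from the third part, the Python sketch below runs average consensus in which each agent re-broadcasts its state only when it has drifted sufficiently from its last broadcast value. The thesis proposes dynamic triggering laws with guaranteed exclusion of Zeno behavior; the static threshold, graph, and gains here are assumptions for illustration.

    import numpy as np

    n, steps, h = 5, 400, 0.05
    L = np.array([[ 2, -1,  0,  0, -1],               # Laplacian of an undirected ring graph
                  [-1,  2, -1,  0,  0],
                  [ 0, -1,  2, -1,  0],
                  [ 0,  0, -1,  2, -1],
                  [-1,  0,  0, -1,  2]], dtype=float)

    x = np.array([3.0, -1.0, 4.0, 0.5, -2.5])         # initial states, average 0.8
    x_hat = x.copy()                                   # last broadcast values
    events = 0
    for _ in range(steps):
        x = x - h * (L @ x_hat)                        # control uses broadcast states only
        trigger = np.abs(x - x_hat) > 0.05             # simple static triggering condition
        x_hat[trigger] = x[trigger]                    # broadcast (update) only where triggered
        events += int(trigger.sum())

    print(np.round(x, 2), events)                      # states near 0.8, far fewer than n*steps broadcasts
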
6.
  • Yi, Xinlei, et al. (author)
  • Linear Convergence for Distributed Optimization without Strong Convexity
  • 2020
  • In: Proceedings of the IEEE Conference on Decision and Control. - : Institute of Electrical and Electronics Engineers Inc. ; pp. 3643-3648
  • Conference paper (peer-reviewed), abstract:
    • This paper considers the distributed optimization problem of minimizing a global cost function formed by a sum of local smooth cost functions by using local information exchange. Various distributed optimization algorithms have been proposed for solving such a problem. A standard condition for proving the linear convergence for existing distributed algorithms is the strong convexity of the cost functions. However, the strong convexity may not hold for many practical applications, such as least squares and logistic regression. In this paper, we propose a distributed primal-dual gradient descent algorithm and establish its linear convergence under the condition that the global cost function satisfies the Polyak-Lojasiewicz condition. This condition is weaker than strong convexity and the global minimizer is not necessarily unique. The theoretical result is illustrated by numerical simulations. 
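
The point that the Polyak-Lojasiewicz condition covers problems that are not strongly convex can be checked on a toy example: least squares with a rank-deficient matrix has a singular Hessian and non-unique minimizers, yet plain centralized gradient descent still shows geometric decay of f(x) - f*. The Python sketch below is only this sanity check, not the distributed primal-dual algorithm of the paper.

    import numpy as np

    rng = np.random.default_rng(3)
    A = rng.standard_normal((20, 3)) @ rng.standard_normal((3, 5))   # rank 3 < 5: not strongly convex
    b = rng.standard_normal(20)
    f = lambda z: 0.5 * np.sum((A @ z - b) ** 2)
    f_star = f(np.linalg.pinv(A) @ b)                 # optimal value (minimizers are non-unique)

    x = np.zeros(5)
    L_smooth = np.linalg.norm(A, 2) ** 2              # smoothness constant ||A||_2^2
    gaps = []
    for _ in range(200):
        x -= (1.0 / L_smooth) * (A.T @ (A @ x - b))   # plain gradient descent
        gaps.append(f(x) - f_star)

    ratios = [gaps[k + 1] / gaps[k] for k in range(100)]
    print(max(ratios))                                # strictly below 1: geometric (linear-rate) decay of the gap
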