SwePub
Search the SwePub database


Hit list for the search "swepub ;conttype:(scientificother);pers:(Burdakov Oleg 1953)"

Search: swepub > Other academic/artistic > Burdakov Oleg 1953

  • Results 1-10 of 24
1.
  • Andersson, Mats, et al. (author)
  • Global Search Strategies for Solving Multilinear Least-squares Problems
  • 2011
  • Report (other academic/artistic), abstract:
    • The multilinear least-squares (MLLS) problem is an extension of the linear least-squares problem. The difference is that a multilinear operator is used in place of a matrix-vector product. The MLLS is typically a large-scale problem characterized by a large number of local minimizers. It originates, for instance, from the design of filter networks. We present a global search strategy that allows for moving from one local minimizer to a better one. The efficiency of this strategy is illustrated by results of numerical experiments performed for some problems related to the design of filter networks. [A brief illustrative sketch follows this entry.]
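As a hedged illustration of the multilinear structure behind MLLS (a toy bilinear instance, not the global search strategy of this report), the sketch below fits a 2D filter as a product of two 1D sub-filters by alternating least squares and keeps the best of a few random starts. The function name and the test filter F are invented for this example.

```python
# Toy bilinear least-squares problem min_{a,b} ||F - a b^T||_F^2, i.e. fitting a
# 2D filter as a product of two 1D sub-filters, by alternating least squares.
import numpy as np

def als_rank1(F, iters=50, seed=None):
    """Alternating least squares for min_{a,b} ||F - a b^T||_F^2 from a random start."""
    rng = np.random.default_rng(seed)
    b = rng.standard_normal(F.shape[1])
    for _ in range(iters):
        a = F @ b / (b @ b)      # optimal a with b fixed (a linear least-squares step)
        b = F.T @ a / (a @ a)    # optimal b with a fixed
    return a, b, float(np.linalg.norm(F - np.outer(a, b)) ** 2)

# Crude multistart: keep the best of several local solutions, a stand-in for
# "moving from one local minimizer to a better one" in this toy setting.
F = np.outer(np.hanning(9), np.hanning(9))                     # invented separable target filter
F += 0.01 * np.random.default_rng(0).standard_normal(F.shape)  # plus a small perturbation
best = min((als_rank1(F, seed=s) for s in range(10)), key=lambda t: t[2])
print("best residual:", best[2])
```

With more than two factors the same alternating idea applies, but far more local minimizers appear, which is what the report's global search strategy addresses.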
2.
  • Andersson, Mats, et al. (author)
  • Sparsity Optimization in Design of Multidimensional Filter Networks
  • 2013
  • Report (other academic/artistic), abstract:
    • Filter networks are a powerful tool used for reducing image processing time while maintaining reasonably high quality. They are composed of sparse sub-filters whose low sparsity ensures fast image processing. The filter network design is related to solving a sparse optimization problem where a cardinality constraint bounds the sparsity level from above. In the case of sequentially connected sub-filters, which is the simplest network structure of those considered in this paper, a cardinality-constrained multilinear least-squares (MLLS) problem is to be solved. If the cardinality constraint is disregarded, the MLLS is typically a large-scale problem characterized by a large number of local minimizers, each of which is singular and non-isolated. The cardinality constraint makes the problem even more difficult to solve. An approach for approximately solving the cardinality-constrained MLLS problem is presented. It is then applied to solving a bi-criteria optimization problem in which both the time and quality of image processing are optimized. The developed approach is extended to designing filter networks of a more general structure. Its efficiency is demonstrated by designing certain 2D and 3D filter networks. It is also compared with the existing approaches. [A brief illustrative sketch follows this entry.]
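To make the cardinality constraint concrete, here is a minimal hedged sketch (not the paper's approach): hard thresholding, the Euclidean projection onto the set of sub-filters with at most k nonzero coefficients. The vector h and the value of k are invented.

```python
# Illustration of the cardinality constraint ||h||_0 <= k on a sub-filter h:
# the Euclidean projection keeps the k largest-magnitude coefficients and
# zeros out the rest (hard thresholding).
import numpy as np

def project_cardinality(h, k):
    """Return the closest vector to h (in the 2-norm) with at most k nonzeros."""
    out = np.zeros_like(h)
    keep = np.argsort(np.abs(h))[-k:]   # indices of the k largest |h_i|
    out[keep] = h[keep]
    return out

h = np.array([0.05, -1.3, 0.02, 0.8, -0.01, 0.4])
print(project_cardinality(h, k=3))      # only the 3 largest-magnitude entries survive
```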
3.
  • Brust, Johannes, et al. (author)
  • A dense initialization for limited-memory quasi-Newton methods
  • 2019
  • In: Computational Optimization and Applications. - Springer. - 0926-6003 .- 1573-2894. ; 74:1, pp. 121-142
  • Journal article (other academic/artistic), abstract:
    • We consider a family of dense initializations for limited-memory quasi-Newton methods. The proposed initialization exploits an eigendecomposition-based separation of the full space into two complementary subspaces, assigning a different initialization parameter to each subspace. This family of dense initializations is proposed in the context of a limited-memory Broyden–Fletcher–Goldfarb–Shanno (L-BFGS) trust-region method that makes use of a shape-changing norm to define each subproblem. As with L-BFGS methods that traditionally use diagonal initialization, the dense initialization and the sequence of generated quasi-Newton matrices are never explicitly formed. Numerical experiments on the CUTEst test set suggest that this initialization together with the shape-changing trust-region method outperforms other L-BFGS methods for solving general nonconvex unconstrained optimization problems. While this dense initialization is proposed in the context of a special trust-region method, it has broad applications for more general quasi-Newton trust-region and line search methods. In fact, this initialization is suitable for use with any quasi-Newton update that admits a compact representation and, in particular, any member of the Broyden class of updates. [A brief sketch follows this entry.]
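A minimal numerical sketch, on invented data, of the idea of assigning a different initialization parameter to each of two complementary subspaces. The basis P and the parameter values are made up, and the article's method never forms these dense matrices explicitly, so this is only for illustration.

```python
# Two-parameter dense initialization
#   B0 = gamma1 * P @ P.T + gamma2 * (I - P @ P.T)
# where the columns of P span one subspace and gamma1, gamma2 are the parameters
# assigned to the two complementary subspaces (dense matrices formed only here,
# for illustration; in practice they are never built explicitly).
import numpy as np

n, k = 6, 2
rng = np.random.default_rng(0)
P, _ = np.linalg.qr(rng.standard_normal((n, k)))      # orthonormal basis, invented data
gamma1, gamma2 = 2.0, 0.5                             # hypothetical parameter choices

Ppar = P @ P.T                                        # projector onto span(P)
B0 = gamma1 * Ppar + gamma2 * (np.eye(n) - Ppar)      # dense initialization

# Sanity check: vectors in span(P) are scaled by gamma1, orthogonal ones by gamma2.
v = P[:, 0]
print(np.allclose(B0 @ v, gamma1 * v))                # True
```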
4.
  • Brust, Johannes, et al. (author)
  • Shape-Changing L-SR1 Trust-Region Methods
  • 2016
  • Report (other academic/artistic), abstract:
    • In this article, we propose a method for solving the trust-region subproblem when a limited-memory symmetric rank-one matrix is used in place of the true Hessian matrix. The method takes advantage of two shape-changing norms to decompose the trust-region subproblem into two separate problems, one of which has a closed-form solution while the other is easy to solve. Sufficient conditions for global solutions to both subproblems are given. The proposed solver makes use of the structure of limited-memory symmetric rank-one matrices to find solutions that satisfy these optimality conditions. Solutions to the trust-region subproblem are computed to high accuracy even in the so-called "hard case". [A brief sketch of the underlying compact representation follows this entry.]
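The solver exploits the structure of limited-memory SR1 matrices, which admit a compact representation. The hedged sketch below (invented data; the formula is the standard compact SR1 representation, not code from the report) builds that representation and checks it against a sequence of rank-one SR1 updates.

```python
# Compact representation of a limited-memory SR1 matrix,
#   B = B0 + Psi M^{-1} Psi^T,  Psi = Y - B0 S,  M = D + L + L^T - S^T B0 S,
# where S^T Y = L + D + U (strictly lower / diagonal / strictly upper parts).
# Verified against the usual sequence of rank-one SR1 updates.
import numpy as np

rng = np.random.default_rng(1)
n, k = 8, 3
S = rng.standard_normal((n, k))          # invented step matrix
Y = rng.standard_normal((n, k))          # invented gradient-difference matrix
B0 = 1.0 * np.eye(n)                     # simple diagonal initialization

# Compact form.
Psi = Y - B0 @ S
StY = S.T @ Y
D = np.diag(np.diag(StY))
L = np.tril(StY, -1)
M = D + L + L.T - S.T @ B0 @ S
B_compact = B0 + Psi @ np.linalg.solve(M, Psi.T)

# Reference: apply the SR1 update pair by pair.
B = B0.copy()
for j in range(k):
    r = Y[:, j] - B @ S[:, j]
    B = B + np.outer(r, r) / (r @ S[:, j])

print(np.allclose(B, B_compact))         # True whenever all updates are well defined
```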
5.
  • Burdakov, Oleg, 1953-, et al. (author)
  • A Dual Ascent Method for the Hop-constrained Shortest Path with Application to Positioning of Unmanned Aerial Vehicles
  • 2008
  • Report (other academic/artistic), abstract:
    • We study the problem of positioning unmanned aerial vehicles (UAVs) to maintain an unobstructed flow of communication from a surveying UAV to some base station through the use of multiple relay UAVs. This problem can be modeled as a hop-constrained shortest path problem in a large visibility graph. We propose a dual ascent method for solving this problem, optionally within a branch-and-bound framework. Computational tests show that realistic problems can be solved in a reasonably short time, and that the proposed method is faster than the classical dynamic programming approach. [A sketch of the dynamic programming baseline follows this entry.]
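For reference, a minimal sketch of the classical dynamic-programming baseline that the abstract compares against (not the proposed dual ascent method). The small graph is invented.

```python
# Shortest path from s to t using at most H hops, computed with
# Bellman-Ford-style iterations over the hop count.
import math

def hop_constrained_shortest_path(n, edges, s, t, H):
    """d[v] after round h = length of the shortest s-v path using at most h edges."""
    d = [math.inf] * n
    d[s] = 0.0
    for _ in range(H):
        new_d = d[:]                              # allow at most one extra hop per round
        for u, v, w in edges:
            if d[u] + w < new_d[v]:
                new_d[v] = d[u] + w
        d = new_d
    return d[t]

edges = [(0, 1, 1.0), (1, 2, 1.0), (2, 3, 1.0), (0, 3, 5.0)]
print(hop_constrained_shortest_path(4, edges, s=0, t=3, H=2))   # 5.0 (the 3-hop path is not allowed)
print(hop_constrained_shortest_path(4, edges, s=0, t=3, H=3))   # 3.0
```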
6.
  • Burdakov, Oleg, 1953-, et al. (author)
  • An O(n²) algorithm for isotonic regression
  • 2006
  • In: Large-Scale Nonlinear Optimization. - New York : Springer Science+Business Media B.V. - 0387300635 ; pp. 25-33
  • Conference paper (other academic/artistic), abstract:
    • We consider the problem of minimizing the distance from a given n-dimensional vector to a set defined by constraints of the form xi ≤ xj. Such constraints induce a partial order of the components xi, which can be illustrated by an acyclic directed graph. This problem is also known as the isotonic regression (IR) problem. IR has important applications in statistics, operations research and signal processing, with most of them characterized by a very large value of n. For such large-scale problems, it is of great practical importance to develop algorithms whose complexity does not rise with n too rapidly. The existing optimization-based algorithms and statistical IR algorithms have either too high computational complexity or too low accuracy of the approximation to the optimal solution they generate. We introduce a new IR algorithm, which can be viewed as a generalization of the Pool-Adjacent-Violators (PAV) algorithm from completely to partially ordered data. Our algorithm combines both low computational complexity O(n²) and high accuracy. This allows us to obtain sufficiently accurate solutions to IR problems with thousands of observations. [A sketch of the classical PAV algorithm follows this entry.]
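For completely ordered data, the classical PAV algorithm that this paper generalizes can be sketched in a few lines. This is only the totally ordered special case, not the new algorithm of the paper.

```python
# Classical Pool-Adjacent-Violators (PAV) algorithm:
#   min sum_i (x_i - a_i)^2  subject to  x_1 <= x_2 <= ... <= x_n.
def pav(a):
    # Each block is [sum, count]; its pooled (fitted) value is sum / count.
    blocks = []
    for v in a:
        blocks.append([v, 1])
        # Pool adjacent blocks while the monotonicity constraint is violated.
        while len(blocks) > 1 and blocks[-2][0] / blocks[-2][1] > blocks[-1][0] / blocks[-1][1]:
            s, c = blocks.pop()
            blocks[-1][0] += s
            blocks[-1][1] += c
    x = []
    for s, c in blocks:
        x.extend([s / c] * c)
    return x

print(pav([1.0, 3.0, 2.0, 4.0, 2.0]))   # [1.0, 2.5, 2.5, 3.0, 3.0]
```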
7.
  •  
8.
  • Burdakov, Oleg, 1953-, et al. (author)
  • Generalized PAV algorithm with block refinement for partially ordered monotonic regression
  • 2009
  • In: Proceedings of the Workshop on Learning Monotone Models from Data. ; pp. 23-37
  • Conference paper (other academic/artistic), abstract:
    • In this paper, the monotonic regression problem (MR) is considered. We have recently generalized for MR the well-known Pool-Adjacent-Violators algorithm (PAV) from the case of completely to partially ordered data sets. The new algorithm, called GPAV, combines both high accuracy and low computational complexity, which grows quadratically with the problem size. The actual growth observed in practice is typically far lower than quadratic. The fitted values of the exact MR solution compose blocks of equal values. Its GPAV approximation also has a block structure. We present here a technique for refining blocks produced by the GPAV algorithm to make the new blocks closer to those in the exact solution. This substantially improves the accuracy of the GPAV solution and does not deteriorate its computational complexity. The computational time for the new technique is approximately triple the time of running the GPAV algorithm. Its efficiency is demonstrated by results of our numerical experiments. [A small illustrative helper follows this entry.]
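A small hedged helper, on invented data, showing what the "block structure" of a fitted monotonic regression looks like in the totally ordered case. The paper's refinement technique, which operates on such blocks for partially ordered data, is not reproduced here.

```python
# Group the components of a fitted monotonic regression into maximal runs of
# (numerically) equal values, i.e. the blocks discussed in the abstract
# (totally ordered case only).
def blocks_of(x, tol=1e-12):
    """Return index groups of consecutive, equal fitted values."""
    groups = [[0]]
    for i in range(1, len(x)):
        if abs(x[i] - x[i - 1]) <= tol:
            groups[-1].append(i)
        else:
            groups.append([i])
    return groups

print(blocks_of([1.0, 2.5, 2.5, 3.0, 3.0]))   # [[0], [1, 2], [3, 4]]
```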
9.
  • Burdakov, Oleg, 1953-, et al. (author)
  • Mathematical Programs with Cardinality Constraints : Reformulation by Complementarity-type Constraints and a Regularization Method
  • 2014
  • Report (other academic/artistic), abstract:
    • Optimization problems with cardinality constraints are very difficult mathematical programs which are typically solved by global techniques from discrete optimization. Here we introduce a mixed-integer formulation whose standard relaxation still has the same solutions (in the sense of global minima) as the underlying cardinality-constrained problem; the relation between the local minima is also discussed in detail. Since our reformulation is a minimization problem in continuous variables, it allows one to apply ideas from that field to cardinality-constrained problems. Here, in particular, we therefore also derive suitable stationarity conditions and suggest an appropriate regularization method for the solution of optimization problems with cardinality constraints. This regularization method is shown to be globally convergent to a Mordukhovich-stationary point. Extensive numerical results are given to illustrate the behavior of this method. [A sketch of such a reformulation follows this entry.]
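As a hedged sketch (stated here for illustration and to be checked against the report itself), a complementarity-type reformulation of the kind the abstract describes, for a cardinality constraint $\|x\|_0 \le \kappa$ on $x \in X \subseteq R^n$, reads

\[
\begin{array}{cl}
\min\limits_{x,\,y} & f(x) \\
\mbox{s.t.} & x \in X, \quad e^T y \ge n - \kappa, \\
            & x_i y_i = 0, \quad 0 \le y_i \le 1, \quad i = 1,\ldots,n,
\end{array}
\]

where $e = (1,\ldots,1)^T$. Requiring $y_i \in \{0,1\}$ gives a mixed-integer formulation; relaxing to $0 \le y_i \le 1$, as above, gives a continuous reformulation of the type whose global minima, according to the abstract, coincide with those of the original cardinality-constrained problem.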
10.
  • Burdakov, Oleg, 1953-, et al. (author)
  • Monotonic data fitting and interpolation with application to postprocessing of FE solutions
  • 2007
  • In: CERFACS 20th Anniversary Conference on High-performance Computing, 2007. ; pp. 11-12
  • Conference paper (other academic/artistic), abstract:
    • In this talk we consider the isotonic regression (IR) problem, which can be formulated as follows. Given a vector $\bar{x} \in R^n$, find $x_* \in R^n$ which solves the problem: \begin{equation}\label{ir2} \begin{array}{cl} \mbox{min} & \|x-\bar{x}\|^2 \\ \mbox{s.t.} & Mx \ge 0. \end{array} \end{equation} The set of constraints $Mx \ge 0$ represents here the monotonicity relations of the form $x_i \le x_j$ for a given set of pairs of the components of $x$. The corresponding row of the matrix $M$ is composed mainly of zeros, except for its $i$th and $j$th elements, which are equal to $-1$ and $+1$, respectively. The most challenging applications of (\ref{ir2}) are characterized by very large values of $n$. We introduce new IR algorithms. Our numerical experiments demonstrate the high efficiency of our algorithms, especially for very large-scale problems, and their robustness. They are able to solve some problems which all existing IR algorithms fail to solve. We also outline our new algorithms for monotonicity-preserving interpolation of scattered multivariate data. In this talk we focus on the application of our IR algorithms to postprocessing of FE solutions. Non-monotonicity of the numerical solution is a typical drawback of the conventional methods of approximation, such as finite elements (FE), finite volumes, and mixed finite elements. The problem of monotonicity is particularly important in cases of highly anisotropic diffusion tensors or distorted unstructured meshes. For instance, in nuclear waste transport simulation, the non-monotonicity results in the presence of negative concentrations, which may lead to unacceptable concentrations and to failure of the chemistry calculations. Another drawback of the conventional methods is a possible violation of the discrete maximum principle, which establishes lower and upper bounds for the solution. We suggest here a least-change correction to the available FE solution $\bar{x} \in R^n$. This postprocessing procedure is aimed at recovering the monotonicity and some other important properties that may not be exhibited by $\bar{x}$. The mathematical formulation of the postprocessing problem is reduced to the following convex quadratic programming problem \begin{equation}\label{ls2} \begin{array}{cl} \mbox{min} & \|x-\bar{x}\|^2 \\ \mbox{s.t.} & Mx \ge 0, \quad l \le x \le u, \quad e^Tx = m, \end{array} \end{equation} where $e=(1,1, \ldots ,1)^T \in R^n$. The set of constraints $Mx \ge 0$ represents here the monotonicity relations between some of the adjacent mesh cells. The constraints $l \le x \le u$ originate from the discrete maximum principle. The last constraint formulates the conservativity requirement. The postprocessing based on (\ref{ls2}) is typically a large-scale problem. We introduce here algorithms for solving this problem. They are based on the observation that, in the presence of the monotonicity constraints only, problem (\ref{ls2}) is the classical monotonic regression problem, which can be solved efficiently by some of the available monotonic regression algorithms. This solution is then used to produce the optimal solution to problem (\ref{ls2}) in the presence of all the constraints. We present results of numerical experiments to illustrate the efficiency of our algorithms. [A toy sketch of the postprocessing problem follows this entry.]
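A toy hedged sketch of the postprocessing problem above (min $\|x-\bar{x}\|^2$ subject to $Mx \ge 0$, $l \le x \le u$, $e^Tx = m$), on invented data, using a general-purpose SLSQP routine from SciPy rather than the specialized large-scale monotonic-regression algorithms the talk describes.

```python
# Least-change correction of an "FE solution" xbar subject to monotonicity,
# bound, and conservativity constraints, solved with a generic SLSQP solver.
import numpy as np
from scipy.optimize import minimize

xbar = np.array([0.9, 0.2, 1.4, 1.1])         # invented FE solution to be corrected
pairs = [(0, 1), (1, 2), (2, 3)]              # monotonicity relations x_i <= x_j
M = np.zeros((len(pairs), len(xbar)))
for r, (i, j) in enumerate(pairs):            # row r encodes -x_i + x_j >= 0
    M[r, i], M[r, j] = -1.0, 1.0
l, u, m = 0.0, 2.0, xbar.sum()                # bounds and conservativity target

res = minimize(
    lambda x: np.sum((x - xbar) ** 2),        # least-change objective
    x0=xbar,
    method="SLSQP",
    bounds=[(l, u)] * len(xbar),
    constraints=[
        {"type": "ineq", "fun": lambda x: M @ x},        # M x >= 0
        {"type": "eq", "fun": lambda x: x.sum() - m},    # e^T x = m
    ],
)
print(res.x)                                  # a monotone, conservative correction of xbar
```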