SwePub

Hit list for search "WFRF:(Burdakov Oleg 1953 ) "

Search: WFRF:(Burdakov Oleg 1953 )

  • Results 11-20 of 54
11.
  • Sysoev, Oleg, 1981-, et al. (authors)
  • A smoothed monotonic regression via L2 regularization
  • 2019
  • In: Knowledge and Information Systems. Springer. ISSN 0219-1377, 0219-3116. 59(1), pp. 197-218
  • Journal article (peer-reviewed), abstract:
    • Monotonic regression is a standard method for extracting a monotone function from non-monotonic data, and it is used in many applications. However, a known drawback of this method is that its fitted response is a piecewise constant function, while practical response functions are often required to be continuous. The method proposed in this paper achieves monotonicity and smoothness of the regression by introducing an L2 regularization term. In order to achieve a low computational complexity and at the same time to provide a high predictive power of the method, we introduce a probabilistically motivated approach for selecting the regularization parameters. In addition, we present a technique for correcting inconsistencies on the boundary. We show that the complexity of the proposed method is O(n²). Our simulations demonstrate that when the data are large and the expected response is a complicated function (which is typical in machine learning applications) or when there is a change point in the response, the proposed method has a higher predictive power than many of the existing methods.
  •  
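The piecewise-constant drawback the abstract mentions comes from classical monotonic regression, usually computed with the Pool Adjacent Violators Algorithm (PAVA). A minimal sketch of that baseline (not the paper's smoothed L2-regularized method):

```python
def pava(y):
    """Pool Adjacent Violators: least-squares nondecreasing fit to y."""
    means, counts = [], []
    for v in y:
        means.append(float(v))
        counts.append(1)
        # Merge adjacent blocks while the monotonicity constraint is violated.
        while len(means) > 1 and means[-2] > means[-1]:
            m2, c2 = means.pop(), counts.pop()
            m1, c1 = means.pop(), counts.pop()
            c = c1 + c2
            means.append((m1 * c1 + m2 * c2) / c)
            counts.append(c)
    fit = []
    for m, c in zip(means, counts):
        fit.extend([m] * c)  # piecewise-constant output
    return fit
```

For example, `pava([1, 3, 2, 4])` pools the violating pair into a flat block and returns `[1.0, 2.5, 2.5, 4.0]`; the plateaus are exactly the piecewise-constant behavior the proposed method smooths out.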
12.
  • Sysoev, Oleg, et al. (authors)
  • Bootstrap confidence intervals for large-scale multivariate monotonic regression problems
  • 2016
  • In: Communications in Statistics: Simulation and Computation. Taylor & Francis. ISSN 0361-0918, 1532-4141. 45(3), pp. 1025-1040
  • Journal article (peer-reviewed), abstract:
    • Recently, the methods used to estimate monotonic regression (MR) models have been substantially improved, and some algorithms can now produce high-accuracy monotonic fits to multivariate datasets containing over a million observations. Nevertheless, the computational burden can be prohibitively large for resampling techniques in which numerous datasets are processed independently of each other. Here, we present efficient algorithms for estimation of confidence limits in large-scale settings that take into account the similarity of the bootstrap or jackknifed datasets to which MR models are fitted. In addition, we introduce modifications that substantially improve the accuracy of MR solutions for binary response variables. The performance of our algorithms is illustrated using data on death in coronary heart disease for a large population. This example also illustrates that MR can be a valuable complement to logistic regression.
  •  
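The resampling burden the abstract refers to is visible in a naive percentile bootstrap, where every replicate dataset is refitted from scratch. A generic sketch of that baseline (illustrative only; the paper's contribution is to exploit similarity between the resampled datasets, which this sketch does not do):

```python
import random
import statistics

def bootstrap_ci(data, stat, n_boot=2000, alpha=0.05, seed=0):
    """Naive percentile bootstrap: refit `stat` on each resample independently."""
    rng = random.Random(seed)
    reps = sorted(
        stat([rng.choice(data) for _ in data])  # resample with replacement
        for _ in range(n_boot)
    )
    lo = reps[int(alpha / 2 * n_boot)]
    hi = reps[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi
```

With `stat` replaced by a monotonic-regression fit evaluated at a grid point, this loop is exactly the independent, expensive computation whose cost the paper reduces.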
13.
  • Sysoev, Oleg, et al. (authors)
  • Bootstrap estimation of the variance of the error term in monotonic regression models
  • 2013
  • In: Journal of Statistical Computation and Simulation. Taylor & Francis Group. ISSN 0094-9655, 1563-5163. 83(4), pp. 625-638
  • Journal article (peer-reviewed), abstract:
    • The variance of the error term in ordinary regression models and linear smoothers is usually estimated by adjusting the average squared residual for the trace of the smoothing matrix (the degrees of freedom of the predicted response). However, other types of variance estimators are needed when using monotonic regression (MR) models, which are particularly suitable for estimating response functions with pronounced thresholds. Here, we propose a simple bootstrap estimator to compensate for the over-fitting that occurs when MR models are estimated from empirical data. Furthermore, we show that, in the case of one or two predictors, the performance of this estimator can be enhanced by introducing adjustment factors that take into account the slope of the response function and characteristics of the distribution of the explanatory variables. Extensive simulations show that our estimators perform satisfactorily for a great variety of monotonic functions and error distributions.
  •  
14.
  • Sysoev, Oleg, 1981-, et al. (authors)
  • New optimization methods for isotonic regression in L1 norm
  • 2007
  • In: EUROPT-OMS Conference on Optimization, 2007. University of Hradec Kralove, Czech Republic: Guadeamus. p. 133
  • Conference paper (other academic/artistic), abstract:
    • The isotonic regression (IR) problem has numerous important applications in statistics, operations research, biology, image and signal processing, and other areas. IR is a minimization problem whose objective function is the distance from a given point to a convex set defined by monotonicity constraints of the form: the i-th component of the decision vector is less than or equal to its j-th component. The distance in IR is usually associated with an Lp norm, of which the L1 and L2 norms are of the highest practical interest. Conventional optimization methods are unable to solve large-scale IR problems originating from large data sets. Historically, most efforts focused on the IR problem in the L2 norm. Exact algorithms were developed, such as the minimum lower sets algorithm by Brunk, the min-max algorithm by Lee, the network flow algorithm by Maxwell & Muchstadt, and the IBCR algorithm by Block et al. Among them, the IBCR algorithm has proved to be the most numerically efficient, but it is not robust enough. An alternative approach is to solve the IR problem approximately. Following this approach, Burdakov et al. developed the GPAV algorithm, whose block refinement extension, GPAVR, is able to solve the IR problem with high accuracy in far less time than the exact algorithms. Apart from this, GPAVR is a very robust algorithm. Unfortunately, for the L1 norm there are no algorithms as efficient as those for the L2 norm. In our talk, we introduce new algorithms, GPAVR1 and IBCR1, which are extensions of the GPAV and IBCR algorithms to the L1 norm. We also present results of numerical experiments that demonstrate the high efficiency of the new algorithms, especially for very large-scale problems.
  •  
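For the L1 norm, the exact per-block solution is a median rather than a mean, so the PAVA scheme generalizes by pooling observations and taking block medians. A minimal sketch of that idea (illustrative only; not the GPAVR1 or IBCR1 algorithms from the talk):

```python
import statistics

def pava_l1(y):
    """Generalized PAVA for the L1 norm: each block's fitted value is the
    median of its pooled observations instead of their mean."""
    blocks = []  # list of (block value, pooled observations)
    for v in y:
        blocks.append((float(v), [float(v)]))
        # Merge adjacent blocks while monotonicity is violated.
        while len(blocks) > 1 and blocks[-2][0] > blocks[-1][0]:
            _, v2 = blocks.pop()
            _, v1 = blocks.pop()
            vals = v1 + v2
            blocks.append((statistics.median(vals), vals))
    fit = []
    for m, vals in blocks:
        fit.extend([m] * len(vals))
    return fit
```

Because L1 block minimizers are medians, the fit is robust to outliers, at the cost of non-uniqueness: several monotone fits can attain the same total absolute deviation.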
15.
  • Andersson, Mats, et al. (authors)
  • Global search strategies for solving multilinear least-squares problems
  • 2012
  • In: Sultan Qaboos University Journal for Science. Sultan Qaboos University. ISSN 1027-524X. 17(1), pp. 12-21
  • Journal article (peer-reviewed), abstract:
    • The multilinear least-squares (MLLS) problem is an extension of the linear least-squares problem. The difference is that a multilinear operator is used in place of a matrix-vector product. The MLLS is typically a large-scale problem characterized by a large number of local minimizers. It originates, for instance, from the design of filter networks. We present a global search strategy that allows for moving from one local minimizer to a better one. The efficiency of this strategy is illustrated by results of numerical experiments performed for some problems related to the design of filter networks.
  •  
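The simplest instance of the MLLS structure is a rank-1 bilinear fit: with one factor held fixed, the subproblem in the other factor is ordinary linear least squares, which invites alternating minimization. A sketch of that local scheme (illustrative only; the paper's contribution is the global strategy for moving between local minimizers, which is not shown here):

```python
def bilinear_als(M, iters=100):
    """Fit M[i][j] ≈ u[i] * v[j] by alternating least squares: with one
    factor fixed, the optimal other factor has a closed-form LS solution."""
    m, n = len(M), len(M[0])
    u, v = [1.0] * m, [1.0] * n
    for _ in range(iters):
        uu = sum(x * x for x in u)
        # Optimal v given u: v[j] = (sum_i u[i] * M[i][j]) / (sum_i u[i]^2)
        v = [sum(u[i] * M[i][j] for i in range(m)) / uu for j in range(n)]
        vv = sum(x * x for x in v)
        u = [sum(v[j] * M[i][j] for j in range(n)) / vv for i in range(m)]
    return u, v
```

On an exactly rank-1 matrix this converges to a global minimizer; on general data it only finds a local one, which is precisely why a global search strategy is needed.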
16.
  • Andersson, Mats, et al. (authors)
  • Global Search Strategies for Solving Multilinear Least-squares Problems
  • 2011
  • Report (other academic/artistic), abstract:
    • The multilinear least-squares (MLLS) problem is an extension of the linear least-squares problem. The difference is that a multilinear operator is used in place of a matrix-vector product. The MLLS is typically a large-scale problem characterized by a large number of local minimizers. It originates, for instance, from the design of filter networks. We present a global search strategy that allows for moving from one local minimizer to a better one. The efficiency of this strategy is illustrated by results of numerical experiments performed for some problems related to the design of filter networks.
  •  
17.
  • Andersson, Mats, et al. (authors)
  • Sparsity Optimization in Design of Multidimensional Filter Networks
  • 2013
  • Report (other academic/artistic), abstract:
    • Filter networks are a powerful tool for reducing image processing time while maintaining reasonably high quality. They are composed of sparse sub-filters whose high sparsity ensures fast image processing. The filter network design is related to solving a sparse optimization problem in which a cardinality constraint bounds the sparsity level from above. In the case of sequentially connected sub-filters, which is the simplest network structure of those considered in this paper, a cardinality-constrained multilinear least-squares (MLLS) problem is to be solved. Even when the cardinality constraint is disregarded, the MLLS is typically a large-scale problem characterized by a large number of local minimizers, each of which is singular and non-isolated. The cardinality constraint makes the problem even more difficult to solve. An approach for approximately solving the cardinality-constrained MLLS problem is presented. It is then applied to solving a bi-criteria optimization problem in which both the time and quality of image processing are optimized. The developed approach is extended to designing filter networks of a more general structure. Its efficiency is demonstrated by designing certain 2D and 3D filter networks, and it is compared with the existing approaches.
  •  
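One standard way to handle a cardinality constraint approximately is iterative hard thresholding: alternate a gradient step on the least-squares objective with a projection onto k-sparse vectors. A linear (not multilinear) sketch, purely illustrative and not the paper's method:

```python
def iht(A, b, k, step=0.1, iters=200):
    """Iterative hard thresholding for min ||Ax - b||^2 s.t. ||x||_0 <= k:
    a gradient step followed by projection onto k-sparse vectors."""
    m, n = len(A), len(A[0])
    x = [0.0] * n
    for _ in range(iters):
        r = [sum(A[i][j] * x[j] for j in range(n)) - b[i] for i in range(m)]
        g = [2 * sum(A[i][j] * r[i] for i in range(m)) for j in range(n)]
        x = [x[j] - step * g[j] for j in range(n)]
        # Projection: keep the k largest-magnitude entries, zero the rest.
        keep = set(sorted(range(n), key=lambda j: abs(x[j]), reverse=True)[:k])
        x = [x[j] if j in keep else 0.0 for j in range(n)]
    return x
```

The projection step is what makes the feasible set nonconvex, which compounds the difficulty already posed by the many singular local minimizers of the unconstrained MLLS problem.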
18.
  • Andersson, Mats, et al. (authors)
  • Sparsity Optimization in Design of Multidimensional Filter Networks
  • 2015
  • In: Optimization and Engineering. Springer. ISSN 1389-4420, 1573-2924. 16(2), pp. 259-277
  • Journal article (peer-reviewed), abstract:
    • Filter networks are used as a powerful tool for reducing image processing time while maintaining high image quality. They are composed of sparse sub-filters whose high sparsity ensures fast image processing. The filter network design is related to solving a sparse optimization problem in which a cardinality constraint bounds the sparsity level from above. In the case of sequentially connected sub-filters, which is the simplest network structure of those considered in this paper, a cardinality-constrained multilinear least-squares (MLLS) problem is to be solved. Even when disregarding the cardinality constraint, the MLLS is typically a large-scale problem characterized by a large number of local minimizers, each of which is singular and non-isolated. The cardinality constraint makes the problem even more difficult to solve. An approach for approximately solving the cardinality-constrained MLLS problem is presented. It is then applied to solving a bi-criteria optimization problem in which both the time and quality of image processing are optimized. The developed approach is extended to designing filter networks of a more general structure. Its efficiency is demonstrated by designing certain 2D and 3D filter networks. It is also compared with the existing approaches.
  •  
19.
  • Anistratov, Pavel, 1990-, et al. (authors)
  • Autonomous-Vehicle Maneuver Planning Using Segmentation and the Alternating Augmented Lagrangian Method
  • 2020
  • In: 21st IFAC World Congress Proceedings. Elsevier. 53(2), pp. 15558-15565
  • Conference paper (peer-reviewed), abstract:
    • Segmenting a motion-planning problem into smaller subproblems can be beneficial in terms of computational complexity. This observation is used as the basis for a new sub-maneuver decomposition approach, investigated in this paper in the context of optimal evasive maneuvers for autonomous ground vehicles. The recently published alternating augmented Lagrangian method is adopted and leveraged; it turns out to fit the problem formulation, with several attractive properties of the solution procedure. The decomposition is based on moving the coupling constraints between the sub-maneuvers into a separate coordination problem, which can be solved analytically. The remaining constraints and the objective function are decomposed into subproblems, one for each segment, which means that parallel computation is possible and beneficial. The method is implemented and evaluated in a safety-critical double lane-change scenario. By using the solution of a low-complexity initialization problem and applying warm-start techniques in the optimization, a solution can be obtained after just a few alternating iterations of the developed approach. The resulting computational time is lower than that of solving a single optimization problem for the full maneuver.
  •  
20.
  • Brust, Johannes, et al. (authors)
  • A dense initialization for limited-memory quasi-Newton methods
  • 2019
  • In: Computational Optimization and Applications. Springer. ISSN 0926-6003, 1573-2894. 74(1), pp. 121-142
  • Journal article (other academic/artistic), abstract:
    • We consider a family of dense initializations for limited-memory quasi-Newton methods. The proposed initialization exploits an eigendecomposition-based separation of the full space into two complementary subspaces, assigning a different initialization parameter to each subspace. This family of dense initializations is proposed in the context of a limited-memory Broyden–Fletcher–Goldfarb–Shanno (L-BFGS) trust-region method that makes use of a shape-changing norm to define each subproblem. As with L-BFGS methods that traditionally use diagonal initialization, the dense initialization and the sequence of generated quasi-Newton matrices are never explicitly formed. Numerical experiments on the CUTEst test set suggest that this initialization together with the shape-changing trust-region method outperforms other L-BFGS methods for solving general nonconvex unconstrained optimization problems. While this dense initialization is proposed in the context of a special trust-region method, it has broad applications for more general quasi-Newton trust-region and line search methods. In fact, this initialization is suitable for use with any quasi-Newton update that admits a compact representation and, in particular, any member of the Broyden class of updates.
  •  
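The initialization the abstract discusses enters L-BFGS through the implicit inverse-Hessian approximation that the two-loop recursion applies to the gradient. The standard choice is a scalar gamma * I built from the newest curvature pair; the paper replaces it with a dense two-parameter alternative on complementary eigen-subspaces. A sketch of the standard recursion only (scalar initialization, not the paper's dense one):

```python
def lbfgs_direction(grad, s_list, y_list):
    """Two-loop recursion: apply the implicit inverse-Hessian approximation
    to grad, using the standard scalar initialization gamma * I."""
    rhos = [1.0 / sum(yi * si for yi, si in zip(y, s))
            for s, y in zip(s_list, y_list)]
    q = list(grad)
    alphas = []
    for s, y, rho in reversed(list(zip(s_list, y_list, rhos))):
        a = rho * sum(si * qi for si, qi in zip(s, q))
        alphas.append(a)
        q = [qi - a * yi for qi, yi in zip(q, y)]
    if s_list:  # standard scaling: gamma = s.y / y.y from the newest pair
        s, y = s_list[-1], y_list[-1]
        gamma = sum(si * yi for si, yi in zip(s, y)) / sum(yi * yi for yi in y)
    else:
        gamma = 1.0
    r = [gamma * qi for qi in q]
    for (s, y, rho), a in zip(zip(s_list, y_list, rhos), reversed(alphas)):
        beta = rho * sum(yi * ri for yi, ri in zip(y, r))
        r = [ri + (a - beta) * si for ri, si in zip(r, s)]
    return [-ri for ri in r]  # quasi-Newton descent direction
```

Note that, just as the abstract states for the dense variant, neither the stored pairs nor the initialization ever requires forming the quasi-Newton matrix explicitly.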