SwePub

Hit list for query "WFRF:(Burdakov Oleg 1953 ) ;mspu:(article)"

Search: WFRF:(Burdakov Oleg 1953 ) > Journal article

  • Results 1-10 of 25
1.
  • Burdakov, Oleg, 1953-, et al. (author)
  • A Dual Active-Set Algorithm for Regularized Monotonic Regression
  • 2017
  • In: Journal of Optimization Theory and Applications. - : Springer. - 0022-3239 .- 1573-2878. ; 172:3, pp. 929-949
  • Journal article (peer-reviewed). Abstract:
    • Monotonic (isotonic) regression is a powerful tool used for solving a wide range of important applied problems. One of its features, which poses a limitation on its use in some areas, is that it produces a piecewise constant fitted response. For smoothing the fitted response, we introduce a regularization term in the monotonic regression, formulated as a least distance problem with monotonicity constraints. The resulting smoothed monotonic regression is a convex quadratic optimization problem. We focus on the case where the set of observations is completely (linearly) ordered. Our smoothed pool-adjacent-violators algorithm is designed for solving the regularized problem. It belongs to the class of dual active-set algorithms. We prove that it converges to the optimal solution in a finite number of iterations that does not exceed the problem size. One of its advantages is that the active set is progressively enlarged by including one or, typically, more constraints per iteration. This allowed large-scale test problems to be solved in a few iterations, even though their size was prohibitively large for conventional quadratic optimization solvers. Although the complexity of our algorithm grows quadratically with the problem size, we found its running time to grow almost linearly in our computational experiments.
  •  
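The record above builds on the classical pool-adjacent-violators (PAV) algorithm for the unregularized case. As a point of reference, here is a minimal pure-Python PAV sketch for a linearly ordered sample with unit weights (the function name and structure are illustrative, not the paper's SPAV implementation):

```python
def pav(y):
    """Pool-Adjacent-Violators: least-squares isotonic fit to a
    linearly ordered sequence y, assuming unit weights."""
    blocks = []  # each block is [sum of values, count]
    for v in y:
        blocks.append([v, 1])
        # Merge backwards while adjacent block means violate monotonicity.
        while len(blocks) > 1 and (
            blocks[-2][0] / blocks[-2][1] > blocks[-1][0] / blocks[-1][1]
        ):
            s, c = blocks.pop()
            blocks[-1][0] += s
            blocks[-1][1] += c
    # Expand the pooled blocks back into a fitted value per observation.
    fit = []
    for s, c in blocks:
        fit.extend([s / c] * c)
    return fit
```

The SPAV variant described in the abstract additionally smooths the fit via the regularization term; this sketch only produces the piecewise constant response that the abstract names as the limitation being addressed.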
2.
  • Burdakov, Oleg, 1953-, et al. (author)
  • A Dual Active-Set Algorithm for Regularized Slope-Constrained Monotonic Regression
  • 2017
  • In: Iranian Journal of Operations Research. - Tehran : CMV Verlag. - 2008-1189. ; 8:2, pp. 40-47
  • Journal article (peer-reviewed). Abstract:
    • In many problems, it is necessary to take monotonic relations into account. Monotonic (isotonic) Regression (MR) is often involved in solving such problems. The MR solutions are of a step-shaped form with a typical sharp change of values between adjacent steps. In some applications this is regarded as a disadvantage. We recently introduced a Smoothed MR (SMR) problem, which is obtained from the MR by adding a regularization penalty term. The SMR is aimed at smoothing the aforementioned sharp changes. Moreover, its solution has a far less pronounced step structure, if any. The purpose of this paper is to further improve the SMR solution by getting rid of such a structure. This is achieved by introducing a lower bound on the slope in the SMR. We call the result the Smoothed Slope-Constrained MR (SSCMR) problem. It is shown here how to reduce it to the SMR, which is a convex quadratic optimization problem. The Smoothed Pool Adjacent Violators (SPAV) algorithm developed in our recent publications for solving the SMR problem is adapted here to solving the SSCMR problem. This algorithm belongs to the class of dual active-set algorithms. Although the complexity of the SPAV algorithm is O(n²), its running time grows almost linearly with n in our computational experiments. We present numerical results which illustrate the predictive performance quality of our approach. They also show that the SSCMR solution is free of the undesirable features of the MR and SMR solutions.
  •  
3.
  • Burdakov, Oleg, 1953-, et al. (author)
  • Data preordering in generalized PAV algorithm for monotonic regression
  • 2006
  • In: Journal of Computational Mathematics. - 0254-9409 .- 1991-7139. ; 24:6, pp. 771-790
  • Journal article (peer-reviewed). Abstract:
    • Monotonic regression (MR) is a least distance problem with monotonicity constraints induced by a partially ordered data set of observations. In our recent publication [In Ser. Nonconvex Optimization and Its Applications, Springer-Verlag, (2006) 83, pp. 25-33], the Pool-Adjacent-Violators algorithm (PAV) was generalized from completely to partially ordered data sets (posets). The new algorithm, called GPAV, is characterized by its low computational complexity, which is of second order in the number of observations. It treats the observations in a consecutive order, and it can follow any arbitrarily chosen topological order of the poset of observations. The GPAV algorithm produces a sufficiently accurate solution to the MR problem, but the accuracy depends on the chosen topological order. Here we prove that there exists a topological order for which the resulting GPAV solution is optimal. Furthermore, we present results of extensive numerical experiments, from which we draw conclusions about the most and the least preferable topological orders.
  •  
4.
  • Sysoev, Oleg, 1981-, et al. (author)
  • A smoothed monotonic regression via L2 regularization
  • 2019
  • In: Knowledge and Information Systems. - : Springer. - 0219-1377 .- 0219-3116. ; 59:1, pp. 197-218
  • Journal article (peer-reviewed). Abstract:
    • Monotonic regression is a standard method for extracting a monotone function from non-monotonic data, and it is used in many applications. However, a known drawback of this method is that its fitted response is a piecewise constant function, while practical response functions are often required to be continuous. The method proposed in this paper achieves monotonicity and smoothness of the regression by introducing an L2 regularization term. To achieve low computational complexity and at the same time high predictive power, we introduce a probabilistically motivated approach for selecting the regularization parameters. In addition, we present a technique for correcting inconsistencies on the boundary. We show that the complexity of the proposed method is O(n²). Our simulations demonstrate that when the data are large and the expected response is a complicated function (which is typical in machine learning applications), or when there is a change point in the response, the proposed method has a higher predictive power than many of the existing methods.
  •  
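The abstract above describes the L2-regularized formulation only in words. One natural way to write it down, for a linearly ordered sample (a sketch consistent with the abstract, not necessarily the paper's exact notation; the penalty weight μ is an assumed symbol):

$$\min_{x\in\mathbb{R}^n}\ \sum_{i=1}^{n}(x_i-y_i)^2+\mu\sum_{i=1}^{n-1}(x_{i+1}-x_i)^2\quad\text{s.t.}\quad x_i\le x_{i+1},\ i=1,\dots,n-1,$$

where y holds the observed responses and μ ≥ 0 trades fidelity against smoothness; with μ = 0 the problem reduces to classical monotonic regression and its piecewise constant fit.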
5.
  • Sysoev, Oleg, et al. (author)
  • Bootstrap confidence intervals for large-scale multivariate monotonic regression problems
  • 2016
  • In: Communications in statistics. Simulation and computation. - : Taylor & Francis. - 0361-0918 .- 1532-4141. ; 45:3, pp. 1025-1040
  • Journal article (peer-reviewed). Abstract:
    • Recently, the methods used to estimate monotonic regression (MR) models have been substantially improved, and some algorithms can now produce high-accuracy monotonic fits to multivariate datasets containing over a million observations. Nevertheless, the computational burden can be prohibitively large for resampling techniques in which numerous datasets are processed independently of each other. Here, we present efficient algorithms for estimation of confidence limits in large-scale settings that take into account the similarity of the bootstrap or jackknifed datasets to which MR models are fitted. In addition, we introduce modifications that substantially improve the accuracy of MR solutions for binary response variables. The performance of our algorithms is illustrated using data on death in coronary heart disease for a large population. This example also illustrates that MR can be a valuable complement to logistic regression.
  •  
6.
  • Sysoev, Oleg, et al. (author)
  • Bootstrap estimation of the variance of the error term in monotonic regression models
  • 2013
  • In: Journal of Statistical Computation and Simulation. - : Taylor & Francis Group. - 0094-9655 .- 1563-5163. ; 83:4, pp. 625-638
  • Journal article (peer-reviewed). Abstract:
    • The variance of the error term in ordinary regression models and linear smoothers is usually estimated by adjusting the average squared residual for the trace of the smoothing matrix (the degrees of freedom of the predicted response). However, other types of variance estimators are needed when using monotonic regression (MR) models, which are particularly suitable for estimating response functions with pronounced thresholds. Here, we propose a simple bootstrap estimator to compensate for the over-fitting that occurs when MR models are estimated from empirical data. Furthermore, we show that, in the case of one or two predictors, the performance of this estimator can be enhanced by introducing adjustment factors that take into account the slope of the response function and characteristics of the distribution of the explanatory variables. Extensive simulations show that our estimators perform satisfactorily for a great variety of monotonic functions and error distributions.
  •  
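The estimator in the record above is tailored to the over-fitting behaviour of MR models; the plain bootstrap principle it builds on can be illustrated generically. A minimal sketch of estimating the variance of a statistic by resampling (the function name and parameters are illustrative, not the paper's adjusted estimator):

```python
import random

def bootstrap_variance(sample, statistic, n_boot=1000, seed=0):
    """Plain bootstrap estimate of the variance of `statistic`.

    Draws n_boot resamples (with replacement) of the same size as
    `sample`, evaluates the statistic on each, and returns the
    sample variance of those evaluations.
    """
    rng = random.Random(seed)
    n = len(sample)
    stats = []
    for _ in range(n_boot):
        resample = [sample[rng.randrange(n)] for _ in range(n)]
        stats.append(statistic(resample))
    mean = sum(stats) / n_boot
    return sum((s - mean) ** 2 for s in stats) / (n_boot - 1)
```

For the sample mean of n i.i.d. observations with variance σ², the returned value should land near σ²/n, which is a quick sanity check for the sketch.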
7.
  • Andersson, Mats, et al. (author)
  • Global search strategies for solving multilinear least-squares problems
  • 2012
  • In: Sultan Qaboos University Journal for Science. - : Sultan Qaboos University. - 1027-524X. ; 17:1, pp. 12-21
  • Journal article (peer-reviewed). Abstract:
    • The multilinear least-squares (MLLS) problem is an extension of the linear least-squares problem. The difference is that a multilinear operator is used in place of a matrix-vector product. The MLLS is typically a large-scale problem characterized by a large number of local minimizers. It originates, for instance, from the design of filter networks. We present a global search strategy that allows for moving from one local minimizer to a better one. The efficiency of this strategy is illustrated by results of numerical experiments performed for some problems related to the design of filter networks.
  •  
8.
  • Andersson, Mats, et al. (author)
  • Sparsity Optimization in Design of Multidimensional Filter Networks
  • 2015
  • In: Optimization and Engineering. - : Springer. - 1389-4420 .- 1573-2924. ; 16:2, pp. 259-277
  • Journal article (peer-reviewed). Abstract:
    • Filter networks are a powerful tool used for reducing the image processing time while maintaining high image quality. They are composed of sparse sub-filters whose high sparsity ensures fast image processing. The filter network design is related to solving a sparse optimization problem where a cardinality constraint bounds the sparsity level from above. In the case of sequentially connected sub-filters, which is the simplest network structure of those considered in this paper, a cardinality-constrained multilinear least-squares (MLLS) problem is to be solved. Even when disregarding the cardinality constraint, the MLLS is typically a large-scale problem characterized by a large number of local minimizers, each of which is singular and non-isolated. The cardinality constraint makes the problem even more difficult to solve. An approach for approximately solving the cardinality-constrained MLLS problem is presented. It is then applied to solving a bi-criteria optimization problem in which both the time and quality of image processing are optimized. The developed approach is extended to designing filter networks of a more general structure. Its efficiency is demonstrated by designing certain 2D and 3D filter networks. It is also compared with the existing approaches.
  •  
9.
  • Brust, Johannes, et al. (author)
  • A dense initialization for limited-memory quasi-Newton methods
  • 2019
  • In: Computational Optimization and Applications. - : Springer. - 0926-6003 .- 1573-2894. ; 74:1, pp. 121-142
  • Journal article (other academic/artistic). Abstract:
    • We consider a family of dense initializations for limited-memory quasi-Newton methods. The proposed initialization exploits an eigendecomposition-based separation of the full space into two complementary subspaces, assigning a different initialization parameter to each subspace. This family of dense initializations is proposed in the context of a limited-memory Broyden–Fletcher–Goldfarb–Shanno (L-BFGS) trust-region method that makes use of a shape-changing norm to define each subproblem. As with L-BFGS methods that traditionally use diagonal initialization, the dense initialization and the sequence of generated quasi-Newton matrices are never explicitly formed. Numerical experiments on the CUTEst test set suggest that this initialization together with the shape-changing trust-region method outperforms other L-BFGS methods for solving general nonconvex unconstrained optimization problems. While this dense initialization is proposed in the context of a special trust-region method, it has broad applications for more general quasi-Newton trust-region and line search methods. In fact, this initialization is suitable for use with any quasi-Newton update that admits a compact representation and, in particular, any member of the Broyden class of updates.
  •  
10.
  • Burdakov, Oleg, 1953- (author)
  • A greedy algorithm for the optimal basis problem
  • 1997
  • In: BIT Numerical Mathematics. - : Springer. - 0006-3835 .- 1572-9125. ; 37:3, pp. 591-599
  • Journal article (peer-reviewed). Abstract:
    • The following problem is considered. Given m+1 points {x_i}, i = 0, …, m, in R^n which generate an m-dimensional linear manifold, construct for this manifold a maximally linearly independent basis that consists of vectors of the form x_i − x_j. This problem is present in, e.g., stable variants of the secant and interpolation methods, where it is required to approximate the Jacobian matrix f′ of a nonlinear mapping f by using values of f computed at m+1 points. In this case, it is also desirable to have a combination of finite differences with maximal linear independence. As a natural measure of linear independence, we consider the Hadamard condition number, which is minimized to find an optimal combination of m pairs {x_i, x_j}. We show that the problem is not NP-hard, but can be reduced to the minimum spanning tree problem, which is solved by the greedy algorithm in O(m²) time. The complexity of this reduction is equivalent to one m×n matrix-matrix multiplication and, according to the Coppersmith-Winograd estimate, is below O(n^2.376) for m = n. Applications of the algorithm to interpolation methods are discussed.
  •  
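The reduction in the abstract above ends in a minimum spanning tree solved by the greedy algorithm. A minimal Kruskal-style sketch of that greedy MST step (a generic graph version with hypothetical names; in the paper the nodes correspond to the points and the edge weights are derived from the Hadamard condition number of the pair differences):

```python
def kruskal_mst(n, edges):
    """Greedy (Kruskal) minimum spanning tree on nodes 0..n-1.

    edges: list of (weight, u, v) tuples.
    Returns the chosen edges, smallest weights first, using a
    union-find structure with path halving to detect cycles.
    """
    parent = list(range(n))

    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]  # path halving
            a = parent[a]
        return a

    chosen = []
    for w, u, v in sorted(edges):
        ru, rv = find(u), find(v)
        if ru != rv:          # edge joins two components: keep it
            parent[ru] = rv
            chosen.append((w, u, v))
    return chosen
```

Greedy selection in weight order is exactly what makes the overall basis construction efficient: once the pairwise weights are computed, the tree (and hence the optimal basis in the paper's setting) follows in near-quadratic time.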