SwePub
Search the SwePub database

Hit list for search "AMNE:(NATURVETENSKAP Matematik Beräkningsmatematik) ;pers:(Burdakov Oleg 1953)"


  • Results 1-10 of 38
1.
  • Burdakov, Oleg, 1953-, et al. (author)
  • A generalised PAV algorithm for monotonic regression in several variables
  • 2004
  • In: COMPSTAT. Proceedings in Computational Statistics. - Heidelberg, NY : Physica-Verlag/Springer. - 3790815543 ; , pp. 761-767
  • Conference paper (peer-reviewed), abstract:
    • We present a new algorithm for monotonic regression in one or more explanatory variables. Formally, our method generalises the well-known PAV (pool-adjacent-violators) algorithm from fully to partially ordered data. The computational complexity of our algorithm is O(n²). The goodness-of-fit to observed data is much closer to optimal than for simple averaging techniques.
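The classic one-dimensional PAV algorithm that the paper above generalises to partially ordered data can be sketched in a few lines; this is the standard textbook version for fully ordered data, not the authors' multivariate GPAV:

```python
def pav(y):
    """Pool-adjacent-violators: least-squares non-decreasing fit
    to a fully ordered sequence of observations y."""
    # Each block holds [sum, count]; adjacent blocks are pooled
    # while their means violate monotonicity.
    blocks = []
    for v in y:
        blocks.append([v, 1])
        # Pool while the previous block's mean exceeds the last one's
        # (compared via cross-multiplication to avoid division).
        while len(blocks) > 1 and \
                blocks[-2][0] * blocks[-1][1] > blocks[-1][0] * blocks[-2][1]:
            s, c = blocks.pop()
            blocks[-1][0] += s
            blocks[-1][1] += c
    # Expand each block's mean to one fitted value per observation.
    return [s / c for s, c in blocks for _ in range(c)]
```

Each pooled block corresponds to a run of observations sharing one fitted value, which is why the fitted response is piecewise constant.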
2.
  • Hussian, Mohamed, 1969-, et al. (author)
  • Monotonic regression for assessment of trends in environmental quality data
  • 2004
  • In: European Congress on Computational Methods in Applied Sciences and Engineering ECCOMAS. - Jyväskylä : University of Jyväskylä, Department of Mathematical Information Technology. - 9513918688 ; , pp. 1-12
  • Conference paper (peer-reviewed), abstract:
    • Monotonic regression is a non-parametric method that is designed especially for applications in which the expected value of a response variable increases or decreases in one or more explanatory variables. Here, we show how the recently developed generalised pool-adjacent-violators (GPAV) algorithm can greatly facilitate the assessment of trends in time series of environmental quality data. In particular, we present new methods for simultaneous extraction of a monotonic trend and seasonal components, and for normalisation of environmental quality data that are influenced by random variation in weather conditions or other forms of natural variability. The general aim of normalisation is to clarify the human impact on the environment by suppressing irrelevant variation in the collected data. Our method is designed for applications that satisfy the following conditions: (i) the response variable under consideration is a monotonic function of one or more covariates; (ii) the anthropogenic temporal trend is either increasing or decreasing; (iii) the seasonal variation over a year can be defined by one increasing and one decreasing function. Theoretical descriptions of our methodology are accompanied by examples of trend assessments of water quality data and normalisation of the mercury concentration in cod muscle in relation to the length of the analysed fish.
3.
  • Burdakov, Oleg, 1953-, et al. (author)
  • An algorithm for isotonic regression problems
  • 2004
  • In: European Congress on Computational Methods in Applied Sciences and Engineering ECCOMAS. - Jyväskylä : University of Jyväskylä. - 9513918688 ; , pp. 1-9
  • Conference paper (peer-reviewed), abstract:
    • We consider the problem of minimizing the distance from a given n-dimensional vector to a set defined by constraints of the form xi ≤ xj. Such constraints induce a partial order on the components xi, which can be illustrated by an acyclic directed graph. This problem is known as the isotonic regression (IR) problem. It has important applications in statistics, operations research and signal processing. Most of the applied IR problems are characterized by a very large value of n. For such large-scale problems, it is of great practical importance to develop algorithms whose complexity does not rise with n too rapidly. The existing optimization-based algorithms and statistical IR algorithms have either too high computational complexity or too low accuracy of the approximation to the optimal solution they generate. We introduce a new IR algorithm, which can be viewed as a generalization of the Pool-Adjacent-Violators (PAV) algorithm from completely to partially ordered data. Our algorithm combines both low computational complexity O(n²) and high accuracy. This allows us to obtain sufficiently accurate solutions to IR problems with thousands of observations.
4.
  • Burdakov, Oleg, 1953-, et al. (author)
  • A Dual Active-Set Algorithm for Regularized Monotonic Regression
  • 2017
  • In: Journal of Optimization Theory and Applications. - : Springer. - 0022-3239 .- 1573-2878. ; 172:3, pp. 929-949
  • Journal article (peer-reviewed), abstract:
    • Monotonic (isotonic) regression is a powerful tool used for solving a wide range of important applied problems. One of its features, which poses a limitation on its use in some areas, is that it produces a piecewise constant fitted response. For smoothing the fitted response, we introduce a regularization term in the monotonic regression, formulated as a least distance problem with monotonicity constraints. The resulting smoothed monotonic regression is a convex quadratic optimization problem. We focus on the case where the set of observations is completely (linearly) ordered. Our smoothed pool-adjacent-violators algorithm is designed for solving the regularized problem. It belongs to the class of dual active-set algorithms. We prove that it converges to the optimal solution in a finite number of iterations that does not exceed the problem size. One of its advantages is that the active set is progressively enlarged by including one or, typically, more constraints per iteration. This resulted in solving large-scale test problems in a few iterations, whereas the size of those problems was prohibitively large for conventional quadratic optimization solvers. Although the complexity of our algorithm grows quadratically with the problem size, we found its running time to grow almost linearly in our computational experiments.
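The regularized least distance problem described in the abstract above can be written schematically as follows (a sketch only; the smoothing weight μ and the observed responses a are notation assumed here, not taken from the paper):

```latex
\min_{x \in \mathbb{R}^n} \; \sum_{i=1}^{n} (x_i - a_i)^2
  + \mu \sum_{i=1}^{n-1} (x_{i+1} - x_i)^2
\quad \text{subject to} \quad x_i \le x_{i+1}, \; i = 1, \dots, n-1.
```

The penalty term pulls adjacent fitted values together, which smooths the piecewise constant steps of the unregularized monotonic fit.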
5.
  • Sysoev, Oleg, 1981-, et al. (author)
  • A smoothed monotonic regression via L2 regularization
  • 2019
  • In: Knowledge and Information Systems. - : Springer. - 0219-1377 .- 0219-3116. ; 59:1, pp. 197-218
  • Journal article (peer-reviewed), abstract:
    • Monotonic regression is a standard method for extracting a monotone function from non-monotonic data, and it is used in many applications. However, a known drawback of this method is that its fitted response is a piecewise constant function, while practical response functions are often required to be continuous. The method proposed in this paper achieves monotonicity and smoothness of the regression by introducing an L2 regularization term. In order to achieve a low computational complexity and at the same time to provide a high predictive power of the method, we introduce a probabilistically motivated approach for selecting the regularization parameters. In addition, we present a technique for correcting inconsistencies on the boundary. We show that the complexity of the proposed method is O(n²). Our simulations demonstrate that when the data are large and the expected response is a complicated function (which is typical in machine learning applications) or when there is a change point in the response, the proposed method has a higher predictive power than many of the existing methods.
6.
  • Sysoev, Oleg, et al. (author)
  • Bootstrap confidence intervals for large-scale multivariate monotonic regression problems
  • 2016
  • In: Communications in statistics. Simulation and computation. - : Taylor & Francis. - 0361-0918 .- 1532-4141. ; 45:3, pp. 1025-1040
  • Journal article (peer-reviewed), abstract:
    • Recently, the methods used to estimate monotonic regression (MR) models have been substantially improved, and some algorithms can now produce high-accuracy monotonic fits to multivariate datasets containing over a million observations. Nevertheless, the computational burden can be prohibitively large for resampling techniques in which numerous datasets are processed independently of each other. Here, we present efficient algorithms for estimation of confidence limits in large-scale settings that take into account the similarity of the bootstrap or jackknifed datasets to which MR models are fitted. In addition, we introduce modifications that substantially improve the accuracy of MR solutions for binary response variables. The performance of our algorithms is illustrated using data on death in coronary heart disease for a large population. This example also illustrates that MR can be a valuable complement to logistic regression.
7.
  • Andersson, Mats, et al. (author)
  • Global search strategies for solving multilinear least-squares problems
  • 2012
  • In: Sultan Qaboos University Journal for Science. - : Sultan Qaboos University. - 1027-524X. ; 17:1, pp. 12-21
  • Journal article (peer-reviewed), abstract:
    • The multilinear least-squares (MLLS) problem is an extension of the linear least-squares problem. The difference is that a multilinear operator is used in place of a matrix-vector product. The MLLS is typically a large-scale problem characterized by a large number of local minimizers. It originates, for instance, from the design of filter networks. We present a global search strategy that allows for moving from one local minimizer to a better one. The efficiency of this strategy is illustrated by results of numerical experiments performed for some problems related to the design of filter networks.
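A minimal instance of the MLLS problem is the best rank-one (separable) approximation of a 2D filter. The alternating least-squares sketch below only finds a local minimizer, which is precisely the limitation the paper's global search strategy addresses; the function name and rank-one setting are illustrative assumptions, not the authors' code:

```python
import numpy as np

def als_rank1(A, iters=100, seed=0):
    """Alternating least squares for min ||A - u v^T||_F, the
    simplest multilinear least-squares (MLLS) instance: each
    subproblem is linear in one factor with the other fixed."""
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(A.shape[1])
    for _ in range(iters):
        u = A @ v / (v @ v)      # optimal u for the current v
        v = A.T @ u / (u @ u)    # optimal v for the current u
    return u, v
```

For an exactly separable filter the residual goes to zero; for a general A the iteration stalls at one of the many local minimizers that motivate the global search strategy above.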
8.
  • Andersson, Mats, et al. (author)
  • Global Search Strategies for Solving Multilinear Least-squares Problems
  • 2011
  • Report (other academic/artistic), abstract:
    • The multilinear least-squares (MLLS) problem is an extension of the linear least-squares problem. The difference is that a multilinear operator is used in place of a matrix-vector product. The MLLS is typically a large-scale problem characterized by a large number of local minimizers. It originates, for instance, from the design of filter networks. We present a global search strategy that allows for moving from one local minimizer to a better one. The efficiency of this strategy is illustrated by results of numerical experiments performed for some problems related to the design of filter networks.
9.
  • Andersson, Mats, et al. (author)
  • Sparsity Optimization in Design of Multidimensional Filter Networks
  • 2013
  • Report (other academic/artistic), abstract:
    • Filter networks are a powerful tool for reducing the image processing time while maintaining reasonably high quality. They are composed of sparse sub-filters whose high sparsity ensures fast image processing. The filter network design is related to solving a sparse optimization problem where a cardinality constraint bounds the sparsity level from above. In the case of sequentially connected sub-filters, which is the simplest network structure of those considered in this paper, a cardinality-constrained multilinear least-squares (MLLS) problem is to be solved. If the cardinality constraint is disregarded, the MLLS is typically a large-scale problem characterized by a large number of local minimizers. Each of the local minimizers is singular and non-isolated. The cardinality constraint makes the problem even more difficult to solve. An approach for approximately solving the cardinality-constrained MLLS problem is presented. It is then applied to solving a bi-criteria optimization problem in which both the time and quality of image processing are optimized. The developed approach is extended to designing filter networks of a more general structure. Its efficiency is demonstrated by designing certain 2D and 3D filter networks. It is also compared with the existing approaches.
10.
  • Andersson, Mats, et al. (author)
  • Sparsity Optimization in Design of Multidimensional Filter Networks
  • 2015
  • In: Optimization and Engineering. - : Springer. - 1389-4420 .- 1573-2924. ; 16:2, pp. 259-277
  • Journal article (peer-reviewed), abstract:
    • Filter networks are a powerful tool for reducing the image processing time while maintaining high image quality. They are composed of sparse sub-filters whose high sparsity ensures fast image processing. The filter network design is related to solving a sparse optimization problem where a cardinality constraint bounds the sparsity level from above. In the case of sequentially connected sub-filters, which is the simplest network structure of those considered in this paper, a cardinality-constrained multilinear least-squares (MLLS) problem is to be solved. Even when disregarding the cardinality constraint, the MLLS is typically a large-scale problem characterized by a large number of local minimizers, each of which is singular and non-isolated. The cardinality constraint makes the problem even more difficult to solve. An approach for approximately solving the cardinality-constrained MLLS problem is presented. It is then applied to solving a bi-criteria optimization problem in which both the time and quality of image processing are optimized. The developed approach is extended to designing filter networks of a more general structure. Its efficiency is demonstrated by designing certain 2D and 3D filter networks. It is also compared with the existing approaches.