SwePub

Result list for the search "WFRF:(Allerbo Oskar 1985)"

  • Results 1-6 of 6
1.
  • Allerbo, Oskar, 1985 (author)
  • Efficient training of interpretable, non-linear regression models
  • 2023
  • Doctoral thesis (other academic/artistic), abstract (a formula sketch follows this entry):
    • Regression, the process of estimating functions from data, comes in many flavors. One of the most commonly used regression models is linear regression, which is computationally efficient and easy to interpret, but lacks flexibility. Non-linear regression methods, such as kernel regression and artificial neural networks, tend to be much more flexible, but also harder to interpret, and more difficult and computationally heavy to train. In the five papers of this thesis, different techniques for constructing regression models that combine flexibility with interpretability and computational efficiency are investigated. In Papers I and II, sparsely regularized neural networks are used to obtain flexible, yet interpretable, models for additive modeling (Paper I) and dimensionality reduction (Paper II). Sparse regression, in the form of the elastic net, is also covered in Paper III, where the focus is on increased computational efficiency by replacing explicit regularization with iterative optimization and early stopping. In Paper IV, inspired by Jacobian regularization, we propose a computationally efficient method for bandwidth selection for kernel regression with the Gaussian kernel. Kernel regression is also the topic of Paper V, where we revisit efficient regularization through early stopping by solving kernel regression iteratively. Using an iterative algorithm for kernel regression also enables changing the kernel during training, which we use to obtain a more flexible method, resembling the behavior of neural networks. In all five papers, the results are obtained by carefully selecting either the regularization strength or the bandwidth. Thus, in summary, this work contributes new statistical methods for combining flexibility with interpretability and computational efficiency, based on intelligent hyperparameter selection.
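The abstract above turns on two hyperparameters, the regularization strength and the kernel bandwidth, which the record itself never defines. For orientation only, here they are in textbook Gaussian kernel ridge regression (a standard formulation, not necessarily the exact setup used in the papers):

    \hat{f}(x) = k(x)^\top (K + \lambda I)^{-1} y, \qquad K_{ij} = k(x_i, x_j), \qquad k(x, x') = \exp\!\left(-\frac{\|x - x'\|^2}{2\sigma^2}\right),

where k(x) stacks the kernel values between x and the n training inputs. Here \lambda is the regularization strength and \sigma the bandwidth; per the abstract, Papers III and V replace the explicit regularization with early stopping of an iterative solver, and Paper IV selects the bandwidth.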
2.
  • Allerbo, Oskar, 1985, et al. (authors)
  • Elastic Gradient Descent, an Iterative Optimization Method Approximating the Solution Paths of the Elastic Net
  • 2023
  • In: Journal of Machine Learning Research. ISSN 1533-7928, 1532-4435. Vol. 24, pp. 1-35.
  • Journal article (peer-reviewed), abstract (a formula sketch follows this entry):
    • The elastic net combines lasso and ridge regression to fuse the sparsity property of lasso with the grouping property of ridge regression. The connections between ridge regression and gradient descent and between lasso and forward stagewise regression have previously been shown. Similar to how the elastic net generalizes lasso and ridge regression, we introduce elastic gradient descent, a generalization of gradient descent and forward stagewise regression. We theoretically analyze elastic gradient descent and compare it to the elastic net and forward stagewise regression. Parts of the analysis are based on elastic gradient flow, a piecewise analytical construction, obtained for elastic gradient descent with infinitesimal step size. We also compare elastic gradient descent to the elastic net on real and simulated data and show that it provides similar solution paths, but is several orders of magnitude faster. Compared to forward stagewise regression, elastic gradient descent selects a model that, although still sparse, provides considerably lower prediction and estimation errors.
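For reference, the elastic net that elastic gradient descent approximates is the penalized least-squares problem below, shown in the common (\lambda, \alpha) parameterization; it is included only for orientation, and the paper's exact scaling conventions may differ:

    \hat{\beta}(\lambda) = \arg\min_{\beta} \; \tfrac{1}{2}\|y - X\beta\|_2^2 + \lambda\left(\alpha\|\beta\|_1 + \tfrac{1-\alpha}{2}\|\beta\|_2^2\right).

With \alpha = 1 this reduces to the lasso and with \alpha = 0 to ridge regression; the solution path is the map \lambda \mapsto \hat{\beta}(\lambda), which, per the abstract, elastic gradient descent approximates with iterative updates rather than by re-solving the problem on a grid of \lambda values.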
3.
  • Allerbo, Oskar, 1985, et al. (authors)
  • Flexible, non-parametric modeling using regularized neural networks
  • 2022
  • In: Computational Statistics (Springer Science and Business Media LLC). ISSN 0943-4062, 1613-9658. Vol. 37, no. 4, pp. 2029-2047.
  • Journal article (peer-reviewed), abstract (a code sketch follows this entry):
    • Non-parametric, additive models are able to capture complex data dependencies in a flexible, yet interpretable way. However, choosing the format of the additive components often requires non-trivial data exploration. Here, as an alternative, we propose PrAda-net, a one-hidden-layer neural network, trained with proximal gradient descent and adaptive lasso. PrAda-net automatically adjusts the size and architecture of the neural network to reflect the complexity and structure of the data. The compact network obtained by PrAda-net can be translated to additive model components, making it suitable for non-parametric statistical modeling with automatic model selection. We demonstrate PrAda-net on simulated data, where we compare the test error performance, variable importance and variable subset identification properties of PrAda-net to other lasso-based regularization approaches for neural networks. We also apply PrAda-net to the massive U.K. black smoke data set, to demonstrate how PrAda-net can be used to model complex and heterogeneous data with spatial and temporal components. In contrast to classical statistical non-parametric approaches, PrAda-net requires no preliminary modeling to select the functional forms of the additive components, yet still results in an interpretable model representation. © 2021, The Author(s).
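The abstract above describes training a one-hidden-layer network with proximal gradient descent and an adaptive lasso penalty. The sketch below is not the authors' PrAda-net code; it is a generic NumPy illustration of the underlying technique, proximal gradient descent (ISTA) with a plain L1 penalty on the input-to-hidden weights, with hypothetical function names and a fixed step size chosen for readability.

    import numpy as np

    def soft_threshold(w, t):
        # Proximal operator of the L1 penalty: shrink every weight toward zero by t.
        return np.sign(w) * np.maximum(np.abs(w) - t, 0.0)

    def train_sparse_one_layer_net(X, y, n_hidden=10, lam=0.01, lr=0.01, n_iter=5000, seed=0):
        # One-hidden-layer regression network with tanh units and an L1 penalty on
        # the input-to-hidden weights W1, fitted by proximal gradient descent:
        # a gradient step on the squared loss followed by soft-thresholding of W1.
        rng = np.random.default_rng(seed)
        n, d = X.shape
        W1 = rng.normal(scale=0.1, size=(d, n_hidden))
        b1 = np.zeros(n_hidden)
        w2 = rng.normal(scale=0.1, size=n_hidden)
        b2 = 0.0
        for _ in range(n_iter):
            H = np.tanh(X @ W1 + b1)              # hidden activations, shape (n, n_hidden)
            r = H @ w2 + b2 - y                   # residuals of the network output
            # Gradients of the smooth loss (1 / (2n)) * sum of squared residuals.
            grad_w2 = H.T @ r / n
            grad_b2 = r.mean()
            dpre = np.outer(r, w2) * (1.0 - H**2) # backprop through tanh (scaled by n)
            grad_W1 = X.T @ dpre / n
            grad_b1 = dpre.mean(axis=0)
            # Gradient step on the smooth part of the objective ...
            W1 -= lr * grad_W1
            b1 -= lr * grad_b1
            w2 -= lr * grad_w2
            b2 -= lr * grad_b2
            # ... then the proximal step, which sets irrelevant input-to-hidden
            # connections exactly to zero and thereby prunes the architecture.
            W1 = soft_threshold(W1, lr * lam)
        return W1, b1, w2, b2

Larger lam gives a sparser W1 and hence a smaller effective architecture; per the abstract, PrAda-net additionally reweights the penalty (adaptive lasso) and reads additive model components off the surviving hidden units.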
4.
  •  
5.
  • Allerbo, Oskar, 1985, et al. (authors)
  • Non-linear, sparse dimensionality reduction via path lasso penalized autoencoders
  • 2021
  • In: Journal of Machine Learning Research (Microtome Publishing). ISSN 1532-4435, 1533-7928. Vol. 22.
  • Journal article (peer-reviewed), abstract (a formula sketch follows this entry):
    • High-dimensional data sets are often analyzed and explored via the construction of a latent low-dimensional space which enables convenient visualization and efficient predictive modeling or clustering. For complex data structures, linear dimensionality reduction techniques like PCA may not be sufficiently flexible to enable low-dimensional representation. Non-linear dimension reduction techniques, like kernel PCA and autoencoders, suffer from loss of interpretability since each latent variable is dependent on all input dimensions. To address this limitation, we here present path lasso penalized autoencoders. This structured regularization enhances interpretability by penalizing each path through the encoder from an input to a latent variable, thus restricting how many input variables are represented in each latent dimension. Our algorithm uses a group lasso penalty and non-negative matrix factorization to construct a sparse, non-linear latent representation. We compare the path lasso regularized autoencoder to PCA, sparse PCA, autoencoders and sparse autoencoders on real and simulated data sets. We show that the algorithm exhibits much lower reconstruction errors than sparse PCA and parameter-wise lasso regularized autoencoders for low-dimensional representations. Moreover, path lasso representations provide a more accurate reconstruction match, i.e., better preserved relative distances between objects in the original and reconstructed spaces. ©2021 Oskar Allerbo and Rebecka Jörnsten.
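For orientation, the structured penalty the abstract refers to builds on the group lasso. With the parameters partitioned into groups g from a collection \mathcal{G}, the generic group lasso penalty is

    \Omega(\theta) = \lambda \sum_{g \in \mathcal{G}} \|\theta_g\|_2,

which, unlike the plain lasso, sets whole groups of parameters to zero at once. In path lasso, as described in the abstract, each group corresponds to a path through the encoder from one input variable to one latent variable, so zeroing a group removes that input from that latent dimension; the precise path-wise construction, and the non-negative matrix factorization used to optimize it, are given in the article itself.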
6.
  • Allerbo, Oskar, 1985, et al. (authors)
  • Simulations of lipid vesicle rupture induced by an adjacent supported lipid bilayer patch
  • 2011
  • In: Colloids and Surfaces B: Biointerfaces (Elsevier BV). ISSN 0927-7765, 1873-4367. Vol. 82, no. 2, pp. 632-636.
  • Journal article (peer-reviewed), abstract (a formula sketch follows this entry):
    • Using a simple phenomenological model of a lipid bilayer and a surface, simulations were performed to study the bilayer-induced vesicle rupture probability as a vesicle adsorbs adjacently to a bilayer patch already adsorbed on the surface. The vesicle rupture probability was studied as a function of temperature, vesicle size, and surface-bilayer interaction strength. From the simulation data, estimates of the apparent activation energy for bilayer-induced vesicle rupture were calculated, both for different vesicle sizes and for different surface-bilayer interaction strengths.
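The abstract does not say how the apparent activation energies were extracted from the simulation data. A standard route for temperature-dependent rates, stated here as an assumption rather than as the paper's method, is an Arrhenius analysis:

    k(T) = A \exp\!\left(-\frac{E_a}{k_B T}\right) \quad\Rightarrow\quad \ln k(T) = \ln A - \frac{E_a}{k_B T},

so that the apparent activation energy E_a is read off from the slope of \ln k against 1/T, with k(T) the rupture rate (or a rate inferred from the rupture probability) at temperature T.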