SwePub
Search the SwePub database


Hit list for the search "WFRF:(Lindsten Fredrik 1984 ) "

Search: WFRF:(Lindsten Fredrik 1984 )

  • Results 1-10 of 30
2.
  • Lindholm, Andreas, et al. (author)
  • Machine learning : a first course for engineers and scientists
  • 2022
  • Book (other academic/artistic) abstract:
    • This book introduces machine learning for readers with some background in basic linear algebra, statistics, probability, and programming. In a coherent statistical framework it covers a selection of supervised machine learning methods, from the most fundamental (k-NN, decision trees, linear and logistic regression) to more advanced methods (deep neural networks, support vector machines, Gaussian processes, random forests and boosting), plus commonly used unsupervised methods (generative modeling, k-means, PCA, autoencoders and generative adversarial networks). Careful explanations and pseudo-code are presented for all methods. The authors maintain a focus on the fundamentals by drawing connections between methods and discussing general concepts such as loss functions, maximum likelihood, the bias-variance decomposition, ensemble averaging, kernels and the Bayesian approach, along with generally useful tools such as regularization, cross-validation, evaluation metrics and optimization methods. The final chapters offer practical advice on solving real-world supervised machine learning problems and on the ethical aspects of modern machine learning.
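As a toy illustration of the most fundamental method the book lists, here is a minimal k-nearest-neighbours classifier. This is a generic sketch, not code from the book; the data and the function name are invented for illustration:

```python
import numpy as np

def knn_predict(X_train, y_train, x, k=3):
    """Classify x by majority vote among its k nearest training points."""
    dists = np.linalg.norm(X_train - x, axis=1)   # Euclidean distance to each training point
    nearest = np.argsort(dists)[:k]               # indices of the k closest points
    labels, counts = np.unique(y_train[nearest], return_counts=True)
    return labels[np.argmax(counts)]              # majority vote among the neighbours

# two toy clusters with labels 0 and 1
X = np.array([[0.0, 0.0], [0.1, 0.2], [1.0, 1.0], [0.9, 1.1]])
y = np.array([0, 0, 1, 1])
```

A query point near the first cluster is assigned label 0, and one near the second cluster label 1.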
3.
  • Lindsten, Fredrik, 1984- (author)
  • Particle filters and Markov chains for learning of dynamical systems
  • 2013
  • Doctoral thesis (other academic/artistic) abstract:
    • Sequential Monte Carlo (SMC) and Markov chain Monte Carlo (MCMC) methods provide computational tools for systematic inference and learning in complex dynamical systems, such as nonlinear and non-Gaussian state-space models. This thesis builds upon several methodological advances within these classes of Monte Carlo methods. Particular emphasis is placed on the combination of SMC and MCMC in so-called particle MCMC algorithms. These algorithms rely on SMC for generating samples from the often highly autocorrelated state trajectory. A specific particle MCMC algorithm, referred to as particle Gibbs with ancestor sampling (PGAS), is suggested. By making use of backward-sampling ideas, albeit implemented in a forward-only fashion, PGAS enjoys good mixing even when using seemingly few particles in the underlying SMC sampler. This results in a computationally competitive particle MCMC algorithm. As illustrated in this thesis, PGAS is a useful tool for both Bayesian and frequentist parameter inference as well as for state smoothing. The PGAS sampler is successfully applied to the classical problem of Wiener system identification, and it is also used for inference in the challenging class of non-Markovian latent variable models. Many nonlinear models encountered in practice contain some tractable substructure. As a second problem considered in this thesis, we develop Monte Carlo methods capable of exploiting such substructures to obtain more accurate estimators than would otherwise be provided. For the filtering problem, this can be done using the well-known Rao-Blackwellized particle filter (RBPF). The RBPF is analysed in terms of asymptotic variance, resulting in an expression for the performance gain offered by Rao-Blackwellization. Furthermore, a Rao-Blackwellized particle smoother is derived, capable of addressing the smoothing problem in so-called mixed linear/nonlinear state-space models. The idea of Rao-Blackwellization is also used to develop an online algorithm for Bayesian parameter inference in nonlinear state-space models with affine parameter dependencies.
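The SMC building block the thesis starts from can be illustrated with a minimal bootstrap particle filter for a toy scalar linear-Gaussian state-space model. This is a generic textbook sketch, not the PGAS algorithm itself; the model and parameter values are invented:

```python
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_pf(ys, n_particles=500, a=0.9, q=1.0, r=1.0):
    """Bootstrap particle filter for x_t = a*x_{t-1} + v_t, y_t = x_t + e_t,
    with v_t ~ N(0, q) and e_t ~ N(0, r)."""
    x = rng.normal(0.0, 1.0, n_particles)                     # initial particle cloud
    means = []
    for y in ys:
        x = a * x + rng.normal(0.0, np.sqrt(q), n_particles)  # propagate through the dynamics
        logw = -0.5 * (y - x) ** 2 / r                        # Gaussian measurement log-likelihood
        w = np.exp(logw - logw.max())
        w /= w.sum()                                          # normalized importance weights
        means.append(np.sum(w * x))                           # filtered mean estimate
        x = rng.choice(x, size=n_particles, p=w)              # multinomial resampling
    return np.array(means)
```

With a constant zero observation sequence, the filtered means stay close to zero, as the exact Kalman filter would predict for this model.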
4.
  • Özkan, Emre, et al. (author)
  • Recursive Maximum Likelihood Identification of Jump Markov Nonlinear Systems
  • 2015
  • In: IEEE Transactions on Signal Processing, IEEE, ISSN 1053-587X, E-ISSN 1941-0476, 63(3), pp. 754-765
  • Journal article (peer-reviewed) abstract:
    • We present an online method for joint state and parameter estimation in jump Markov nonlinear systems (JMNLS). State inference is enabled via the use of particle filters, which makes the method applicable to a wide range of nonlinear models. To exploit the inherent structure of JMNLS, we design a Rao-Blackwellized particle filter (RBPF) where the discrete mode is marginalized out analytically. This results in an efficient implementation of the algorithm and reduces the estimation error variance. The proposed RBPF is then used to compute, recursively in time, smoothed estimates of complete-data sufficient statistics. Together with the online expectation maximization algorithm, this enables recursive identification of unknown model parameters, including the transition probability matrix. The method is also applicable to online identification of jump Markov linear systems (JMLS). The performance of the method is illustrated in simulations and on a localization problem in wireless networks using real data.
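The analytic marginalization of the discrete mode that Rao-Blackwellization exploits can be sketched in isolation as an exact forward filter over a finite mode set. This is a hypothetical two-mode example of that single idea, not the paper's JMNLS algorithm:

```python
import numpy as np

def hmm_forward(pi0, P, lik):
    """Exact forward filter over a discrete mode: marginalization in closed form.
    pi0: initial mode probabilities, P: mode transition matrix,
    lik[t, m]: likelihood of observation t under mode m."""
    alpha = pi0.copy()
    out = []
    for l in lik:
        alpha = (alpha @ P) * l        # predict the mode, then weight by the likelihood
        alpha /= alpha.sum()           # renormalize to a probability vector
        out.append(alpha.copy())
    return np.array(out)

pi0 = np.array([0.5, 0.5])
P = np.array([[0.9, 0.1], [0.1, 0.9]])                 # sticky two-mode chain
lik = np.tile(np.array([0.9, 0.1]), (5, 1))            # evidence favoring mode 0
post = hmm_forward(pi0, P, lik)
```

After a few observations favoring mode 0, the posterior mode probability concentrates on that mode; inside an RBPF this recursion runs per particle instead of sampling the mode.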
5.
  • Ahmadian, Amirhossein, 1992-, et al. (author)
  • Enhancing Representation Learning with Deep Classifiers in Presence of Shortcut
  • 2023
  • In: Proceedings of IEEE ICASSP 2023.
  • Conference paper (peer-reviewed) abstract:
    • A deep neural classifier trained on an upstream task can be leveraged to boost the performance of another classifier in a related downstream task through the representations learned in its hidden layers. However, the presence of shortcuts (easy-to-learn features) in the upstream task can considerably impair the versatility of intermediate representations and, in turn, the downstream performance. In this paper, we propose a method to improve the representations learned by deep neural image classifiers despite a shortcut in the upstream data. In our method, the upstream classification objective is augmented with a type of adversarial training in which an auxiliary network, the so-called lens, fools the classifier by exploiting the shortcut in reconstructing images. Empirical comparisons in self-supervised and transfer learning problems on three shortcut-biased datasets suggest the advantages of our method in terms of downstream performance and/or training time.
6.
  • Ahmadian, Amirhossein, 1992-, et al. (author)
  • Likelihood-free Out-of-Distribution Detection with Invertible Generative Models
  • 2021
  • In: Proceedings of the Thirtieth International Joint Conference on Artificial Intelligence (IJCAI 2021), International Joint Conferences on Artificial Intelligence Organization, California.
  • Conference paper (peer-reviewed) abstract:
    • The likelihood of generative models has traditionally been used as a score to detect atypical (out-of-distribution, OOD) inputs. However, several recent studies have found this approach to be highly unreliable, even with invertible generative models, where computing the likelihood is feasible. In this paper, we present a different framework for generative-model-based OOD detection that employs the model to construct a new representation space, instead of using it directly to compute typicality scores, with the emphasis that the score function should be interpretable as the similarity between the input and the training data in the new space. In practice, with a focus on invertible models, we propose to extract low-dimensional features (statistics) based on the model encoder and the complexity of input images, and then use a One-Class SVM to score the data. Contrary to recently proposed OOD detection methods for generative models, our method does not require computing likelihood values. Consequently, it is much faster when using invertible models with iteratively approximated likelihood (e.g. iResNet), while its performance remains competitive with other related methods.
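The scoring idea, rating inputs by how atypical their features are relative to training data, can be sketched with a Gaussian fit and a Mahalanobis distance. This is a simplified stand-in for the paper's One-Class SVM step; all names and data here are invented:

```python
import numpy as np

def fit_gaussian_scorer(feats):
    """Fit mean/covariance of in-distribution features.
    Higher score = more atypical (a simplified stand-in for a One-Class SVM)."""
    mu = feats.mean(axis=0)
    cov = np.cov(feats, rowvar=False) + 1e-6 * np.eye(feats.shape[1])  # regularized covariance
    prec = np.linalg.inv(cov)
    def score(x):
        d = x - mu
        return float(d @ prec @ d)     # squared Mahalanobis distance to the training data
    return score

rng = np.random.default_rng(1)
train = rng.normal(0.0, 1.0, (500, 4))     # stand-in for model-derived features
score = fit_gaussian_scorer(train)
```

A point near the training distribution receives a much lower score than a far-away one, which is the property an OOD detector thresholds on.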
7.
  • Andersson Naesseth, Christian, 1986- (author)
  • Machine learning using approximate inference : Variational and sequential Monte Carlo methods
  • 2018
  • Doctoral thesis (other academic/artistic) abstract:
    • Automatic decision making and pattern recognition under uncertainty are difficult tasks that are ubiquitous in our everyday life. The systems we design, and the technology we develop, require us to coherently represent and work with uncertainty in data. Probabilistic models and probabilistic inference give us a powerful framework for solving this problem. Using this framework, while enticing, results in difficult-to-compute integrals and probabilities when conditioning on the observed data. This means we need approximate inference: methods that solve the problem approximately using a systematic approach. In this thesis we develop new methods for efficient approximate inference in probabilistic models. There are generally two approaches to approximate inference: variational methods and Monte Carlo methods. In Monte Carlo methods we use a large number of random samples to approximate the integral of interest. With variational methods, on the other hand, we turn the integration problem into an optimization problem. We develop algorithms of both types and bridge the gap between them. First, we present a self-contained tutorial on the popular sequential Monte Carlo (SMC) class of methods. Next, we propose new algorithms and applications based on SMC for approximate inference in probabilistic graphical models. We derive nested sequential Monte Carlo, a new algorithm particularly well suited for inference in a large class of high-dimensional probabilistic models. Then, inspired by similar ideas, we derive interacting particle Markov chain Monte Carlo, which uses parallelization to speed up approximate inference for universal probabilistic programming languages. After that, we show how the rejection sampling process used when generating gamma-distributed random variables can be exploited to speed up variational inference. Finally, we bridge the gap between SMC and variational methods by developing variational sequential Monte Carlo, a new flexible family of variational approximations.
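The Monte Carlo side described in the abstract, approximating an expectation with weighted random samples, can be sketched with self-normalized importance sampling. This is a generic textbook sketch; the target and proposal distributions are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)

def snis_mean(logp, sample_q, logq, n=100_000):
    """Self-normalized importance sampling estimate of E_p[x] using
    proposal q: weights w proportional to p(x)/q(x)."""
    x = sample_q(n)
    logw = logp(x) - logq(x)           # unnormalized log-weights
    w = np.exp(logw - logw.max())      # stabilize before exponentiating
    w /= w.sum()                       # self-normalize: constants of p, q cancel
    return np.sum(w * x)

# target p = N(1, 1), proposal q = N(0, 2^2); both up to a constant
logp = lambda x: -0.5 * (x - 1.0) ** 2
logq = lambda x: -0.5 * (x / 2.0) ** 2
est = snis_mean(logp, lambda n: rng.normal(0.0, 2.0, n), logq)
```

The estimate converges to the target mean (here 1) as the number of samples grows; SMC can be seen as applying this idea sequentially with resampling.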
8.
  • Ekström Kelvinius, Filip, et al. (author)
  • Discriminator Guidance for Autoregressive Diffusion Models
  • 2024
  • In: Proceedings of The 27th International Conference on Artificial Intelligence and Statistics, PMLR, pp. 3403-3411
  • Conference paper (peer-reviewed) abstract:
    • We introduce discriminator guidance in the setting of autoregressive diffusion models. Discriminators have previously been used to guide continuous diffusion processes, and in this work we derive ways of using a discriminator together with a pretrained generative model in the discrete case. First, we show that using an optimal discriminator will correct the pretrained model and enable exact sampling from the underlying data distribution. Second, to account for the realistic scenario of a sub-optimal discriminator, we derive a sequential Monte Carlo algorithm that iteratively takes the predictions from the discriminator into account during the generation process. We test these approaches on the task of generating molecular graphs and show how the discriminator improves the generative performance over using only the pretrained model.
9.
  • Glaser, Pierre, et al. (author)
  • Fast and Scalable Score-Based Kernel Calibration Tests
  • 2023
  • In: Thirty-Ninth Conference on Uncertainty in Artificial Intelligence.
  • Conference paper (peer-reviewed) abstract:
    • We introduce the Kernel Calibration Conditional Stein Discrepancy test (KCCSD test), a non-parametric, kernel-based test for assessing the calibration of probabilistic models with well-defined scores. In contrast to previous methods, our test avoids the need for possibly expensive expectation approximations while providing control over its type-I error. We achieve these improvements by using a new family of kernels for score-based probabilities that can be estimated without probability density samples, and by using a conditional goodness-of-fit criterion for the KCCSD test's U-statistic. The tractability of the KCCSD test opens up promising new use cases for calibration measures, such as regularization during model training. We demonstrate the properties of our test on various synthetic settings.
10.
  • Govindarajan, Hariprasath, et al. (author)
  • DINO as a von Mises-Fisher mixture model
  • 2023
  • In: The Eleventh International Conference on Learning Representations.
  • Conference paper (peer-reviewed) abstract:
    • Self-distillation methods using Siamese networks are popular for self-supervised pre-training. DINO is one such method, based on a cross-entropy loss between K-dimensional probability vectors obtained by applying a softmax function to the dot product between representations and learnt prototypes. Since the learned representations are L2-normalized, we show that DINO and its derivatives, such as iBOT, can be interpreted as mixture models of von Mises-Fisher components. Under this interpretation, DINO assumes equal precision for all components when the prototypes are also L2-normalized. Using this insight, we propose DINO-vMF, which adds appropriate normalization constants when computing the cluster assignment probabilities. Unlike DINO, DINO-vMF is stable also for the larger ViT-Base model with unnormalized prototypes. We show that the added flexibility of the mixture model is beneficial for learning better image representations. The DINO-vMF pre-trained model consistently performs better than DINO on a range of downstream tasks. We obtain similar improvements for iBOT-vMF vs. iBOT, thereby showing the relevance of our proposed modification also for other methods derived from DINO.
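The assignment probabilities described in the abstract, a softmax over dot products between an L2-normalized representation and prototypes, can be sketched as follows. The optional per-component log normalization constants stand in for the DINO-vMF correction; this is a simplified numpy sketch, not the authors' implementation, and the temperature value is an invented example:

```python
import numpy as np

def assignment_probs(z, prototypes, tau=0.1, log_norm=None):
    """Cluster assignment probabilities in the DINO style: softmax over
    dot products between an L2-normalized representation z and prototypes.
    log_norm: optional per-component log normalization constants
    (the DINO-vMF idea); omitting them recovers the plain DINO form."""
    z = z / np.linalg.norm(z)               # L2-normalize the representation
    logits = prototypes @ z / tau           # temperature-scaled similarities
    if log_norm is not None:
        logits = logits + log_norm          # vMF-style correction terms
    logits -= logits.max()                  # numerically stable softmax
    p = np.exp(logits)
    return p / p.sum()

p = assignment_probs(np.array([2.0, 0.0, 0.0]), np.eye(3))
```

With orthonormal prototypes, a representation aligned with the first prototype is assigned to the first component with probability near 1.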
Publication type
conference paper (21)
journal article (4)
doctoral thesis (2)
report (1)
book (1)
licentiate thesis (1)
Content type
peer-reviewed (25)
other academic/artistic (5)
Author/editor
Lindsten, Fredrik, 1 ... (28)
Schön, Thomas B. (5)
Olmin, Amanda, 1994- (5)
Ahmadian, Amirhossei ... (3)
Sidén, Per, 1987- (2)
Schön, Thomas B., Pr ... (2)
Svensson, Andreas (2)
Andersson Naesseth, ... (2)
Andersson Naesseth, ... (2)
Svensson, Lennart, 1 ... (2)
Roll, Jacob (2)
Widmann, David (2)
Naesseth, Christian ... (2)
Lindqvist, Jakob, 19 ... (2)
Svensson, Lennart, 1 ... (1)
Gustafsson, Fredrik (1)
Ding, Yifan (1)
Eilertsen, Gabriel, ... (1)
Svensson, Lennart (1)
Doucet, Arnaud (1)
Andersson, Carl (1)
Schön, Thomas B., Pr ... (1)
Wahlström, Niklas, 1 ... (1)
Ljung, Lennart, Prof ... (1)
Schön, Thomas, Profe ... (1)
Lindsten, Fredrik, S ... (1)
Murray, Iain, Profes ... (1)
Özkan, Emre (1)
Gustafsson, Fredrik, ... (1)
Fritsche, Carsten (1)
Sumpter, David J. T. (1)
Rönnberg, Elina, 198 ... (1)
Karlsson, Emil, 1990 ... (1)
Dahlin, Johan, 1986- (1)
Schön, Thomas Bo (1)
Dai, Liang (1)
Vaicenavicius, Juoza ... (1)
Ekström Kelvinius, F ... (1)
Zimmermann, Heiko (1)
Glaser, Pierre (1)
Gretton, Arthur (1)
Govindarajan, Haripr ... (1)
Sidén, Per (1)
Kirkpatrick, B (1)
Raidl, Günther R. (1)
van de Meent, Jan-Wi ... (1)
Schön, Thomas B., 19 ... (1)
Lindholm, Andreas (1)
Olsson, Jimmy, 1977- (1)
Johansen, A. M. (1)
Institution
Linköpings universitet (30)
Uppsala universitet (6)
Chalmers tekniska högskola (2)
Language
English (30)
Research subject (UKÄ/SCB)
Natural sciences (25)
Engineering (12)

Year
