SwePub
Search the SwePub database

  Extended search

Result list for the search "WFRF:(Lindsten Fredrik 1984 ) srt2:(2010-2014)"

Search: WFRF:(Lindsten Fredrik 1984 ) > (2010-2014)

  • Result 1-3 of 3
1.
  • Lindsten, Fredrik, 1984- (author)
  • Particle filters and Markov chains for learning of dynamical systems
  • 2013
  • Doctoral thesis (other academic/artistic), abstract:
    • Sequential Monte Carlo (SMC) and Markov chain Monte Carlo (MCMC) methods provide computational tools for systematic inference and learning in complex dynamical systems, such as nonlinear and non-Gaussian state-space models. This thesis builds upon several methodological advances within these classes of Monte Carlo methods. Particular emphasis is placed on the combination of SMC and MCMC in so-called particle MCMC algorithms. These algorithms rely on SMC for generating samples from the often highly autocorrelated state trajectory. A specific particle MCMC algorithm, referred to as particle Gibbs with ancestor sampling (PGAS), is suggested. By making use of backward sampling ideas, albeit implemented in a forward-only fashion, PGAS enjoys good mixing even when using seemingly few particles in the underlying SMC sampler. This results in a computationally competitive particle MCMC algorithm. As illustrated in this thesis, PGAS is a useful tool for both Bayesian and frequentist parameter inference as well as for state smoothing. The PGAS sampler is successfully applied to the classical problem of Wiener system identification, and it is also used for inference in the challenging class of non-Markovian latent variable models. Many nonlinear models encountered in practice contain some tractable substructure. As a second problem considered in this thesis, we develop Monte Carlo methods capable of exploiting such substructures to obtain more accurate estimators than would otherwise be available. For the filtering problem, this can be done by using the well-known Rao-Blackwellized particle filter (RBPF). The RBPF is analysed in terms of asymptotic variance, resulting in an expression for the performance gain offered by Rao-Blackwellization. Furthermore, a Rao-Blackwellized particle smoother is derived, capable of addressing the smoothing problem in so-called mixed linear/nonlinear state-space models. The idea of Rao-Blackwellization is also used to develop an online algorithm for Bayesian parameter inference in nonlinear state-space models with affine parameter dependencies. (A minimal code sketch of the PGAS kernel follows this record.)
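The thesis abstract above centres on the PGAS kernel: a conditional SMC sweep in which one particle slot is occupied by a reference trajectory and the ancestor index of that slot is resampled at every step. The sketch below is a minimal, illustrative rendering of that idea only, assuming a scalar state-space model x_t = f(x_{t-1}) + v_t, y_t = g(x_t) + e_t with Gaussian noise, a bootstrap proposal, and a standard-normal initial state; the functions f and g, the variances q and r, and the name pgas_sweep are placeholders, not taken from the thesis.

```python
# Minimal PGAS (particle Gibbs with ancestor sampling) sketch for a
# scalar state-space model with additive Gaussian noise. Illustrative
# only; model choices are assumptions, not taken from the thesis.
import numpy as np

def pgas_sweep(y, x_ref, f, g, q, r, N, rng):
    """One conditional-SMC sweep with ancestor sampling.

    Returns a new state trajectory; the reference trajectory x_ref is
    left invariant in distribution, which is the defining property of
    the PGAS kernel.
    """
    T = len(y)
    X = np.zeros((T, N))               # particle states
    A = np.zeros((T, N), dtype=int)    # ancestor indices

    # t = 0: sample from the (assumed standard-normal) prior;
    # slot N-1 is reserved for the reference trajectory.
    X[0, :-1] = rng.normal(0.0, 1.0, N - 1)
    X[0, -1] = x_ref[0]
    logw = -0.5 * (y[0] - g(X[0])) ** 2 / r

    for t in range(1, T):
        w = np.exp(logw - logw.max())
        w /= w.sum()

        # Resample ancestors for particles 0..N-2.
        A[t, :-1] = rng.choice(N, size=N - 1, p=w)

        # Ancestor sampling for the reference particle: weight each
        # candidate ancestor by how well it explains x_ref[t].
        log_as = np.log(w) - 0.5 * (x_ref[t] - f(X[t - 1])) ** 2 / q
        w_as = np.exp(log_as - log_as.max())
        w_as /= w_as.sum()
        A[t, -1] = rng.choice(N, p=w_as)

        # Propagate with the bootstrap proposal; re-insert the reference.
        X[t, :-1] = f(X[t - 1, A[t, :-1]]) + rng.normal(0.0, np.sqrt(q), N - 1)
        X[t, -1] = x_ref[t]
        logw = -0.5 * (y[t] - g(X[t])) ** 2 / r

    # Draw one particle at the final time and trace its ancestry back.
    w = np.exp(logw - logw.max())
    w /= w.sum()
    k = rng.choice(N, p=w)
    traj = np.empty(T)
    for t in range(T - 1, -1, -1):
        traj[t] = X[t, k]
        k = A[t, k]
    return traj
```

A Gibbs sampler built on this kernel would alternate the sweep with a parameter update, each time feeding the returned trajectory back in as the next reference.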
2.
  • Lindsten, Fredrik, 1984-, et al. (author)
  • An Explicit Variance Reduction Expression for the Rao-Blackwellised Particle Filter
  • 2010
  • Reports (other academic/artistic), abstract:
    • Particle filters (PFs) have proven to be very potent tools for state estimation in nonlinear and/or non-Gaussian state-space models. For certain models containing a conditionally tractable substructure (typically conditionally linear Gaussian or with finite support), it is possible to exploit this structure to obtain more accurate estimates. This has become known as Rao-Blackwellised particle filtering (RBPF). However, since the RBPF is typically more computationally demanding per particle than the standard PF, it is not always beneficial to resort to Rao-Blackwellisation. For the same computational effort, a standard PF with an increased number of particles, which would also increase the accuracy, could be used instead. In this paper, we analyse the asymptotic variance of the RBPF and provide an explicit expression for the obtained variance reduction. This expression can be used to decide efficiently when to apply Rao-Blackwellisation and when not to. (A minimal code sketch of an RBPF follows this record.)
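Since the report above is about when Rao-Blackwellisation pays off, a compact sketch of an RBPF may help fix ideas. It assumes a simple hierarchical mixed linear/nonlinear model in which a linear substate z_t is marginalised by a per-particle Kalman filter and the particle weights use the predictive likelihood with z_t integrated out; the sin(.) measurement, the parameter values, and the name rbpf are illustrative choices, not taken from the report.

```python
# Minimal Rao-Blackwellised particle filter sketch for an assumed model:
#   xi_t = a * xi_{t-1} + v_t,        v_t ~ N(0, q_xi)   (particle part)
#   z_t  = alpha * z_{t-1} + w_t,     w_t ~ N(0, q_z)    (Kalman part)
#   y_t  = sin(xi_t) + z_t + e_t,     e_t ~ N(0, r)
# The nonlinearity enters through sin(xi_t); conditionally on the xi
# trajectory, z_t is linear Gaussian and is handled exactly by a
# per-particle Kalman filter. Parameter values are illustrative.
import numpy as np

def rbpf(y, N, a=0.9, alpha=0.8, q_xi=0.5, q_z=0.1, r=0.1, seed=0):
    rng = np.random.default_rng(seed)
    T = len(y)
    xi = rng.normal(0.0, 1.0, N)       # nonlinear-state particles
    z_mean = np.zeros(N)               # Kalman mean of z per particle
    z_var = np.ones(N)                 # Kalman variance of z per particle
    xi_est, z_est = np.zeros(T), np.zeros(T)

    for t in range(T):
        # Propagate the nonlinear particles (bootstrap proposal).
        xi = a * xi + rng.normal(0.0, np.sqrt(q_xi), N)

        # Kalman time update of the marginalised linear state.
        z_pred = alpha * z_mean
        P_pred = alpha ** 2 * z_var + q_z

        # Weight with the predictive likelihood, z_t integrated out --
        # this marginalisation is the Rao-Blackwellisation step.
        S = P_pred + r
        resid = y[t] - np.sin(xi) - z_pred
        logw = -0.5 * resid ** 2 / S - 0.5 * np.log(S)
        w = np.exp(logw - logw.max())
        w /= w.sum()

        # Kalman measurement update per particle.
        K = P_pred / S
        z_mean = z_pred + K * resid
        z_var = (1.0 - K) * P_pred

        # Weighted filtering estimates.
        xi_est[t] = np.sum(w * xi)
        z_est[t] = np.sum(w * z_mean)

        # Multinomial resampling of particles and their Kalman statistics.
        idx = rng.choice(N, size=N, p=w)
        xi, z_mean, z_var = xi[idx], z_mean[idx], z_var[idx]

    return xi_est, z_est
```

Replacing the per-particle Kalman recursions with direct sampling of z_t would give the standard PF that the report's variance-reduction expression compares against.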
3.
  • Svensson, Andreas, et al. (author)
  • Identification of jump Markov linear models using particle filters
  • 2014
  • In: Proceedings of the 53rd IEEE Conference on Decision and Control. - Piscataway, NJ: IEEE. - ISBN 9781467360906, 9781479977468, 9781479977451, pp. 6504-6509
  • Conference paper (peer-reviewed), abstract:
    • Jump Markov linear models consist of a finite number of linear state-space models and a discrete variable encoding the jumps (or switches) between the different linear models. Identifying jump Markov linear models is a challenging problem lacking an analytical solution. We derive a new expectation maximization (EM) type algorithm that produces maximum likelihood estimates of the model parameters. Our development hinges upon recent progress in combining particle filters with Markov chain Monte Carlo methods to solve the nonlinear state smoothing problem inherent in the EM formulation. Key to our development is that we exploit a conditionally linear Gaussian substructure in the model, allowing for an efficient algorithm. (A minimal sketch of the EM skeleton follows this record.)
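As a rough illustration of how such an EM-type identification scheme is organised, the skeleton below assumes scalar per-mode dynamics x_t = a_{s_t} x_{t-1} + v_t and a Monte Carlo E-step supplied by the caller; the sample_smoother stub stands in for a particle smoother of the kind the paper builds on and is not implemented here, and the names m_step, em_identify and a_k are hypothetical, not the authors' implementation.

```python
# Skeleton of an EM-type identification loop for a jump Markov linear
# model with K scalar modes:
#   x_t = a_{s_t} * x_{t-1} + v_t,   y_t = x_t + e_t,   s_t in {0,...,K-1}
# The E-step (state/mode smoothing) is assumed to be provided by the
# caller; only the closed-form M-step for the coefficients a_k is shown.
import numpy as np

def m_step(x_samples, s_samples, K):
    """Closed-form update of the per-mode coefficients a_k.

    x_samples, s_samples: arrays of shape (M, T) holding M smoothed
    state / mode trajectories produced by the Monte Carlo E-step.
    """
    a_new = np.zeros(K)
    for k in range(K):
        num, den = 0.0, 0.0
        for x, s in zip(x_samples, s_samples):
            mask = s[1:] == k                     # transitions into mode k
            num += np.sum(x[1:][mask] * x[:-1][mask])
            den += np.sum(x[:-1][mask] ** 2)
        a_new[k] = num / den if den > 0 else 0.0  # least-squares / ML update
    return a_new

def em_identify(y, K, sample_smoother, n_iter=20, M=10):
    """EM skeleton: alternate a Monte Carlo E-step and the M-step above."""
    a = np.full(K, 0.5)                           # crude initialisation
    for _ in range(n_iter):
        # E-step (stub): draw M state/mode trajectories given current a.
        x_samples, s_samples = sample_smoother(y, a, M)
        # M-step: maximise the expected complete-data log-likelihood.
        a = m_step(x_samples, s_samples, K)
    return a
```

In a fuller treatment, the mode transition probabilities and the noise variances would be re-estimated in the same closed-form fashion from the sampled trajectories.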
