SwePub
Search the SwePub database



Search: L773:0960 3174 OR L773:1573 1375

  • Result 1-10 of 23
1.
  • Andersson, Claes, 1987, et al. (author)
  • Inference for cluster point processes with over- or under-dispersed cluster sizes
  • 2020
  • In: Statistics and Computing. - : Springer Science and Business Media LLC. - 0960-3174 .- 1573-1375. ; 30, pp. 1573-1590
  • Journal article (peer-reviewed), abstract:
    • Cluster point processes comprise a class of models that have been used for a wide range of applications. While several models have been studied for the probability density function of the offspring displacements and the parent point process, there are few examples of non-Poisson distributed cluster sizes. In this paper, we introduce a generalization of the Thomas process, which allows for the cluster sizes to have a variance that is greater or less than the expected value. We refer to this as the cluster sizes being over- and under-dispersed, respectively. To fit the model, we introduce minimum contrast methods and a Bayesian MCMC algorithm. These are evaluated in a simulation study. It is found that using the Bayesian MCMC method, we are in most cases able to detect over- and under-dispersion in the cluster sizes. We use the MCMC method to fit the model to nerve fiber data, and contrast the results to those of a fitted Thomas process.
  •  
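The over- and under-dispersed cluster sizes described in the abstract are easy to illustrate by simulation. The sketch below is not the authors' model or their estimators, just a minimal Thomas-style cluster process in which negative binomial cluster sizes give variance above the mean and binomial sizes give variance below it; the function and parameter names (`simulate_cluster_process`, `kappa`, `mean_size`, `disp`, `sigma`) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_cluster_process(kappa, mean_size, disp, sigma, win=(0.0, 1.0)):
    """Simulate a Thomas-style cluster process on a square window.

    Cluster sizes are negative binomial when disp > 1 (over-dispersed:
    variance = disp * mean), binomial when disp < 1 (under-dispersed,
    up to rounding of the number of trials), and Poisson when disp == 1,
    which recovers the ordinary Thomas process.
    """
    lo, hi = win
    n_parents = rng.poisson(kappa * (hi - lo) ** 2)
    parents = rng.uniform(lo, hi, size=(n_parents, 2))
    points = []
    for p in parents:
        if disp > 1.0:
            # NB(r, q) counts failures: mean = r(1-q)/q, var = mean / q
            r = mean_size / (disp - 1.0)
            size = rng.negative_binomial(r, 1.0 / disp)
        elif disp < 1.0:
            # Binomial(n, q): mean = n q, var = mean * (1 - q)
            q = 1.0 - disp
            size = rng.binomial(max(1, round(mean_size / q)), q)
        else:
            size = rng.poisson(mean_size)
        # Gaussian offspring displacements around the parent
        points.append(p + rng.normal(0.0, sigma, size=(size, 2)))
    return np.vstack(points) if points else np.empty((0, 2))

pts = simulate_cluster_process(kappa=25, mean_size=8, disp=3.0, sigma=0.02)
```

Here `disp` is the variance-to-mean ratio of the cluster sizes, so `disp = 1` reduces to Poisson cluster sizes.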
2.
  • Bierkens, Joris, et al. (author)
  • A piecewise deterministic Monte Carlo method for diffusion bridges
  • 2021
  • In: Statistics and Computing. - : Springer Science and Business Media LLC. - 0960-3174 .- 1573-1375. ; 31:3
  • Journal article (peer-reviewed), abstract:
    • We introduce the use of the Zig-Zag sampler to the problem of sampling conditional diffusion processes (diffusion bridges). The Zig-Zag sampler is a rejection-free sampling scheme based on a non-reversible continuous piecewise deterministic Markov process. Similar to the Lévy–Ciesielski construction of a Brownian motion, we expand the diffusion path in a truncated Faber–Schauder basis. The coefficients within the basis are sampled using a Zig-Zag sampler. A key innovation is the use of the fully local algorithm for the Zig-Zag sampler, which allows us to exploit the sparsity structure implied by the dependency graph of the coefficients and by the subsampling technique to reduce the complexity of the algorithm. We illustrate the performance of the proposed methods in a number of examples.
  •  
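As a toy illustration of the Zig-Zag sampler itself (not of the diffusion-bridge construction above, which expands paths in a Faber–Schauder basis), the sketch below runs a one-dimensional Zig-Zag process targeting N(0, 1). For this target the switching rate (v·x)⁺ is piecewise linear in time, so event times can be drawn by exact inversion of the integrated rate; all names and settings are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def zigzag_1d_gaussian(n_events=5000):
    """1D Zig-Zag sampler targeting N(0, 1).

    The process moves with velocity v in {-1, +1} and flips v at the
    events of a Poisson process with rate (v * x)^+ , the Zig-Zag
    switching rate for the potential U(x) = x^2 / 2. Along a segment
    the rate is (a + s)^+ with a = v * x, so the event time is drawn
    by exact inversion of the integrated rate.
    """
    x, v = 0.0, 1.0
    xs, ts = [x], [0.0]
    t = 0.0
    for _ in range(n_events):
        a = v * x
        e = rng.exponential()
        if a < 0.0:
            # rate is zero until s = -a, then grows linearly
            tau = -a + np.sqrt(2.0 * e)
        else:
            # solve a*tau + tau^2/2 = e
            tau = -a + np.sqrt(a * a + 2.0 * e)
        x += v * tau
        t += tau
        v = -v  # flip the velocity at the event
        xs.append(x)
        ts.append(t)
    return np.array(ts), np.array(xs)

ts, xs = zigzag_1d_gaussian()
```

The pair `(ts, xs)` describes a continuous piecewise-linear trajectory; expectations under the target are time averages along it, not averages over the event points.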
3.
  • Bierkens, Joris, et al. (author)
  • Sticky PDMP samplers for sparse and local inference problems
  • 2023
  • In: Statistics and Computing. - : Springer Science and Business Media LLC. - 0960-3174 .- 1573-1375. ; 33
  • Journal article (peer-reviewed), abstract:
    • We construct a new class of efficient Monte Carlo methods based on continuous-time piecewise deterministic Markov processes (PDMPs) suitable for inference in high dimensional sparse models, i.e. models for which there is prior knowledge that many coordinates are likely to be exactly 0. This is achieved with the fairly simple idea of endowing existing PDMP samplers with “sticky” coordinate axes, coordinate planes etc. Upon hitting those subspaces, an event is triggered during which the process sticks to the subspace, thereby spending some time in a sub-model. This results in non-reversible jumps between different (sub-)models. While we show that PDMP samplers in general can be made sticky, we mainly focus on the Zig-Zag sampler. Compared to the Gibbs sampler for variable selection, we heuristically derive favourable dependence of the Sticky Zig-Zag sampler on dimension and data size. The computational efficiency of the Sticky Zig-Zag sampler is further established through numerical experiments where both the sample size and the dimension of the parameter space are large.
  •  
4.
  • Corander, Jukka, et al. (author)
  • Bayesian model learning based on a parallel MCMC strategy
  • 2006
  • In: Statistics and Computing. - : Springer Science and Business Media LLC. - 0960-3174 .- 1573-1375. ; 16:4, pp. 355-362
  • Journal article (peer-reviewed), abstract:
    • We introduce a novel Markov chain Monte Carlo algorithm for estimation of posterior probabilities over discrete model spaces. Our learning approach is applicable to families of models for which the marginal likelihood can be analytically calculated, either exactly or approximately, given any fixed structure. It is argued that for certain model neighborhood structures, the ordinary reversible Metropolis-Hastings algorithm does not yield an appropriate solution to the estimation problem. Therefore, we develop an alternative, non-reversible algorithm which can avoid the scaling effect of the neighborhood. To efficiently explore a model space, a finite number of interacting parallel stochastic processes is utilized. Our interaction scheme enables exploration of several local neighborhoods of a model space simultaneously, while it prevents the absorption of any particular process to a relatively inferior state. We illustrate the advantages of our method by an application to a classification model. In particular, we use an extensive bacterial database and compare our results with results obtained by different methods for the same data.
  •  
5.
  • Corander, Jukka, 1965-, et al. (author)
  • Have I seen you before? : Principles of Bayesian predictive classification revisited
  • 2013
  • In: Statistics and Computing. - : Springer Berlin/Heidelberg. - 0960-3174 .- 1573-1375. ; 23:1, pp. 59-73
  • Journal article (peer-reviewed), abstract:
    • A general inductive Bayesian classification framework is considered using a simultaneous predictive distribution for test items. We introduce a principle of generative supervised and semi-supervised classification based on marginalizing the joint posterior distribution of labels for all test items. The simultaneous and marginalized classifiers arise under different loss functions, while both acknowledge jointly all uncertainty about the labels of test items and the generating probability measures of the classes. We illustrate for data from multiple finite alphabets that such classifiers achieve higher correct classification rates than a standard marginal predictive classifier which labels all test items independently, when training data are sparse. In the supervised case for multiple finite alphabets the simultaneous and the marginal classifiers are proven to become equal under generalized exchangeability when the amount of training data increases. Hence, the marginal classifier can be interpreted as an asymptotic approximation to the simultaneous classifier for finite sets of training data. It is also shown that such convergence is not guaranteed in the semi-supervised setting, where the marginal classifier does not provide a consistent approximation.
  •  
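The marginal predictive classifier that the abstract above compares against can be sketched for a single finite alphabet. The code below is a generic Dirichlet–categorical posterior predictive classifier that labels test items independently; it is not the paper's simultaneous classifier, and the function and parameter names are illustrative assumptions.

```python
import numpy as np

def predictive_classify(train_X, train_y, test_X, n_classes, n_symbols, alpha=1.0):
    """Marginal Bayesian predictive classifier over a finite alphabet.

    Each feature within a class is categorical with a symmetric
    Dirichlet(alpha) prior, so the posterior predictive probability of
    symbol s for feature j in class c is
    (count[c, j, s] + alpha) / (n_c + alpha * n_symbols).
    Test items are labelled independently of one another, which is the
    'marginal' classifier; the simultaneous classifier would instead
    score all test labels jointly.
    """
    n_feat = train_X.shape[1]
    counts = np.zeros((n_classes, n_feat, n_symbols))
    n_c = np.zeros(n_classes)
    feat_idx = np.arange(n_feat)
    for x, y in zip(train_X, train_y):
        n_c[y] += 1
        counts[y, feat_idx, x] += 1
    labels = np.empty(len(test_X), dtype=int)
    for i, x in enumerate(test_X):
        # predictive class prior with a uniform Dirichlet over classes
        logp = np.log((n_c + 1.0) / (n_c.sum() + n_classes))
        for c in range(n_classes):
            p = (counts[c, feat_idx, x] + alpha) / (n_c[c] + alpha * n_symbols)
            logp[c] += np.log(p).sum()
        labels[i] = int(np.argmax(logp))
    return labels

# Tiny demo: two classes concentrated on different symbols.
train_X = np.array([[0, 0], [0, 0], [1, 1], [1, 1]])
train_y = np.array([0, 0, 1, 1])
labels = predictive_classify(train_X, train_y,
                             np.array([[0, 0], [1, 1]]),
                             n_classes=2, n_symbols=2)
```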
6.
  • Cornebise, Julien, et al. (author)
  • Adaptive methods for sequential importance sampling with application to state space models
  • 2008
  • In: Statistics and Computing. - : Springer Science and Business Media LLC. - 0960-3174 .- 1573-1375. ; 18:4, pp. 461-480
  • Journal article (peer-reviewed), abstract:
    • In this paper we discuss new adaptive proposal strategies for sequential Monte Carlo algorithms, also known as particle filters, relying on criteria evaluating the quality of the proposed particles. The choice of the proposal distribution is a major concern and can dramatically influence the quality of the estimates. Thus, we show how the long-used coefficient of variation (suggested by Kong et al. in J. Am. Stat. Assoc. 89(425):278-288, 1994) of the weights can be used for estimating the chi-square distance between the target and instrumental distributions of the auxiliary particle filter. As a by-product of this analysis we obtain an auxiliary adjustment multiplier weight type for which this chi-square distance is minimal. Moreover, we establish an empirical estimate of linear complexity of the Kullback-Leibler divergence between the involved distributions. Guided by these results, we discuss adaptive design of the particle filter proposal distribution and illustrate the methods on a numerical example.
  •  
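The coefficient-of-variation criterion mentioned in the abstract is cheap to compute from the importance weights, using the standard identity ESS = N / (1 + CV²) = 1 / Σ w̄ᵢ² for normalised weights w̄ᵢ. The sketch below, including the Gaussian importance-sampling example, is illustrative and is not the paper's adaptive scheme.

```python
import numpy as np

rng = np.random.default_rng(42)

def cv_and_ess(log_weights):
    """Coefficient of variation of the normalised importance weights,
    and the effective sample size ESS = N / (1 + CV^2) = 1 / sum(w_i^2)."""
    w = np.exp(log_weights - log_weights.max())  # stabilise before normalising
    w /= w.sum()
    n = w.size
    cv2 = max(n * np.sum(w ** 2) - 1.0, 0.0)     # squared coefficient of variation
    return np.sqrt(cv2), n / (1.0 + cv2)

# Toy example: target N(0, 1), instrumental N(0, 2^2).
x = rng.normal(0.0, 2.0, size=10_000)
log_w = -0.5 * x ** 2 + 0.5 * (x / 2.0) ** 2     # log target - log proposal, up to constants
cv, ess = cv_and_ess(log_w)
```

Equal weights give CV = 0 and ESS = N; the more the proposal mismatches the target, the larger CV² grows, consistent with its interpretation as an estimate of the chi-square distance between the two distributions.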
7.
  • Cornebise, Julien, et al. (author)
  • Adaptive sequential Monte Carlo by means of mixture of experts
  • 2014
  • In: Statistics and Computing. - : Springer Science and Business Media LLC. - 0960-3174 .- 1573-1375. ; 24:3, pp. 317-337
  • Journal article (peer-reviewed), abstract:
    • Appropriately designing the proposal kernel of particle filters is an issue of significant importance, since a bad choice may lead to deterioration of the particle sample and, consequently, waste of computational power. In this paper we introduce a novel algorithm adaptively approximating the so-called optimal proposal kernel by a mixture of integrated curved exponential distributions with logistic weights. This family of distributions, referred to as mixtures of experts, is broad enough to be used in the presence of multi-modality or strongly skewed distributions. The mixtures are fitted, via online-EM methods, to the optimal kernel through minimisation of the Kullback-Leibler divergence between the auxiliary target and instrumental distributions of the particle filter. At each iteration of the particle filter, the algorithm is required to solve only a single optimisation problem for the whole particle sample, yielding an algorithm with only linear complexity. In addition, we illustrate in a simulation study how the method can be successfully applied to optimal filtering in nonlinear state-space models.
  •  
8.
  • Cronie, Ottmar, 1979, et al. (author)
  • Inhomogeneous higher-order summary statistics for point processes on linear networks
  • 2020
  • In: Statistics and Computing. - : Springer Science and Business Media LLC. - 0960-3174 .- 1573-1375. ; 30, pp. 1221-1239
  • Journal article (peer-reviewed), abstract:
    • As a workaround for the lack of transitive transformations on linear network structures, which are required to consider different notions of distributional invariance, including stationarity, we introduce the notions of pseudostationarity and intensity reweighted moment pseudostationarity for point processes on linear networks. Moreover, using arbitrary so-called regular linear network distances, e.g. the Euclidean and the shortest-path distance, we further propose geometrically corrected versions of different higher-order summary statistics, including the inhomogeneous empty space function, the inhomogeneous nearest neighbour distance distribution function and the inhomogeneous J-function. Such summary statistics detect interactions of order higher than two. We also discuss their nonparametric estimators and through a simulation study, considering models with different types of spatial interaction and different networks, we study the performance of our proposed summary statistics by means of envelopes. Our summary statistic estimators manage to capture clustering, regularity as well as Poisson process independence. Finally, we make use of our new summary statistics to analyse two different datasets: motor vehicle traffic accidents and spiderwebs.
  •  
9.
  • Dahlin, Johan, 1986-, et al. (author)
  • Particle Metropolis-Hastings using gradient and Hessian information
  • 2015
  • In: Statistics and Computing. - : Springer. - 0960-3174 .- 1573-1375. ; 25:1, pp. 81-92
  • Journal article (peer-reviewed), abstract:
    • Particle Metropolis-Hastings (PMH) allows for Bayesian parameter inference in nonlinear state space models by combining MCMC and particle filtering. The latter is used to estimate the intractable likelihood. In its original formulation, PMH makes use of a marginal MCMC proposal for the parameters, typically a Gaussian random walk. However, this can lead to a poor exploration of the parameter space and an inefficient use of the generated particles. We propose two alternative versions of PMH that incorporate gradient and Hessian information about the posterior into the proposal. This information is more or less obtained as a byproduct of the likelihood estimation. Indeed, we show how to estimate the required information using a fixed-lag particle smoother, with a computational cost growing linearly in the number of particles. We conclude that the proposed methods can: (i) decrease the length of the burn-in phase, (ii) increase the mixing of the Markov chain at the stationary phase, and (iii) make the proposal distribution scale invariant, which simplifies tuning.
  •  
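For context, the marginal random-walk PMH that the paper takes as its starting point can be sketched for a toy linear-Gaussian state-space model: a bootstrap particle filter supplies an unbiased likelihood estimate inside a Metropolis-Hastings loop. The gradient- and Hessian-informed proposals are the paper's contribution and are omitted here; the model, names and settings below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)

def simulate_ssm(phi, T=100, sv=0.5, se=0.5):
    """Simulate x_t = phi * x_{t-1} + v_t, y_t = x_t + e_t (Gaussian noise)."""
    x = 0.0
    ys = np.empty(T)
    for t in range(T):
        x = phi * x + sv * rng.normal()
        ys[t] = x + se * rng.normal()
    return ys

def pf_loglik(phi, ys, n=200, sv=0.5, se=0.5):
    """Bootstrap particle filter estimate of log p(y_{1:T} | phi)."""
    x = np.zeros(n)
    ll = 0.0
    for y in ys:
        x = phi * x + sv * rng.normal(size=n)                    # propagate
        logw = -0.5 * ((y - x) / se) ** 2                        # weight, up to a constant
        m = logw.max()
        w = np.exp(logw - m)
        ll += m + np.log(w.mean()) - 0.5 * np.log(2.0 * np.pi * se ** 2)
        x = x[rng.choice(n, size=n, p=w / w.sum())]              # multinomial resampling
    return ll

def pmh(ys, n_iter=200, step=0.1):
    """Marginal PMH with a Gaussian random-walk proposal on phi, flat prior on (-1, 1)."""
    phi = 0.0
    ll = pf_loglik(phi, ys)
    chain = np.empty(n_iter)
    for i in range(n_iter):
        phi_p = phi + step * rng.normal()
        ll_p = pf_loglik(phi_p, ys) if abs(phi_p) < 1.0 else -np.inf
        if np.log(rng.uniform()) < ll_p - ll:
            phi, ll = phi_p, ll_p
        chain[i] = phi
    return chain

ys = simulate_ssm(phi=0.7)
chain = pmh(ys)
```

Keeping the stored likelihood estimate for the current state, rather than re-estimating it at every iteration, is what makes this a valid pseudo-marginal scheme.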
10.
  • Douc, Randal, et al. (author)
  • On the use of Markov chain Monte Carlo methods for the sampling of mixture models : a statistical perspective
  • 2015
  • In: Statistics and Computing. - : Springer Science and Business Media LLC. - 0960-3174 .- 1573-1375. ; 25:1, pp. 95-110
  • Journal article (peer-reviewed), abstract:
    • In this paper we study asymptotic properties of different data-augmentation-type Markov chain Monte Carlo algorithms sampling from mixture models comprising discrete as well as continuous random variables. Of particular interest to us is the situation where sampling from the conditional distribution of the continuous component given the discrete component is infeasible. In this context, we advance Carlin & Chib's pseudo-prior method as an alternative way of inferring mixture models and discuss and compare different algorithms based on this scheme. We propose a novel algorithm, the Frozen Carlin & Chib sampler, which is computationally less demanding than any Metropolised Carlin & Chib-type algorithm. The significant gain of computational efficiency is however obtained at the cost of some asymptotic variance. The performance of the algorithm vis-à-vis alternative schemes is investigated theoretically as well as numerically, using some recent results obtained in Maire et al. (Ann Stat 42:1483-1510, 2014) for inhomogeneous Markov chains evolving alternatingly according to two different π-reversible Markov transition kernels.
  •  
Type of publication
journal article (23)
Type of content
peer-reviewed (21)
other academic/artistic (2)
Author/Editor
Olsson, Jimmy (6)
Schauer, Moritz, 198 ... (2)
Mateu, Jorge (2)
Bierkens, Joris (2)
Grazzi, Sebastiano (2)
Moulines, Eric (2)
Cornebise, Julien (2)
Bottai, M (1)
Rydén, Tobias (1)
Pavlenko, Tatjana (1)
Zuyev, Sergei, 1962 (1)
Lindsten, Fredrik (1)
Pya, Natalya (1)
Sahlin, Ullrika (1)
Gyllenberg, Mats (1)
Andersson, Claes, 19 ... (1)
Mrkvicka, T. (1)
Lindström, Erik (1)
Lee, Y (1)
Koski, Timo, 1952- (1)
van der Meulen, Fran ... (1)
Schön, Thomas, 1977- (1)
Sagitov, Serik, 1956 (1)
Cronie, Ottmar (1)
Cronie, Ottmar, 1979 (1)
Corander, Jukka (1)
Lee, W (1)
Meulen, Frank van de ... (1)
Bokrantz, Rasmus (1)
Geraci, M (1)
Koski, Timo (1)
Corander, Jukka, 196 ... (1)
Cui, Yao (1)
Sirén, Jukka (1)
Moradi, Mehdi (1)
Dahlin, Johan, 1986- (1)
Douc, Randal (1)
Maire, Florian (1)
Eftekhari, Armin (1)
Zygalakis, Konstanti ... (1)
Vargas, Luis (1)
Zhang, Tianfang (1)
Wood, Simon N. (1)
Lindo, Alexey, 1987 (1)
Maruotti, Antonello (1)
Mastrototaro, Alessa ... (1)
Moradi, M. Mehdi (1)
Rubak, Ege (1)
Lachieze-Rey, Raphae ... (1)
Baddeley, Adrian (1)
University
Royal Institute of Technology (7)
Chalmers University of Technology (6)
University of Gothenburg (5)
Lund University (5)
Umeå University (4)
Uppsala University (2)
Linköping University (2)
Karolinska Institutet (2)
Swedish University of Agricultural Sciences (1)
Language
English (23)
Research subject (UKÄ/SCB)
Natural sciences (20)
Engineering and Technology (6)

Year
