SwePub
Search the SwePub database

  Extended search

Result list for the search "WFRF:(Hössjer Ola)"


  • Result 1-10 of 108
1.
  • Anevski, Dragi, 1965, et al. (author)
  • A general asymptotic scheme for inference under order restrictions
  • 2006
  • In: Annals of Statistics. - : Institute of Mathematical Statistics. - 0090-5364. ; 34:4, s. 1874-1930
  • Journal article (peer-reviewed)
    • Limit distributions for the greatest convex minorant and its derivative are considered for a general class of stochastic processes including partial sum processes and empirical processes, for independent, weakly dependent and long range dependent data. The results are applied to isotonic regression, isotonic regression after kernel smoothing, estimation of convex regression functions, and estimation of monotone and convex density functions. Various pointwise limit distributions are obtained, and the rate of convergence depends on the self-similarity properties and on the rate of convergence of the processes considered. © Institute of Mathematical Statistics, 2006.
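In practice, the isotonic regression estimator studied in the entry above is computed with the pool adjacent violators algorithm (PAVA). As an illustrative sketch only (not the paper's asymptotic machinery; the function name is mine), a minimal non-decreasing least-squares fit in Python:

```python
def pava(y):
    """Pool Adjacent Violators Algorithm: least-squares non-decreasing
    (isotonic) fit to the sequence y."""
    # Each block stores [sum of values, count]; adjacent blocks are
    # merged while their means violate the monotonicity constraint.
    blocks = []
    for v in y:
        blocks.append([v, 1])
        # Merge while the previous block's mean exceeds the last block's mean
        # (compared via cross-multiplication to avoid repeated division).
        while len(blocks) > 1 and blocks[-2][0] * blocks[-1][1] > blocks[-1][0] * blocks[-2][1]:
            total, count = blocks.pop()
            blocks[-1][0] += total
            blocks[-1][1] += count
    # Expand each block back to its constituent positions at the block mean.
    fit = []
    for total, count in blocks:
        fit.extend([total / count] * count)
    return fit
```

For example, `pava([1, 3, 2, 4])` pools the violating pair (3, 2) into two values of 2.5, yielding a monotone fit.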
2.
  • Anevski, Dragi, et al. (author)
  • Monotone regression and density function estimation at a point of discontinuity
  • 2002
  • In: Journal of Nonparametric Statistics. - : Informa UK Limited. - 1048-5252 .- 1029-0311. ; 14:3, s. 279-294
  • Journal article (peer-reviewed)
    • Pointwise limit distribution results are given for the isotonic regression estimator at a point of discontinuity. The cases treated are independent data, phi- and alpha-mixing data and subordinated Gaussian long range dependent data. Pointwise limit results for the nonparametric maximum likelihood estimator of a monotone density are given at a point of discontinuity, for independent data. The limit distributions are non-standard and differ from the ones obtained for differentiable regression and density functions.
3.
  • Bengtsson, Henrik, et al. (author)
  • Methodological study of affine transformations of gene expression data with proposed robust non-parametric multi-dimensional normalization method
  • 2006
  • In: BMC Bioinformatics. - : Springer Science and Business Media LLC. - 1471-2105. ; 7
  • Journal article (peer-reviewed)
    • Background: Low-level processing and normalization of microarray data are among the most important steps in microarray analysis and have a profound impact on downstream analysis. Multiple methods have been suggested to date, but it is not clear which is the best. It is therefore important to further study the different normalization methods in detail and the nature of microarray data in general. Results: A methodological study of affine models for gene expression data is carried out. Focus is on two-channel comparative studies, but the findings also generalize to single- and multi-channel data. The discussion applies to spotted as well as in-situ synthesized microarray data. Existing normalization methods such as curve-fit ("lowess") normalization, parallel and perpendicular translation normalization, and quantile normalization, as well as dye-swap normalization, are revisited in the light of the affine model, and their strengths and weaknesses are investigated in this context. As a direct result of this study, we propose a robust non-parametric multi-dimensional affine normalization method, which can be applied to any number of microarrays with any number of channels, either individually or all at once. A high-quality cDNA microarray data set with spike-in controls is used to demonstrate the power of the affine model and the proposed normalization method. Conclusion: We find that an affine model can explain non-linear intensity-dependent systematic effects in observed log-ratios. Affine normalization removes such artifacts for non-differentially expressed genes and ensures that symmetry between negative and positive log-ratios is obtained, which is fundamental when identifying differentially expressed genes. In addition, affine normalization makes the empirical distributions in different channels more equal, which is the purpose of quantile normalization, and may also explain why dye-swap normalization works or fails. All methods are made available in the aroma package, which is a platform-independent package for R.
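The quantile normalization revisited in the entry above forces the empirical distributions of different channels or arrays to coincide. A minimal stdlib-only Python sketch of the idea (ties are handled naively, and this is not the aroma package's R implementation; the function name is mine):

```python
def quantile_normalize(columns):
    """Quantile normalization: make every column (channel/array) share the
    same empirical distribution, here the mean of the sorted columns."""
    n = len(columns[0])
    # Reference distribution: mean of the k-th smallest value across columns.
    sorted_cols = [sorted(c) for c in columns]
    reference = [sum(col[k] for col in sorted_cols) / len(columns)
                 for k in range(n)]
    out = []
    for col in columns:
        # Rank each value within its column, then substitute the
        # reference value of that rank.
        order = sorted(range(n), key=lambda i: col[i])
        new = [0.0] * n
        for rank, idx in enumerate(order):
            new[idx] = reference[rank]
        out.append(new)
    return out
```

After normalization, every column contains exactly the same set of values, only in column-specific order, which is the property the abstract refers to when it says the empirical distributions become "more equal".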
4.
  • Berglund, Daniel (author)
  • Models for Additive and Sufficient Cause Interaction
  • 2019
  • Licentiate thesis (other academic/artistic)
    • The aim of this thesis is to develop and explore models in, and related to, the sufficient cause framework and additive interaction. Additive interaction is closely connected with public health interventions and can be used to make inferences about the sufficient causes in order to find the mechanisms behind an outcome, for instance a disease. In paper A we extend additive interaction, and interventions, to include continuous exposures. We show that there does not exist a model that does not lead to inconsistent conclusions about the interaction. The sufficient cause framework can also be expressed using Boolean functions, which is expanded upon in paper B, where we define a new model based on the multifactor potential outcome model (MFPO) and independence of causal influence (ICI) models. In paper C we discuss the modeling and estimation of additive interaction in relation to whether the exposures are harmful or protective conditioned on some other exposure. If there is uncertainty about the direction of the effects, there can be errors in the testing of the interaction effect.
5.
  • Björkwall, Susanna, et al. (author)
  • A generalized linear model with smoothing effects for claims reserving
  • 2011
  • In: Insurance, Mathematics & Economics. - : Elsevier BV. - 0167-6687 .- 1873-5959. ; 49:1, s. 27-37
  • Journal article (peer-reviewed)
    • In this paper, we continue the development of the ideas introduced in England and Verrall (2001) by suggesting the use of a reparameterized version of the generalized linear model (GLM) which is frequently used in stochastic claims reserving. This model enables us to smooth the origin, development and calendar year parameters in a similar way as is often done in practice, but still keep the GLM structure. Specifically, we use this model structure in order to obtain reserve estimates and to systemize the model selection procedure that arises in the smoothing process. Moreover, we provide a bootstrap procedure to achieve a full predictive distribution.
6.
  • Björkwall, Susanna, et al. (author)
  • Bootstrapping the separation method in claims reserving.
  • 2010
  • In: Astin Bulletin. - 0515-0361 .- 1783-1350. ; 40:2, s. 845-869
  • Journal article (peer-reviewed)
    • The separation method was introduced by Verbeek (1972) in order to forecast numbers of excess claims, and it was developed further by Taylor (1977) to be applicable to the average claim cost. The separation method differs from the chain-ladder in that, whereas the chain-ladder only assumes claim proportionality between the development years, the separation method also separates the claim delay distribution from influences affecting the calendar years, e.g. inflation. Since the inflation contributes to the uncertainty in the estimate of the claims reserve, it is important to consider its impact in the context of risk management, too. In this paper we present a method for assessing the prediction error distribution of the separation method. To this end we introduce a parametric framework within the separation model which enables joint resampling of claim counts and claim amounts. As a result, the variability of Taylor's predicted reserves can be assessed by extending the parametric bootstrap techniques of Björkwall et al. (2009). The performance of the bootstrapped separation method and chain-ladder is compared for a real data set.
7.
  • Björkwall, Susanna, 1976-, et al. (author)
  • Non-parametric and parametric bootstrap techniques for age-to-age development factor methods in stochastic claims reserving.
  • 2009
  • In: Scandinavian Actuarial Journal. - : Informa UK Limited. - 0346-1238 .- 1651-2030. ; :4, s. 306-331
  • Journal article (peer-reviewed)
    • In the literature, one of the main objects of stochastic claims reserving is to find models underlying the chain-ladder method in order to analyze the variability of the outstanding claims, either analytically or by bootstrapping. In bootstrapping these models are used to find a full predictive distribution of the claims reserve, even though there is a long tradition of actuaries calculating the reserve estimate according to more complex algorithms than the chain-ladder, without explicit reference to an underlying model. In this paper we investigate existing bootstrap techniques and suggest two alternative bootstrap procedures, one non-parametric and one parametric, by which the predictive distribution of the claims reserve can be found for other age-to-age development factor methods than the chain-ladder, using some rather mild model assumptions. For illustration, the procedures are applied to three different development triangles.
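The age-to-age development factor methods bootstrapped in the entry above start from factors estimated on a cumulative run-off triangle. As an illustrative sketch only (the volume-weighted chain-ladder estimate, not the paper's bootstrap procedure; the function name and triangle layout are my assumptions), with rows as origin years and shorter rows having fewer observed development periods:

```python
def development_factors(triangle):
    """Volume-weighted chain-ladder age-to-age factors from a cumulative
    run-off triangle given as a list of rows (one row per origin year)."""
    n_dev = max(len(row) for row in triangle)
    factors = []
    for j in range(n_dev - 1):
        # Use only origin years where both development periods j and j+1
        # have been observed.
        num = sum(row[j + 1] for row in triangle if len(row) > j + 1)
        den = sum(row[j] for row in triangle if len(row) > j + 1)
        factors.append(num / den)
    return factors
```

Bootstrap approaches such as those in the paper resample residuals or parametric pseudo-triangles and recompute factors like these to obtain a predictive distribution of the reserve.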
8.
  • Björkwall, Susanna, 1976- (author)
  • Stochastic claims reserving in non-life insurance : Bootstrap and smoothing models
  • 2011
  • Doctoral thesis (other academic/artistic)
    • In practice there is a long tradition of actuaries calculating reserve estimates according to deterministic methods without explicit reference to a stochastic model. For instance, the chain-ladder was originally a deterministic reserving method. Moreover, the actuaries often make ad hoc adjustments of the methods, for example, smoothing of the chain-ladder development factors, in order to fit the data set under analysis. However, stochastic models are needed in order to assess the variability of the claims reserve. The standard statistical approach would be to first specify a model, then find an estimate of the outstanding claims under that model, typically by maximum likelihood, and finally the model could be used to find the precision of the estimate. As a compromise between this approach and the actuary's way of working without reference to a model the object of the research area has often been to first construct a model and a method that produces the actuary's estimate and then use this model in order to assess the uncertainty of the estimate. A drawback of this approach is that the suggested models have been constructed to give a measure of the precision of the reserve estimate without the possibility of changing the estimate itself. The starting point of this thesis is the inconsistency between the deterministic approaches used in practice and the stochastic ones suggested in the literature. On one hand, the purpose of Paper I is to develop a bootstrap technique which easily enables the actuary to use other development factor methods than the pure chain-ladder relying on as few model assumptions as possible. This bootstrap technique is then extended and applied to the separation method in Paper II. On the other hand, the purpose of Paper III is to create a stochastic framework which imitates the ad hoc deterministic smoothing of chain-ladder development factors which is frequently used in practice.
9.
  • Diaz-Pachón, Daniel Andrés, et al. (author)
  • Assessing, Testing and Estimating the Amount of Fine-Tuning by Means of Active Information
  • 2022
  • In: Entropy. - : MDPI AG. - 1099-4300. ; 24:10
  • Journal article (peer-reviewed)
    • A general framework is introduced to estimate how much external information has been infused into a search algorithm, the so-called active information. This is rephrased as a test of fine-tuning, where tuning corresponds to the amount of pre-specified knowledge that the algorithm makes use of in order to reach a certain target. A function f quantifies specificity for each possible outcome x of a search, so that the target of the algorithm is a set of highly specified states, whereas fine-tuning occurs if it is much more likely for the algorithm to reach the target as intended than by chance. The distribution of a random outcome X of the algorithm involves a parameter θ that quantifies how much background information has been infused. A simple choice of this parameter is to use θf in order to exponentially tilt the distribution of the outcome of the search algorithm under the null distribution of no tuning, so that an exponential family of distributions is obtained. Such algorithms are obtained by iterating a Metropolis–Hastings type of Markov chain, which makes it possible to compute their active information under the equilibrium and non-equilibrium of the Markov chain, with or without stopping when the targeted set of fine-tuned states has been reached. Other choices of tuning parameters θ are discussed as well. Nonparametric and parametric estimators of active information and tests of fine-tuning are developed when repeated and independent outcomes of the algorithm are available. The theory is illustrated with examples from cosmology, student learning, reinforcement learning, a Moran type model of population genetics, and evolutionary programming.
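Active information, as used in the entry above, compares the probability that the tuned algorithm reaches the target with the corresponding probability under the null of no tuning. As a minimal sketch of a plug-in estimator from repeated independent runs (base-2 logarithm and the function name are my assumptions; the paper develops more refined nonparametric and parametric estimators and tests):

```python
import math

def active_information(hits_tuned, n_tuned, p_null):
    """Plug-in estimate of active information, log2(p_hat / p_null), where
    p_hat = hits_tuned / n_tuned is the observed frequency with which the
    tuned algorithm reaches the target, and p_null is the target probability
    under blind (untuned) search."""
    p_hat = hits_tuned / n_tuned
    return math.log2(p_hat / p_null)
```

For instance, an algorithm that hits a target of null probability 1/8 in 50 of 100 runs has an estimated active information of 2 bits, i.e. the tuning is worth two binary questions' worth of prior knowledge.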
10.
  • Díaz-Pachón, Daniel Andrés, et al. (author)
  • Is cosmological tuning fine or coarse?
  • 2021
  • In: Journal of Cosmology and Astroparticle Physics. - : IOP Publishing. - 1475-7516. ; :7
  • Journal article (peer-reviewed)
    • The fine-tuning of the universe for life, the idea that the constants of nature (or ratios between them) must belong to very small intervals in order for life to exist, has been debated by scientists for several decades. Several criticisms have emerged concerning probabilistic measurement of life-permitting intervals. Herein, a Bayesian statistical approach is used to assign an upper bound for the probability of tuning, which is invariant with respect to change of physical units, and under certain assumptions it is small whenever the life-permitting interval is small on a relative scale. The computation of the upper bound of the tuning probability is achieved by first assuming that the prior is chosen by the principle of maximum entropy (MaxEnt). The unknown parameters of this MaxEnt distribution are then handled in such a way that the weak anthropic principle is not violated. The MaxEnt assumption is maximally noncommittal with regard to missing information. This approach is sufficiently general to be applied to constants of current cosmological models, or to other constants possibly under different models. Application of the MaxEnt model reveals, for example, that the ratio of the universal gravitational constant to the square of the Hubble constant is finely tuned in some cases, whereas the amplitude of primordial fluctuations is not.
Type of publication
journal article (78)
reports (8)
other publication (6)
doctoral thesis (6)
licentiate thesis (6)
book chapter (3)
research review (1)
Type of content
peer-reviewed (82)
other academic/artistic (22)
pop. science, debate, etc. (4)
Author/Editor
Hössjer, Ola (75)
Hössjer, Ola, 1964- (19)
Ryman, Nils (13)
Laikre, Linda (10)
Hössjer, Ola, Profes ... (10)
Hedström, Anna Karin (8)
Olsson, Fredrik (7)
Kockum, Ingrid (6)
Diaz-Pachón, Daniel ... (6)
Karlsson, Måns, 1988 ... (6)
Alfredsson, Lars (5)
Olsson, Tomas (4)
Humphreys, Keith (4)
Holst, Ulla (4)
Björkwall, Susanna (4)
Grünewald, Maria, 19 ... (4)
Werner Hartman, Lind ... (3)
Trolle Lagerros, Ylv ... (3)
Petersson, Mikael (3)
Ryman, Nils, 1943- (3)
Allendorf, Fred W (3)
Laikre, Linda, 1960- (3)
Frigyesi, Attila (3)
Ohlsson, Esbjörn (3)
Verrall, Richard (3)
Sjölander, Arvid (3)
Silvestrov, Dmitrii, ... (3)
Ekheden, Erland, 198 ... (3)
Tyvand, Peder A. (3)
Lekman, Magnus (3)
Karlsson, Sten (2)
Groop, Leif (2)
Silvestrov, Dmitrii, ... (2)
Kutschera, Verena E. (2)
Ye, Weimin (2)
Ekman, Diana (2)
Åkerstedt, Torbjörn (2)
Kurbasic, Azra (2)
Hillert, Jan (2)
Bellocco, Rino (2)
Jorde, Per Erik (2)
Ängquist, Lars (2)
Hartman, Linda (2)
Katsoulis, Michail (2)
Björkwall, Susanna, ... (2)
Marks II, Robert J. (2)
Ruppert, David (2)
Malmberg, Hannes (2)
Kardos, Marty (2)
Rao, J. Sunil (2)
University
Stockholm University (89)
Lund University (15)
Karolinska Institutet (15)
Royal Institute of Technology (3)
University of Gothenburg (1)
Uppsala University (1)
Halmstad University (1)
Mälardalen University (1)
Linköping University (1)
Chalmers University of Technology (1)
VTI - The Swedish National Road and Transport Research Institute (1)
Language
English (108)
Research subject (UKÄ/SCB)
Natural sciences (90)
Medical and Health Sciences (12)
Social Sciences (3)
Humanities (1)
