SwePub

Results for search "WFRF:(Helgesson Petter 1986 )"

  • Results 1-10 of 35
1.
  • Alhassan, Erwin, et al. (author)
  • Reducing A Priori 239Pu Nuclear Data Uncertainty In The Keff Using A Set Of Criticality Benchmarks With Different Nuclear Data Libraries
  • 2015
  • Conference paper (other academic/artistic), abstract:
    • In the Total Monte Carlo (TMC) method [1], developed at the Nuclear Research and Consultancy Group for nuclear data uncertainty propagation, model calculations are compared with differential experimental data and a specific a priori uncertainty is assigned to each model parameter. By varying all the model parameters together within their uncertainties, a full covariance matrix is obtained, with off-diagonal elements if desired [1]. In this way, differential experimental data serve as a constraint on the model parameters used in the TALYS nuclear reactions code for the production of random nuclear data files. These files are processed into usable formats and used in transport codes for reactor calculations and for uncertainty propagation to reactor macroscopic parameters of interest. Even though differential experimental data together with their uncertainties are included (implicitly) in the production of these random nuclear data files in the TMC method, wide spreads in parameter distributions have been observed, leading to large uncertainties in reactor parameters for some nuclides for the European Lead Cooled Training Reactor [2]. Due to safety concerns and the development of GEN-IV reactors with their challenging technological goals, the present uncertainties should be reduced significantly if the benefits from advances in modelling and simulation are to be utilized fully [3]. In Ref. [4], a binary accept/reject approach and a more rigorous method of assigning file weights based on the likelihood function were proposed for reducing nuclear data uncertainties using a set of integral benchmarks obtained from the International Handbook of Evaluated Criticality Safety Benchmark Experiments (ICSBEP). These methods depend on the reference nuclear data library used, the combined benchmark uncertainty, and the relevance of each benchmark for reducing nuclear data uncertainties for a particular reactor system. Since each nuclear data library normally comes with its own nominal values and covariance matrices, reactor calculations and uncertainties computed with these libraries differ from library to library. In this work, we apply the binary accept/reject approach and the method of assigning file weights based on the likelihood function to reduce the a priori 239Pu nuclear data uncertainties for the European Lead Cooled Training Reactor (ELECTRA) using a set of criticality benchmarks. Prior and posterior uncertainties computed for ELECTRA using ENDF/B-VII.1, JEFF-3.2 and JENDL-4.0 are compared after including experimental information from over 10 benchmarks.
      [1] A.J. Koning and D. Rochman, Modern Nuclear Data Evaluation with the TALYS Code System. Nuclear Data Sheets 113 (2012) 2841-2934.
      [2] E. Alhassan, H. Sjöstrand, P. Helgesson, A.J. Koning, M. Österlund, S. Pomp, D. Rochman, Uncertainty and correlation analysis of lead nuclear data on reactor parameters for the European Lead Cooled Training Reactor (ELECTRA). Annals of Nuclear Energy 75 (2015) 26-37.
      [3] G. Palmiotti, M. Salvatores, G. Aliberti, H. Hiruta, R. McKnight, P. Oblozinsky, W. Yang, A global approach to the physics validation of simulation codes for future nuclear systems. Annals of Nuclear Energy 36 (3) (2009) 355-361.
      [4] E. Alhassan, H. Sjöstrand, J. Duan, P. Helgesson, S. Pomp, M. Österlund, D. Rochman, A.J. Koning, Selecting benchmarks for reactor calculations. In Proc. PHYSOR 2014 - The Role of Reactor Physics toward a Sustainable Future, Kyoto, Japan, Sep. 28 - Oct. 3 (2014).
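The likelihood-based file weighting and the binary accept/reject filtering described in this abstract can be pictured with a short, self-contained sketch. All numbers below (file count, k_eff values, benchmark uncertainties) are invented for illustration, correlations between benchmarks are ignored for brevity, and this is not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical inputs: n_files random 239Pu data files, each yielding a
# calculated k_eff for n_bench criticality benchmarks (invented numbers).
n_files, n_bench = 500, 10
k_calc = rng.normal(1.000, 0.004, size=(n_files, n_bench))  # calculations
k_exp = rng.normal(1.000, 0.002, size=n_bench)              # benchmark values
sigma = np.full(n_bench, 0.003)              # combined benchmark uncertainty

# Likelihood-based weight per file: w_i proportional to exp(-chi2_i / 2),
# with chi2 summed over benchmarks (treated as uncorrelated here).
chi2 = (((k_calc - k_exp) / sigma) ** 2).sum(axis=1)
w = np.exp(-0.5 * (chi2 - chi2.min()))  # shift chi2 for numerical stability
w /= w.sum()

# Binary accept/reject alternative: keep only files below a chi2 cut.
accepted = chi2 < np.quantile(chi2, 0.2)
print(f"accepted {accepted.sum()} of {n_files} files")

# Posterior (weighted) mean and spread of a reactor parameter of interest,
# here a toy stand-in for the target system's k_eff, correlated with the
# benchmark responses as it would be when the same files drive both.
k_target = k_calc.mean(axis=1) + rng.normal(0.0, 0.001, n_files)
post_mean = np.sum(w * k_target)
post_std = np.sqrt(np.sum(w * (k_target - post_mean) ** 2))
print(f"prior std {k_target.std():.5f} -> posterior std {post_std:.5f}")
```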
2.
  • Helgesson, Petter, 1986- (author)
  • Approaching well-founded comprehensive nuclear data uncertainties : Fitting imperfect models to imperfect data
  • 2018
  • Doctoral thesis (other academic/artistic), abstract:
    • Nuclear physics has a wide range of applications; e.g., low-carbon energy production, medical treatments, and non-proliferation of nuclear weapons. Nuclear data (ND) constitute necessary input to computations needed within all these applications. This thesis considers uncertainties in ND and their propagation to applications such as material damage in nuclear reactors. TENDL is today the most comprehensive library of evaluated ND (a combination of experimental ND and physical models), and it provides uncertainty estimates for all the nuclides it contains; however, TENDL relies on an automated process which, so far, includes a few practical remedies that are not statistically well-founded. A long-term goal of the thesis is to provide methods which make these comprehensive uncertainties well-founded. One of the main topics of the thesis is the automatic construction of experimental covariances; at first by attempting to complete the available uncertainty information using a set of simple rules. The thesis also investigates using the distribution of the data; this yields promising results, and the two approaches may be combined in future work.
      In one of the papers underlying the thesis, there are also manual analyses of experiments, for the thermal cross sections of Ni-59 (important for material damage). Based on this, uncertainty components in the experiments are sampled, resulting in a distribution of thermal cross sections. After being combined with other types of ND in a novel way, the distribution is propagated both to an application and to an evaluated ND file, now part of the ND library JEFF-3.3.
      The thesis also compares a set of different techniques used to fit models in ND evaluation. For example, it is quantified how sensitive different techniques are to a model defect, i.e., the inability of the model to reproduce the truth underlying the data. All techniques are affected, but techniques fitting model parameters directly (such as the primary method used for TENDL) are more sensitive to model defects. These methods also have advantages, such as physical consistency and the possibility to build up a framework such as that of TENDL.
      The treatment of these model defects is another main topic of the thesis. To this end, two ways of using Gaussian processes (GPs) are studied, applied to quite different situations. First, the addition of a GP to the model is used to enable the fitting of arbitrarily shaped peaks in a histogram of data. This is shown to give a substantial improvement compared to assuming the peaks are Gaussian (when they are not), using both synthetic and authentic data.
      The other approach uses GPs to fit smoothly energy-dependent model parameters in an ND evaluation context. Such an approach would be relatively easy to incorporate into the TENDL framework, and it ensures a certain level of physical consistency. It is used on a TALYS-like model with synthetic data, and clearly outperforms fits without the energy-dependent model parameters, showing that the method can provide a viable route to improved ND evaluation. As a proof of concept, it is also used with the authentic TALYS and with authentic data.
      To conclude, the thesis takes significant steps towards well-founded comprehensive ND uncertainties.
3.
  • Helgesson, Petter, 1986-, et al. (author)
  • Assessment of Novel Techniques for Nuclear Data Evaluation
  • 2018
  • In: Reactor Dosimetry. ASTM International. ISBN 9780803176614, pp. 105-116
  • Conference paper (peer-reviewed), abstract:
    • The quality of evaluated nuclear data can be impacted by, e.g., the choice of the evaluation algorithm. The objective of this work is to compare the performance of the evaluation techniques GLS, GLS-P, UMC-G, and UMC-B by using synthetic data. In particular, the effects of model defects are investigated. For small model defects, UMC-B and GLS-P are found to perform best, while these techniques yield the worst results for a significantly defective model; in particular, they seriously underestimate the uncertainties. If UMC-B is augmented with Gaussian processes, it performs distinctly better for a defective model but is more susceptible to an inadequate experimental covariance estimate.
4.
  • Helgesson, Petter, 1986-, et al. (author)
  • Combining Total Monte Carlo and Unified Monte Carlo : Bayesian nuclear data uncertainty quantification from auto-generated experimental covariances
  • 2017
  • In: Progress in Nuclear Energy (New Series). Elsevier. ISSN 0149-1970, e-ISSN 1878-4224. Vol. 96, pp. 76-96
  • Journal article (peer-reviewed), abstract:
    • The Total Monte Carlo methodology (TMC) for nuclear data (ND) uncertainty propagation has been subject to some critique because the nuclear reaction parameters are sampled from distributions which have not been rigorously determined from experimental data. In this study, it is thoroughly explained how TMC and Unified Monte Carlo-B (UMC-B) are combined to include experimental data in TMC. Random ND files are weighted with likelihood function values computed by comparing the ND files to experimental data, using experimental covariance matrices generated from information in the experimental database EXFOR and a set of simple rules. A proof that such weights give a consistent implementation of Bayes' theorem is provided. The impact of the weights is mainly studied for a set of integral systems/applications, e.g., a set of shielding fuel assemblies which shall prevent aging of the pressure vessels of the Swedish nuclear reactors Ringhals 3 and 4.
      In this implementation, the impact from the weighting is small for many of the applications. In some cases, this can be explained by the fact that the distributions used as priors are too narrow to be valid as such. Another possible explanation is that the integral systems are highly sensitive to resonance parameters, which effectively are not treated in this work. In other cases, only a very small number of files get significantly large weights, i.e., the region of interest is poorly resolved. This convergence issue can be due to the parameter distributions used as priors or model defects, for example.
      Further, some parameters used in the rules for the EXFOR interpretation have been varied. The observed impact from varying one parameter at a time is not very strong. This can partially be due to the general insensitivity to the weights seen for many applications, and there can be strong interaction effects. The automatic treatment of outliers has a quite large impact, however.
      To approach more justified ND uncertainties, the rules for the EXFOR interpretation shall be further discussed and developed, in particular the rules for rejecting outliers, and random ND files that are intended to describe prior distributions shall be generated. Further, model defects need to be treated.
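The auto-generated experimental covariance matrices mentioned in this abstract can be pictured with a common two-component construction: an uncorrelated statistical part on the diagonal plus a fully correlated normalization part shared by the points of one experiment. The split and all numbers below are assumptions for illustration, not the paper's actual EXFOR rules.

```python
import numpy as np

def experimental_covariance(stat_unc, norm_unc):
    """Toy covariance: uncorrelated statistical uncertainties on the
    diagonal plus a fully correlated normalization component shared by
    all points of one experiment (an assumed rule, for illustration)."""
    stat = np.asarray(stat_unc, dtype=float)
    norm = np.asarray(norm_unc, dtype=float)
    return np.diag(stat ** 2) + np.outer(norm, norm)

# Three data points from one hypothetical experiment (barns).
y_exp = np.array([12.1, 12.6, 12.4])
cov = experimental_covariance(stat_unc=[0.30, 0.40, 0.35],
                              norm_unc=0.02 * y_exp)  # 2% normalization

# Likelihood weight of one random ND file with prediction y_calc.
y_calc = np.array([12.3, 12.3, 12.3])
r = y_calc - y_exp
chi2 = r @ np.linalg.solve(cov, r)
weight = np.exp(-0.5 * chi2)  # unnormalized; normalize over all files
print(f"chi2 = {chi2:.2f}, weight = {weight:.3f}")
```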
5.
6.
  • Helgesson, Petter, 1986- (author)
  • Experimental data and Total Monte Carlo : Towards justified, transparent and complete nuclear data uncertainties
  • 2015
  • Licentiate thesis (other academic/artistic), abstract:
    • The applications of nuclear physics are many, one important example being nuclear power, which can help decelerate climate change. In all of these applications, so-called nuclear data (ND, numerical representations of nuclear physics) are used in computations and simulations which are necessary for, e.g., design and maintenance. The ND is not perfectly known - there are uncertainties associated with it - and this thesis concerns the quantification and propagation of these uncertainties. In particular, methods are developed to include experimental data in the Total Monte Carlo methodology (TMC). The work goes in two directions.
      One is to include the experimental data by giving weights to the different "random files" used in TMC. This methodology is applied to practical cases using an automatic interpretation of an experimental database, including uncertainties and correlations. The weights are shown to give a consistent implementation of Bayes' theorem, such that the obtained uncertainty estimates in theory can be correct, given the experimental data. The practical implementation is more complicated. This is largely due to the interpretation of experimental data, but also because of model defects - the methodology assumes that there are parameter choices such that the model of the physics reproduces reality perfectly. This assumption is not valid, and in future work, model defects should be taken into account. Experimental data should also be used to give feedback to the distribution of the parameters, and not only to provide weights at a later stage.
      The other direction is based on the simulation of the experimental setup as a means to analyze the experiments in a structured way, and to obtain the full joint distribution of several different data points. In practice, this methodology has been applied to the thermal (n,α), (n,p), (n,γ) and (n,tot) cross sections of 59Ni. For example, the estimated expected value and standard deviation for the (n,α) cross section is (12.87 ± 0.72) b, which can be compared to the established value of (12.3 ± 0.6) b given in the work of Mughabghab. Note that the correlations to the other thermal cross sections, as well as other aspects of the distribution, are also obtained in this work - and this can be important when propagating the uncertainties. The careful evaluation of the thermal cross sections is complemented by a coarse analysis of the cross sections of 59Ni at other energies. The resulting nuclear data is used to study the propagation of the uncertainties through a model describing stainless steel in the spectrum of a thermal reactor. In particular, the helium production is studied. The distribution has a large uncertainty (a standard deviation of (17 ± 3) %), and it shows a strong asymmetry. Much of the uncertainty and its shape can be attributed to the coarser part of the uncertainty analysis, which, therefore, shall be refined in the future.
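The sampling of experimental uncertainty components, and the asymmetry that a nonlinear response can produce, can be mimicked with a toy Monte Carlo. The component split, the response function, and all numbers except the (12.87 ± 0.72) b value quoted above are invented for illustration; this is not the thesis' analysis of 59Ni.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# Assumed decomposition of one measurement into uncertainty components
# (counting statistics, normalization, background); invented magnitudes.
counting = rng.normal(1.00, 0.03, n)   # relative counting statistics
norm_f = rng.normal(1.00, 0.04, n)     # normalization factor
backgr = rng.normal(0.00, 0.02, n)     # background subtraction (relative)

# Resulting distribution of a thermal cross section, centred on the
# 12.87 b (n,alpha) value quoted in the abstract.
sigma_na = 12.87 * (counting * norm_f - backgr)

# Propagation through a nonlinear response (a toy stand-in for the helium
# production model); the nonlinearity is what produces the asymmetry.
helium = np.exp(0.15 * (sigma_na / 12.87 - 1.0))

mean, std = helium.mean(), helium.std()
skew = ((helium - mean) ** 3).mean() / std ** 3
print(f"relative std {std / mean:.1%}, skewness {skew:+.2f}")
```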
7.
  • Helgesson, Petter, 1986-, et al. (author)
  • Fitting a defect non-linear model with or without prior, distinguishing nuclear reaction products as an example
  • 2017
  • In: Review of Scientific Instruments. AIP Publishing. ISSN 0034-6748, e-ISSN 1089-7623. Vol. 88
  • Journal article (peer-reviewed), abstract:
    • Fitting parametrized functions to data is important for many researchers and scientists. If the model is non-linear and/or defective, it is not trivial to do correctly and to include an adequate uncertainty analysis. This work presents how the Levenberg-Marquardt algorithm for non-linear generalized least squares fitting can be used with a prior distribution for the parameters, and how it can be combined with Gaussian processes to treat model defects. An example, where three peaks in a histogram are to be distinguished, is carefully studied. In particular, the probability r1 for a nuclear reaction to end up in one out of two overlapping peaks is studied. Synthetic data is used to investigate effects of linearizations and other assumptions. For perfect Gaussian peaks, it is seen that the estimated parameters are distributed close to the truth with good covariance estimates. This assumes that the method is applied correctly; for example, prior knowledge should be implemented using a prior distribution, and not by assuming that some parameters are perfectly known (if they are not). It is also important to update the data covariance matrix using the fit if the uncertainties depend on the expected value of the data (e.g., for Poisson counting statistics or relative uncertainties). If a model defect is added to the peaks, such that their shape is unknown, a fit which assumes perfect Gaussian peaks becomes unable to reproduce the data, and the results for r1 become biased. It is, however, seen that it is possible to treat the model defect with a Gaussian process with a covariance function tailored for the situation, with hyper-parameters determined by leave-one-out cross validation. The resulting estimates for r1 are virtually unbiased, and the uncertainty estimates agree very well with the underlying uncertainty.
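A standard way to realize the "fit with a prior" described above is to append the prior as extra residual terms in the least-squares problem, so that the Levenberg-Marquardt minimum is the Gaussian posterior maximum. The sketch below does this with SciPy's generic least-squares driver on an invented two-peak histogram; it illustrates the idea, not the paper's implementation, and omits the Gaussian-process defect term.

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(2)

# Invented data: two overlapping Gaussian peaks in a histogram.
x = np.linspace(0.0, 10.0, 200)

def model(p, x):
    a1, m1, s1, a2, m2, s2 = p
    return (a1 * np.exp(-0.5 * ((x - m1) / s1) ** 2)
            + a2 * np.exp(-0.5 * ((x - m2) / s2) ** 2))

p_true = np.array([100.0, 4.0, 0.8, 60.0, 5.5, 0.9])
sigma_y = np.sqrt(model(p_true, x) + 1.0)        # ~Poisson uncertainties
y = model(p_true, x) + rng.normal(0.0, sigma_y)

# Assumed prior knowledge on the parameters (for illustration only).
p_prior = np.array([90.0, 4.1, 0.9, 70.0, 5.4, 1.0])
prior_std = np.array([30.0, 0.3, 0.3, 30.0, 0.3, 0.3])

def residuals(p):
    # Stacking data and prior residuals: minimizing the total sum of
    # squares then maximizes the Gaussian posterior (MAP estimate).
    return np.concatenate([(model(p, x) - y) / sigma_y,
                           (p - p_prior) / prior_std])

fit = least_squares(residuals, p_prior, method="lm")

# Probability of ending up in peak 1, cf. r1 in the abstract; Gaussian
# peak areas are proportional to amplitude times width.
area1, area2 = fit.x[0] * fit.x[2], fit.x[3] * fit.x[5]
print(f"r1 estimate: {area1 / (area1 + area2):.3f}")
```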
8.
  • Helgesson, Petter, 1986-, et al. (author)
  • Including experimental information in TMC using file weights from automatically generated experimental covariance matrices
  • Other publication (other academic/artistic), abstract:
    • The Total Monte Carlo methodology (TMC) for nuclear data (ND) uncertainty propagation has been subject to some critique because the nuclear reaction parameters are sampled from distributions which have not been rigorously determined from experimental data. In this study, it is thoroughly explained how random ND files are weighted with likelihood function values computed by comparing the ND files to experimental data, using experimental covariance matrices generated from information in the experimental database EXFOR and a set of simple rules. A proof that such weights give a consistent implementation of Bayes' theorem is provided. The impact of the weights is mainly studied for a set of integral systems/applications, e.g., a set of shielding fuel assemblies which shall prevent aging of the pressure vessels of the Swedish nuclear reactors Ringhals 3 and 4.
      For many applications, the weighting does not have much impact, something which can be explained by too narrow prior distributions. Another possible explanation is that the integral systems are highly sensitive to resonance parameters, which effectively are not treated in this work. In other cases, only a very small number of files get significantly large weights, which can be due to the prior parameter distributions or model defects.
      Further, some parameters used in the rules for the EXFOR interpretation have been varied. The observed impact from varying one parameter at a time is not very strong. This can partially be due to the general insensitivity to the weights seen for many applications, and there can be strong interaction effects. The automatic treatment of outliers has a quite large impact, however.
      To approach more justified ND uncertainties, the rules for the EXFOR interpretation shall be further discussed and developed, in particular the rules for rejecting outliers, and random ND files that are intended to describe prior distributions shall be generated. Further, model defects need to be treated.
9.
  • Helgesson, Petter, 1986-, et al. (author)
  • Incorporating Experimental Information in the Total Monte Carlo Methodology Using File Weights
  • 2015
  • In: Nuclear Data Sheets. Elsevier BV. ISSN 0090-3752, e-ISSN 1095-9904. Vol. 123 (SI), pp. 214-219
  • Journal article (peer-reviewed), abstract:
    • Some criticism has been directed towards the Total Monte Carlo method because experimental information has not been taken into account in a statistically well-founded manner. In this work, a Bayesian calibration method is implemented by assigning weights to the random nuclear data files, and the method is illustratively applied to a few applications. In some considered cases, the estimated nuclear data uncertainties are significantly reduced and the central values are significantly shifted. The study suggests that the method can be applied both to estimate uncertainties in a more justified way and in the search for better central values. Some improvements are, however, necessary; for example, the treatment of outliers and cross-experimental correlations should be more rigorous, and random files that are intended to be prior files should be generated.
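That likelihood weights on prior samples implement Bayes' theorem can be checked numerically in a conjugate-normal case, where the posterior is known in closed form. The sketch below is such a self-contained check with invented numbers; it is not the derivation referred to in the paper.

```python
import numpy as np

rng = np.random.default_rng(3)

# Conjugate-normal test case: prior N(mu0, tau0^2), one measurement
# y ~ N(theta, sigma^2); the posterior is available analytically.
mu0, tau0, sigma, y = 1.00, 0.05, 0.03, 1.04
var_post = 1.0 / (1.0 / tau0 ** 2 + 1.0 / sigma ** 2)
mu_post = var_post * (mu0 / tau0 ** 2 + y / sigma ** 2)

# "Random files": samples from the prior, weighted by the likelihood.
theta = rng.normal(mu0, tau0, 1_000_000)
w = np.exp(-0.5 * ((y - theta) / sigma) ** 2)
w /= w.sum()

mean_w = np.sum(w * theta)
std_w = np.sqrt(np.sum(w * (theta - mean_w) ** 2))
print(f"analytic posterior: {mu_post:.4f} +/- {np.sqrt(var_post):.4f}")
print(f"weighted samples:   {mean_w:.4f} +/- {std_w:.4f}")
```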
10.