SwePub
Search the SwePub database


Hit list for the search "WFRF:(Koning Arjan J.) ;pers:(Helgesson Petter 1986)"

Search: WFRF:(Koning Arjan J.) > Helgesson Petter 1986

  • Results 1-8 of 8
1.
  • Alhassan, Erwin, et al. (author)
  • Reducing A Priori 239Pu Nuclear Data Uncertainty In The Keff Using A Set Of Criticality Benchmarks With Different Nuclear Data Libraries
  • 2015
  • Conference paper (other academic/artistic), abstract:
    • In the Total Monte Carlo (TMC) method [1] developed at the Nuclear Research and Consultancy Group for nuclear data uncertainty propagation, model calculations are compared with differential experimental data and a specific a priori uncertainty is assigned to each model parameter. By varying the model parameters all together within the model parameter uncertainties, a full covariance matrix is obtained, with its off-diagonal elements if desired [1]. In this way, differential experimental data serve as a constraint for the model parameters used in the TALYS nuclear reactions code for the production of random nuclear data files. These files are processed into usable formats and used in transport codes for reactor calculations and for uncertainty propagation to reactor macroscopic parameters of interest. Even though differential experimental data together with their uncertainties are included (implicitly) in the production of these random nuclear data files in the TMC method, wide spreads in parameter distributions have been observed, leading to large uncertainties in reactor parameters for some nuclides for the European Lead Cooled Training Reactor [2]. Due to safety concerns and the development of GEN-IV reactors with their challenging technological goals, the present uncertainties should be reduced significantly if the benefits from advances in modelling and simulations are to be utilized fully [3]. In Ref. [4], a binary accept/reject approach and a more rigorous method of assigning file weights based on the likelihood function were proposed and presented for reducing nuclear data uncertainties using a set of integral benchmarks obtained from the International Handbook of Evaluated Criticality Safety Benchmark Experiments (ICSBEP). These methods depend on the reference nuclear data library used, the combined benchmark uncertainty, and the relevance of each benchmark for reducing nuclear data uncertainties for a particular reactor system. Since each nuclear data library normally comes with its own nominal values and covariance matrices, reactor calculations and uncertainties computed with these libraries differ from library to library. In this work, we apply the binary accept/reject approach and the method of assigning file weights based on the likelihood function for reducing a priori 239Pu nuclear data uncertainties for the European Lead Cooled Training Reactor (ELECTRA) using a set of criticality benchmarks. Prior and posterior uncertainties computed for ELECTRA using ENDF/B-VII.1, JEFF-3.2 and JENDL-4.0 are compared after including experimental information from over 10 benchmarks. (A schematic sketch of the two weighting approaches follows this entry.)
      [1] A.J. Koning and D. Rochman, Modern Nuclear Data Evaluation with the TALYS Code System. Nuclear Data Sheets 113 (2012) 2841-2934.
      [2] E. Alhassan, H. Sjöstrand, P. Helgesson, A.J. Koning, M. Österlund, S. Pomp, D. Rochman, Uncertainty and correlation analysis of lead nuclear data on reactor parameters for the European Lead Cooled Training Reactor (ELECTRA). Annals of Nuclear Energy 75 (2015) 26-37.
      [3] G. Palmiotti, M. Salvatores, G. Aliberti, H. Hiruta, R. McKnight, P. Oblozinsky, W. Yang, A global approach to the physics validation of simulation codes for future nuclear systems. Annals of Nuclear Energy 36 (3) (2009) 355-361.
      [4] E. Alhassan, H. Sjöstrand, J. Duan, P. Helgesson, S. Pomp, M. Österlund, D. Rochman, A.J. Koning, Selecting benchmarks for reactor calculations. In proc. PHYSOR 2014 - The Role of Reactor Physics toward a Sustainable Future, Kyoto, Japan, Sep. 28 - Oct. 3 (2014).
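A minimal Python sketch of the two approaches described above (binary accept/reject and likelihood-based file weights), assuming independent Gaussian benchmarks; every array, the 2-sigma acceptance rule and the variable names are illustrative stand-ins, not the paper's data or code.

    import numpy as np

    # Illustrative inputs: k_eff calculated with each random ND file for
    # each benchmark (n_files x n_benchmarks), benchmark values E and
    # combined uncertainties sigma (experiment + method).
    rng = np.random.default_rng(0)
    C = 1.0 + 0.003 * rng.standard_normal((500, 12))  # hypothetical calculations
    E = np.ones(12)                                   # benchmark k_eff values
    sigma = 0.002 * np.ones(12)                       # combined uncertainties

    # Likelihood-based file weights (independent Gaussian benchmarks):
    # w_i ~ exp(-chi2_i / 2), chi2_i = sum_b ((C_ib - E_b) / sigma_b)^2.
    chi2 = (((C - E) / sigma) ** 2).sum(axis=1)
    w = np.exp(-0.5 * (chi2 - chi2.min()))            # shifted for stability
    w /= w.sum()

    # Binary accept/reject alternative: keep a file only if every
    # benchmark is reproduced within, e.g., 2 combined standard deviations.
    accept = (np.abs(C - E) <= 2.0 * sigma).all(axis=1)

    # Posterior (weighted) mean and uncertainty of a reactor parameter y
    # computed with each file, e.g., k_eff of ELECTRA:
    y = 1.0 + 0.005 * rng.standard_normal(500)        # hypothetical results
    y_mean = np.sum(w * y)
    y_std = np.sqrt(np.sum(w * (y - y_mean) ** 2))

The prior uncertainty corresponds to the unweighted spread of y; the posterior uncertainty is y_std (or, for the binary approach, the spread over the accepted files).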
2.
  • Helgesson, Petter, 1986-, et al. (author)
  • Combining Total Monte Carlo and Unified Monte Carlo : Bayesian nuclear data uncertainty quantification from auto-generated experimental covariances
  • 2017
  • In: Progress in Nuclear Energy (New Series). - Elsevier. - ISSN 0149-1970, E-ISSN 1878-4224. ; 96, pp. 76-96
  • Journal article (peer-reviewed), abstract:
    • The Total Monte Carlo methodology (TMC) for nuclear data (ND) uncertainty propagation has been subject to some critique because the nuclear reaction parameters are sampled from distributions which have not been rigorously determined from experimental data. In this study, it is thoroughly explained how TMC and Unified Monte Carlo-B (UMC-B) are combined to include experimental data in TMC. Random ND files are weighted with likelihood function values computed by comparing the ND files to experimental data, using experimental covariance matrices generated from information in the experimental database EXFOR and a set of simple rules. A proof that such weights give a consistent implementation of Bayes' theorem is provided. The impact of the weights is mainly studied for a set of integral systems/applications, e.g., a set of shielding fuel assemblies which shall prevent aging of the pressure vessels of the Swedish nuclear reactors Ringhals 3 and 4.
      In this implementation, the impact from the weighting is small for many of the applications. In some cases, this can be explained by the fact that the distributions used as priors are too narrow to be valid as such. Another possible explanation is that the integral systems are highly sensitive to resonance parameters, which effectively are not treated in this work. In other cases, only a very small number of files get significantly large weights, i.e., the region of interest is poorly resolved. This convergence issue can be due to the parameter distributions used as priors or to model defects, for example.
      Further, some parameters used in the rules for the EXFOR interpretation have been varied. The observed impact from varying one parameter at a time is not very strong. This can partially be due to the general insensitivity to the weights seen for many applications, and there can be strong interaction effects. The automatic treatment of outliers has a quite large impact, however.
      To approach more justified ND uncertainties, the rules for the EXFOR interpretation shall be further discussed and developed, in particular the rules for rejecting outliers, and random ND files that are intended to describe prior distributions shall be generated. Further, model defects need to be treated. (A schematic sketch of the covariance construction and weighting follows this entry.)
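A Python sketch of the kind of auto-generated experimental covariance and likelihood weighting the abstract refers to. The "simple rules" are reduced here to two uncertainty components (uncorrelated statistical plus a fully correlated normalization term), and all numbers and the helper name log_likelihood are illustrative assumptions, not the paper's actual rules.

    import numpy as np

    # One experimental data set: covariance built from a statistical
    # (uncorrelated) and a systematic (fully correlated) component.
    y_exp = np.array([1.10, 1.05, 0.98, 0.92])   # measured cross sections
    d_stat = 0.02 * y_exp                        # statistical uncertainties
    d_syst = 0.03 * y_exp                        # normalization component
    cov = np.diag(d_stat ** 2) + np.outer(d_syst, d_syst)

    # Gaussian log-likelihood of a model curve t (one random ND file):
    # log L = -1/2 [ r^T V^(-1) r + ln det(2 pi V) ],  r = t - y_exp.
    def log_likelihood(t, y, V):
        r = t - y
        sign, logdet = np.linalg.slogdet(2.0 * np.pi * V)
        return -0.5 * (r @ np.linalg.solve(V, r) + logdet)

    t_file = np.array([1.08, 1.03, 1.00, 0.95])  # hypothetical TALYS curve
    print(log_likelihood(t_file, y_exp, cov))

Weights proportional to exp(log L), one per random file, then implement Bayes' theorem over the file ensemble, which is the consistency result the paper proves.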
3.
4.
  • Helgesson, Petter, 1986-, et al. (author)
  • New 59Ni data including uncertainties and consequences for gas production in steel in LWR spectra
  • 2015
  • Conference paper (other academic/artistic), abstract:
    • With ageing reactor fleets, the importance of estimating material damage parameters in structural materials is increasing. 59Ni is not naturally abundant, but as noted in, e.g., Ref. [1], the two-step reaction 58Ni(n,γ)59Ni(n,α)56Fe gives a very important contribution to the helium production and damage energy in stainless steel in thermal spectra, because of the extraordinarily large thermal (n,α) cross section of 59Ni (for most other nuclides, the (n,α) reaction has a threshold). None of the evaluated data libraries contains uncertainty information for (n,α) and (n,p) for 59Ni at thermal energies and in the resonance region. Therefore, new such data are produced in this work, including random data to be used with the Total Monte Carlo methodology [2] for nuclear data uncertainty propagation.
      The limited R-matrix format (“LRF = 7”) of ENDF-6 is used, with the Reich-Moore approximation (“LRF = 3” is just a subset of Reich-Moore). The neutron and gamma widths are obtained from TARES [2], with uncertainties, and are translated into LRF = 7. The α and proton widths are obtained from the little information available in EXFOR [3] (assuming large uncertainties because of lacking documentation) or from sampling from unresolved resonance parameters from TALYS [2], and they are split into different channels (different excited states of the recoiling nuclide, etc.). Finally, the cross sections are adjusted to match the experiments at thermal energies, with uncertainties.
      The data are used to estimate the gas production rates for different systems, including the propagated nuclear data uncertainty. Preliminary results for SS304 in a typical thermal spectrum show that including 59Ni at its peak concentration increases the helium production rate by a factor of 4.93 ± 0.28, including a 5.7 ± 0.2 % uncertainty due to the 59Ni data. It is, however, likely that the uncertainty will increase substantially from including the uncertainty of other nuclides and from re-evaluating the experimental thermal cross sections. (A schematic sketch of the two-step buildup follows this entry.)
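The role of the two-step reaction can be made concrete with a back-of-the-envelope buildup calculation. This Python sketch uses round one-group constants (NOT the new evaluation of the paper) and a simple Euler integration, so the flux, cross sections and time grid are all illustrative assumptions.

    import numpy as np

    # Two-step helium production 58Ni(n,g)59Ni(n,a)56Fe in a thermal flux.
    BARN = 1e-24                  # cm^2
    phi = 1e14                    # thermal flux (n/cm^2/s), illustrative
    sig_g58 = 4.4 * BARN          # rough 58Ni(n,g) thermal cross section
    sig_a59 = 12.0 * BARN         # rough 59Ni(n,a) thermal cross section
    sig_abs59 = 92.0 * BARN       # rough total removal of 59Ni

    dt = 3600.0 * 24 * 10         # 10-day time step
    N58, N59, He = 1.0, 0.0, 0.0  # relative concentrations

    for _ in range(3000):         # explicit Euler integration
        dN58 = -sig_g58 * phi * N58
        dN59 = sig_g58 * phi * N58 - sig_abs59 * phi * N59
        He += sig_a59 * phi * N59 * dt        # helium production
        N58 += dN58 * dt
        N59 += dN59 * dt

    # 59Ni passes through a peak where production balances removal,
    # N59/N58 -> sig_g58/sig_abs59; near that peak the threshold-free
    # (n,a) channel adds strongly to the helium production in the steel.
    print(f"N59/N58: {N59 / N58:.4f}, balance ratio: {sig_g58 / sig_abs59:.4f}")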
5.
6.
  • Helgesson, Petter, 1986-, et al. (author)
  • Sampling of systematic errors to estimate likelihood weights in nuclear data uncertainty propagation
  • 2016
  • In: Nuclear Instruments and Methods in Physics Research Section A. - Elsevier. - ISSN 0168-9002, E-ISSN 1872-9576. ; 807, pp. 137-149
  • Journal article (peer-reviewed), abstract:
    • In methodologies for nuclear data (ND) uncertainty assessment and propagation based on random sampling, likelihood weights can be used to infer experimental information into the distributions for the ND. As the included number of correlated experimental points grows large, the computational time for the matrix inversion involved in obtaining the likelihood can become a practical problem. There are also other problems related to the conventional computation of the likelihood, e.g., the assumption that all experimental uncertainties are Gaussian.
      In this study, a way to estimate the likelihood which avoids matrix inversion is investigated; instead, the experimental correlations are included by sampling of systematic errors. It is shown that the model underlying the sampling methodology (using univariate normal distributions for random and systematic errors) implies a multivariate Gaussian for the experimental points (i.e., the conventional model). It is also shown that the likelihood estimates obtained through sampling of systematic errors approach the likelihood obtained with matrix inversion as the sample size for the systematic errors grows large. In the practical cases studied, however, the estimates for the likelihood weights converge impractically slowly with the sample size, compared to matrix inversion, and the computational time is estimated to be greater than for matrix inversion also in cases with more experimental points. Hence, the sampling of systematic errors has little potential to compete with matrix inversion in cases where the latter is applicable.
      Nevertheless, the underlying model and the likelihood estimates can be easier to interpret intuitively than the conventional model and the likelihood function involving the inverted covariance matrix. Therefore, this work can both have pedagogical value and help motivate the conventional assumption of a multivariate Gaussian for experimental data. The sampling of systematic errors could also be used in cases where the experimental uncertainties are not Gaussian, and for other purposes than computing the likelihood, e.g., to produce random experimental data sets for a more direct use in ND evaluation. (A schematic comparison of the two likelihood computations follows this entry.)
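The comparison at the heart of this paper can be sketched as follows, assuming a single fully correlated systematic error per data set. Part (a) is the conventional multivariate-Gaussian likelihood with matrix inversion; part (b) estimates the same likelihood by sampling the systematic error, under which the points become conditionally independent. All numbers are illustrative.

    import numpy as np

    rng = np.random.default_rng(1)

    # One data set: y = truth + random errors + common systematic error
    # s ~ N(0, tau^2).
    y_exp = np.array([2.00, 1.90, 1.75, 1.60])
    d_rand = np.array([0.05, 0.05, 0.06, 0.06])
    tau = 0.08
    t = np.array([2.05, 1.88, 1.80, 1.55])   # curve from one random ND file
    r = y_exp - t

    # (a) Conventional likelihood: V = diag(d_rand^2) + tau^2 * J.
    V = np.diag(d_rand ** 2) + tau ** 2 * np.ones((4, 4))
    sign, logdet = np.linalg.slogdet(2.0 * np.pi * V)
    L_exact = np.exp(-0.5 * (r @ np.linalg.solve(V, r) + logdet))

    # (b) Sampling the systematic error: conditional on s the likelihood
    # factorizes over the points; average over the samples of s.
    s = rng.normal(0.0, tau, 100_000)
    cond = np.exp(-0.5 * ((r[None, :] - s[:, None]) / d_rand) ** 2)
    cond /= d_rand * np.sqrt(2.0 * np.pi)
    L_mc = cond.prod(axis=1).mean()

    print(L_exact, L_mc)   # (b) converges to (a) as the sample size grows

This mirrors the paper's two theoretical results in miniature: the sampling model implies the multivariate Gaussian, and the sampled estimate converges to the matrix-inversion likelihood, albeit slowly.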
7.
  • Helgesson, Petter, 1986-, et al. (author)
  • Towards Transparent, Reproducible And Justified Nuclear Data Uncertainty Propagation For LWR Applications
  • 2015
  • Conference paper (other academic/artistic), abstract:
    • Any calculated quantity is practically meaningless without estimates of the uncertainty of the obtained results, not least when it comes to, e.g., safety parameters in a nuclear reactor. One of the sources of uncertainty in reactor physics computations or simulations is the uncertainty of the so-called nuclear data, i.e., cross sections, angular distributions, fission yields, etc. The currently dominating method for propagating nuclear data uncertainties (using covariance data and sensitivity analysis) suffers from several limitations, not least in how the covariance data is produced - the production relies to a large extent on the personal judgment of nuclear data evaluators, leading to results which are difficult to reproduce from fundamental principles. Further, such a method assumes linearity, it in practice limits both input and output to be modeled as Gaussian distributions, and the covariance data in the established nuclear data libraries is incomplete.
      “Total Monte Carlo” (TMC) is a nuclear data uncertainty propagation method based on random sampling of nuclear reaction model parameters which aims to resolve these issues. The method has been applied to various applications, ranging from pin cells and criticality safety benchmarks to full core neutronics as well as models including thermo-hydraulics and transients. However, TMC has been subject to some critique since the distributions of the nuclear model parameters, and hence of the nuclear data, have not been deduced from really rigorous statistical theory. This presentation briefly discusses the ongoing work on how to use experimental data to approach justified results from TMC, including the effects of correlations between experimental data points and the assessment of such correlations. In this study, the random nuclear data libraries are provided with likelihood weights based on their agreement with the experimental data, as a means to implement Bayes' theorem.
      Further, it is presented how TMC is applied to an MCNP-6 model of shielding fuel assemblies (SFA) at Ringhals 3 and 4. Since the damage from the fast neutron flux may limit the lifetimes of these reactors, parts of the fuel adjacent to the pressure vessel are replaced by steel (the SFA) to protect the vessel, in particular the four points along the belt-line weld which have been exposed to the largest fluence over time. The 56Fe data uncertainties are considered, and the estimated relative uncertainty at a quarter of the pressure vessel is viewed in Figure 1 (right) as well as the flux pattern itself (left). The uncertainty in the flux reduction at a selected sensitive point is 2.5 ± 0.2 % (one standard deviation). Applying the likelihood weights does not have much impact for this case, which could indicate that the prior distribution for the 56Fe data is too “narrow” (the used libraries are not really intended to describe a prior distribution), and that the true uncertainty is substantially greater. Another explanation could be that the dominating source of uncertainty is the high-energy resonances, which are treated inefficiently by such weights.
      In either case, the efforts to approach justified, transparent, reproducible and highly automatized nuclear data uncertainties shall continue. On top of using libraries that are intended to describe prior distributions and treating the resonance region appropriately, the experimental correlations should be better motivated and the treatment of outliers shall be improved. Finally, it is probably necessary to use experimental data in a more direct sense where a lot of experimental data is available, since the nuclear models are imperfect. (A schematic sketch of the weighted uncertainty estimate follows this entry.)
      [Figure 1: The high-energy neutron flux at the reactor pressure vessel in the SFA model, and the corresponding propagated 56Fe data uncertainty.]
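A Python sketch of the weighted uncertainty estimate and of a diagnostic for the weight degeneracy discussed in these abstracts (only a few files receiving large weights). The outputs and log-likelihoods below are random stand-ins, not the SFA results.

    import numpy as np

    rng = np.random.default_rng(2)

    # Hypothetical TMC output: flux reduction from 300 random 56Fe files,
    # plus stand-in log-likelihood weights from experimental data.
    flux_red = 0.80 + 0.02 * rng.standard_normal(300)
    logw = -0.5 * rng.chisquare(df=5, size=300)
    w = np.exp(logw - logw.max())
    w /= w.sum()

    def weighted_std(x, w):
        m = np.sum(w * x)
        return np.sqrt(np.sum(w * (x - m) ** 2))

    print("prior std:    ", flux_red.std(ddof=1))
    print("posterior std:", weighted_std(flux_red, w))

    # Degeneracy diagnostic: effective sample size. If ESS << n, only a
    # few files carry weight and the posterior is poorly resolved.
    print("effective sample size:", 1.0 / np.sum(w ** 2), "of", len(w))

Comparing the prior and posterior standard deviations is exactly the "does the weighting have impact" check the abstract describes; a small ESS would instead signal the convergence issue noted for entry 2.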
8.
  • Sjöstrand, Henrik, 1978-, et al. (author)
  • Propagation Of Nuclear Data Uncertainties For Fusion Power Measurements
  • 2016
  • Conference paper (peer-reviewed), abstract:
    • Fusion plasmas produce neutrons, and by measuring the neutron emission the fusion power can be inferred. Accurate neutron yield measurements are paramount for the safe and efficient operation of fusion experiments and, eventually, fusion power plants. Neutron measurements are an essential part of the diagnostic system at large fusion machines such as JET and ITER. At JET, a system of activation foils provides the absolute calibration for the neutron yield determination. The activation system uses the property of certain nuclei to emit radiation after being excited by neutron reactions. A sample of suitable nuclei is placed in the neutron flux close to the plasma, and after irradiation the induced radiation is measured. Knowing the neutron activation cross section, one can calculate the time-integrated neutron flux at the sample position. To relate the local flux to the total neutron yield, the spatial flux response has to be identified; this describes how the local neutron emission affects the flux at the detector. The required spatial flux response is commonly determined using neutron transport codes, e.g., MCNP.
      Nuclear data is used as input both in the calculation of the spatial flux response and when the flux at the irradiation site is inferred. Consequently, high-quality nuclear data is essential for the proper determination of the neutron yield and fusion power. However, uncertainties due to nuclear data are generally not fully taken into account in today's uncertainty analysis for neutron yield calibrations using activation foils. In this paper, the neutron yield uncertainty due to nuclear data is investigated using the so-called Total Monte Carlo method. The work is performed using a detailed MCNP model of the JET fusion machine. The uncertainties due to the cross sections and angular distributions in JET structural materials, as well as the activation cross sections, are analyzed. It is shown that a significant contribution to the neutron yield uncertainty can come from uncertainties in the nuclear data. (A schematic sketch of the TMC bookkeeping follows this entry.)
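A sketch of the TMC bookkeeping such a study involves: the transport calculation is repeated once per random nuclear data file, and the statistical noise of the transport code is subtracted from the observed spread to isolate the nuclear data component. The variance separation is standard TMC practice; whether this paper applies it is not stated in the abstract, and the transport run below is a stub with invented numbers.

    import numpy as np

    rng = np.random.default_rng(3)

    def run_transport(i):
        """Stub for an MCNP-style run of the JET model with random ND
        file i; returns (neutron yield estimate, statistical sd)."""
        true_yield = 1.0 + 0.010 * rng.standard_normal()  # ND variation
        stat_sd = 0.004                                   # MC precision
        return true_yield + stat_sd * rng.standard_normal(), stat_sd

    results = np.array([run_transport(i) for i in range(400)])
    yields, stat_sd = results[:, 0], results[:, 1]

    # Observed spread = nuclear data effect + statistical noise:
    # sd_ND^2 ~ sd_obs^2 - <sd_stat^2>.
    var_nd = yields.var(ddof=1) - np.mean(stat_sd ** 2)
    print("ND-induced relative sd: %.4f" % np.sqrt(max(var_nd, 0.0)))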