SwePub
Search the SwePub database


Search results for "WFRF:(Dimitri Rochman)"

Query: WFRF:(Dimitri Rochman)

  • Results 1-42 of 42
1.
  • Alhassan, Erwin, et al. (author)
  • Combining Total Monte Carlo and Benchmarks for Nuclear Data Uncertainty Propagation on a Lead Fast Reactor's Safety Parameters
  • 2014
  • In: Nuclear Data Sheets. - : Elsevier BV. - 0090-3752 .- 1095-9904. ; 118, p. 542-544
  • Journal article (peer-reviewed), abstract:
    • Analyses are carried out to assess the impact of nuclear data uncertainties on some reactor safety parameters for the European Lead Cooled Training Reactor (ELECTRA) using the Total Monte Carlo method. A large number of Pu-239 random ENDF-format libraries, generated using the TALYS-based system, were processed into ACE format with the NJOY99.336 code and used as input to the Serpent Monte Carlo code to obtain distributions of reactor safety parameters. The distribution in keff obtained was compared with the latest major nuclear data libraries: JEFF-3.1.2, ENDF/B-VII.1 and JENDL-4.0. A method is proposed for the selection of benchmarks for specific applications using the Total Monte Carlo approach, based on a correlation observed between the keff of a given system and the benchmark. Finally, an accept/reject criterion was investigated based on chi-squared values obtained using the Pu-239 Jezebel criticality benchmark. It was observed that nuclear data uncertainties were reduced considerably, from 748 to 443 pcm. (A toy numerical sketch of this accept/reject step follows this entry.)
  •  
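A minimal numerical sketch of the accept/reject step described in this abstract, as a reading aid. It is not code from the paper: the keff samples, the benchmark value, its uncertainty and the chi-squared cutoff are invented stand-ins for the Serpent results and the Jezebel benchmark data.

```python
# Sketch of TMC propagation with a binary accept/reject step. keff_app and
# keff_bench stand in for Serpent results obtained with the same set of
# random Pu-239 libraries; all numbers are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
n_files = 500                                  # random ENDF-format libraries
shared = rng.normal(0.0, 1.0, n_files)         # common nuclear data effect
keff_app = 1.000 + 0.00748 * shared            # application (ELECTRA-like)
keff_bench = 1.000 + 0.006 * shared + rng.normal(0.0, 0.002, n_files)

E_bench, sigma_bench = 1.000, 0.002            # benchmark keff and uncertainty
chi2 = ((keff_bench - E_bench) / sigma_bench) ** 2

accepted = chi2 < 1.0                          # binary accept/reject cutoff
prior_pcm = keff_app.std(ddof=1) * 1e5
posterior_pcm = keff_app[accepted].std(ddof=1) * 1e5
print(f"prior {prior_pcm:.0f} pcm -> posterior {posterior_pcm:.0f} pcm "
      f"({accepted.sum()}/{n_files} files kept)")
```

Because the application and the benchmark respond to the same random files, rejecting files that disagree with the benchmark measurement narrows the application's keff spread, which is the mechanism behind the 748 to 443 pcm reduction reported above.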
2.
  • Alhassan, Erwin, et al. (author)
  • Iterative Bayesian Monte Carlo for nuclear data evaluation
  • 2022
  • In: Nuclear Science and Techniques. - : Springer Nature. - 1001-8042 .- 2210-3147. ; 33:4
  • Journal article (peer-reviewed), abstract:
    • In this work, we explore the use of an iterative Bayesian Monte Carlo (iBMC) method for nuclear data evaluation within a TALYS Evaluated Nuclear Data Library (TENDL) framework. The goal is to probe the model and parameter space of the TALYS code system to find the optimal model and parameter sets that reproduce selected experimental data. The method involves the simultaneous variation of many nuclear reaction models as well as their parameters included in the TALYS code. The 'best' model set with its parameter set was obtained by comparing model calculations with selected experimental data. Three experimental data types were used: (1) reaction cross sections, (2) residual production cross sections, and (3) elastic angular distributions. To improve our fit to experimental data, we update our 'best' parameter set - the file that maximizes the likelihood function - in an iterative fashion. Convergence was determined by monitoring the evolution of the maximum likelihood estimate (MLE) values and was considered reached when the relative change in the MLE over the last two iterations was within 5%. Once the final 'best' file is identified, we attach parameter uncertainties and covariance information to this file by varying model parameters around it. In this way, we ensure that the parameter distributions are centered on our evaluation. The proposed method was applied to the evaluation of p + Co-59 between 1 and 100 MeV. Finally, the adjusted files were compared with experimental data from the EXFOR database as well as with evaluations from the TENDL-2019, JENDL/HE-2007 and JENDL-4.0/HE nuclear data libraries. (A toy version of this iterative loop follows this entry.)
  •  
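A toy version of the iterative loop described in this abstract, as a reading aid rather than the authors' implementation: the linear model stands in for a TALYS calculation, and the data, prior width, batch size and narrowing factor are all invented.

```python
# Toy iterative Bayesian Monte Carlo: sample parameters, score each "file"
# by its likelihood against experimental data, re-center on the best file,
# and stop when the maximum log-likelihood changes by less than 5%.
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(1.0, 100.0, 20)                 # e.g., energies in MeV
data = 2.0 * x + rng.normal(0.0, 5.0, x.size)   # synthetic "experiment"
sigma = 5.0                                     # experimental uncertainty

def log_likelihood(theta):
    """Gaussian log-likelihood of the data given the model y = theta * x."""
    return -0.5 * float(np.sum(((data - theta * x) / sigma) ** 2))

center, width, mle_old = 1.0, 0.5, None         # prior guess for theta
for iteration in range(1, 21):
    thetas = rng.normal(center, width, 200)     # one batch of "random files"
    logL = np.array([log_likelihood(t) for t in thetas])
    best = int(np.argmax(logL))
    center, mle = thetas[best], logL[best]      # the 'best' file so far
    if mle_old is not None and abs((mle - mle_old) / mle_old) < 0.05:
        break                                   # 5% convergence criterion
    mle_old = mle
    width *= 0.8                                # narrow the parameter search
print(f"stopped at iteration {iteration}, best parameter = {center:.3f}")
```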
3.
  • Alhassan, Erwin (author)
  • Nuclear data uncertainty propagation for a lead-cooled fast reactor: Combining TMC with criticality benchmarks for improved accuracy
  • 2014
  • Licentiate thesis (other academic/artistic), abstract:
    • For the successful deployment of advanced nuclear systems and for the optimization of current reactor designs, high-quality and accurate nuclear data are required. Before nuclear data can be used in applications, they are first evaluated, benchmarked against integral experiments and then converted into formats usable for applications. In the past, the evaluation process was usually based on differential experimental data, complemented with nuclear model calculations. This trend is fast changing because of the increase in computational power and tremendous improvements in nuclear reaction theories over the last decade. Since these model codes are not perfect, they are usually validated against a large set of experimental data. However, since these experiments are themselves not exact, the quantities calculated by model codes, such as cross sections and angular distributions, contain uncertainties, a major source being the input parameters to these model codes. Since nuclear data are used in reactor transport codes as input for simulations, the output of transport codes ultimately contains uncertainties due to these data. Quantifying these uncertainties is therefore important for reactor safety assessment and for deciding where additional efforts could be taken to reduce these uncertainties further. Until recently, these uncertainties were mostly propagated using generalized perturbation theory. With the increase in computational power, however, more exact methods based on Monte Carlo are now possible. At the Nuclear Research and Consultancy Group (NRG), Petten, the Netherlands, a new method called 'Total Monte Carlo' (TMC) has been developed for nuclear data evaluation and uncertainty propagation. An advantage of this approach is that it eliminates the use of covariances and the assumption of linearity that is made in the perturbation approach. In this work, we have applied the TMC methodology to assess the impact of nuclear data uncertainties on reactor macroscopic parameters of the European Lead Cooled Training Reactor (ELECTRA), which has been proposed within the Swedish GEN-IV initiative. As part of the work, the uncertainties of the plutonium isotopes and americium within the fuel, of the lead isotopes within the coolant, and of some structural materials of importance have been investigated at beginning of life. For the actinides, large uncertainties in the k-eff were observed due to Pu-238, Pu-239 and Pu-240 nuclear data, while for the lead coolant the uncertainty in the k-eff was large for all lead isotopes except Pb-204, with a significant contribution coming from Pb-208; the dominant contributions came from uncertainties in the Pb-208 resonance parameters. Also, before the final product of an evaluation is released, evaluated data are tested against a large set of integral benchmark experiments. Since these benchmarks differ in geometry, type, material composition and neutron spectrum, their selection for specific applications is normally tedious and not straightforward. As a further objective of this thesis, methodologies for benchmark selection based on the TMC method have been developed, and applied to nuclear data uncertainty reduction using integral benchmarks. From the results obtained, it was observed that by including criticality benchmark information through a binary accept/reject method, a 40% and 20% reduction in the nuclear data uncertainty in the k-eff was achieved for Pu-239 and Pu-240, respectively, for ELECTRA.
  •  
4.
  • Alhassan, Erwin, 1984- (author)
  • Nuclear data uncertainty quantification and data assimilation for a lead-cooled fast reactor : Using integral experiments for improved accuracy
  • 2015
  • Doctoral thesis (other academic/artistic), abstract:
    • For the successful deployment of advanced nuclear systems and the optimization of current reactor designs, high-quality nuclear data are required. Before nuclear data can be used in applications, they must first be evaluated, tested and validated against a set of integral experiments, and then converted into formats usable for applications. In the past, the evaluation process was usually based on differential experimental data, complemented with nuclear model calculations. This trend is fast changing due to the increase in computational power and tremendous improvements in nuclear reaction models over the last decade. Since these models have uncertain inputs, they are normally calibrated using experimental data. However, these experiments are themselves not exact. Therefore, the quantities calculated by model codes, such as cross sections and angular distributions, contain uncertainties. Since nuclear data are used in reactor transport codes as input for simulations, the output of transport codes contains uncertainties due to these data as well. Quantifying these uncertainties is important for setting safety margins, for providing confidence in the interpretation of results, and for deciding where additional efforts are needed to reduce these uncertainties. Also, regulatory bodies are now moving away from conservative evaluations to best-estimate calculations that are accompanied by uncertainty evaluations. In this work, the Total Monte Carlo (TMC) method was applied to study the impact of nuclear data uncertainties, from basic physics to macroscopic reactor parameters, for the European Lead Cooled Training Reactor (ELECTRA). As part of the work, nuclear data uncertainties of actinides in the fuel, lead isotopes within the coolant, and some structural materials have been investigated. In the case of the lead coolant, it was observed that the uncertainties in the keff and the coolant void worth (except in the case of 204Pb) were large, with the most significant contribution coming from 208Pb. New 208Pb and 206Pb random nuclear data libraries with realistic central values have been produced as part of this work. Also, a correlation-based sensitivity method was used to determine correlations between model parameters and cross sections for different isotopes and energy groups. Furthermore, an accept/reject method and a method of assigning file weights based on the likelihood function are proposed for uncertainty reduction using criticality benchmark experiments within the TMC method. It was observed from the study that a significant reduction in nuclear data uncertainty was obtained for some isotopes for ELECTRA after incorporating integral benchmark information. As a further objective of this thesis, a method for selecting benchmarks for code validation for specific reactor applications was developed and applied to the ELECTRA reactor. Finally, a method for combining differential experiments and integral benchmark data for nuclear data adjustment is proposed and applied for the adjustment of neutron-induced 208Pb nuclear data in the fast energy region.
  •  
5.
  • Alhassan, Erwin, et al. (author)
  • Reducing A Priori 239Pu Nuclear Data Uncertainty In The Keff Using A Set Of Criticality Benchmarks With Different Nuclear Data Libraries
  • 2015
  • Conference paper (other academic/artistic), abstract:
    • In the Total Monte Carlo (TMC) method [1] developed at the Nuclear Research and Consultancy Group for nuclear data uncertainty propagation, model calculations are compared with differential experimental data and a specific a priori uncertainty is assigned to each model parameter. By varying the model parameters all together within the model parameter uncertainties, a full covariance matrix is obtained, with its off-diagonal elements if desired [1]. In this way, differential experimental data serve as a constraint for the model parameters used in the TALYS nuclear reactions code for the production of random nuclear data files. These files are processed into usable formats and used in transport codes for reactor calculations and for uncertainty propagation to reactor macroscopic parameters of interest. Even though differential experimental data together with their uncertainties are included (implicitly) in the production of these random nuclear data files in the TMC method, wide spreads in parameter distributions have been observed, leading to large uncertainties in reactor parameters for some nuclides for the European Lead-Cooled Training Reactor [2]. Due to safety concerns and the development of GEN-IV reactors with their challenging technological goals, the present uncertainties should be reduced significantly if the benefits from advances in modelling and simulations are to be utilized fully [3]. In Ref. [4], a binary accept/reject approach and a more rigorous method of assigning file weights based on the likelihood function were proposed and presented for reducing nuclear data uncertainties using a set of integral benchmarks obtained from the International Handbook of Evaluated Criticality Safety Benchmark Experiments (ICSBEP). These methods depend on the reference nuclear data library used, the combined benchmark uncertainty, and the relevance of each benchmark for reducing nuclear data uncertainties for a particular reactor system. Since each nuclear data library normally comes with its own nominal values and covariance matrices, reactor calculations and uncertainties computed with these libraries differ from library to library. In this work, we apply the binary accept/reject approach and the method of assigning file weights based on the likelihood function for reducing a priori 239Pu nuclear data uncertainties for the European Lead Cooled Training Reactor (ELECTRA) using a set of criticality benchmarks. Prior and posterior uncertainties computed for ELECTRA using ENDF/B-VII.1, JEFF-3.2 and JENDL-4.0 are compared after including experimental information from over 10 benchmarks. [1] A.J. Koning and D. Rochman, Modern Nuclear Data Evaluation with the TALYS Code System, Nuclear Data Sheets 113 (2012) 2841-2934. [2] E. Alhassan, H. Sjöstrand, P. Helgesson, A.J. Koning, M. Österlund, S. Pomp, D. Rochman, Uncertainty and correlation analysis of lead nuclear data on reactor parameters for the European Lead Cooled Training Reactor (ELECTRA), Annals of Nuclear Energy 75 (2015) 26-37. [3] G. Palmiotti, M. Salvatores, G. Aliberti, H. Hiruta, R. McKnight, P. Oblozinsky, W. Yang, A global approach to the physics validation of simulation codes for future nuclear systems, Annals of Nuclear Energy 36 (3) (2009) 355-361. [4] E. Alhassan, H. Sjöstrand, J. Duan, P. Helgesson, S. Pomp, M. Österlund, D. Rochman, A.J. Koning, Selecting benchmarks for reactor calculations, in Proc. PHYSOR 2014 - The Role of Reactor Physics toward a Sustainable Future, Kyoto, Japan, Sep. 28 - Oct. 3 (2014).
  •  
6.
  • Alhassan, Erwin, et al. (author)
  • Selecting benchmarks for reactor calculations
  • 2014
  • In: PHYSOR 2014 - The Role of Reactor Physics toward a Sustainable Future.
  • Conference paper (peer-reviewed), abstract:
    • Criticality, reactor physics, fusion and shielding benchmarks are expected to play important roles in GEN-IV design, safety analysis and the validation of the analytical tools used to design these reactors. For existing reactor technology, benchmarks are used to validate computer codes and test nuclear data libraries. However, the selection of these benchmarks is usually done by visual inspection, which depends on the expertise and experience of the user, thereby introducing a user bias into the process. In this paper we present a method for the selection of these benchmarks for reactor applications based on Total Monte Carlo (TMC). Similarities between an application case and one or several benchmarks are quantified using the correlation coefficient. Based on the method, we also propose an approach for reducing nuclear data uncertainty using integral benchmark experiments as an additional constraint on nuclear reaction models: a binary accept/reject criterion. Finally, the method was applied to a full Lead Fast Reactor core and a set of criticality benchmarks. (A small numerical illustration of the correlation-based selection follows this entry.)
  •  
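A small illustration of the correlation-based selection described in this entry (see also entry 7). The benchmark names and all numbers are invented; in the paper, the keff samples would come from transport calculations using the same random TALYS files.

```python
# Rank candidate benchmarks by the correlation between their keff and the
# application's keff over a common set of random nuclear data files.
import numpy as np

rng = np.random.default_rng(2)
n_files = 300
shared = rng.normal(0.0, 1.0, n_files)          # common nuclear data effect

keff_app = 1.000 + 0.005 * shared + rng.normal(0.0, 0.001, n_files)
benchmarks = {
    "pu-fast-like": 1.000 + 0.004 * shared + rng.normal(0.0, 0.002, n_files),
    "unrelated":    1.000 + rng.normal(0.0, 0.004, n_files),
}

# a benchmark that shares the application's nuclear data sensitivities
# shows a high correlation and would be selected
for name, keff_bench in benchmarks.items():
    r = np.corrcoef(keff_app, keff_bench)[0, 1]
    print(f"{name:12s} correlation with application keff: {r:+.2f}")
```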
7.
  • Alhassan, Erwin, et al. (author)
  • Selecting benchmarks for reactor simulations : an application to a Lead Fast Reactor
  • 2016
  • In: Annals of Nuclear Energy. - : Elsevier BV. - 0306-4549 .- 1873-2100. ; 96, p. 158-169
  • Journal article (peer-reviewed), abstract:
    • For several decades, reactor design has been supported by computer codes for the investigation of reactor behavior under both steady state and transient conditions. The use of computer codes to simulate reactor behavior enables the investigation of various safety scenarios, saving time and cost. In recent times there has been an increase in the development of in-house (local) codes by various research groups for the preliminary design of specific or targeted nuclear reactor applications. As these codes evolve and improve, they must be validated and calibrated against experimental benchmark data. Given the large number of benchmarks available, selecting benchmarks for reactor calculations and for the validation of simulation codes for specific or target applications can be rather tedious and difficult. In the past, a traditional approach based on expert judgement, using information provided in various handbooks, has been used for the selection of these benchmarks. This approach has been criticized because it introduces a user bias into the selection process. This paper presents a method for selecting benchmarks for specific reactor applications based on the Total Monte Carlo (TMC) method. First, nuclear model parameters are randomly sampled within a given probability distribution and a large set of random nuclear data files is produced using the TALYS code system. These files are processed and used to analyze a target reactor system and a set of criticality benchmarks. Similarity between the target reactor system and one or several benchmarks is quantified using a similarity index. The method has been applied to the European Lead Cooled Training Reactor (ELECTRA) and a set of plutonium- and lead-sensitive criticality benchmarks using the effective multiplication factor (keff). From the study, strong similarities were observed in the keff between ELECTRA and some plutonium- and lead-sensitive criticality benchmarks. Also, for validation purposes, simulation results for a list of selected criticality benchmarks, simulated with the MCNPX and SERPENT codes using different nuclear data libraries, have been compared with experimentally measured benchmark keff values.
  •  
8.
  • Alhassan, Erwin, et al. (author)
  • Uncertainty analysis of Lead cross sections on reactor safety for ELECTRA
  • 2016
  • In: SNA + MC 2013 - Joint International Conference on Supercomputing in Nuclear Applications + Monte Carlo. - Les Ulis, France : EDP Sciences.
  • Conference paper (peer-reviewed), abstract:
    • The Total Monte Carlo (TMC) method was used in this study to assess the impact of Pb-206, 207 and 208 nuclear data uncertainties on the k-eff, beta-eff, coolant temperature coefficient and coolant void worth for the ELECTRA reactor. Relatively large uncertainties were observed in the k-eff and the coolant void worth for all the isotopes, with a significant contribution coming from Pb-208 nuclear data. The large Pb-208 nuclear data uncertainty was further investigated by studying the impact of partial channels on the k-eff and beta-eff. Various sections of the ENDF file were varied randomly: elastic scattering (n,el), inelastic scattering (n,inl), neutron capture (n,gamma), (n,2n), the resonance parameters and the angular distribution; distributions in k-eff and beta-eff were obtained. The dominant contributions to the uncertainty in the k-eff from Pb-208 came from uncertainties in the resonance parameters; however, the elastic scattering cross section and the angular distribution also had a significant impact. The impact of nuclear data uncertainties on the beta-eff was observed to be small.
  •  
9.
  • Alhassan, Erwin, et al. (author)
  • Uncertainty and correlation analysis of lead nuclear data on reactor parameters for the European Lead Cooled Training Reactor
  • 2015
  • In: Annals of Nuclear Energy. - : Elsevier BV. - 0306-4549 .- 1873-2100. ; 75, p. 26-37
  • Journal article (peer-reviewed), abstract:
    • The Total Monte Carlo (TMC) method was used in this study to assess the impact of Pb-204, 206, 207, 208 nuclear data uncertainties on reactor safety parameters for the ELECTRA reactor. Relatively large uncertainties were observed in the k-eff and the coolant void worth (CVW) for all isotopes except Pb-204, with a significant contribution coming from Pb-208 nuclear data; the dominant effect came from uncertainties in the resonance parameters; however, the elastic scattering cross section and the angular distributions also had a significant impact. It was also observed that the k-eff distributions for Pb-206, 207, 208 deviate from a Gaussian distribution, with tails in the high-k-eff region. An uncertainty of 0.9% in the k-eff and 3.3% in the CVW due to lead nuclear data was obtained. As part of the work, cross section-reactor parameter correlations were also studied using a Monte Carlo sensitivity method. Strong correlations were observed between the k-eff and the (n,el) cross section for all the lead isotopes. The correlation between the (n,inl) and the k-eff was also found to be significant.
  •  
10.
  • Birgersson, Evert, et al. (author)
  • Binary fission-fragment yields from the reaction 251Cf(nth, f)
  • 2005
  • In: Nuclear fission and fission-product spectroscopy. - : American Institute of Physics. - 0735402884 ; , p. 349-352
  • Conference paper (peer-reviewed), abstract:
    • The recoil mass spectrometer LOHENGRIN of the Laue-Langevin Institute, Grenoble, has been used to measure the light fission-fragment mass yield and kinetic energy distributions from the thermal-neutron-induced fission of 252Cf*, using 251Cf as target material.
  •  
11.
  • Birgersson, Evert, et al. (author)
  • Light fission-fragment mass distribution from the reaction 251Cf(nth, f)
  • 2007
  • In: Nuclear Physics A. - Amsterdam : Elsevier. - 0375-9474 .- 1873-1554. ; 791:1-2, p. 1-23
  • Journal article (peer-reviewed), abstract:
    • For mass numbers A = 80 to 124, the recoil mass spectrometer LOHENGRIN of the Institute Laue-Langevin in Grenoble was used to measure, for the first time with high resolution, the light fission-fragment mass yields and kinetic energy distributions from thermal-neutron-induced fission of 252Cf*, using 251Cf as target material. The obtained mean light-fragment mass A_L = (107 ± 2) and the corresponding mean kinetic energy E_k,L = (103 ± 2) MeV are within the expected trend. Emission yields around A = 115 are enhanced and the corresponding mean kinetic energy is higher compared to spontaneous fission of 252Cf. This could be explained by the existence of an additional super-deformed fission mode.
  •  
12.
  • Duan, Junfeng, 1976-, et al. (author)
  • Uncertainty Study of Nuclear Model Parameters for the n+Fe-56 Reactions in the Fast Neutron Region below 20 MeV
  • 2014
  • In: Nuclear Data Sheets. - : Elsevier BV. - 0090-3752 .- 1095-9904. ; 118, p. 346-348
  • Journal article (peer-reviewed), abstract:
    • In this work we study the uncertainty of nuclear model parameters for neutron-induced Fe-56 reactions in the fast neutron region by using the Total Monte Carlo method. We perform a large number of TALYS runs and compare the calculated results with the experimental cross section data to obtain the uncertainties of the model parameters. Based on the derived uncertainties, another 1000 TALYS runs have been performed to create random cross section files. For comparison with the experimental data, we calculate a weighted chi-squared value for each random file as well as for the ENDF/B-VII.1, JEFF-3.1, JENDL-4.0 and CENDL-3.1 data libraries. Furthermore, we investigate the optical model parameter correlations obtained by way of this procedure.
  •  
13.
  • Fischer, Ulrich, et al. (author)
  • Nuclear data activities of the EUROfusion consortium
  • 2020
  • In: ND 2019. - : EDP Sciences. - 9782759891061
  • Conference paper (peer-reviewed), abstract:
    • The activities of the EUROfusion consortium on the development of high-quality nuclear data for fusion applications are presented. The activities, implemented in the Power Plant Physics and Technology (PPPT) programme of EUROfusion, include nuclear data evaluations for neutron- and deuteron-induced reactions and the production of related data libraries which satisfy the needs of nuclear analyses of the DEMO fusion power plant and the IFMIF-DONES neutron source. The activities are closely linked to the JEFF initiative of the NEA Data Bank. The evaluation work is complemented by extensive benchmark, sensitivity and uncertainty analyses to check the performance of the evaluated cross-section data and libraries against integral experiments.
  •  
14.
  • Helgesson, Petter, 1986-, et al. (author)
  • Combining Total Monte Carlo and Unified Monte Carlo : Bayesian nuclear data uncertainty quantification from auto-generated experimental covariances
  • 2017
  • In: Progress in Nuclear Energy (New Series). - : Elsevier. - 0149-1970 .- 1878-4224. ; 96, p. 76-96
  • Journal article (peer-reviewed), abstract:
    • The Total Monte Carlo methodology (TMC) for nuclear data (ND) uncertainty propagation has been subject to some critique because the nuclear reaction parameters are sampled from distributions which have not been rigorously determined from experimental data. In this study, it is thoroughly explained how TMC and Unified Monte Carlo-B (UMC-B) are combined to include experimental data in TMC. Random ND files are weighted with likelihood function values computed by comparing the ND files to experimental data, using experimental covariance matrices generated from information in the experimental database EXFOR and a set of simple rules. A proof that such weights give a consistent implementation of Bayes' theorem is provided. The impact of the weights is mainly studied for a set of integral systems/applications, e.g., a set of shielding fuel assemblies which shall prevent aging of the pressure vessels of the Swedish nuclear reactors Ringhals 3 and 4. In this implementation, the impact of the weighting is small for many of the applications. In some cases, this can be explained by the fact that the distributions used as priors are too narrow to be valid as such. Another possible explanation is that the integral systems are highly sensitive to resonance parameters, which effectively are not treated in this work. In other cases, only a very small number of files get significantly large weights, i.e., the region of interest is poorly resolved. This convergence issue can be due to the parameter distributions used as priors or to model defects, for example. Further, some parameters used in the rules for the EXFOR interpretation have been varied. The observed impact from varying one parameter at a time is not very strong. This can partially be due to the general insensitivity to the weights seen for many applications, and there can be strong interaction effects. The automatic treatment of outliers has a quite large impact, however. To approach more justified ND uncertainties, the rules for the EXFOR interpretation shall be further discussed and developed, in particular the rules for rejecting outliers, and random ND files that are intended to describe prior distributions shall be generated. Further, model defects need to be treated.
  •  
15.
  •  
16.
  • Helgesson, Petter, 1986- (author)
  • Experimental data and Total Monte Carlo : Towards justified, transparent and complete nuclear data uncertainties
  • 2015
  • Licentiate thesis (other academic/artistic), abstract:
    • The applications of nuclear physics are many, one important example being nuclear power, which can help decelerate climate change. In any of these applications, so-called nuclear data (ND, numerical representations of nuclear physics) are used in computations and simulations which are necessary for, e.g., design and maintenance. The ND is not perfectly known - there are uncertainties associated with it - and this thesis concerns the quantification and propagation of these uncertainties. In particular, methods are developed to include experimental data in the Total Monte Carlo methodology (TMC). The work goes in two directions. One is to include the experimental data by giving weights to the different "random files" used in TMC. This methodology is applied to practical cases using an automatic interpretation of an experimental database, including uncertainties and correlations. The weights are shown to give a consistent implementation of Bayes' theorem, such that the obtained uncertainty estimates in theory can be correct, given the experimental data. The practical implementation is more complicated. This is largely due to the interpretation of experimental data, but also because of model defects - the methodology assumes that there are parameter choices such that the model of the physics reproduces reality perfectly. This assumption is not valid, and in future work, model defects should be taken into account. Experimental data should also be used to give feedback to the distribution of the parameters, and not only to provide weights at a later stage. The other direction is based on the simulation of the experimental setup as a means to analyze the experiments in a structured way, and to obtain the full joint distribution of several different data points. In practice, this methodology has been applied to the thermal (n,α), (n,p), (n,γ) and (n,tot) cross sections of 59Ni. For example, the estimated expected value and standard deviation for the (n,α) cross section is (12.87 ± 0.72) b, which can be compared to the established value of (12.3 ± 0.6) b given in the work of Mughabghab. Note that the correlations to the other thermal cross sections, as well as other aspects of the distribution, are also obtained in this work - and this can be important when propagating the uncertainties. The careful evaluation of the thermal cross sections is complemented by a coarse analysis of the cross sections of 59Ni at other energies. The resulting nuclear data is used to study the propagation of the uncertainties through a model describing stainless steel in the spectrum of a thermal reactor. In particular, the helium production is studied. The distribution has a large uncertainty (a standard deviation of (17 ± 3) %), and it shows a strong asymmetry. Much of the uncertainty and its shape can be attributed to the coarser part of the uncertainty analysis, which shall therefore be refined in the future.
  •  
17.
  • Helgesson, Petter, 1986-, et al. (author)
  • Including experimental information in TMC using file weights from automatically generated experimental covariance matrices
  • Other publication (other academic/artistic), abstract:
    • The Total Monte Carlo methodology (TMC) for nuclear data (ND) uncertainty propagation has been subject to some critique because the nuclear reaction parameters are sampled from distributions which have not been rigorously determined from experimental data. In this study, it is thoroughly explained how random ND files are weighted with likelihood function values computed by comparing the ND files to experimental data, using experimental covariance matrices generated from information in the experimental database EXFOR and a set of simple rules. A proof that such weights give a consistent implementation of Bayes' theorem is provided. The impact of the weights is mainly studied for a set of integral systems/applications, e.g., a set of shielding fuel assemblies which shall prevent aging of the pressure vessels of the Swedish nuclear reactors Ringhals 3 and 4. For many applications, the weighting does not have much impact, something which can be explained by too narrow prior distributions. Another possible explanation is that the integral systems are highly sensitive to resonance parameters, which effectively are not treated in this work. In other cases, only a very small number of files get significantly large weights, which can be due to the prior parameter distributions or model defects. Further, some parameters used in the rules for the EXFOR interpretation have been varied. The observed impact from varying one parameter at a time is not very strong. This can partially be due to the general insensitivity to the weights seen for many applications, and there can be strong interaction effects. The automatic treatment of outliers has a quite large impact, however. To approach more justified ND uncertainties, the rules for the EXFOR interpretation shall be further discussed and developed, in particular the rules for rejecting outliers, and random ND files that are intended to describe prior distributions shall be generated. Further, model defects need to be treated.
  •  
18.
  • Helgesson, Petter, 1986-, et al. (author)
  • Incorporating Experimental Information in the Total Monte Carlo Methodology Using File Weights
  • 2015
  • In: Nuclear Data Sheets. - : Elsevier BV. - 0090-3752 .- 1095-9904. ; 123:SI, p. 214-219
  • Journal article (peer-reviewed), abstract:
    • Some criticism has been directed towards the Total Monte Carlo method because experimental information has not been taken into account in a statistically well-founded manner. In this work, a Bayesian calibration method is implemented by assigning weights to the random nuclear data files, and the method is illustratively applied to a few applications. In some of the considered cases, the estimated nuclear data uncertainties are significantly reduced and the central values are significantly shifted. The study suggests that the method can be applied both to estimate uncertainties in a more justified way and in the search for better central values. Some improvements are, however, necessary; for example, the treatment of outliers and cross-experimental correlations should be more rigorous, and random files that are intended to be prior files should be generated. (A minimal numerical sketch of the file-weight idea follows this entry.)
  •  
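A minimal sketch of the file-weight idea, as a reading aid: each random file gets a weight proportional to the likelihood of the experimental data given that file, and weighted moments give the posterior. The prior keff spread and the experimental values are invented, and the paper weights files against richer experimental information than this single observable.

```python
# Bayesian updating by likelihood weights over random nuclear data files.
import numpy as np

rng = np.random.default_rng(3)
n_files = 1000
keff = rng.normal(1.002, 0.007, n_files)        # prior TMC spread (invented)

E, sigma_E = 1.000, 0.003                       # integral experiment (invented)
w = np.exp(-0.5 * ((keff - E) / sigma_E) ** 2)  # likelihood of each file
w /= w.sum()                                    # normalized file weights

post_mean = float(np.sum(w * keff))
post_std = float(np.sqrt(np.sum(w * (keff - post_mean) ** 2)))
print(f"prior:     {keff.mean():.4f} +/- {keff.std(ddof=1):.4f}")
print(f"posterior: {post_mean:.4f} +/- {post_std:.4f}")
```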
19.
  • Helgesson, Petter, 1986-, et al. (author)
  • New 59Ni data including uncertainties and consequences for gas production in steel in LWR spectra
  • 2015
  • Conference paper (other academic/artistic), abstract:
    • With ageing reactor fleets, the importance of estimating material damage parameters in structural materials is increasing. 59Ni is not naturally abundant, but as noted in, e.g., Ref. [1], the two-step reaction 58Ni(n,γ)59Ni(n,α)56Fe gives a very important contribution to the helium production and damage energy in stainless steel in thermal spectra, because of the extraordinarily large thermal (n,α) cross section of 59Ni (for most other nuclides, the (n,α) reaction has a threshold). None of the evaluated data libraries contains uncertainty information for (n,α) and (n,p) for 59Ni at thermal energies and in the resonance region. Therefore, new such data is produced in this work, including random data to be used with the Total Monte Carlo methodology [2] for nuclear data uncertainty propagation. The limited R-matrix format ("LRF = 7") of ENDF-6 is used, with the Reich-Moore approximation ("LRF = 3" is just a subset of Reich-Moore). The neutron and gamma widths are obtained from TARES [2], with uncertainties, and are translated into LRF = 7. The α and proton widths are obtained from the little information available in EXFOR [3] (assuming large uncertainties because of lacking documentation) or from sampling from unresolved resonance parameters from TALYS [2], and they are split into different channels (different excited states of the recoiling nuclide, etc.). Finally, the cross sections are adjusted to match the experiments at thermal energies, with uncertainties. The data is used to estimate the gas production rates for different systems, including the propagated nuclear data uncertainty. Preliminary results for SS304 in a typical thermal spectrum show that including 59Ni at its peak concentration increases the helium production rate by a factor of 4.93 ± 0.28, including a 5.7 ± 0.2 % uncertainty due to the 59Ni data. It is, however, likely that the uncertainty will increase substantially from including the uncertainty of other nuclides and from re-evaluating the experimental thermal cross sections.
  •  
20.
  •  
21.
  • Helgesson, Petter, 1986-, et al. (author)
  • Sampling of systematic errors to estimate likelihood weights in nuclear data uncertainty propagation
  • 2016
  • In: Nuclear Instruments and Methods in Physics Research Section A. - : Elsevier. - 0168-9002 .- 1872-9576. ; 807, p. 137-149
  • Journal article (peer-reviewed), abstract:
    • In methodologies for nuclear data (ND) uncertainty assessment and propagation based on random sampling, likelihood weights can be used to incorporate experimental information into the distributions for the ND. As the included number of correlated experimental points grows large, the computational time for the matrix inversion involved in obtaining the likelihood can become a practical problem. There are also other problems related to the conventional computation of the likelihood, e.g., the assumption that all experimental uncertainties are Gaussian. In this study, a way to estimate the likelihood which avoids matrix inversion is investigated; instead, the experimental correlations are included by sampling of systematic errors. It is shown that the model underlying the sampling methodology (using univariate normal distributions for random and systematic errors) implies a multivariate Gaussian for the experimental points (i.e., the conventional model). It is also shown that the likelihood estimates obtained through sampling of systematic errors approach the likelihood obtained with matrix inversion as the sample size for the systematic errors grows large. In the studied practical cases, the estimates for the likelihood weights converge impractically slowly with the sample size compared to matrix inversion. The computational time is estimated to be greater than for matrix inversion in cases with more experimental points, too. Hence, the sampling of systematic errors has little potential to compete with matrix inversion in cases where the latter is applicable. Nevertheless, the underlying model and the likelihood estimates can be easier to interpret intuitively than the conventional model and the likelihood function involving the inverted covariance matrix. Therefore, this work can both have pedagogical value and be used to help motivate the conventional assumption of a multivariate Gaussian for experimental data. The sampling of systematic errors could also be used in cases where the experimental uncertainties are not Gaussian, and for other purposes than to compute the likelihood, e.g., to produce random experimental data sets for a more direct use in ND evaluation. (A small sketch contrasting the two likelihood routes follows this entry.)
  •  
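A small sketch contrasting the two likelihood routes discussed in this abstract, under the simplest possible assumptions (one fully correlated systematic component, Gaussian errors, synthetic residuals); it requires scipy and is not the paper's code.

```python
# Likelihood of correlated experimental points computed two ways:
# (i) multivariate Gaussian with an inverted covariance matrix, and
# (ii) Monte Carlo average over sampled systematic shifts, after which
#      the points are conditionally independent.
import numpy as np
from scipy.stats import multivariate_normal, norm

rng = np.random.default_rng(4)
n = 10
resid = rng.normal(0.0, 1.0, n)              # model-minus-experiment residuals
sig_rand, sig_sys = 1.0, 0.5                 # random / systematic components

# (i) conventional route: C = diag(random) + fully correlated systematic part
C = np.diag(np.full(n, sig_rand**2)) + np.full((n, n), sig_sys**2)
L_exact = multivariate_normal(mean=np.zeros(n), cov=C).pdf(resid)

# (ii) sampling route: draw the common shift, average independent likelihoods
shifts = rng.normal(0.0, sig_sys, 20000)
L_mc = np.mean([norm.pdf(resid, loc=s, scale=sig_rand).prod() for s in shifts])

print(f"matrix inversion: {L_exact:.3e}   sampling: {L_mc:.3e}")
```

Marginalizing the common shift out of route (ii) reproduces exactly the covariance matrix of route (i), which is the equivalence the paper proves; the slow convergence it reports corresponds to the sample size needed in `shifts`.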
22.
  • Helgesson, Petter, 1986-, et al. (author)
  • Towards Transparent, Reproducible and Justified Nuclear Data Uncertainty Propagation for LWR Applications
  • 2015
  • Conference paper (other academic/artistic), abstract:
    • Any calculated quantity is practically meaningless without estimates of the uncertainty of the obtained results, not least when it comes to, e.g., safety parameters in a nuclear reactor. One of the sources of uncertainty in reactor physics computations or simulations is the uncertainty of the so-called nuclear data, i.e., cross sections, angular distributions, fission yields, etc. The currently dominating method for propagating nuclear data uncertainties (using covariance data and sensitivity analysis) suffers from several limitations, not least in how the covariance data is produced - the production relies to a large extent on the personal judgment of nuclear data evaluators, leading to results which are difficult to reproduce from fundamental principles. Further, such a method assumes linearity, in practice limits both input and output to be modeled as Gaussian distributions, and the covariance data in the established nuclear data libraries is incomplete. "Total Monte Carlo" (TMC) is a nuclear data uncertainty propagation method based on random sampling of nuclear reaction model parameters which aims to resolve these issues. The method has been applied to various applications, ranging from pin cells and criticality safety benchmarks to full core neutronics, as well as models including thermo-hydraulics and transients. However, TMC has been subject to some critique since the distributions of the nuclear model parameters, and hence of the nuclear data, have not been deduced from rigorous statistical theory. This presentation briefly discusses the ongoing work on how to use experimental data to approach justified results from TMC, including the effects of correlations between experimental data points and the assessment of such correlations. In this study, the random nuclear data libraries are provided with likelihood weights based on their agreement with the experimental data, as a means to implement Bayes' theorem. Further, it is presented how TMC is applied to an MCNP-6 model of shielding fuel assemblies (SFA) at Ringhals 3 and 4. Since damage from the fast neutron flux may limit the lifetimes of these reactors, parts of the fuel adjacent to the pressure vessel are replaced by steel (the SFA) to protect the vessel, in particular the four points along the belt-line weld which have been exposed to the largest fluence over time. The 56Fe data uncertainties are considered; the estimated relative uncertainty at a quarter of the pressure vessel is viewed in Figure 1 (right), as well as the flux pattern itself (left). The uncertainty in the flux reduction at a selected sensitive point is 2.5 ± 0.2 % (one standard deviation). Applying the likelihood weights does not have much impact for this case, which could indicate that the prior distribution for the 56Fe data is too "narrow" (the libraries used are not really intended to describe a prior distribution), and that the true uncertainty is substantially greater. Another explanation could be that the dominating source of uncertainty is the high-energy resonances, which are treated inefficiently by such weights. In either case, the efforts to approach justified, transparent, reproducible and highly automatized nuclear data uncertainties shall continue. On top of using libraries that are intended to describe prior distributions and treating the resonance region appropriately, the experimental correlations should be better motivated and the treatment of outliers shall be improved. Finally, it is probably necessary to use experimental data in a more direct sense where a lot of experimental data is available, since the nuclear models are imperfect.
    [Figure 1, not reproduced here: the high-energy neutron flux at the reactor pressure vessel in the SFA model, and the corresponding propagated 56Fe data uncertainty.]
  •  
23.
  • Helgesson, Petter, 1986-, et al. (author)
  • Uncertainty driven nuclear data evaluation including thermal (n,alpha) applied to Ni-59
  • 2017
  • In: Nuclear Data Sheets. - : Elsevier BV. - 0090-3752 .- 1095-9904. ; 145, p. 1-24
  • Journal article (peer-reviewed), abstract:
    • This paper presents a novel approach to the evaluation of nuclear data (ND), combining experimental data for thermal cross sections with resonance parameters and nuclear reaction modeling. The method involves sampling of various uncertain parameters, in particular uncertain components in experimental setups, and provides extensive covariance information, including consistent cross-channel correlations over the whole energy spectrum. The method is developed for, and applied to, Ni-59, but may be used as a whole, or in part, for other nuclides. Ni-59 is particularly interesting since a substantial amount of Ni-59 is produced in thermal nuclear reactors by neutron capture in Ni-58 and since it has a non-threshold (n,α) cross section. Therefore, Ni-59 gives a very important contribution to the helium production in stainless steel in a thermal reactor. However, current evaluated ND libraries contain old information for Ni-59, without any uncertainty information. The work includes a study of thermal cross section experiments and a novel combination of this experimental information, giving the full multivariate distribution of the thermal cross sections. In particular, the thermal (n,α) cross section is found to be (12.7 ± 0.7) b. This is consistent with, but yet different from, current established values. Further, the distribution of thermal cross sections is combined with reported resonance parameters, and with TENDL-2015 data, to provide full random ENDF files; all this is done in a novel way, keeping uncertainties and correlations in mind. The random files are also condensed into one single ENDF file with covariance information, which is now part of a beta version of JEFF-3.3. Finally, the random ENDF files have been processed and used in an MCNP model to study the helium production in stainless steel. The increase in the (n,α) rate due to Ni-59 compared to fresh stainless steel is found to be a factor of 5.2 at a certain time in the reactor vessel, with a relative uncertainty due to the Ni-59 data of 5.4 %.
  •  
24.
  • Helgesson, Petter, 1986-, et al. (author)
  • UO-2 Versus MOX: Propagated Nuclear Data Uncertainty for k-eff, with Burnup
  • 2014
  • In: Nuclear Science and Engineering. - 0029-5639 .- 1943-748X. ; 177:3, p. 321-336
  • Journal article (peer-reviewed), abstract:
    • Precise assessment of propagated nuclear data uncertainties in integral reactor quantities is necessary for the development of new reactors as well as for modified use, e.g., when replacing UO-2 fuel by MOX fuel in conventional thermal reactors. This paper compares UO-2 fuel to two types of MOX fuel with respect to propagated nuclear data uncertainty, primarily in k-eff, by applying the fast Total Monte Carlo method (fast TMC) to a typical PWR pin cell model in Serpent, including burnup. An extensive amount of nuclear data is taken into account, including transport and activation data for 105 isotopes, fission yields for 13 actinides and thermal scattering data for H in H2O. There is indeed a significant difference in propagated nuclear data uncertainty in k-eff; at zero burnup the uncertainty is 0.6% for UO-2 and about 1% for the MOX fuels. The difference decreases with burnup. Uncertainties in the fissile fuel isotopes and in thermal scattering are the most important for the difference, and the reasons for this are understood and explained. This work thus suggests that there can be an important difference between UO-2 and MOX for the determination of uncertainty margins. However, the effects of the simplified model are difficult to overview; uncertainties should be propagated in more complicated models of any considered system. Fast TMC, however, allows for this without adding much computational time.
  •  
25.
  • Jansson, Peter, et al. (author)
  • Blind Benchmark Exercise for Spent Nuclear Fuel Decay Heat
  • 2022
  • In: Nuclear Science and Engineering. - : Informa UK Limited. - 0029-5639 .- 1943-748X. ; 196:9, p. 1125-1145
  • Journal article (peer-reviewed), abstract:
    • The decay heat rates of five spent nuclear fuel assemblies of the pressurized water reactor type were measured by calorimetry at the interim storage facility for spent nuclear fuel in Sweden. Calculations of the decay heat rate of the five assemblies were performed by 20 organizations using different codes and nuclear data libraries, resulting in 31 results for each assembly and spanning most of the current state-of-the-art practice. The calculations were based on a selected subset of information, such as reactor operating history and fuel assembly properties. The relative difference between the measured and average calculated decay heat rate ranged from 0.6% to 3.3% for the five assemblies. The standard deviation of these relative differences ranged from 1.9% to 2.4%.
  •  
26.
  • Leray, Olivier, et al. (author)
  • Fission yield covariances for JEFF : A Bayesian Monte Carlo method
  • 2017
  • In: ND 2016. - Les Ulis : EDP Sciences. - 9782759890200
  • Conference paper (peer-reviewed), abstract:
    • The JEFF library does not contain fission yield covariances, but simply best estimates and uncertainties. This situation is not unique as all libraries are facing this deficiency, firstly due to the lack of a defined format. An alternative approach is to provide a set of random fission yields, themselves reflecting covariance information. In this work, these random files are obtained combining the information from the JEFF library (fission yields and uncertainties) and the theoretical knowledge from the GEF code. Examples of this method are presented for the main actinides together with their impacts on simple burn-up and decay heat calculations.
  •  
27.
  • Oberstedt, Andreas, et al. (author)
  • Energy degrader technique for light-charged particle spectroscopy at LOHENGRIN
  • 2008
  • In: Seminar on Fission VI, Corsendonk Priory, Belgium, September 18-21, 2007. - : World Scientific. - 9789812791061 ; , p. 99-106
  • Conference paper (peer-reviewed), abstract:
    • Although the recoil mass-separator LOHENGRIN at Institute Laue-Langevin was originally designed for the spectrometry of binary fission fragments, it was also used in the past for measuring light-charged particles from ternary fission. However, due to limited electric field settings the energy distribution of the lightest particles was not completely accessible. In this contribution we report on an energy degrader technique that allows the measurement of the entire energy spectra of ternary particles with LOHENGRIN. We demonstrate how the measured particle spectra are distorted by the energy degrader and present results from a Monte Carlo simulation that shows how the original energy distributions are reconstructed. Finally, we apply this procedure to experimental data of ternary particles from the reaction 235U(nth, f).
  •  
28.
  • Oberstedt, Andreas, et al. (author)
  • Energy degrader technique for light-charged particle spectroscopy at LOHENGRIN
  • 2008
  • In: ND-2007 International Conference on Nuclear Data for Science and Technology, Nice, France. - Les Ulis, France : EDP Sciences. - 9782759800902 - 9782759800919 ; , p. 379-382
  • Conference paper (peer-reviewed), abstract:
    • The recoil mass-separator LOHENGRIN at Institute Laue-Langevin was originally designed for the spectrometry of binary fission fragments. Nevertheless, it was also used in the past for measuring light-charged particles from ternary fission. However, due to the electric field settings the energy distribution of the lightest particles was not completely accessible, which made the determination of mean kinetic energies, widths and, hence, emission yields difficult. In this paper we present an energy degrader technique that allows for the measurement of the entire energy spectrum of even the lightest ternary particles with LOHENGRIN.
  •  
29.
  • Oberstedt, Stephan, et al. (author)
  • Light charged particle emission in the reaction 251Cf(nth, f)
  • 2005
  • In: Nuclear Physics A. - : Elsevier BV. - 0375-9474 .- 1873-1554. ; 761:3-4, p. 173-189
  • Journal article (peer-reviewed), abstract:
    • High-resolution measurements of light charged particles (LCP) emitted in thermal-neutron-induced fission of 252Cf* (E = 6.2 MeV) have been performed with the recoil mass-separator LOHENGRIN. For this compound nuclear system, emission yields of LCPs, their mean kinetic energies and widths have been obtained for 8 isotopes with nuclear charges Z ≥ 2. For 13 further isotopes, the emission yields were estimated on the basis of systematics of their kinetic energy distributions. 34Al and 36Si emission has been observed for the first time in thermal-neutron-induced fission.
  •  
30.
  • Rochman, Dimitri, et al. (author)
  • Efficient use of Monte Carlo : Uncertainty Propagation
  • 2014
  • In: Nuclear Science and Engineering. - 0029-5639 .- 1943-748X. ; 177:3, p. 337-349
  • Journal article (peer-reviewed), abstract:
    • A new and faster Total Monte Carlo method for the propagation of nuclear data uncertainties in Monte Carlo nuclear simulations is presented (the fast TMC method). It addresses the main drawback of the original Total Monte Carlo method (TMC), namely the large time multiplication factor required compared to a single calculation. With this new method, Monte Carlo simulations can now be accompanied by uncertainty propagation (other than statistical) with small additional calculation time. The fast TMC method is presented and compared with the TMC and fast GRS methods for criticality and shielding benchmarks and burn-up calculations. Finally, to demonstrate the efficiency of the method, uncertainties on local deposited power in 12.7 million cells are calculated for a full-size reactor core. (A note on the underlying variance separation follows this entry.)
  •  
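A reading note, not part of the abstract: TMC-type methods, including the fast variant presented here and the fast correlation coefficient of entry 35, rely on the observed spread over random files separating into nuclear data and statistical components,

$$\sigma_{\mathrm{obs}}^2 = \sigma_{\mathrm{ND}}^2 + \sigma_{\mathrm{stat}}^2 \quad\Longrightarrow\quad \sigma_{\mathrm{ND}} = \sqrt{\sigma_{\mathrm{obs}}^2 - \sigma_{\mathrm{stat}}^2},$$

so the nuclear data uncertainty can be recovered even when each individual Monte Carlo run is kept short.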
31.
  • Rochman, Dimitri, et al. (author)
  • The TENDL library : Hope, reality and future
  • 2017
  • In: ND 2016 Bruges. - Les Ulis : EDP Sciences. - 9782759890200
  • Conference paper (peer-reviewed), abstract:
    • The TALYS Evaluated Nuclear Data Library (TENDL) has had 8 releases since 2008. Considerable experience has been acquired in the production of such a general-purpose nuclear data library, based on feedback from users, evaluators and processing experts. The backbone of this achievement is simple and robust: completeness, quality and reproducibility. Since TENDL is extensively used in many fields of application, it is necessary to understand its strong points and remaining weaknesses. Seen another way, the essential knowledge is not the TENDL library itself, but rather the underlying method and tools, making the library a side product and focusing the efforts on the evaluation knowledge. The future of this approach is discussed, with the hope of greater success in the near future.
  •  
32.
  •  
33.
  • Siefman, Daniel, et al. (author)
  • Data assimilation of post-irradiation examination data for fission yields from GEF
  • 2020
  • In: EPJ Nuclear Sciences & Technologies. - : EDP Sciences. - 2491-9292. ; 6
  • Journal article (peer-reviewed), abstract:
    • Nuclear data, especially fission yields, create uncertainties in the predicted concentrations of fission products in spent fuel which can exceed engineering target accuracies. Herein, we present a new framework that extends data assimilation methods to burnup simulations by using post-irradiation examination experiments. The adjusted fission yields lowered the bias and reduced the uncertainty of the simulations. Our approach adjusts the model parameters of the code GEF. We compare the BFMC and MOCABA approaches to data assimilation, focusing especially on the effects of the non-normality of GEF's fission yields. In the application that we present, the best data assimilation framework decreased the average bias of the simulations from 26% to 14%. The average relative standard deviation decreased from 21% to 14%. The GEF fission yields after data assimilation agreed better with those in JEFF3.3. For Pu-239 thermal fission, the average relative difference from JEFF3.3 was 16% before data assimilation and after it was 12%. For the standard deviations of the fission yields, GEF's were 100% larger than JEFF3.3's before data assimilation and after were only 4% larger. The inconsistency of the integral data had an important effect on MOCABA, as shown with the Marginal Likelihood Optimization method. When the method was not applied, MOCABA's adjusted fission yields worsened the bias of the simulations by 30%. BFMC showed that it inherently accounted for this inconsistency. Applying Marginal Likelihood Optimization with BFMC gave a 2% lower bias compared to not applying it, but the results were more poorly converged.
  •  
34.
  •  
35.
  • Sjöstrand, Henrik, 1978-, et al. (author)
  • Efficient use of Monte Carlo : The Fast Correlation Coefficient
  • 2018
  • In: EPJ N - Nuclear Sciences and Technologies. - : EDP Sciences. - 2491-9292. ; 4
  • Journal article (peer-reviewed), abstract:
    • Random sampling methods are used for nuclear data (ND) uncertainty propagation, often in combination with the use of Monte Carlo codes (e.g., MCNP). One example is the Total Monte Carlo (TMC) method. The standard way to visualize and interpret ND covariances is by the use of the Pearson correlation coefficient, $\rho = \mathrm{cov}(x,y) / (\sigma_x \sigma_y)$, where $x$ or $y$ can be any parameter dependent on ND. The spread in the output, $\sigma$, has both an ND component, $\sigma_{\mathrm{ND}}$, and a statistical component, $\sigma_{\mathrm{stat}}$. The contribution from $\sigma_{\mathrm{stat}}$ decreases the value of $\rho$, and hence it underestimates the impact of the correlation. One way to address this is to minimize $\sigma_{\mathrm{stat}}$ by using longer simulation run-times. Alternatively, as proposed here, a so-called fast correlation coefficient is used: $$\rho_{\mathrm{fast}} = \frac{\mathrm{cov}(x,y) - \mathrm{cov}(x_{\mathrm{stat}}, y_{\mathrm{stat}})}{\sqrt{\sigma_x^2 - \sigma_{x,\mathrm{stat}}^2}\,\sqrt{\sigma_y^2 - \sigma_{y,\mathrm{stat}}^2}}.$$ In many cases, $\mathrm{cov}(x_{\mathrm{stat}}, y_{\mathrm{stat}})$ can be assumed to be zero. The paper explores three examples: a synthetic data study, correlations in the NRG High Flux Reactor spectrum, and the correlations between integral criticality experiments. It is concluded that the use of $\rho$ underestimates the correlation. The impact of the use of $\rho_{\mathrm{fast}}$ is quantified, and the implications of the results are discussed. (A small numerical sketch follows this entry.)
  •  
36.
  • Sjöstrand, Henrik, 1978-, et al. (författare)
  • Integral adjustment of nuclear data libraries : finding unrecognized systematic uncertainties and correlations
  • 2019
  • Ingår i: Conference program & Abstract book, s. 212-212
  • Konferensbidrag (övrigt vetenskapligt/konstnärligt)abstract
    • Integral adjustment of nuclear data libraries is a powerful option for reducing uncertainties and obtaining better predictive power. Databases with integral experiments, such as ICSBEP, contain a large amount of data. When adjusting nuclear data using these integral experiments, it is important not only to include the reported experimental uncertainties but also to account for possible unreported experimental uncertainties, correlations between experiments, and calculation uncertainties. Unreported uncertainties and correlations can be identified and possibly quantified using marginal likelihood optimization (MLO), which has previously been tested for integral adjustment. In this paper, a method for including more information from the full likelihood space is pursued. It is shown that MLO can be an effective tool in addressing unknown uncertainties and correlations for a selected number of integral experiments. Results in terms of the obtained parameter estimates as well as the posterior uncertainties and correlations are reported, and the results are validated against an independent set of integral experiments. The findings are important for large-scale ND evaluations that rely heavily on automation, such as TENDL, but also for any integral adjustment where complete knowledge of all uncertainty components is out of reach; the authors believe that this is always the case.
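As an illustration (not from the paper): one simple MLO variant infers a single unreported, fully correlated systematic uncertainty shared by a set of integral experiments by maximizing a Gaussian marginal likelihood. The model form and the bounded one-dimensional search are assumptions for this sketch.

    import numpy as np
    from scipy.optimize import minimize_scalar

    # r: C-E residuals of the integral experiments; V_rep: reported covariance.
    # Candidate model: V(s) = V_rep + s^2 * (matrix of ones), i.e. one fully
    # correlated unreported systematic component of size s.
    def neg_log_ml(s, r, V_rep):
        V = V_rep + s**2 * np.ones_like(V_rep)
        _, logdet = np.linalg.slogdet(V)
        return 0.5 * (logdet + r @ np.linalg.solve(V, r))

    def fit_unreported_systematic(r, V_rep, s_max=0.05):
        res = minimize_scalar(neg_log_ml, bounds=(0.0, s_max),
                              args=(r, V_rep), method='bounded')
        return res.x  # MLO estimate of the unreported sigma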
  •  
37.
  • Sjöstrand, Henrik, et al. (författare)
  • Propagation of nuclear data uncertainties for ELECTRA burn-up calculations
  • 2014
  • Ingår i: Nuclear Data Sheets. - : Elsevier BV. - 0090-3752 .- 1095-9904. ; 118, s. 527-530
  • Tidskriftsartikel (refereegranskat)abstract
    • The European Lead-Cooled Training Reactor (ELECTRA) has been proposed as a training reactor for fast systems within the Swedish nuclear program. It is a low-power fast reactor cooled by pure liquid lead. In this work, we propagate the uncertainties in 239Pu transport data to uncertainties in the fuel inventory of ELECTRA during the reactor life using the Total Monte Carlo (TMC) approach. Within the TENDL project, the nuclear model input parameters were randomized within their uncertainties and 740 random 239Pu nuclear data libraries were generated. These libraries are used as inputs to reactor codes, in our case SERPENT, to perform uncertainty analysis of the nuclear reactor inventory during burn-up. The uncertainty in the inventory determines the uncertainties in the long-term radiotoxicity, the decay heat, the evolution of reactivity parameters, the gas pressure and the volatile fission-product content. In this work, a methodology called fast TMC is utilized, which reduces the overall calculation time. The uncertainties in the long-term radiotoxicity, decay heat, gas pressure and volatile fission products were found to be insignificant. However, the uncertainties of some minor actinides were observed to be rather large, and their impact on multiple recycling should therefore be investigated further. It was also found that criticality benchmarks can be used to reduce inventory uncertainties due to nuclear data. Further studies are needed to include fission yield uncertainties, more isotopes, and a larger set of benchmarks.
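As an illustration (not from the paper): the TMC workflow is a plain loop over the random libraries, with the spread of the outputs taken as the ND uncertainty. The burn-up run is replaced here by a hypothetical stand-in so the sketch is self-contained.

    import numpy as np

    rng = np.random.default_rng(1)

    def run_burnup():
        # Stand-in for one SERPENT burn-up run with a random 239Pu library,
        # returning a single inventory observable. Illustrative numbers only.
        nd_effect = 0.02 * rng.standard_normal()   # ND-driven variation
        mc_noise = 0.005 * rng.standard_normal()   # statistical component
        return 1.0 + nd_effect + mc_noise

    results = np.array([run_burnup() for _ in range(740)])
    sigma_obs = results.std(ddof=1)  # total spread: ND + statistical parts
    # Fast TMC separates out the statistical part; see the sketch at entry 40.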
  •  
38.
  • Sjöstrand, Henrik, 1978-, et al. (författare)
  • Propagation Of Nuclear Data Uncertainties For Fusion Power Measurements
  • 2016
  • Konferensbidrag (refereegranskat)abstract
    • Fusion plasmas produce neutrons, and by measuring the neutron emission the fusion power can be inferred. Accurate neutron yield measurements are paramount for the safe and efficient operation of fusion experiments and, eventually, fusion power plants. Neutron measurements are an essential part of the diagnostic system at large fusion machines such as JET and ITER. At JET, a system of activation foils provides the absolute calibration for the neutron yield determination. The activation system uses the property of certain nuclei to emit radiation after being excited by neutron reactions. A sample of suitable nuclei is placed in the neutron flux close to the plasma, and after irradiation the induced radiation is measured. Knowing the neutron activation cross section, one can calculate the time-integrated neutron flux at the sample position. To relate the local flux to the total neutron yield, the spatial flux response has to be identified; this describes how the local neutron emission affects the flux at the detector. The required spatial flux response is commonly determined using neutron transport codes, e.g., MCNP. Nuclear data is used as input both in the calculation of the spatial flux response and when the flux at the irradiation site is inferred. Consequently, high-quality nuclear data is essential for the proper determination of the neutron yield and fusion power. However, uncertainties due to nuclear data are generally not fully taken into account in today's uncertainty analysis for neutron yield calibrations using activation foils. In this paper, the neutron yield uncertainty due to nuclear data is investigated using the so-called Total Monte Carlo method. The work is performed using a detailed MCNP model of the JET fusion machine. The uncertainties due to the cross sections and angular distributions in JET structural materials, as well as the activation cross sections, are analyzed. It is shown that a significant contribution to the neutron yield uncertainty can come from uncertainties in the nuclear data.
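As an illustration (not from the paper): the activation relation described above, solved for the flux at the foil, with purely hypothetical foil and counting numbers.

    import numpy as np

    # Constant flux phi on a foil of N atoms with activation cross section
    # sigma gives, after irradiation time t_irr and cooling time t_cool,
    # the activity A = N*sigma*phi*(1 - exp(-lam*t_irr))*exp(-lam*t_cool).
    N = 2.0e21               # atoms in the foil (hypothetical)
    sigma = 1.1e-25          # activation cross section, cm^2 (110 mb)
    lam = np.log(2) / 600.0  # decay constant, 1/s (10 min half-life)
    t_irr, t_cool = 30.0, 120.0   # s
    A_meas = 5.0e4           # measured activity, Bq

    phi = A_meas / (N * sigma * (1 - np.exp(-lam * t_irr)) * np.exp(-lam * t_cool))
    print(f"flux at the foil: {phi:.2e} n/cm^2/s")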
  •  
39.
  • Sjöstrand, Henrik, 1978-, et al. (författare)
  • Propagation of nuclear data uncertainties for fusion power measurements
  • 2017
  • Ingår i: ND 2016. - Les Ulis : EDP Sciences. - 9782759890200
  • Konferensbidrag (refereegranskat)abstract
    • Neutron measurements using neutron activation systems are an essential part of the diagnostic system at large fusion machines such as JET and ITER. Nuclear data is used to infer the neutron yield. Consequently, high-quality nuclear data is essential for the proper determination of the neutron yield and fusion power. However, uncertainties due to nuclear data are not fully taken into account in uncertainty analysis for neutron yield calibrations using activation foils. This paper investigates the neutron yield uncertainty due to nuclear data using the so-called Total Monte Carlo Method. The work is performed using a detailed MCNP model of the JET fusion machine; the uncertainties due to the cross-sections and angular distributions in JET structural materials, as well as the activation cross-sections in the activation foils, are analysed. It is found that a significant contribution to the neutron yield uncertainty can come from uncertainties in the nuclear data.
  •  
40.
  • Sjöstrand, Henrik, et al. (författare)
  • Total Monte Carlo evaluation for dose calculations
  • 2014
  • Ingår i: Radiation Protection Dosimetry. - : Oxford University Press (OUP). - 0144-8420 .- 1742-3406. ; 161:1-4, s. 312-315
  • Tidskriftsartikel (refereegranskat)abstract
    • Total Monte Carlo (TMC) is a method to propagate nuclear data (ND) uncertainties in transport codes by using a large set of ND files which covers the ND uncertainty. The transport code is run multiple times, each time with a unique ND file, and the result is a distribution of the investigated parameter, e.g. dose, where the width of the distribution is interpreted as the uncertainty due to ND. Until recently, this was computationally intensive, but with a new development, fast TMC, more applications are accessible. The aim of this work is to test the fast TMC methodology on a dosimetry application and to propagate the 56Fe uncertainties to the predicted dose outside a proposed 14-MeV neutron facility. The uncertainty was found to be 4.2 %. This can be considered small; however, it cannot be generalised to all dosimetry applications, and ND uncertainties should therefore routinely be included in most dosimetry modelling.
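As an illustration (not from the paper): in fast TMC the statistical spread of the individual runs is subtracted in quadrature from the observed spread. The numbers below are hypothetical, chosen only to land near the 4.2 % quoted above.

    import numpy as np

    sigma_obs = 4.4    # observed spread of the dose over the random files, %
    sigma_stat = 1.3   # average statistical error of the individual runs, %
    sigma_nd = np.sqrt(sigma_obs**2 - sigma_stat**2)
    print(f"ND component of the dose uncertainty: {sigma_nd:.1f} %")  # ~4.2 %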
  •  
41.
  •  
42.
  • Solans, Virginie, et al. (författare)
  • Optimisation of used nuclear fuel canister loading using a neural network and genetic algorithm
  • 2021
  • Ingår i: Neural Computing & Applications. - : Springer Nature. - 0941-0643 .- 1433-3058. ; 33:23, s. 16627-16639
  • Tidskriftsartikel (refereegranskat)abstract
    • This paper presents an approach for the optimisation of geological disposal canister loadings, combining high-resolution simulations of used nuclear fuel characteristics with an artificial neural network and a genetic algorithm. The used nuclear fuels (produced in an open fuel cycle without reprocessing) considered in this work come from a Swiss Pressurised Water Reactor, taking into account their realistic lifetime in the reactor core and cooling periods, up to their disposal in the final geological repository. The case of 212 representative used nuclear fuel assemblies is analysed, assuming a loading of 4 fuel assemblies per canister and optimising two safety parameters: the fuel decay heat (DH) and the canister effective neutron multiplication factor keff. In the present approach, a neural network is trained as a surrogate model to evaluate the keff value, substituting the time-consuming Monte Carlo transport and depletion code SERPENT for specific canister loading calculations. A genetic algorithm is then developed to optimise the canister keff and DH values simultaneously; the keff computed during the optimisation uses the previously developed artificial neural network. The optimisation algorithm allows (1) minimising the number of canisters, given assumed limits for both the DH and keff quantities, and (2) minimising the DH and keff differences among canisters. This study represents a proof of principle of the neural network and genetic algorithm capabilities and will be applied in the future to a larger number of cases.
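As an illustration (not from the paper): a minimal genetic-algorithm sketch for the loading problem, assuming a permutation encoding in which consecutive groups of four assemblies form one canister and the fitness is the spread in canister decay heat. The decay-heat values are random stand-ins, and the keff surrogate constraint is omitted for brevity.

    import numpy as np

    rng = np.random.default_rng(0)
    dh = rng.uniform(200.0, 800.0, size=212)  # hypothetical decay heats, W

    def fitness(perm):
        # Cut the permutation into 53 canisters of 4 assemblies each and
        # score the spread of the canister decay heats (lower is better).
        return dh[perm].reshape(-1, 4).sum(axis=1).std()

    def mutate(perm):
        child = perm.copy()
        i, j = rng.choice(len(perm), size=2, replace=False)
        child[i], child[j] = child[j], child[i]  # swap two assemblies
        return child

    pop = [rng.permutation(212) for _ in range(50)]
    for _ in range(500):
        pop.sort(key=fitness)
        pop = pop[:25] + [mutate(p) for p in pop[:25]]  # elitism + mutation
    print(f"best canister-DH spread: {fitness(min(pop, key=fitness)):.1f} W")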
  •  