SwePub
Search the SwePub database


Search: WFRF:(Helgesson Petter)

  • Result 1-50 of 57
1.
  • Al-Adili, Ali, et al. (author)
  • Fission Activities of the Nuclear Reactions Group in Uppsala
  • 2015
  • In: Scientific Workshop on Nuclear Fission Dynamics and the Emission of Prompt Neutrons and Gamma Rays, THEORY-3. - : Elsevier BV. ; , s. 145-149
  • Conference paper (peer-reviewed)
    • This paper highlights some of the main activities related to fission of the nuclear reactions group at Uppsala University. The group is involved, for instance, in fission yield experiments at the IGISOL facility, cross-section measurements at the NFS facility, and fission dynamics studies at the IRMM JRC-EC. Moreover, work is ongoing on the Total Monte Carlo (TMC) methodology and on including the GEF fission code into the TALYS nuclear reaction code. Selected results from these projects are discussed.
  •  
2.
  • Alhassan, Erwin, et al. (author)
  • Benchmark selection methodology for reactor calculations and nuclear data uncertainty reduction
  • 2015
  • In: Annals of Nuclear Energy. - 0306-4549 .- 1873-2100.
  • Journal article (peer-reviewed)
    • Criticality, reactor physics and shielding benchmarks are expected to play important roles in GEN-IV design, safety analysis and in the validation of analytical tools used to design these reactors. For existing reactor technology, benchmarks are used for validating computer codes and for testing nuclear data libraries. Given the large number of benchmarks available, selecting benchmarks for specific applications can be rather tedious and difficult. Until recently, the selection process has usually been based on expert judgement, which depends on the expertise and experience of the user and thereby introduces a user bias into the process. This approach is also not suitable for the Total Monte Carlo methodology, which places strong emphasis on automation, reproducibility and quality assurance. In this paper a method for selecting benchmarks for reactor calculations and for nuclear data uncertainty reduction based on the Total Monte Carlo (TMC) method is presented. For reactor code validation purposes, similarities between a real reactor application and one or several benchmarks are quantified using a similarity index, while the Pearson correlation coefficient is used to select benchmarks for nuclear data uncertainty reduction. Also, a correlation-based sensitivity method is used to identify the sensitivity of benchmarks to particular nuclear reactions. Based on the benchmark selection methodology, two approaches are presented for reducing nuclear data uncertainty using integral benchmark experiments as an additional constraint in the TMC method: a binary accept/reject criterion and a method of assigning file weights using the likelihood function. Finally, the methods are applied to a full lead-cooled fast reactor core and a set of criticality benchmarks. Significant reductions in Pu-239 and Pb-208 nuclear data uncertainties were obtained after implementing the two methods with some benchmarks.
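The correlation-based selection described in this abstract can be sketched numerically. The following is a minimal illustration with synthetic keff samples (all numbers and benchmark names are invented, not taken from the paper): the random nuclear data files induce a spread in keff for both the application and each benchmark, and the Pearson coefficient over that shared spread flags which benchmarks respond to the same nuclear data.

```python
import numpy as np

rng = np.random.default_rng(0)
n_files = 500  # number of random nuclear data files (TMC samples), invented

# Hypothetical k_eff responses: each random ND file yields one k_eff value
# for the target application and for each candidate benchmark.
shared = rng.normal(0.0, 0.004, n_files)           # common nuclear-data effect
keff_app = 1.000 + shared + rng.normal(0, 0.001, n_files)
benchmarks = {
    "benchmark-A": 0.999 + shared + rng.normal(0, 0.001, n_files),  # sensitive to the same data
    "benchmark-B": 0.998 + rng.normal(0, 0.004, n_files),           # driven by other data
}

# A strong Pearson correlation indicates that the benchmark responds to the
# same nuclear data as the application, making it useful for uncertainty reduction.
for name, keff_b in benchmarks.items():
    r = np.corrcoef(keff_app, keff_b)[0, 1]
    print(f"{name}: r = {r:+.2f}")
```

With this construction, benchmark-A correlates strongly with the application while benchmark-B does not, which is the selection signal the method exploits.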
  •  
3.
  • Alhassan, Erwin, et al. (author)
  • On the use of integral experiments for uncertainty reduction of reactor macroscopic parameters within the TMC methodology
  • 2016
  • In: Progress in nuclear energy (New series). - : Elsevier BV. - 0149-1970 .- 1878-4224. ; 88, s. 43-52
  • Journal article (peer-reviewed)
    • The current nuclear data uncertainties observed in reactor safety parameters for some nuclides raise safety concerns, especially with respect to the design of GEN-IV reactors, and must therefore be reduced significantly. In this work, uncertainty reduction using criticality benchmark experiments within the Total Monte Carlo methodology is presented. Random nuclear data libraries generated are processed and used to analyze a set of criticality benchmarks. Since the calculated results differ for each random nuclear data library used, an algorithm was used to select (or assign weights to) the libraries which give a good description of experimental data for the analyses of the benchmarks. The selected or weighted libraries were then used to analyze the ELECTRA reactor. By using random nuclear data libraries constrained with only differential experimental data as our prior, the uncertainties observed were further reduced by constraining the files with integral experimental data to obtain a posteriori uncertainties on the keff. Two approaches are presented and compared: a binary accept/reject criterion and a method of assigning file weights based on the likelihood function. Significant reductions in Pu-239 and Pb-208 nuclear data uncertainties in the keff were observed after implementing the two methods with some criticality benchmarks for the ELECTRA reactor. (C) 2015 Elsevier Ltd. All rights reserved.
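The two approaches compared in this abstract, a binary accept/reject and likelihood-based file weights, can be sketched as follows. All numbers are synthetic stand-ins for real benchmark and reactor results:

```python
import numpy as np

rng = np.random.default_rng(1)
n_files = 2000  # random nuclear data libraries (invented number)

# Prior: k_eff calculated for one benchmark with each random library
keff_calc = rng.normal(1.0000, 0.0060, n_files)
keff_exp, sigma_exp = 1.0015, 0.0020  # benchmark value and combined uncertainty

# Approach 1: binary accept/reject -- keep libraries within 2 sigma of experiment
accepted = np.abs(keff_calc - keff_exp) < 2 * sigma_exp

# Approach 2: continuous file weights from a Gaussian likelihood
weights = np.exp(-0.5 * ((keff_calc - keff_exp) / sigma_exp) ** 2)
weights /= weights.sum()

# Weighted (posterior) mean and standard deviation of k_eff
post_mean = np.sum(weights * keff_calc)
post_std = np.sqrt(np.sum(weights * (keff_calc - post_mean) ** 2))
print(f"prior std {keff_calc.std():.4f} -> posterior std {post_std:.4f}")
```

The posterior spread is narrower than the prior, which is the uncertainty reduction the paper quantifies for Pu-239 and Pb-208.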
  •  
4.
  • Alhassan, Erwin, et al. (author)
  • Reducing A Priori 239Pu Nuclear Data Uncertainty In The Keff Using A Set Of Criticality Benchmarks With Different Nuclear Data Libraries
  • 2015
  • Conference paper (other academic/artistic)
    • In the Total Monte Carlo (TMC) method [1] developed at the Nuclear Research and Consultancy Group for nuclear data uncertainty propagation, model calculations are compared with differential experimental data and a specific a priori uncertainty is assigned to each model parameter. By varying all the model parameters together within their uncertainties, a full covariance matrix is obtained, with its off-diagonal elements if desired [1]. In this way, differential experimental data serve as a constraint for the model parameters used in the TALYS nuclear reactions code for the production of random nuclear data files. These files are processed into usable formats and used in transport codes for reactor calculations and for uncertainty propagation to reactor macroscopic parameters of interest. Even though differential experimental data together with their uncertainties are included (implicitly) in the production of these random nuclear data files in the TMC method, wide spreads in parameter distributions have been observed, leading to large uncertainties in reactor parameters for some nuclides for the European Lead Cooled Training Reactor [2]. Due to safety concerns and the development of GEN-IV reactors with their challenging technological goals, the present uncertainties should be reduced significantly if the benefits from advances in modelling and simulations are to be utilized fully [3]. In Ref. [4], a binary accept/reject approach and a more rigorous method of assigning file weights based on the likelihood function were proposed and presented for reducing nuclear data uncertainties using a set of integral benchmarks obtained from the International Handbook of Evaluated Criticality Safety Benchmark Experiments (ICSBEP). These methods are dependent on the reference nuclear data library used, the combined benchmark uncertainty and the relevance of each benchmark for reducing nuclear data uncertainties for a particular reactor system.
Since each nuclear data library normally comes with its own nominal values and covariance matrices, reactor calculations and uncertainties computed with these libraries differ from library to library. In this work, we apply the binary accept/reject approach and the method of assigning file weights based on the likelihood function for reducing a priori 239Pu nuclear data uncertainties for the European Lead Cooled Training Reactor (ELECTRA) using a set of criticality benchmarks. Prior and posterior uncertainties computed for ELECTRA using ENDF/B-VII.1, JEFF-3.2 and JENDL-4.0 are compared after including experimental information from over 10 benchmarks.
[1] A.J. Koning and D. Rochman, Modern Nuclear Data Evaluation with the TALYS Code System. Nuclear Data Sheets 113 (2012) 2841-2934.
[2] E. Alhassan, H. Sjöstrand, P. Helgesson, A.J. Koning, M. Österlund, S. Pomp, D. Rochman, Uncertainty and correlation analysis of lead nuclear data on reactor parameters for the European Lead Cooled Training Reactor (ELECTRA). Annals of Nuclear Energy 75 (2015) 26-37.
[3] G. Palmiotti, M. Salvatores, G. Aliberti, H. Hiruta, R. McKnight, P. Oblozinsky, W. Yang, A global approach to the physics validation of simulation codes for future nuclear systems. Annals of Nuclear Energy 36 (3) (2009) 355-361.
[4] E. Alhassan, H. Sjöstrand, J. Duan, P. Helgesson, S. Pomp, M. Österlund, D. Rochman, A.J. Koning, Selecting benchmarks for reactor calculations. In: Proc. PHYSOR 2014 - The Role of Reactor Physics toward a Sustainable Future, Kyoto, Japan, Sep. 28 - Oct. 3 (2014).
  •  
5.
  • Alhassan, Erwin, et al. (author)
  • Selecting benchmarks for reactor calculations
  • 2014
  • In: PHYSOR 2014 - The Role of Reactor Physics toward a Sustainable Future.
  • Conference paper (peer-reviewed)
    • Criticality, reactor physics, fusion and shielding benchmarks are expected to play important roles in GEN-IV design, safety analysis and in the validation of analytical tools used to design these reactors. For existing reactor technology, benchmarks are used to validate computer codes and test nuclear data libraries. However, the selection of these benchmarks is usually done by visual inspection, which depends on the expertise and experience of the user and thereby introduces a user bias into the process. In this paper we present a method for the selection of these benchmarks for reactor applications based on Total Monte Carlo (TMC). Similarities between an application case and one or several benchmarks are quantified using the correlation coefficient. Based on the method, we also propose an approach for reducing nuclear data uncertainty using integral benchmark experiments as an additional constraint on nuclear reaction models: a binary accept/reject criterion. Finally, the method was applied to a full Lead Fast Reactor core and a set of criticality benchmarks.
  •  
6.
  • Alhassan, Erwin, et al. (author)
  • Selecting benchmarks for reactor simulations : an application to a Lead Fast Reactor
  • 2016
  • In: Annals of Nuclear Energy. - : Elsevier BV. - 0306-4549 .- 1873-2100. ; 96, s. 158-169
  • Journal article (peer-reviewed)
    • For several decades, reactor design has been supported by computer codes for the investigation of reactor behavior under both steady-state and transient conditions. The use of computer codes to simulate reactor behavior enables the investigation of various safety scenarios, saving time and cost. In recent times there has been an increase in the development of in-house (local) codes by various research groups for the preliminary design of specific or targeted nuclear reactor applications. These codes must be validated and calibrated against experimental benchmark data as they evolve and improve. Given the large number of benchmarks available, selecting benchmarks for reactor calculations and for the validation of simulation codes for specific or target applications can be rather tedious and difficult. In the past, the traditional approach based on expert judgement, using information provided in various handbooks, has been used for the selection of these benchmarks. This approach has been criticized because it introduces a user bias into the selection process. This paper presents a method for selecting benchmarks for reactor calculations for specific reactor applications based on the Total Monte Carlo (TMC) method. First, nuclear model parameters are randomly sampled within a given probability distribution and a large set of random nuclear data files is produced using the TALYS code system. These files are processed and used to analyze a target reactor system and a set of criticality benchmarks. Similarity between the target reactor system and one or several benchmarks is quantified using a similarity index. The method has been applied to the European Lead Cooled Training Reactor (ELECTRA) and a set of plutonium- and lead-sensitive criticality benchmarks using the effective multiplication factor (keff). From the study, strong similarities were observed in the keff between ELECTRA and some plutonium- and lead-sensitive criticality benchmarks. Also, for validation purposes, simulation results for a list of selected criticality benchmarks simulated with the MCNPX and SERPENT codes using different nuclear data libraries have been compared with experimentally measured benchmark keff values.
  •  
7.
  • Alhassan, Erwin, et al. (author)
  • Uncertainty and correlation analysis of lead nuclear data on reactor parameters for the European Lead Cooled Training Reactor
  • 2015
  • In: Annals of Nuclear Energy. - : Elsevier BV. - 0306-4549 .- 1873-2100. ; 75, s. 26-37
  • Journal article (peer-reviewed)
    • The Total Monte Carlo (TMC) method was used in this study to assess the impact of Pb-204, 206, 207, 208 nuclear data uncertainties on reactor safety parameters for the ELECTRA reactor. Relatively large uncertainties were observed in the k-eff and the coolant void worth (CVW) for all isotopes except for Pb-204, with a significant contribution coming from Pb-208 nuclear data; the dominant effect came from uncertainties in the resonance parameters; however, the elastic scattering cross section and the angular distributions also had a significant impact. It was also observed that the k-eff distribution for Pb-206, 207, 208 deviates from a Gaussian distribution with tails in the high k-eff region. An uncertainty of 0.9% on the k-eff and 3.3% on the CVW due to lead nuclear data was obtained. As part of the work, cross section-reactor parameter correlations were also studied using a Monte Carlo sensitivity method. Strong correlations were observed between the k-eff and the (n,el) cross section for all the lead isotopes. The correlation between the (n,inl) and the k-eff was also found to be significant.
  •  
8.
  •  
9.
  • Amin, R., et al. (author)
  • Does country of resettlement influence the risk of suicide in refugees? : A case-control study in Sweden and Norway
  • 2021
  • In: Epidemiology and Psychiatric Sciences. - : Cambridge University Press. - 2045-7960 .- 2045-7979. ; 30
  • Journal article (peer-reviewed)
    • Aims: Little is known regarding how the risk of suicide in refugees relates to their host country. Specifically, previous literature lacks evidence on the extent to which inter-country differences in structural factors between host countries may explain the association between refugee status and subsequent suicide. We aimed to investigate (1) the risk of suicide in refugees resident in Sweden and Norway, in general and according to their sex, age, region/country of birth and duration of residence, compared with the risk of suicide in the respective majority host population; and (2) whether factors related to socio-demographics, labour market marginalisation (LMM) and healthcare use might explain the risk of suicide in refugees differently in the two host countries. Methods: Using a nested case-control design, each case who died by suicide between the ages of 18 and 64 years during 1998-2018 (17,572 and 9,443 cases in Sweden and Norway, respectively) was matched with up to 20 controls from the general population, by sex and age. Multivariate-adjusted conditional logistic regression models yielding adjusted odds ratios (aORs) with 95% confidence intervals (95% CI) were used to test the association between refugee status and suicide. Separate models were controlled for factors related to socio-demographics, previous LMM and healthcare use. Analyses were also stratified by sex and age group, by refugees' region/country of birth and by duration of residence in the host country. Results: The aORs for suicide in refugees in Sweden and Norway were 0.5 (95% CI 0.5-0.6) and 0.3 (95% CI 0.3-0.4), compared with Swedish-born and Norwegian-born individuals, respectively. Stratification by region/country of birth showed similar, statistically significant lower odds for most refugee groups in both host countries, except for refugees from Eritrea (aOR 1.0, 95% CI 0.7-1.6) in Sweden. The risk of suicide did not vary much across refugee groups by duration of residence, sex or age, except for younger refugees aged 18-24, who did not show a statistically significant difference in suicide risk compared with their respective host-country peers. Factors related to socio-demographics, LMM and healthcare use had only a marginal influence on the studied associations in both countries. Conclusions: Refugees in Sweden and Norway had similar suicide-mortality advantages compared with the Swedish-born and Norwegian-born populations, respectively. These findings may suggest that resiliency and culture/religion-bound attitudes towards suicidal behaviour in refugees could be more influential for their suicide risk after resettlement than other post-migration environmental and structural factors in the host country.
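The matched nested case-control design used in this study (up to 20 controls per case, matched on sex and age) can be illustrated with a toy register. All sizes and variables below are synthetic; the real study draws on national registers:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100_000  # toy population size (invented)

# Synthetic population register
sex = rng.integers(0, 2, n)
birth_year = rng.integers(1950, 1990, n)
is_case = np.zeros(n, dtype=bool)
case_ids = rng.choice(n, 200, replace=False)  # 200 invented cases
is_case[case_ids] = True

# For each case, draw up to 20 controls from the general population,
# matched on sex and birth year
matched = {}
for cid in case_ids:
    pool = np.flatnonzero(~is_case
                          & (sex == sex[cid])
                          & (birth_year == birth_year[cid]))
    k = min(20, pool.size)
    matched[int(cid)] = rng.choice(pool, k, replace=False)

print(len(matched), "matched case-control sets")
```

The matched sets would then feed a conditional logistic regression stratified on the case-control set, as described in the abstract.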
  •  
10.
  • Amin, Ridwanul, et al. (author)
  • Healthcare use before and after suicide attempt in refugees and Swedish-born individuals
  • 2021
  • In: Social Psychiatry and Psychiatric Epidemiology. - : Springer. - 0933-7954 .- 1433-9285. ; 56:2, s. 325-338
  • Journal article (peer-reviewed)
    • PURPOSE: There is a lack of research on whether healthcare use before and after a suicide attempt differs between refugees and the host population. We aimed to investigate whether the patterns of specialised (inpatient and specialised outpatient) psychiatric and somatic healthcare use, 3 years before and after a suicide attempt, differ between refugees and Swedish-born individuals in Sweden. Additionally, we aimed to explore whether specialised healthcare use differed among refugee suicide attempters according to their sex, age, education or receipt of disability pension. METHODS: All refugees and Swedish-born individuals, 20-64 years of age, treated for suicide attempt in specialised healthcare during 2004-2013 (n = 85,771 suicide attempters, of which 4.5% refugees) were followed 3 years before and after (Y - 3 to Y + 3) the index suicide attempt (t0) regarding their specialised healthcare use. Annual adjusted prevalence with 95% confidence intervals (CIs) of specialised healthcare use was assessed by generalized estimating equations (GEE). Additionally, in analyses among the refugees, GEE models were stratified by sex, age, educational level and disability pension. RESULTS: Compared to Swedish-born individuals, refugees had lower prevalence rates of psychiatric and somatic healthcare use during the observation period. During Y + 1, 25% (95% CI 23-28%) of refugees and 30% (95% CI 29-30%) of Swedish-born individuals used inpatient psychiatric healthcare. Among refugees, higher specialised healthcare use was observed in disability pension recipients than in non-recipients. CONCLUSION: Refugees used less specialised healthcare, before and after a suicide attempt, relative to the Swedish-born. Strengthened cultural competence among healthcare professionals and better health literacy among refugees may improve healthcare access for refugees.
  •  
11.
  • Amin, Ridwanul, et al. (author)
  • Suicide attempt and suicide in refugees in Sweden - a nationwide population-based cohort study
  • 2021
  • In: Psychological Medicine. - : Cambridge University Press. - 0033-2917 .- 1469-8978. ; 51:2, s. 254-263
  • Journal article (peer-reviewed)
    • BACKGROUND: Despite a reported high rate of mental disorders in refugees, scientific knowledge on their risk of suicide attempt and suicide is scarce. We aimed to investigate (1) the risk of suicide attempt and suicide in refugees in Sweden, according to their country of birth, compared with Swedish-born individuals, and (2) to what extent time-period effects, socio-demographics, labour market marginalisation (LMM) and morbidity explain these associations. METHODS: Three cohorts comprising the entire population of Sweden, 16-64 years of age at 31 December 1999, 2004 and 2009 (around 5 million each, of which 3.3-5.0% refugees), were followed for 4 years each through register linkage. Additionally, the 2004 cohort was followed for 9 years, to allow analyses by refugees' country of birth. Crude and multivariate hazard ratios (HRs) with 95% confidence intervals (CIs) were computed. The multivariate models were adjusted for socio-demographic, LMM and morbidity factors. RESULTS: In multivariate analyses, HRs in refugees, compared with Swedish-born individuals, ranged from 0.38 to 1.25 for suicide attempt and from 0.16 to 1.20 for suicide, according to country of birth. Results were either non-significant or showed lower risks for refugees. The exception was refugees from Iran (HR 1.25; 95% CI 1.14-1.41) for suicide attempt. The risk of suicide attempt in refugees compared with the Swedish-born diminished slightly across time periods. CONCLUSIONS: Refugees seem to be protected from suicide attempt and suicide relative to the Swedish-born, which calls for more studies to disentangle underlying risk and protective factors.
  •  
12.
  • Amin, Ridwanul, et al. (author)
  • Trajectories of antidepressant use before and after a suicide attempt among refugees and Swedish-born individuals : a cohort study
  • 2021
  • In: International Journal for Equity in Health. - : BioMed Central. - 1475-9276. ; 20:1
  • Journal article (peer-reviewed)
    • BACKGROUND: To identify key information regarding potential treatment differences between refugees and the host population, we aimed to investigate patterns (trajectories) of antidepressant use during the 3 years before and after a suicide attempt in refugees, compared with Swedish-born individuals. Associations of the identified trajectory groups with individual characteristics were also investigated. METHODS: All 20-64-year-old refugees and Swedish-born individuals receiving specialised healthcare for suicide attempt during 2009-2015 (n = 62,442, 5.6% refugees) were followed 3 years before and after the index attempt. Trajectories of annual defined daily doses (DDDs) of antidepressants were analysed using group-based trajectory models. Associations between the identified trajectory groups and different covariates were estimated by chi-square tests and multinomial logistic regression. RESULTS: Among the four identified trajectory groups, antidepressant use was constantly low (≤15 DDDs) for 64.9% of refugees. A 'low increasing' group comprised 5.9% of refugees (60-260 annual DDDs before and 510-685 DDDs after the index attempt). Two other trajectory groups had constant use at medium (110-190 DDDs) and high (630-765 DDDs) levels (22.5% and 6.6% of refugees, respectively). Method of suicide attempt and any use of psychotropic drugs during the year before the index attempt discriminated between refugees' trajectory groups. The patterns and composition of the trajectory groups, and the covariates discriminating between them, were fairly similar among refugees and Swedish-born individuals, with the exception of previous hypnotic and sedative drug use being more important in refugees. CONCLUSIONS: Despite previous reports of refugees being undertreated with regard to psychiatric healthcare, no major differences in antidepressant treatment between refugee and Swedish-born suicide attempters were found.
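Group-based trajectory modelling fits latent-class growth curves to longitudinal data. As a rough stand-in, the sketch below clusters synthetic annual dose trajectories with a minimal k-means, illustrating how 'constant low', 'constant high' and 'increasing' groups can be recovered; the numbers are invented and k-means is a simplification of the actual method:

```python
import numpy as np

rng = np.random.default_rng(4)
years = 6  # 3 years before and 3 after the index attempt

# Synthetic annual antidepressant use (defined daily doses) per person
low  = rng.poisson(10,  (300, years)).astype(float)   # constant low use
high = rng.poisson(700, (150, years)).astype(float)   # constant high use
incr = np.hstack([rng.poisson(100, (150, 3)),
                  rng.poisson(600, (150, 3))]).astype(float)  # increasing use
ddd = np.vstack([low, high, incr])

# Minimal k-means, initialised at the smallest, median and largest total dose
order = np.argsort(ddd.sum(1))
centers = ddd[order[[0, len(ddd) // 2, -1]]]
for _ in range(50):
    dist = ((ddd[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    labels = dist.argmin(1)
    centers = np.array([ddd[labels == g].mean(0) for g in range(3)])

print(np.sort(centers.mean(1)).round(0))  # mean annual dose per group
```

A proper group-based trajectory model would additionally estimate group membership probabilities and polynomial growth shapes, which the clustering above does not attempt.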
  •  
13.
  •  
14.
  • Helgesson, Magnus, et al. (author)
  • Labour-market marginalisation after mental disorders among young natives and immigrants living in Sweden
  • 2017
  • In: BMC Public Health. - : BioMed Central. - 1471-2458. ; 17:1
  • Journal article (peer-reviewed)
    • BACKGROUND: The aim was to investigate the associations between mental disorders and three different measures of labour-market marginalisation, and differences between native Swedes and immigrants. METHODS: The study comprised 1,753,544 individuals, aged 20-35 years and resident in Sweden in 2004. They were followed 2005-2011 with regard to disability pension, sickness absence (≥90 days) and unemployment (≥180 days). Immigrants were born in Western countries (Nordic countries, EU, Europe outside EU or North America/Oceania) or in non-Western countries (Africa, Asia or South America). Mental disorders were grouped into seven subgroups based on a record of in- or specialised outpatient health care 2001-2004. Hazard ratios (HRs) with 95% confidence intervals (CIs) were computed by Cox regression models with both fixed and time-dependent covariates and competing risks. We also performed stratified analyses with regard to labour-market attachment. RESULTS: Individuals with mental disorders had a seven times higher risk of disability pension, a two times higher risk of sickness absence, and a 20% higher risk of unemployment than individuals without mental disorders. Individuals with personality disorders and schizophrenia/non-affective psychoses had the highest risk estimates for disability pension and long-term sickness absence, while the risk estimates of long-term unemployment were similar among all subgroups of mental disorders. Among persons with mental disorders, native Swedes had higher risk estimates for disability pension (HR 6.6; 95% CI 6.4-6.8) than Western immigrants (4.8; 4.4-5.2) and non-Western immigrants (4.8; 4.4-5.1), slightly higher risk estimates for sickness absence (2.1; 2.1-2.2) than Western (1.9; 1.8-2.1) and non-Western (1.9; 1.7-2.0) immigrants, but lower risk estimates for unemployment (1.4; 1.3-1.4) than Western (1.8; 1.7-1.9) and non-Western immigrants (2.0; 1.9-2.1). There were similar risk estimates among sub-regions within both Western and non-Western countries. Stratification by labour-market attachment showed that the risk estimates for immigrants were lower the more distant individuals were from gainful employment. CONCLUSIONS: Mental disorders were associated with all three measures of labour-market marginalisation, strongest with subsequent disability pension. Native Swedes had higher risk estimates for both disability pension and sickness absence, but lower risk estimates for unemployment, than immigrants. Previous labour-market attachment explained a great part of the association between immigrant status and subsequent labour-market marginalisation.
  •  
15.
  • Helgesson, Magnus, et al. (author)
  • Trajectories of work disability and unemployment among young adults with common mental disorders
  • 2018
  • In: BMC Public Health. - : BioMed Central. - 1471-2458. ; 18
  • Journal article (peer-reviewed)
    • BACKGROUND: Labour-market marginalisation (LMM) and common mental disorders (CMDs) are serious societal problems. The aims were to describe trajectories of LMM (both work disability and unemployment) among young adults with and without CMDs, and to elucidate the characteristics associated with these trajectories. METHODS: The study was based on Swedish registers and consisted of all individuals 19-30 years with an incident diagnosis of a CMD in the year 2007 (n = 7245), and a matched comparison group of individuals without mental disorders during the years 2004-07 (n = 7245). Group-based trajectory models were used to describe patterns of LMM both before and after the incident diagnosis of a CMD. Multinomial logistic regressions investigated the associations between sociodemographic and medical covariates and the identified trajectories. RESULTS: Twenty-six percent (n = 1859) of young adults with CMDs followed trajectories of increasing or constant high levels of work disability, and 32% (n = 2302) followed trajectories of increasing or constant high unemployment. In the comparison group, just 9% (n = 665) followed increasing or constant high levels of work disability and 21% (n = 1528) followed trajectories of increasing or constant high levels of unemployment. A lower share of young adults with CMDs followed trajectories of constant low levels of work disability (n = 4546, 63%) or unemployment (n = 2745, 38%), compared to the level of constant low work disability (n = 6158, 85%) and unemployment (n = 3385, 50%) in the comparison group. Remaining trajectories were fluctuating or decreasing. Around 50% of young adults with CMDs had persistent levels of LMM at the end of follow-up. The multinomial logistic regression revealed that educational level and comorbid mental disorders discriminated between trajectories of work disability, while educational level, living area and age determined differences in trajectories of unemployment (R² difference = 0.02-0.05, p < 0.001). CONCLUSIONS: A large share, nearly 50%, of young adults with CMDs, substantially higher than in the comparison group of individuals without mental disorders, display increasing or high persistent levels of either work disability or unemployment throughout the follow-up period. Low educational level, comorbidity with other mental disorders and living in rural areas were factors that increased the probability of LMM.
  •  
16.
  • Helgesson, Petter, 1986- (author)
  • Approaching well-founded comprehensive nuclear data uncertainties : Fitting imperfect models to imperfect data
  • 2018
  • Doctoral thesis (other academic/artistic)
    • Nuclear physics has a wide range of applications; e.g., low-carbon energy production, medical treatments, and non-proliferation of nuclear weapons. Nuclear data (ND) constitute necessary input to computations needed within all these applications. This thesis considers uncertainties in ND and their propagation to applications such as material damage in nuclear reactors. TENDL is today the most comprehensive library of evaluated ND (a combination of experimental ND and physical models), and it contains uncertainty estimates for all its nuclides; however, TENDL relies on an automatized process which, so far, includes a few practical remedies which are not statistically well-founded. A long-term goal of the thesis is to provide methods which make these comprehensive uncertainties well-founded. One of the main topics of the thesis is the automatic construction of experimental covariances; at first by attempting to complete the available uncertainty information using a set of simple rules. The thesis also investigates using the distribution of the data; this yields promising results, and the two approaches may be combined in future work. In one of the papers underlying the thesis, there are also manual analyses of experiments, for the thermal cross sections of Ni-59 (important for material damage). Based on this, uncertainty components in the experiments are sampled, resulting in a distribution of thermal cross sections. After being combined with other types of ND in a novel way, the distribution is propagated both to an application and to an evaluated ND file, now part of the ND library JEFF 3.3. The thesis also compares a set of different techniques used to fit models in ND evaluation. For example, it is quantified how sensitive different techniques are to a model defect, i.e., the inability of the model to reproduce the truth underlying the data. All techniques are affected, but techniques fitting model parameters directly (such as the primary method used for TENDL) are more sensitive to model defects. There are also advantages with these methods, such as physical consistency and the possibility to build up a framework such as that of TENDL. The treatment of these model defects is another main topic of the thesis. To this end, two ways of using Gaussian processes (GPs) are studied, applied to quite different situations. First, the addition of a GP to the model is used to enable the fitting of arbitrarily shaped peaks in a histogram of data. This is shown to give a substantial improvement compared to if the peaks are assumed to be Gaussian (when they are not), both using synthetic and authentic data. The other approach uses GPs to fit smoothly energy-dependent model parameters in an ND evaluation context. Such an approach would be relatively easy to incorporate into the TENDL framework, and ensures a certain level of physical consistency. It is used on a TALYS-like model with synthetic data, and clearly outperforms fits without the energy-dependent model parameters, showing that the method can provide a viable route to improved ND evaluation. As a proof of concept, it is also used with authentic TALYS, and with authentic data. To conclude, the thesis takes significant steps towards well-founded comprehensive ND uncertainties.
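The idea of adding a Gaussian process to a model to absorb a model defect can be sketched as follows: a deliberately rigid model (here, a straight line) is fitted to data drawn from a curved truth, and a GP on the residuals recovers the structure the model cannot. This is an illustrative toy under invented numbers, not the thesis implementation:

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic "experiment": a curved truth that a straight line cannot reproduce
E = np.linspace(0.0, 1.0, 25)          # energy grid (arbitrary units)
truth = 1.0 + 0.3 * np.sin(6.0 * E)
sigma = 0.02                           # experimental standard deviation
y = truth + rng.normal(0.0, sigma, E.size)

# Deliberately defective model: a straight line fitted by least squares
A = np.vstack([np.ones_like(E), E]).T
beta, *_ = np.linalg.lstsq(A, y, rcond=None)
model = A @ beta

# A Gaussian process (RBF kernel) on the residuals absorbs the model defect
def rbf(a, b, amp=0.3, length=0.15):
    return amp**2 * np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / length**2)

K = rbf(E, E) + sigma**2 * np.eye(E.size)
alpha = np.linalg.solve(K, y - model)
gp_mean = model + rbf(E, E) @ alpha    # posterior mean of model + GP

rms_line = np.sqrt(np.mean((model - truth) ** 2))
rms_gp = np.sqrt(np.mean((gp_mean - truth) ** 2))
print(f"RMS vs truth: line {rms_line:.3f}, line + GP {rms_gp:.3f}")
```

The model-plus-GP fit tracks the curved truth far better than the line alone, which mirrors the thesis's finding that a GP correction mitigates the effect of a defective model.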
  •  
17.
  • Helgesson, Petter, 1986-, et al. (author)
  • Assessment of Novel Techniques for Nuclear Data Evaluation
  • 2018
  • In: Reactor Dosimetry. - : ASTM International. - 9780803176614 ; , s. 105-116
  • Conference paper (peer-reviewed)abstract
    • The quality of evaluated nuclear data can be impacted by, e.g., the choice of the evaluation algorithm. The objective of this work is to compare the performance of the evaluation techniques GLS, GLS-P, UMC-G, and UMC-B by using synthetic data. In particular, the effects of model defects are investigated. For small model defects, UMC-B and GLS-P are found to perform best, while these techniques yield the worst results for a significantly defective model; in particular, they seriously underestimate the uncertainties. If UMC-B is augmented with Gaussian processes, it performs distinctly better for a defective model but is more susceptible to an inadequate experimental covariance estimate.
  •  
18.
  • Helgesson, Petter, 1986-, et al. (author)
  • Combining Total Monte Carlo and Unified Monte Carlo : Bayesian nuclear data uncertainty quantification from auto-generated experimental covariances
  • 2017
  • In: Progress in nuclear energy (New series). - : Elsevier. - 0149-1970 .- 1878-4224. ; 96, s. 76-96
  • Journal article (peer-reviewed)abstract
    • The Total Monte Carlo methodology (TMC) for nuclear data (ND) uncertainty propagation has been subject to some critique because the nuclear reaction parameters are sampled from distributions which have not been rigorously determined from experimental data. In this study, it is thoroughly explained how TMC and Unified Monte Carlo-B (UMC-B) are combined to include experimental data in TMC. Random ND files are weighted with likelihood function values computed by comparing the ND files to experimental data, using experimental covariance matrices generated from information in the experimental database EXFOR and a set of simple rules. A proof that such weights give a consistent implementation of Bayes' theorem is provided. The impact of the weights is mainly studied for a set of integral systems/applications, e.g., a set of shielding fuel assemblies which shall prevent aging of the pressure vessels of the Swedish nuclear reactors Ringhals 3 and 4. In this implementation, the impact from the weighting is small for many of the applications. In some cases, this can be explained by the fact that the distributions used as priors are too narrow to be valid as such. Another possible explanation is that the integral systems are highly sensitive to resonance parameters, which effectively are not treated in this work. In other cases, only a very small number of files get significantly large weights, i.e., the region of interest is poorly resolved. This convergence issue can be due to the parameter distributions used as priors or model defects, for example. Further, some parameters used in the rules for the EXFOR interpretation have been varied. The observed impact from varying one parameter at a time is not very strong. This can partially be due to the general insensitivity to the weights seen for many applications, and there can be strong interaction effects. 
The automatic treatment of outliers has a quite large impact, however. To approach more justified ND uncertainties, the rules for the EXFOR interpretation shall be further discussed and developed, in particular the rules for rejecting outliers, and random ND files that are intended to describe prior distributions shall be generated. Further, model defects need to be treated.
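The weighting itself reduces to a few lines of linear algebra. The sketch below uses purely synthetic numbers (the dimensions, the experimental covariance and the "file" predictions are invented) to show how likelihood weights and a weighted posterior summary are formed from random files:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: n_e experimental points y with covariance V_exp,
# and n_f random ND "files", each giving predictions t_k at those points.
n_e, n_f = 5, 1000
V_exp = 0.02**2 * (np.eye(n_e) + 0.5 * (np.ones((n_e, n_e)) - np.eye(n_e)))
truth = np.ones(n_e)
y = rng.multivariate_normal(truth, V_exp)
t = 1.0 + 0.05 * rng.standard_normal((n_f, n_e))  # prior file predictions

# Likelihood weight for file k: w_k proportional to exp(-chi2_k / 2),
# with chi2_k = (y - t_k)^T V_exp^{-1} (y - t_k).
V_inv = np.linalg.inv(V_exp)
d = y - t
chi2 = np.einsum('ki,ij,kj->k', d, V_inv, d)
w = np.exp(-0.5 * (chi2 - chi2.min()))  # shift for numerical stability
w /= w.sum()

# Posterior (weighted) mean and uncertainty of any derived quantity,
# here simply the prediction at the first experimental point:
post_mean = np.sum(w * t[:, 0])
post_std = np.sqrt(np.sum(w * (t[:, 0] - post_mean)**2))
```

The weighted standard deviation comes out below the prior spread of 0.05, illustrating how the experimental data narrows the distribution of the random files.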
  •  
20.
  • Helgesson, Petter, 1986- (author)
  • Experimental data and Total Monte Carlo : Towards justified, transparent and complete nuclear data uncertainties
  • 2015
  • Licentiate thesis (other academic/artistic)abstract
    • The applications of nuclear physics are many, with one important being nuclear power, which can help decelerate climate change. In any of these applications, so-called nuclear data (ND, numerical representations of nuclear physics) is used in computations and simulations which are necessary for, e.g., design and maintenance. The ND is not perfectly known - there are uncertainties associated with it - and this thesis concerns the quantification and propagation of these uncertainties. In particular, methods are developed to include experimental data in the Total Monte Carlo methodology (TMC). The work goes in two directions. One is to include the experimental data by giving weights to the different "random files" used in TMC. This methodology is applied to practical cases using an automatic interpretation of an experimental database, including uncertainties and correlations. The weights are shown to give a consistent implementation of Bayes' theorem, such that the obtained uncertainty estimates in theory can be correct, given the experimental data. In the practical implementation, it is more complicated. This is much due to the interpretation of experimental data, but also because of model defects - the methodology assumes that there are parameter choices such that the model of the physics reproduces reality perfectly. This assumption is not valid, and in future work, model defects should be taken into account. Experimental data should also be used to give feedback to the distribution of the parameters, and not only to provide weights at a later stage. The other direction is based on the simulation of the experimental setup as a means to analyze the experiments in a structured way, and to obtain the full joint distribution of several different data points. In practice, this methodology has been applied to the thermal (n,α), (n,p), (n,γ) and (n,tot) cross sections of 59Ni. 
For example, the estimated expected value and standard deviation for the (n,α) cross section is (12.87 ± 0.72) b, which can be compared to the established value of (12.3 ± 0.6) b given in the work of Mughabghab. Note that also the correlations to the other thermal cross sections, as well as other aspects of the distribution, are obtained in this work - and this can be important when propagating the uncertainties. The careful evaluation of the thermal cross sections is complemented by a coarse analysis of the cross sections of 59Ni at other energies. The resulting nuclear data is used to study the propagation of the uncertainties through a model describing stainless steel in the spectrum of a thermal reactor. In particular, the helium production is studied. The distribution has a large uncertainty (a standard deviation of (17 ± 3) %), and it shows a strong asymmetry. Much of the uncertainty and its shape can be attributed to the more coarse part of the uncertainty analysis, which, therefore, shall be refined in the future.
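The sampling of experimental uncertainty components can be sketched as follows; the measurement model sigma = (R - b) / (eps * phi) and every number in it are hypothetical, chosen only so that the result lands near the Ni-59 scale discussed above:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# Hypothetical experiment model: a counting rate R is converted to a
# cross section via sigma = (R - b) / (eps * phi), where the background b,
# detection efficiency eps and neutron flux phi are uncertain components
# sampled from their assumed distributions (all values illustrative).
R = rng.normal(1000.0, 10.0, n)   # counting statistics (random error)
b = rng.normal(50.0, 5.0, n)      # background (systematic)
eps = rng.normal(0.80, 0.02, n)   # detection efficiency (systematic)
phi = rng.normal(95.0, 1.5, n)    # neutron flux (systematic)

sigma = (R - b) / (eps * phi)     # sampled distribution of the cross section

mean, std = sigma.mean(), sigma.std()
```

Propagating the samples through the measurement equation yields the full distribution of the derived cross section, not just a mean and standard deviation, so correlations with other quantities derived from the same components come out automatically.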
  •  
21.
  • Helgesson, Petter, 1986-, et al. (author)
  • Fitting a defect non-linear model with or without prior, distinguishing nuclear reaction products as an example
  • 2017
  • In: Review of Scientific Instruments. - : AIP Publishing. - 0034-6748 .- 1089-7623. ; 88
  • Journal article (peer-reviewed)abstract
    • Fitting parametrized functions to data is important for many researchers and scientists. If the model is non-linear and/or defect, it is not trivial to do correctly and to include an adequate uncertainty analysis. This work presents how the Levenberg-Marquardt algorithm for non-linear generalized least squares fitting can be used with a prior distribution for the parameters, and how it can be combined with Gaussian processes to treat model defects. An example, where three peaks in a histogram are to be distinguished, is carefully studied. In particular, the probability r1 for a nuclear reaction to end up in one out of two overlapping peaks is studied. Synthetic data is used to investigate effects of linearizations and other assumptions. For perfect Gaussian peaks, it is seen that the estimated parameters are distributed close to the truth with good covariance estimates. This assumes that the method is applied correctly; for example, prior knowledge should be implemented using a prior distribution, and not by assuming that some parameters are perfectly known (if they are not). It is also important to update the data covariance matrix using the fit if the uncertainties depend on the expected value of the data (e.g., for Poisson counting statistics or relative uncertainties). If a model defect is added to the peaks, such that their shape is unknown, a fit which assumes perfect Gaussian peaks becomes unable to reproduce the data, and the results for r1 become biased. It is, however, seen that it is possible to treat the model defect with a Gaussian process with a covariance function tailored for the situation, with hyper-parameters determined by leave-one-out cross validation. The resulting estimates for r1 are virtually unbiased, and the uncertainty estimates agree very well with the underlying uncertainty.
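A common way to realize the combination of Levenberg-Marquardt fitting with a prior, as described above, is to append the prior as pseudo-observations of the parameters (Cholesky-whitened residuals). The sketch below is a hypothetical single-peak version with synthetic data, not the paper's three-peak setup:

```python
import numpy as np
from scipy.optimize import least_squares

# A prior N(p0, P0) can be included in generalized least squares by
# appending pseudo-observations: residuals L^{-1}(p - p0), where P0 = L L^T.
rng = np.random.default_rng(2)
x = np.linspace(-5, 5, 201)

def peak(p, x):
    amp, mu, sig = p
    return amp * np.exp(-0.5 * ((x - mu) / sig)**2)

p_true = np.array([10.0, 0.5, 1.2])
sigma_y = 0.3
y = peak(p_true, x) + rng.normal(0.0, sigma_y, x.size)

p0 = np.array([8.0, 0.0, 1.0])    # prior mean (hypothetical)
P0 = np.diag([4.0, 1.0, 0.25])    # prior covariance (hypothetical)
L = np.linalg.cholesky(P0)

def residuals(p):
    r_data = (y - peak(p, x)) / sigma_y
    r_prior = np.linalg.solve(L, p - p0)  # prior pseudo-observations
    return np.concatenate([r_data, r_prior])

fit = least_squares(residuals, p0, method='lm')  # Levenberg-Marquardt
```

Minimizing this stacked residual vector gives the posterior mode, so prior knowledge enters with its proper weight instead of by freezing parameters at assumed values.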
  •  
22.
  • Helgesson, Petter, 1986-, et al. (author)
  • Including experimental information in TMC using file weights from automatically generated experimental covariance matrices
  • Other publication (other academic/artistic)abstract
    • The Total Monte Carlo methodology (TMC) for nuclear data (ND) uncertainty propagation has been subject to some critique because the nuclear reaction parameters are sampled from distributions which have not been rigorously determined from experimental data. In this study, it is thoroughly explained how random ND files are weighted with likelihood function values computed by comparing the ND files to experimental data, using experimental covariance matrices generated from information in the experimental database EXFOR and a set of simple rules. A proof that such weights give a consistent implementation of Bayes' theorem is provided. The impact of the weights is mainly studied for a set of integral systems/applications, e.g., a set of shielding fuel assemblies which shall prevent aging of the pressure vessels of the Swedish nuclear reactors Ringhals 3 and 4. For many applications, the weighting does not have much impact, something which can be explained by too narrow prior distributions. Another possible explanation is that the integral systems are highly sensitive to resonance parameters, which effectively are not treated in this work. In other cases, only a very small number of files get significantly large weights, which can be due to the prior parameter distributions or model defects. Further, some parameters used in the rules for the EXFOR interpretation have been varied. The observed impact from varying one parameter at a time is not very strong. This can partially be due to the general insensitivity to the weights seen for many applications, and there can be strong interaction effects. The automatic treatment of outliers has a quite large impact, however. To approach more justified ND uncertainties, the rules for the EXFOR interpretation shall be further discussed and developed, in particular the rules for rejecting outliers, and random ND files that are intended to describe prior distributions shall be generated. Further, model defects need to be treated.
  •  
23.
  • Helgesson, Petter, 1986-, et al. (author)
  • Incorporating Experimental Information in the Total Monte Carlo Methodology Using File Weights
  • 2015
  • In: Nuclear Data Sheets. - : Elsevier BV. - 0090-3752 .- 1095-9904. ; 123:SI, s. 214-219
  • Journal article (peer-reviewed)abstract
    • Some criticism has been directed towards the Total Monte Carlo method because experimental information has not been taken into account in a statistically well-founded manner. In this work, a Bayesian calibration method is implemented by assigning weights to the random nuclear data files and the method is illustratively applied to a few applications. In some considered cases, the estimated nuclear data uncertainties are significantly reduced and the central values are significantly shifted. The study suggests that the method can be applied both to estimate uncertainties in a more justified way and in the search for better central values. Some improvements are however necessary; for example, the treatment of outliers and cross-experimental correlations should be more rigorous and random files that are intended to be prior files should be generated.
  •  
25.
  • Helgesson, Petter, 1986-, et al. (author)
  • New 59Ni data including uncertainties and consequences for gas production in steel in LWR spectra
  • 2015
  • Conference paper (other academic/artistic)abstract
    • With ageing reactor fleets, the importance of estimating material damage parameters in structural materials is increasing. 59Ni is not naturally abundant, but as noted in, e.g., Ref. [1], the two-step reaction 58Ni(n,γ)59Ni(n,α)56Fe gives a very important contribution to the helium production and damage energy in stainless steel in thermal spectra, because of the extraordinarily large thermal (n,α) cross section for 59Ni (for most other nuclides, the (n,α) reaction has a threshold). None of the evaluated data libraries contain uncertainty information for (n,α) and (n,p) for 59Ni for thermal energies and the resonance region. Therefore, new such data is produced in this work, including random data to be used with the Total Monte Carlo methodology [2] for nuclear data uncertainty propagation. The limited R-matrix format (“LRF = 7”) of ENDF-6 is used, with the Reich-Moore approximation (“LRF = 3” is just a subset of Reich-Moore). The neutron and gamma widths are obtained from TARES [2], with uncertainties, and are translated into LRF = 7. The α and proton widths are obtained from the little information available in EXFOR [3] (assuming large uncertainties because of lacking documentation) or from sampling from unresolved resonance parameters from TALYS [2], and they are split into different channels (different excited states of the recoiling nuclide, etc.). Finally, the cross sections are adjusted to match the experiments at thermal energies, with uncertainties. The data is used to estimate the gas production rates for different systems, including the propagated nuclear data uncertainty. Preliminary results for SS304 in a typical thermal spectrum show that including 59Ni at its peak concentration increases the helium production rate by a factor of 4.93 ± 0.28, including a 5.7 ± 0.2 % uncertainty due to the 59Ni data. It is however likely that the uncertainty will increase substantially from including the uncertainty of other nuclides and from re-evaluating the experimental thermal cross sections.
  •  
28.
  • Helgesson, Petter, 1986-, et al. (author)
  • Sampling of systematic errors to estimate likelihood weights in nuclear data uncertainty propagation
  • 2016
  • In: Nuclear Instruments and Methods in Physics Research Section A. - : Elsevier. - 0168-9002 .- 1872-9576. ; 807, s. 137-149
  • Journal article (peer-reviewed)abstract
    • In methodologies for nuclear data (ND) uncertainty assessment and propagation based on random sampling, likelihood weights can be used to infer experimental information into the distributions for the ND. As the included number of correlated experimental points grows large, the computational time for the matrix inversion involved in obtaining the likelihood can become a practical problem. There are also other problems related to the conventional computation of the likelihood, e.g., the assumption that all experimental uncertainties are Gaussian. In this study, a way to estimate the likelihood which avoids matrix inversion is investigated; instead, the experimental correlations are included by sampling of systematic errors. It is shown that the model underlying the sampling methodology (using univariate normal distributions for random and systematic errors) implies a multivariate Gaussian for the experimental points (i.e., the conventional model). It is also shown that the likelihood estimates obtained through sampling of systematic errors approach the likelihood obtained with matrix inversion as the sample size for the systematic errors grows large. In studied practical cases, it is seen that the estimates for the likelihood weights converge impractically slowly with the sample size, compared to matrix inversion. The computational time is estimated to be greater than for matrix inversion in cases with more experimental points, too. Hence, the sampling of systematic errors has little potential to compete with matrix inversion in cases where the latter is applicable. Nevertheless, the underlying model and the likelihood estimates can be easier to intuitively interpret than the conventional model and the likelihood function involving the inverted covariance matrix. Therefore, this work can both have pedagogical value and be used to help motivating the conventional assumption of a multivariate Gaussian for experimental data. 
The sampling of systematic errors could also be used in cases where the experimental uncertainties are not Gaussian, and for other purposes than to compute the likelihood, e.g., to produce random experimental data sets for a more direct use in ND evaluation.
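The stated equivalence, that averaging a diagonal-covariance likelihood over sampled systematic errors approaches the multivariate Gaussian likelihood, can be checked numerically. In this sketch (all numbers hypothetical), n points share one fully correlated systematic error:

```python
import numpy as np
from scipy.stats import multivariate_normal, norm

rng = np.random.default_rng(3)

# Hypothetical example: n data points sharing one systematic error c.
n = 4
s_rand, s_sys = 0.05, 0.10
f = np.ones(n)                       # model prediction
y = f + rng.normal(0, s_rand, n) + rng.normal(0, s_sys)

# Conventional likelihood: multivariate Gaussian with the full covariance
# V = diag(s_rand^2) + s_sys^2 * 1 1^T.
V = np.diag(np.full(n, s_rand**2)) + s_sys**2 * np.ones((n, n))
L_exact = multivariate_normal.pdf(y, mean=f, cov=V)

# Sampling estimate: average the diagonal (uncorrelated) likelihood over
# sampled systematic errors c_k ~ N(0, s_sys^2).
K = 200_000
c = rng.normal(0, s_sys, K)
L_k = np.prod(norm.pdf(y[None, :] - f[None, :] - c[:, None], scale=s_rand), axis=1)
L_mc = L_k.mean()
```

The Monte Carlo average converges to the matrix-inversion value as K grows, mirroring the paper's observation that the two formulations describe the same underlying model.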
  •  
30.
  • Helgesson, Petter, 1986-, et al. (author)
  • Towards Transparent, Reproducible And Justified Nuclear Data Uncertainty Propagation For Lwr Applications
  • 2015
  • Conference paper (other academic/artistic)abstract
    • Any calculated quantity is practically meaningless without estimates of the uncertainty of the obtained results, not least when it comes to, e.g., safety parameters in a nuclear reactor. One of the sources of uncertainty in reactor physics computations or simulations are the uncertainties of the so-called nuclear data, i.e., cross sections, angular distributions, fission yields, etc. The currently dominating method for propagating nuclear data uncertainties (using covariance data and sensitivity analysis) suffers from several limitations, not least in how the covariance data is produced – the production relies to a large extent on personal judgment of nuclear data evaluators, leading to results which are difficult to reproduce from fundamental principles. Further, such a method assumes linearity, it in practice limits both input and output to be modeled as Gaussian distributions, and the covariance data in the established nuclear data libraries is incomplete. “Total Monte Carlo” (TMC) is a nuclear data uncertainty propagation method based on random sampling of nuclear reaction model parameters which aims to resolve these issues. The method has been applied to various applications, ranging from pin cells and criticality safety benchmarks to full core neutronics as well as models including thermo-hydraulics and transients. However, TMC has been subject to some critique since the distributions of the nuclear model parameters, and hence of the nuclear data, have not been deduced from really rigorous statistical theory. This presentation briefly discusses the ongoing work on how to use experimental data to approach justified results from TMC, including the effects of correlations between experimental data points and the assessment of such correlations. In this study, the random nuclear data libraries are provided with likelihood weights based on their agreement with the experimental data, as a means to implement Bayes' theorem. Further, it is presented how TMC is applied to an MCNP-6 model of shielding fuel assemblies (SFA) at Ringhals 3 and 4. Since the damage from the fast neutron flux may limit the lifetimes of these reactors, parts of the fuel adjacent to the pressure vessel are replaced by steel (the SFA) to protect the vessel, in particular the four points along the belt-line weld which have been exposed to the largest fluence over time. The 56Fe data uncertainties are considered; the estimated relative uncertainty at a quarter of the pressure vessel is viewed in Figure 1 (right), as well as the flux pattern itself (left). The uncertainty in the flux reduction at a selected sensitive point is 2.5 ± 0.2 % (one standard deviation). Applying the likelihood weights does not have much impact for this case, which could indicate that the prior distribution for the 56Fe data is too “narrow” (the used libraries are not really intended to describe a prior distribution), and that the true uncertainty is substantially greater. Another explanation could be that the dominating source of uncertainty is the high-energy resonances, which are treated inefficiently by such weights. In either case, the efforts to approach justified, transparent, reproducible and highly automated nuclear data uncertainties shall continue. On top of using libraries that are intended to describe prior distributions and treating the resonance region appropriately, the experimental correlations should be better motivated and the treatment of outliers shall be improved. Finally, it is probably necessary to use experimental data in a more direct sense where a lot of experimental data is available, since the nuclear models are imperfect.
Figure 1. The high energy neutron flux at the reactor pressure vessel in the SFA model, and the corresponding propagated 56Fe data uncertainty.
  •  
32.
  • Helgesson, Petter, 1986-, et al. (author)
  • Treating model defects by fitting smoothly varying model parameters : Energy dependence in nuclear data evaluation
  • 2018
  • In: Annals of Nuclear Energy. - : Elsevier BV. - 0306-4549 .- 1873-2100. ; 120, s. 35-47
  • Journal article (peer-reviewed)abstract
    • The fitting of models to data is essential in nuclear data evaluation, as in many other fields of science. The models may be necessary for interpolation or extrapolation, but they are seldom perfect; there are model defects present which can result in severe biases and underestimated uncertainties. This work presents and investigates the idea to treat this problem by letting the model parameters vary smoothly with an input parameter. To be specific, the model parameters for neutron cross sections are allowed to vary with neutron energy. The parameter variation is limited by Gaussian processes, but the method should not be confused with adding a Gaussian process to the model. The performance of the method is studied using a large number of synthetic data sets, such that it is possible to quantitatively study the distribution of results compared to the underlying truth. There are imperfections in the results, but the method is seen to readily outperform fits without the energy-dependent parameters. In particular, the estimates of uncertainty and correlations are much better. Hence, the method seems to offer a promising route for future treatment of model defects, both for nuclear data and elsewhere.
  •  
33.
  • Helgesson, Petter, 1986-, et al. (author)
  • Treating model defects with a Gaussian Process prior for the parameters
  • 2017
  • Conference paper (other academic/artistic)abstract
    • The covariance information in TENDL is obtained by propagating uncertainties of, e.g., TALYS parameters to the observables, and by attempting to match the parameter uncertainties to the experimental data. This results in model-driven covariances with strong energy-energy correlations, which can lead to erroneously estimated uncertainties for both differential and integral observables. Further, the model-driven approach is sensitive to model defects, which can introduce biases and underestimated uncertainties. To resolve the issue of model defects in nuclear data (ND) evaluation, one can model the defect with a Gaussian process. This can reduce biases and give more correct covariances, including weaker energy-energy correlations. In the presented work, we continue the development of using Gaussian processes to treat model defects in ND evaluation, within a TENDL framework. The Gaussian processes are combined with the Levenberg-Marquardt algorithm for non-linear fitting, which reduces the need for a prior distribution. Further, it facilitates the transfer of knowledge to other nuclides by working in the parameter domain. First, synthetic data is used to validate the quality of both mean values and covariances provided by the method. After this, we fit TALYS parameters and a model defect correction to the 56Fe data in EXFOR.
  •  
34.
  • Helgesson, Petter, 1986-, et al. (author)
  • Uncertainty driven nuclear data evaluation including thermal (n,alpha) applied to Ni-59
  • 2017
  • In: Nuclear Data Sheets. - : Elsevier BV. - 0090-3752 .- 1095-9904. ; 145, s. 1-24
  • Journal article (peer-reviewed)abstract
    • This paper presents a novel approach to the evaluation of nuclear data (ND), combining experimental data for thermal cross sections with resonance parameters and nuclear reaction modeling. The method involves sampling of various uncertain parameters, in particular uncertain components in experimental setups, and provides extensive covariance information, including consistent cross-channel correlations over the whole energy spectrum. The method is developed for, and applied to, Ni-59, but may be used as a whole, or in part, for other nuclides. Ni-59 is particularly interesting since a substantial amount of Ni-59 is produced in thermal nuclear reactors by neutron capture in Ni-58 and since it has a non-threshold (n,α) cross section. Therefore, Ni-59 gives a very important contribution to the helium production in stainless steel in a thermal reactor. However, current evaluated ND libraries contain old information for Ni-59, without any uncertainty information. The work includes a study of thermal cross section experiments and a novel combination of this experimental information, giving the full multivariate distribution of the thermal cross sections. In particular, the thermal (n,α) cross section is found to be (12.7 ± 0.7) b. This is consistent with, but yet different from, current established values. Further, the distribution of thermal cross sections is combined with reported resonance parameters, and with TENDL-2015 data, to provide full random ENDF files; all this is done in a novel way, keeping uncertainties and correlations in mind. The random files are also condensed into one single ENDF file with covariance information, which is now part of a beta version of JEFF-3.3. Finally, the random ENDF files have been processed and used in an MCNP model to study the helium production in stainless steel. 
The increase in the (n,α) rate due to Ni-59 compared to fresh stainless steel is found to be a factor of 5.2 at a certain time in the reactor vessel, with a relative uncertainty due to the Ni-59 data of 5.4 %.
  •  
35.
  • Helgesson, Petter, 1986-, et al. (author)
  • UO-2 Versus MOX: Propagated Nuclear Data Uncertainty for k-eff, with Burnup
  • 2014
  • In: Nuclear science and engineering. - 0029-5639 .- 1943-748X. ; 177:3, s. 321-336
  • Journal article (peer-reviewed)abstract
    • Precise assessment of propagated nuclear data uncertainties in integral reactor quantities is necessary for the development of new reactors as well as for modified use, e.g., when replacing UO-2 fuel by MOX fuel in conventional thermal reactors. This paper compares UO-2 fuel to two types of MOX fuel with respect to propagated nuclear data uncertainty, primarily in k-eff, by applying the Fast Total Monte Carlo method (Fast TMC) to a typical PWR pin cell model in Serpent, including burnup. An extensive amount of nuclear data is taken into account, including transport and activation data for 105 isotopes, fission yields for 13 actinides and thermal scattering data for H in H2O. There is indeed a significant difference in propagated nuclear data uncertainty in k-eff; at 0 burnup the uncertainty is 0.6 % for UO-2 and about 1 % for the MOX fuels. The difference decreases with burnup. Uncertainties in fissile fuel isotopes and thermal scattering are the most important for the difference, and the reasons for this are understood and explained. This work thus suggests that there can be an important difference between UO-2 and MOX for the determination of uncertainty margins. However, the effects of the simplified model are difficult to overview; uncertainties should be propagated in more complicated models of any considered system. Fast TMC, however, allows for this without adding much computational time.
  •  
38.
  • Hernandez-Solis, Augusto, 1980-, et al. (author)
  • Propagation of neutron-reaction uncertainties through multi-physics models of novel LWR's
  • 2017
  • In: ND 2016. - Les Ulis : EDP Sciences. - 9782759890200
  • Conference paper (peer-reviewed)abstract
    • The novel design of the renewable boiling water reactor (RBWR) allows a breeding ratio greater than unity and thus, it aims at providing for a self-sustained fuel cycle. The neutron reactions that compose the different microscopic cross-sections and angular distributions are uncertain, so when they are employed in the determination of the spatial distribution of the neutron flux in a nuclear reactor, a methodology should be employed to account for these associated uncertainties. In this work, the Total Monte Carlo (TMC) method is used to propagate the different neutron-reactions (as well as angular distributions) covariances that are part of the TENDL-2014 nuclear data (ND) library. The main objective is to propagate them through coupled neutronic and thermal-hydraulic models in order to assess the uncertainty of important safety parameters related to multi-physics, such as peak cladding temperature along the axial direction of an RBWR fuel assembly. The objective of this study is to quantify the impact that ND covariances of important nuclides such as U-235, U-238, Pu-239 and the thermal scattering of hydrogen in H2O have in the deterministic safety analysis of novel nuclear reactors designs.
  •  
39.
  • Okenwa-Emegwa, Leah, 1973-, et al. (author)
  • Prevalence and predictors of low future expectations among Syrian refugees resettled in Sweden
  • 2019
  • In: Heliyon. - : Elsevier. - 2405-8440. ; 5:10
  • Journal article (peer-reviewed)abstract
    • Background: Future expectation is important for motivation and wellbeing; however, drastic life events, such as those in refugee situations, may result in low expectations. This study aims to investigate the prevalence and determinants of low future expectations among Syrian refugees resettled in Sweden. Methods: A random sample of 1215 Syrian refugees resettled in Sweden responded to a questionnaire. Weighted analyses were conducted and adjusted relative risks estimated to determine the prevalence and predictors of low future expectations. A synergy index was calculated for low social support and depression in relation to low expectations. Results: The prevalences of low future expectations for labour market, social and economic integration were 10.9%, 13.4% and 14.1%, respectively. Longer stay in Sweden, being older, low social support and depression were associated with low future expectations. The simultaneous presence of depression and low social support had a synergistic effect on low social expectations. Discussion: Understanding and addressing factors related to low future expectations among refugees may be useful for facilitating their labour market, social and economic integration.
  •  
40.
  • Plompen, A. J. M., et al. (author)
  • The joint evaluated fission and fusion nuclear data library, JEFF-3.3
  • 2020
  • In: European Physical Journal A. - : Springer Science and Business Media LLC. - 1434-6001 .- 1434-601X. ; 56:7
  • Research review (peer-reviewed)abstract
    • The joint evaluated fission and fusion nuclear data library 3.3 is described. New evaluations for neutron-induced interactions with the major actinides 235U, 238U and 239Pu, on 241Am and 23Na, 59Ni, Cr, Cu, Zr, Cd, Hf, W, Au, Pb and Bi are presented. It includes new fission yields, prompt fission neutron spectra and average number of neutrons per fission. In addition, new data for radioactive decay, thermal neutron scattering, gamma-ray emission, neutron activation, delayed neutrons and displacement damage are presented. JEFF-3.3 was complemented by files from the TENDL project. The libraries for photon, proton, deuteron, triton, helion and alpha-particle induced reactions are from TENDL-2017. The demands for uncertainty quantification in modeling led to many new covariance data for the evaluations. A comparison between results from model calculations using the JEFF-3.3 library and those from benchmark experiments for criticality, delayed neutron yields, shielding and decay heat reveals that JEFF-3.3 performs very well for a wide range of nuclear technology applications, in particular nuclear energy.
  •  
41.
  • Pomp, Stephan, et al. (author)
  • Experiments and Theoretical Data for Studying the Impact of Fission Yield Uncertainties on the Nuclear Fuel Cycle with TALYS/GEF and the Total Monte Carlo Method
  • 2015
  • In: Nuclear Data Sheets. - : Elsevier BV. - 0090-3752 .- 1095-9904. ; 123:SI, s. 220-224
  • Journal article (peer-reviewed)abstract
    • We describe the research program of the nuclear reactions research group at Uppsala University concerning experimental and theoretical efforts to quantify and reduce nuclear data uncertainties relevant for the nuclear fuel cycle. We briefly describe the Total Monte Carlo (TMC) methodology and how it can be used to study fuel cycle and accident scenarios, and summarize our relevant experimental activities. Input from the latter is to be used to guide the nuclear models and constrain parameter space for TMC. The TMC method relies on the availability of good nuclear models. For this we use the TALYS code which is currently being extended to include the GEF model for the fission channel. We present results from TALYS-1.6 using different versions of GEF with both default and randomized input parameters and compare calculations with experimental data for U-234(n,f) in the fast energy range. These preliminary studies reveal some systematic differences between experimental data and calculations but give overall good and promising results.
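The TMC loop described above (randomize model parameters, calculate, compare with experiment) can be sketched in a few lines. This is a toy stand-in, not TALYS/GEF: the saturating cross-section model, the parameter distributions and the "experimental" points are all illustrative assumptions.

```python
import random

# Toy sketch of the TMC loop: randomize model parameters, compute a
# predicted observable, and score each random "file" against experimental
# data with a chi-square. The model and data below are illustrative
# stand-ins for TALYS/GEF calculations of a fission cross section.

random.seed(1)

e_exp = [0.5, 1.0, 2.0, 5.0]      # incident energies (MeV), hypothetical
xs_exp = [0.2, 1.1, 1.9, 2.1]     # cross sections (b), hypothetical
dxs_exp = [0.05, 0.1, 0.1, 0.1]   # experimental uncertainties, hypothetical

def model(energy, a, b):
    """Stand-in for a nuclear model: a saturating cross-section curve."""
    return a * energy / (b + energy)

def chi2(a, b):
    return sum(((model(e, a, b) - x) / dx) ** 2
               for e, x, dx in zip(e_exp, xs_exp, dxs_exp))

# Randomize parameters around central values, as TMC does for model inputs;
# experimental data then constrain the accepted parameter space.
samples = [(random.gauss(2.5, 0.3), random.gauss(0.8, 0.1))
           for _ in range(500)]
best = min(samples, key=lambda p: chi2(*p))
print("best (a, b):", best, "chi2:", round(chi2(*best), 2))
```

In the real workflow each sampled parameter set generates a full random nuclear data file; the chi-square against experiment is what "constrains the parameter space" mentioned in the abstract.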
  •  
42.
  • Rochman, Dimitri, et al. (author)
  • Efficient use of Monte Carlo : Uncertainty Propagation
  • 2014
  • In: Nuclear science and engineering. - 0029-5639 .- 1943-748X. ; 177:3, s. 337-349
  • Journal article (peer-reviewed)abstract
    • A new and faster Total Monte Carlo method for the propagation of nuclear data uncertainties in Monte Carlo nuclear simulations is presented (the fast TMC method). It addresses the main drawback of the original Total Monte Carlo method (TMC), namely the large time multiplication factor required compared to a single calculation. With this new method, Monte Carlo simulations can now be accompanied by uncertainty propagation (other than statistical) with a small additional calculation time. The fast TMC method is presented and compared with the TMC and fast GRS methods for criticality and shielding benchmarks and burn-up calculations. Finally, to demonstrate the efficiency of the method, uncertainties on local deposited power in 12.7 million cells are calculated for a full-size reactor core.
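The key idea behind fast TMC is that when each random nuclear data file is run with few Monte Carlo histories, the observed spread mixes the nuclear data component and the statistical component in quadrature, so the nuclear data component can be recovered by subtraction. A minimal sketch with synthetic numbers (the sigma values are invented for illustration):

```python
import math
import random

# Variance separation behind fast TMC: sigma_obs^2 = sigma_ND^2 + sigma_stat^2,
# so the nuclear data spread can be recovered from many short runs by
# subtracting the known statistical variance. All numbers are synthetic.

random.seed(2)
sigma_nd_true = 0.004   # spread from nuclear data (e.g. on k-eff), assumed
sigma_stat = 0.002      # per-run Monte Carlo statistical uncertainty, assumed

# One short simulation per random file: an ND shift plus statistical noise.
runs = [1.0 + random.gauss(0.0, sigma_nd_true) + random.gauss(0.0, sigma_stat)
        for _ in range(10_000)]

mean = sum(runs) / len(runs)
var_obs = sum((r - mean) ** 2 for r in runs) / (len(runs) - 1)

# Subtract the statistical variance to recover the ND component.
sigma_nd_est = math.sqrt(max(var_obs - sigma_stat ** 2, 0.0))
print(f"estimated sigma_ND = {sigma_nd_est:.4f} (true {sigma_nd_true})")
```

Because the statistical component is subtracted rather than driven down by run-time, each random file can be simulated cheaply, which is where the speed-up comes from.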
  •  
43.
  • Sjostrand, Manne, et al. (author)
  • Conceptions of decision-making capacity in psychiatry : interviews with Swedish psychiatrists
  • 2015
  • In: BMC Medical Ethics. - : Springer Science and Business Media LLC. - 1472-6939. ; 16
  • Journal article (peer-reviewed)abstract
    • Background: Decision-making capacity is a key concept in contemporary healthcare ethics. Previous research has mainly focused on philosophical, conceptual issues or on the evaluation of different tools for assessing patients' capacity. The aim of the present study is to investigate how the concept and its normative role are understood in Swedish psychiatric care. Of special interest for present purposes are the relationships between decisional capacity and psychiatric disorders and between health law and practical ethics. Methods: Eight in-depth interviews were conducted with Swedish psychiatrists. The interviews were analysed according to descriptive qualitative content analysis, in which categories and sub-categories were distilled from the material. Results: Decision-making capacity was seen as dependent on understanding, insight, evaluation, reasoning, and abilities related to making and communicating a choice. However, the actual content of the decision was also held to be relevant. There was ambivalence regarding the relationship between psychiatric disorders and capacity, and a tendency to regard psychiatric patients who made unwise treatment decisions as decisionally incapable. However, in cases relating to patients with somatic illnesses, the assumption was rather that patients who made unwise decisions were imprudent but yet decisionally capable. Conclusions: The respondents' conceptions of decision-making capacity were mainly in line with standard theories. However, the idea that capacity also includes aspects relating to the content of the decision clearly deviates from the standard view. The tendency to regard imprudent choices by psychiatric patients as betokening lack of decision-making capacity differs from the view taken of such choices in somatic care. This difference merits further investigation.
  •  
44.
  • Sjöstrand, Henrik, 1978-, et al. (author)
  • Adjustment of nuclear data libraries using integral benchmarks
  • 2017
  • Conference paper (other academic/artistic)abstract
    • Integral experiments can be used to adjust nuclear data libraries and, consequently, the uncertainty response in important applications. In this work we show how integral experiments can be used in a consistent way to adjust the TENDL library. A Bayesian method based on assigning weights to the different random files using a maximum likelihood function [1] is used. Emphasis is put on the problems that arise when multiple isotopes are present in a benchmark [2]. The challenges in using multiple integral experiments are also addressed, including the correlations between the different integral experiments. Methods for using the Total Monte Carlo method to select benchmarks for reactor applications are further discussed, in particular with respect to the so-called fast correlation coefficient and the fast TMC method [3].
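The likelihood-weighting scheme mentioned above can be sketched as follows: each random file i receives a weight proportional to exp(−chi2_i / 2) based on its agreement with a benchmark. This is a simplified one-benchmark, one-observable illustration; all numerical values (k-eff spread, benchmark uncertainty) are assumptions for the example.

```python
import math
import random

# Sketch of likelihood weighting of random nuclear-data files against an
# integral benchmark: file i gets weight w_i ~ exp(-chi2_i / 2), where
# chi2_i compares its calculated k-eff with the benchmark value.
# All numbers are illustrative.

random.seed(3)
k_bench, dk_bench = 1.0000, 0.0010   # benchmark k-eff and its uncertainty

# Calculated k-eff for each random file (prior spread from nuclear data).
k_calc = [random.gauss(1.0020, 0.0030) for _ in range(2_000)]

weights = [math.exp(-0.5 * ((k - k_bench) / dk_bench) ** 2) for k in k_calc]
wsum = sum(weights)

prior_mean = sum(k_calc) / len(k_calc)
post_mean = sum(w * k for w, k in zip(weights, k_calc)) / wsum
post_var = sum(w * (k - post_mean) ** 2 for w, k in zip(weights, k_calc)) / wsum

print(f"prior mean {prior_mean:.4f} -> posterior mean {post_mean:.4f}")
print(f"posterior sigma {math.sqrt(post_var):.4f} (prior sigma 0.0030)")
```

The weighted ensemble is pulled toward the benchmark and its spread shrinks, which is exactly the "uncertainty reduction" role of integral experiments in the TMC framework; the binary accept/reject variant simply replaces the continuous weights with 0 or 1.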
  •  
45.
  • Sjöstrand, Henrik, 1978-, et al. (author)
  • Choosing Nuclear Data evaluation techniques to obtain complete and motivated covariances
  • 2017
  • Conference paper (other academic/artistic)abstract
    • The quality of evaluated nuclear data and its covariances is affected by the choice of the evaluation algorithm. The evaluator can choose to evaluate in the observable domain or the parameter domain, and can choose to use Monte Carlo or deterministic techniques [1]. The evaluator can also choose to model potential model defects using, e.g., Gaussian processes [2]. In this contribution, the performance of different evaluation techniques is investigated by using synthetic data. Different options for how to model the model defects are also discussed. In addition, the example of a new Ni-59 evaluation is presented, where different covariance-driven evaluation techniques are combined to create a final file for JEFF-3.3 [3]. Keywords: Total Monte Carlo, nuclear data evaluation. AMS subject classifications: 62P35; 81V35; 62-07. References: [1] P. Helgesson, D. Neudecker, H. Sjöstrand, M. Grosskopf, D. Smith, R. Capote; Assessment of Novel Techniques for Nuclear Data Evaluation, 16th International Symposium on Reactor Dosimetry (ISRD16) (2017). [2] G. Schnabel, Large Scale Bayesian Nuclear Data Evaluation with Consistent Model Defects, Ph.D. thesis, Technische Universität Wien (2015). [3] P. Helgesson, H. Sjöstrand, D. Rochman; Uncertainty driven nuclear data evaluation including thermal (n,alpha): applied to Ni-59; Nuclear Data Sheets 145 (2017) 1-24.
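A Gaussian-process model defect, as in reference [2] above, assigns the unknown deviation delta(E) between model and truth a zero mean and a smooth covariance, so that deviations at nearby energies are correlated. A minimal sketch of building such a covariance with an RBF kernel (the amplitude and length scale are invented for illustration):

```python
import math

# Sketch of encoding a model defect as a Gaussian process: delta(E) has
# zero mean and an RBF covariance, so deviations at nearby energies are
# strongly correlated. Amplitude and length scale are illustrative.

def rbf(e1, e2, amplitude=0.05, length=1.5):
    """RBF covariance between model defects at energies e1 and e2 (MeV)."""
    return amplitude ** 2 * math.exp(-0.5 * ((e1 - e2) / length) ** 2)

energies = [0.5, 1.0, 2.0, 5.0]
cov = [[rbf(a, b) for b in energies] for a in energies]

for row in cov:
    print(["%.5f" % c for c in row])
# Diagonal = amplitude^2; off-diagonal terms decay with energy separation,
# which is what lets a defect inferred at one energy inform its neighbours.
```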
  •  
47.
  • Sjöstrand, Henrik, 1978-, et al. (author)
  • Efficient use of Monte Carlo : The Fast Correlation Coefficient
  • 2018
  • In: EPJ N - Nuclear Sciences and Technologies. - : EDP Sciences. - 2491-9292. ; 4
  • Journal article (peer-reviewed)abstract
    • Random sampling methods are used for nuclear data (ND) uncertainty propagation, often in combination with the use of Monte Carlo codes (e.g., MCNP). One example is the Total Monte Carlo (TMC) method. The standard way to visualize and interpret ND covariances is by the use of the Pearson correlation coefficient, rho = cov(x, y) / (sigma_x * sigma_y), where x or y can be any parameter dependent on ND. The spread in the output, sigma, has both an ND component, sigma_ND, and a statistical component, sigma_stat. The contribution from sigma_stat decreases the value of rho, and hence it underestimates the impact of the correlation. One way to address this is to minimize sigma_stat by using longer simulation run-times. Alternatively, as proposed here, a so-called fast correlation coefficient is used: rho_fast = (cov(x, y) - cov(x_stat, y_stat)) / (sqrt(sigma_x^2 - sigma_x,stat^2) * sqrt(sigma_y^2 - sigma_y,stat^2)). In many cases, cov(x_stat, y_stat) can be assumed to be zero. The paper explores three examples: a synthetic data study, correlations in the NRG High Flux Reactor spectrum, and the correlations between integral criticality experiments. It is concluded that the use of rho underestimates the correlation. The impact of the use of rho_fast is quantified, and the implications of the results are discussed.
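The effect of the statistical component on rho, and the correction by rho_fast, can be reproduced with synthetic data. In this sketch two observables share the same underlying ND effect (true correlation 1.0) but carry independent statistical noise of known size, so cov(x_stat, y_stat) is taken as zero, as the abstract allows; all variances are invented for illustration.

```python
import math
import random

# Sketch of the fast correlation coefficient: subtracting the statistical
# variance components so Monte Carlo noise does not dilute the
# nuclear-data correlation. Synthetic data with true ND correlation 1.0.

random.seed(4)
n = 5_000
sig_nd, sig_stat = 1.0, 1.0   # ND and statistical spreads, assumed equal

nd = [random.gauss(0.0, sig_nd) for _ in range(n)]   # shared ND effect
x = [v + random.gauss(0.0, sig_stat) for v in nd]    # observable 1
y = [v + random.gauss(0.0, sig_stat) for v in nd]    # observable 2

def cov(a, b):
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    return sum((u - ma) * (v - mb) for u, v in zip(a, b)) / (len(a) - 1)

rho = cov(x, y) / math.sqrt(cov(x, x) * cov(y, y))   # diluted, ~0.5
# Independent statistical noise: cov(x_stat, y_stat) assumed zero.
rho_fast = cov(x, y) / math.sqrt((cov(x, x) - sig_stat ** 2) *
                                 (cov(y, y) - sig_stat ** 2))
print(f"rho = {rho:.2f}, rho_fast = {rho_fast:.2f}")  # rho_fast ~ 1.0
```

With sig_stat equal to sig_nd, the plain Pearson coefficient is diluted to about half the true correlation, while rho_fast recovers it, which is the paper's central point.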
  •  
Type of publication
journal article (25)
conference paper (22)
other publication (6)
research review (2)
doctoral thesis (1)
licentiate thesis (1)
Type of content
peer-reviewed (34)
other academic/artistic (22)
pop. science, debate, etc. (1)
Author/Editor
Helgesson, Petter, 1 ... (35)
Sjöstrand, Henrik, 1 ... (26)
Rochman, Dimitri (19)
Sjöstrand, Henrik (18)
Alhassan, Erwin (17)
Pomp, Stephan (16)
Helgesson, Petter (11)
Tinghög, Petter (9)
Österlund, Michael (8)
Koning, Arjan (7)
Koning, Arjan J. (6)
Mittendorfer-Rutz, E ... (6)
J. Koning, Arjan (5)
Rochman, D. (4)
Holmes, Emily A. (3)
Mittendorfer-Rutz, E (3)
Alhassan, Erwin, 198 ... (3)
Arjan, J. Koning (3)
Juth, Niklas (2)
Conroy, Sean (2)
Lantz, Mattias (2)
Al-Adili, Ali (2)
Gustavsson, Cecilia (2)
Helgesson, M. (2)
Runeson, B. (2)
Kim, D. H. (1)
Ichou, R. (1)
Leeb, H (1)
Angelone, M (1)
Fischer, U (1)
Pereslavtsev, P (1)
Trkov, A (1)
Amin, R (1)
Algora, A. (1)
Tarrío, Diego (1)
Hambsch, F. -J (1)
Jansson, Kaj (1)
Solders, Andreas (1)
Rakopoulos, Vasileio ... (1)
Mattera, Andrea (1)
Prokofiev, Alexander ... (1)
Solders, Andreas, 19 ... (1)
Pomp, Stephan, 1968- (1)
Dupont, E. (1)
Schillebeeckx, P. (1)
Duan, Junfeng (1)
Dimitri, Rochman (1)
Rochman, Dmitri (1)
Diez, C. (1)
Amin, M. Ridwanul (1)
University
Uppsala University (52)
Karolinska Institutet (11)
Red Cross University College (9)
Linköping University (2)
University of Borås (2)
University of Gävle (1)
Language
English (57)
Research subject (UKÄ/SCB)
Natural sciences (43)
Medical and Health Sciences (11)
Social Sciences (1)
