SwePub
Search: WFRF:(Ohlsson Mattias)

  • Result 1-50 of 373
1.
  • Blennow, Mattias, et al. (author)
  • Non-standard interactions using the OPERA experiment
  • 2008
  • In: European Physical Journal C. - : Springer-Verlag / Società Italiana di Fisica. - 1434-6044 .- 1434-6052. ; 56:4, s. 529-536
  • Journal article (peer-reviewed)abstract
    • We investigate the implications of non-standard interactions on neutrino oscillations in the OPERA experiment. In particular, we study the non-standard interaction parameter ε_μτ. We show that the OPERA experiment has a unique opportunity to reduce the allowed region for this parameter compared with other experiments such as the MINOS experiment, mostly due to the higher neutrino energies in the CNGS beam compared to the NuMI beam. We find that OPERA is mainly sensitive to a combination of standard and non-standard parameters and that a resulting anti-resonance effect could suppress the expected number of events. Furthermore, we show that running OPERA for five years each with neutrinos and anti-neutrinos would help in resolving the degeneracy between the standard parameters and ε_μτ. This scenario is significantly better than the scenario with a simple doubling of the statistics by running with neutrinos for ten years.
  •  
3.
  • Ekelund, Ulf, et al. (author)
  • The Skåne Emergency Medicine (SEM) cohort
  • 2024
  • In: Scandinavian Journal of Trauma, Resuscitation and Emergency Medicine. - London : BioMed Central (BMC). - 1757-7241. ; 32, s. 1-8
  • Journal article (peer-reviewed)abstract
    • BACKGROUND: In the European Union alone, more than 100 million people present to the emergency department (ED) each year, and this has increased steadily year-on-year by 2-3%. Better patient management decisions have the potential to reduce ED crowding, the number of diagnostic tests, the use of inpatient beds, and healthcare costs. METHODS: We have established the Skåne Emergency Medicine (SEM) cohort for developing clinical decision support systems (CDSS) based on artificial intelligence or machine learning as well as traditional statistical methods. The SEM cohort consists of 325 539 unselected unique patients with 630 275 visits from January 1st, 2017 to December 31st, 2018 at eight EDs in the Skåne region in southern Sweden. Data on sociodemographics, previous diseases and current medication are available for each ED patient visit, as well as their chief complaint, test results, disposition and the outcome in the form of subsequent diagnoses, treatments, healthcare costs and mortality within a follow-up period of at least 30 days, and up to 3 years. DISCUSSION: The SEM cohort provides a platform for CDSS research, and we welcome collaboration. In addition, SEM's large amount of real-world patient data with almost complete short-term follow-up will allow research in epidemiology, patient management, diagnostics, prognostics, ED crowding, resource allocation, and social medicine.
  •  
4.
  • Graff, M., et al. (author)
  • Genome-wide physical activity interactions in adiposity. A meta-analysis of 200,452 adults
  • 2017
  • In: PLoS Genet. - : Public Library of Science (PLoS). - 1553-7404 .- 1553-7390. ; 13:4
  • Journal article (peer-reviewed)abstract
    • Physical activity (PA) may modify the genetic effects that give rise to increased risk of obesity. To identify adiposity loci whose effects are modified by PA, we performed genome-wide interaction meta-analyses of BMI and BMI-adjusted waist circumference and waist-hip ratio from up to 200,452 adults of European (n = 180,423) or other ancestry (n = 20,029). We standardized PA by categorizing it into a dichotomous variable where, on average, 23% of participants were categorized as inactive and 77% as physically active. While we replicate the interaction with PA for the strongest known obesity-risk locus in the FTO gene, whose effect is attenuated by ~30% in physically active individuals compared to inactive individuals, we do not identify additional loci that are sensitive to PA. In additional genome-wide meta-analyses adjusting for PA and interaction with PA, we identify 11 novel adiposity loci, suggesting that accounting for PA or other environmental factors that contribute to variation in adiposity may facilitate gene discovery.
  •  
6.
  • Hulthén, Lena, 1947, et al. (author)
  • Salt intake in young Swedish men.
  • 2010
  • In: Public health nutrition. - 1475-2727. ; 13:5, s. 601-5
  • Journal article (peer-reviewed)abstract
    • OBJECTIVE: To measure dietary salt intake in a Swedish population. DESIGN: A cross-sectional study with measured 24 h urinary excretion of Na and K. Completeness of urine collection was assessed using p-aminobenzoic acid. The subjects were interviewed on their habitual food intake. SETTING: Sahlgrenska University Hospital, Gothenburg, Sweden. SUBJECTS: Eighty-six young men (age 18-20 years), randomly selected from the population of Gothenburg. Seven men were excluded due to incomplete urine collection. RESULTS: The mean excretion of Na and K over 24 h was 198 and 84 mmol, respectively (corresponding to 11.5 g NaCl and 3.3 g K). The mean 24 h excretion in the highest quartile of Na excretion was 297 mmol Na and 105 mmol K, and in the lowest quartile, 100 mmol Na and 68 mmol K. The mean Na:K ratio was 2.3, and 3.2 and 1.8 in the highest and lowest Na excretion quartiles, respectively. Calculated energy intake did not differ between the highest and lowest quartiles of Na excretion, but body weight, BMI and the intake of certain foods known to be Na-rich did. CONCLUSIONS: Salt intake in young men was alarmingly high, and even subjects in the lowest quartile of Na excretion did not meet present recommendations to limit salt intake to 5-6 g/d. At this point we can only speculate what the consequences of the high salt intake may be for CVD and stroke later in life. Regulation of the salt content in processed and fast food and in snacks is advocated, to curtail the salt burden on society imposed by the food industry.
  •  
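The unit conversions in the abstract above can be checked with a short script (a sketch assuming standard molar masses: Na 22.99, Cl 35.45, K 39.10 g/mol; small differences from the reported figures are expected, since the paper likely rounds or averages per-subject values):

```python
# Verify the mmol-to-gram conversions reported in the abstract.
M_NA, M_CL, M_K = 22.99, 35.45, 39.10  # molar masses in g/mol

def mmol_to_g(mmol, molar_mass):
    """Convert an amount in mmol to grams."""
    return mmol * molar_mass / 1000

na_mmol, k_mmol = 198, 84                  # mean 24 h urinary excretion
nacl_g = mmol_to_g(na_mmol, M_NA + M_CL)   # Na excretion expressed as NaCl
k_g = mmol_to_g(k_mmol, M_K)

print(f"NaCl: {nacl_g:.1f} g")  # ~11.6 g (abstract reports 11.5 g)
print(f"K:    {k_g:.1f} g")     # ~3.3 g (abstract reports 3.3 g)
print(f"Na:K ratio: {na_mmol / k_mmol:.2f}")  # ~2.36 (abstract reports a mean of 2.3)
```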
7.
  • Tragardh, Elin, et al. (author)
  • Referring physicians underestimate the extent of abnormalities in final reports from myocardial perfusion imaging
  • 2012
  • In: EJNMMI Research. - 2191-219X. ; 2:1
  • Journal article (peer-reviewed)abstract
    • Background It is important that referring physicians and other treating clinicians properly understand the final reports from diagnostic tests. The aim of the study was to investigate whether referring physicians interpret a final report for a myocardial perfusion scintigraphy (MPS) test in the same way that the reading nuclear medicine physician intended. Methods After viewing final reports containing only typical clinical verbiage and images, physicians in nuclear medicine and referring physicians (physicians in cardiology, internal medicine, and general practitioners) independently classified 60 MPS tests for the presence versus absence of ischemia/infarction according to objective grades of 1 to 5 (1 = no ischemia/infarction, 2 = probably no ischemia/infarction, 3 = equivocal, 4 = probable ischemia/infarction, and 5 = certain ischemia/infarction). When ischemia and/or infarction were thought to be present in the left ventricle, all physicians were also asked to mark the involved segments based on the 17-segment model. Results There was good diagnostic agreement between physicians in nuclear medicine and referring physicians when assessing the general presence versus absence of both ischemia and infarction (median squared kappa coefficient of 0.92 for both). However, when using the 17-segment model, compared to the physicians in nuclear medicine, 12 of 23 referring physicians underestimated the extent of ischemic area while 6 underestimated and 1 overestimated the extent of infarcted area. Conclusions Whereas referring physicians gain a good understanding of the general presence versus absence of ischemia and infarction from MPS test reports, they often underestimate the extent of any ischemic or infarcted areas. This may have adverse clinical consequences, and thus the language in final reports from MPS tests might be further improved and standardized.
  •  
8.
  • Abele, H., et al. (author)
  • Particle physics at the European Spallation Source
  • 2023
  • In: Physics reports. - : Elsevier. - 0370-1573 .- 1873-6270. ; 1023, s. 1-84
  • Research review (peer-reviewed)abstract
    • Presently under construction in Lund, Sweden, the European Spallation Source (ESS) will be the world’s brightest neutron source. As such, it has the potential for a particle physics program with a unique reach and which is complementary to that available at other facilities. This paper describes proposed particle physics activities for the ESS. These encompass the exploitation of both the neutrons and neutrinos produced at the ESS for high precision (sensitivity) measurements (searches).
  •  
9.
  • Abiri, Najmeh, et al. (author)
  • Establishing strong imputation performance of a denoising autoencoder in a wide range of missing data problems
  • 2019
  • In: Neurocomputing. - Amsterdam : Elsevier BV. - 0925-2312 .- 1872-8286. ; 365, s. 137-146
  • Journal article (peer-reviewed)abstract
    • Dealing with missing data in data analysis is inevitable. Although powerful imputation methods that address this problem exist, there is still much room for improvement. In this study, we examined single imputation based on deep autoencoders, motivated by the apparent success of deep learning to efficiently extract useful dataset features. We have developed a consistent framework for both training and imputation. Moreover, we benchmarked the results against state-of-the-art imputation methods on different data sizes and characteristics. The work was not limited to the one-type variable dataset; we also imputed missing data with multi-type variables, e.g., a combination of binary, categorical, and continuous attributes. To evaluate the imputation methods, we randomly corrupted the complete data, with varying degrees of corruption, and then compared the imputed and original values. In all experiments, the developed autoencoder obtained the smallest error for all ranges of initial data corruption.
  •  
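The evaluation protocol described in the abstract above (randomly corrupt complete data, impute, then compare against the originals) can be sketched as follows; this is a minimal illustration in numpy, with simple column-mean imputation standing in for the trained denoising autoencoder:

```python
import numpy as np

rng = np.random.default_rng(0)

def corrupt(X, frac, rng):
    """Randomly set a fraction of entries to NaN; return corrupted copy and mask."""
    mask = rng.random(X.shape) < frac
    Xc = X.copy()
    Xc[mask] = np.nan
    return Xc, mask

def mean_impute(Xc):
    """Column-mean imputation; a trained denoising autoencoder would replace this step."""
    col_means = np.nanmean(Xc, axis=0)
    return np.where(np.isnan(Xc), col_means, Xc)

X = rng.normal(size=(1000, 10))   # stand-in for a complete dataset
for frac in (0.1, 0.3, 0.5):      # varying degrees of corruption, as in the paper
    Xc, mask = corrupt(X, frac, rng)
    X_hat = mean_impute(Xc)
    rmse = np.sqrt(np.mean((X_hat[mask] - X[mask]) ** 2))
    print(f"missing {frac:.0%}: RMSE on corrupted entries = {rmse:.3f}")
```

For standard-normal data, mean imputation yields an RMSE near 1.0 at every corruption level; a good imputer should score well below that baseline.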
10.
  • Abiri, Najmeh, et al. (author)
  • Variational auto-encoders with Student’s t-prior
  • 2019
  • In: ESANN 2019 - Proceedings : The 27th European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning. - Bruges : ESANN. - 9782875870650
  • Conference paper (peer-reviewed)abstract
    • We propose a new structure for the variational auto-encoders (VAEs) prior, with the weakly informative multivariate Student’s t-distribution. In the proposed model all distribution parameters are trained, thereby allowing for a more robust approximation of the underlying data distribution. We used Fashion-MNIST data in two experiments to compare the proposed VAEs with the standard Gaussian priors. Both experiments showed a better reconstruction of the images with VAEs using Student’s t-prior distribution.
  •  
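The weakly informative Student's t prior mentioned above can be illustrated with the classic Gaussian scale-mixture sampler (a sketch in numpy; in the proposed VAE the degrees of freedom, location, and scale are trained parameters and sampling is reparameterized through the model):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_student_t(df, loc, scale, size, rng):
    """Sample a Student's t distribution via its Gaussian scale-mixture form:
    t = z / sqrt(g / df), with z ~ N(0, 1) and g ~ chi^2(df)."""
    z = rng.standard_normal(size)
    g = rng.chisquare(df, size)
    return loc + scale * z / np.sqrt(g / df)

samples = sample_student_t(df=5.0, loc=0.0, scale=1.0, size=100_000, rng=rng)
# For df > 2 the variance is df / (df - 2), i.e. heavier tails than a Gaussian;
# small df gives a weakly informative prior, large df approaches N(loc, scale^2).
print(samples.var())
```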
11.
  • Agarwalla, S.K., et al. (author)
  • EUROnu-WP6 2010 Report
  • 2012
  • Reports (other academic/artistic)abstract
    • This is a summary of the work done by the Working Package 6 (Physics) of the EU project "EUROnu" during the second year of activity of the project.
  •  
12.
  • Akhmedov, Evgeny, et al. (author)
  • Stability and leptogenesis in the left-right symmetric seesaw mechanism
  • 2007
  • In: Journal of High Energy Physics (JHEP). - : Springer Science and Business Media LLC. - 1126-6708 .- 1029-8479. ; 4, s. 022-1-022-25
  • Journal article (peer-reviewed)abstract
    • We analyze the left-right symmetric type I+II seesaw mechanism, where an eight-fold degeneracy among the mass matrices of heavy right-handed neutrinos M-R is known to exist. Using the stability property of the solutions and their ability to lead to successful baryogenesis via leptogenesis as additional criteria, we discriminate among these eight solutions and partially lift their eight-fold degeneracy. In particular, we find that viable leptogenesis is generically possible for four out of the eight solutions.
  •  
13.
  • Alabdallah, Abdallah, 1979-, et al. (author)
  • Discovering Premature Replacements in Predictive Maintenance Time-to-Event Data
  • 2023
  • In: Proceedings of the Asia Pacific Conference of the PHM Society 2023. - New York : The Prognostics and Health Management Society.
  • Conference paper (peer-reviewed)abstract
    • Time-To-Event (TTE) modeling using survival analysis in industrial settings faces the challenge of premature replacements of machine components, which leads to bias and errors in survival prediction. Typically, TTE survival data contains information about components and if they had failed or not up to a certain time. For failed components, the time is noted, and a failure is referred to as an event. A component that has not failed is denoted as censored. In industrial settings, in contrast to medical settings, there can be considerable uncertainty in an event; a component can be replaced before it fails to prevent operation stops or because maintenance staff believe that the component is faulty. This shows up as “no fault found” in warranty studies, where a significant proportion of replaced components may appear fault-free when tested or inspected after replacement. In this work, we propose an expectation-maximization-like method for discovering such premature replacements in survival data. The method is a two-phase iterative algorithm employing a genetic algorithm in the maximization phase to learn better event assignments on a validation set. The learned labels through iterations are accumulated and averaged to be used to initialize the following expectation phase. The assumption is that the more often the event is selected, the more likely it is to be an actual failure and not a “no fault found”. Experiments on synthesized and simulated data show that the proposed method can correctly detect a significant percentage of premature replacement cases.
  •  
14.
  • Alabdallah, Abdallah, 1979- (author)
  • Machine Learning Survival Models : Performance and Explainability
  • 2023
  • Licentiate thesis (other academic/artistic)abstract
    • Survival analysis is an essential field of statistics and machine learning, with critical applications such as medical research and predictive maintenance. In these domains, understanding models' predictions is paramount. While machine learning techniques are increasingly applied to enhance the predictive performance of survival models, they simultaneously sacrifice transparency and explainability. Survival models, in contrast to regular machine learning models, predict functions rather than point estimates like regression and classification models. This creates a challenge in explaining such models using off-the-shelf machine learning explanation techniques, such as Shapley values and counterfactual examples. Censoring is also a major issue in survival analysis, where the target time variable is not fully observed for all subjects. Moreover, in predictive maintenance settings, recorded events do not always map to actual failures: some components may be replaced because, based on an expert's opinion, they are considered faulty or about to fail. Censoring and noisy labels create problems in modeling and evaluation that need to be addressed when developing and evaluating survival models. Considering these challenges and the differences from regular machine learning models, this thesis aims to bridge the gap by facilitating the use of machine learning explanation methods to produce plausible and actionable explanations for survival models. It also aims to enhance survival modeling and evaluation, revealing better insight into the differences among compared survival models. In this thesis, we propose two methods for explaining survival models, both of which rely on discovering survival patterns in the model's predictions that group the studied subjects into significantly different survival groups. Each pattern reflects a specific survival behavior common to all the subjects in their respective group. We utilize these patterns to explain the predictions of the studied model in two ways. In the first, we employ a classification proxy model that can capture the relationship between the descriptive features of subjects and the learned survival patterns. Explaining such a proxy model using Shapley values provides insights into the feature attribution of belonging to a specific survival pattern. In the second method, we address the "what if?" question by generating plausible and actionable counterfactual examples that would change the predicted pattern of the studied subject. Such counterfactual examples provide insights into the actionable changes required to enhance the survivability of subjects. We also propose a variational-inference-based generative model for estimating the time-to-event distribution. The model relies on a regression-based loss function with the ability to handle censored cases, and on sampling for estimating the conditional probability of event times. Moreover, we propose a decomposition of the C-index into a weighted harmonic average of two quantities: the concordance among the observed events and the concordance between observed and censored cases. These two quantities, weighted by a factor representing the balance between them, can reveal differences between survival models that were previously unseen using only the total concordance index. This can give insight into the performance of different models and its relation to the characteristics of the studied data. Finally, as part of enhancing survival modeling, we propose an algorithm that can correct erroneous event labels in predictive maintenance time-to-event data. We adopt an expectation-maximization-like approach utilizing a genetic algorithm to find better labels that maximize the survival model's performance. Over the iterations, the algorithm builds confidence in the event assignments, which improves the search in subsequent iterations until convergence. We performed experiments on real and synthetic data showing that our proposed methods enhance performance in survival modeling and can reveal the underlying factors contributing to the explainability of survival models' behavior and performance.
  •  
15.
  • Alabdallah, Abdallah, 1979-, et al. (author)
  • SurvSHAP : A Proxy-Based Algorithm for Explaining Survival Models with SHAP
  • 2022
  • In: 2022 IEEE 9th International Conference on Data Science and Advanced Analytics (DSAA). - Piscataway, NJ : IEEE. - 9781665473309 - 9781665473316
  • Conference paper (peer-reviewed)abstract
    • Survival Analysis models usually output functions (survival or hazard functions) rather than point predictions like regression and classification models. This makes the explanation of such models a challenging task, especially using Shapley values. We propose SurvSHAP, a new model-agnostic algorithm to explain survival models that predict survival curves. The algorithm is based on discovering patterns in the predicted survival curves (the output of the survival model) that identify significantly different survival behaviors, and on utilizing a proxy model and the SHAP method to explain these distinct survival behaviors. Experiments on synthetic and real datasets demonstrate that SurvSHAP is able to capture the underlying factors of the survival patterns. Moreover, SurvSHAP results on the Cox Proportional Hazard model are compared with the weights of the model to show that we provide faithful overall explanations, with more fine-grained explanations of the sub-populations. We also illustrate the incorrect model and explanations learned by a Cox model when applied to heterogeneous sub-populations. We show that a non-linear machine learning survival model with SurvSHAP can better model the data and provide better explanations than linear models.
  •  
16.
  • Alabdallah, Abdallah, 1979-, et al. (author)
  • The Concordance Index decomposition : A measure for a deeper understanding of survival prediction models
  • 2024
  • In: Artificial Intelligence in Medicine. - Amsterdam : Elsevier B.V.. - 0933-3657 .- 1873-2860. ; 148
  • Journal article (peer-reviewed)abstract
    • The Concordance Index (C-index) is a commonly used metric in Survival Analysis for evaluating the performance of a prediction model. In this paper, we propose a decomposition of the C-index into a weighted harmonic mean of two quantities: one for ranking observed events versus other observed events, and the other for ranking observed events versus censored cases. This decomposition enables a finer-grained analysis of the relative strengths and weaknesses between different survival prediction methods. The usefulness of this decomposition is demonstrated through benchmark comparisons against classical models and state-of-the-art methods, together with the new variational generative neural-network-based method (SurVED) proposed in this paper. The performance of the models is assessed using four publicly available datasets with varying levels of censoring. Using the C-index decomposition and synthetic censoring, the analysis shows that deep learning models utilize the observed events more effectively than other models. This allows them to maintain a stable C-index across different censoring levels. In contrast to such deep learning methods, classical machine learning models deteriorate when the censoring level decreases due to their inability to improve on ranking the events versus other events.
  •  
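The decomposition described in the abstract above can be illustrated with a small sketch (plain pair counting without tie handling; the weighting here uses the shares of concordant pairs, which reproduces the overall C-index exactly as a weighted harmonic mean, though the paper's exact weighting scheme may differ):

```python
def cindex_decomposition(times, events, risk):
    """Split concordance into event-event and event-censored parts.

    times: observed times; events: 1 if the event was observed, 0 if censored;
    risk: predicted risk scores (higher risk should mean shorter time).
    """
    n_ee = c_ee = n_ec = c_ec = 0
    n = len(times)
    for i in range(n):
        if not events[i]:
            continue  # comparable pairs are anchored on an observed event
        for j in range(n):
            if times[j] <= times[i]:
                continue  # j must outlive i's event time
            concordant = risk[i] > risk[j]
            if events[j]:
                n_ee += 1; c_ee += concordant   # event vs other event
            else:
                n_ec += 1; c_ec += concordant   # event vs censored case
    C_ee, C_ec = c_ee / n_ee, c_ec / n_ec
    C = (c_ee + c_ec) / (n_ee + n_ec)        # overall C-index
    alpha = c_ee / (c_ee + c_ec)             # weight: share of concordant event-event pairs
    C_hm = 1 / (alpha / C_ee + (1 - alpha) / C_ec)  # weighted harmonic mean == C
    return C, C_ee, C_ec, C_hm

# Toy data, perfectly ranked (higher risk -> earlier event):
times, events = [2, 4, 5, 7, 9], [1, 1, 0, 1, 0]
risk = [0.9, 0.7, 0.6, 0.4, 0.2]
print(cindex_decomposition(times, events, risk))  # all four values are 1.0
```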
17.
  • Alabdallah, Abdallah, 1979-, et al. (author)
  • Understanding Survival Models through Counterfactual Explanations
  • Other publication (other academic/artistic)abstract
    • The development of black-box survival models has created a need for methods that explain their outputs, just as in the case of traditional machine learning methods. Survival models usually predict functions rather than point estimates. This special nature of their output makes it more difficult to explain their operation. We propose a method to generate plausible counterfactual explanations for survival models. The method supports two options that handle the special nature of survival models' output. One option relies on the Survival Scores, which are based on the area under the survival function, which is more suitable for proportional hazard models. The other one relies on Survival Patterns in the predictions of the survival model, which represent groups that are significantly different from the survival perspective. This guarantees an intuitive well-defined change from one risk group (Survival Pattern) to another and can handle more realistic cases where the proportional hazard assumption does not hold. The method uses a Particle Swarm Optimization algorithm to optimize a loss function to achieve four objectives: the desired change in the target, proximity to the explained example, likelihood, and the actionability of the counterfactual example. Two predictive maintenance datasets and one medical dataset are used to illustrate the results in different settings. The results show that our method produces plausible counterfactuals, which increase the understanding of black-box survival models.
  •  
19.
  • Altarabichi, Mohammed Ghaith, 1981-, et al. (author)
  • Improving Concordance Index in Regression-based Survival Analysis : Discovery of Loss Function for Neural Networks
  • 2024
  • Other publication (other academic/artistic)abstract
    • In this work, we use an Evolutionary Algorithm (EA) to discover a novel Neural Network (NN) regression-based survival loss function with the aim of improving the C-index performance. Our contribution is threefold. First, we propose an evolutionary meta-learning algorithm, SAGA_loss, for optimizing a neural-network regression-based loss function that maximizes the C-index; our algorithm consistently discovers specialized loss functions that outperform MSCE. Second, based on our analysis of the evolutionary search results, we highlight a non-intuitive insight that signifies the importance of a non-zero gradient for the censored-cases part of the loss function, a property that is shown to be useful in improving concordance. Finally, based on this insight, we propose MSCE_Sp, a novel survival regression loss function that can be used off-the-shelf and generally performs better than the Mean Squared Error for censored cases. We performed extensive experiments on 19 benchmark datasets to validate our findings.
  •  
20.
  • Amirahmadi, Ali, 1994-, et al. (author)
  • A Masked Language Model for Multi-Source EHR Trajectories Contextual Representation Learning
  • 2023
  • In: Caring is Sharing - Exploiting the Value in Data for Health and Innovation - Proceedings of MIE 2023. - Amsterdam : IOS Press. - 1879-8365 .- 0926-9630. - 9781643683881 ; 302, s. 609-610
  • Conference paper (peer-reviewed)abstract
    • Using electronic health records data and machine learning to guide future decisions needs to address challenges, including 1) long/short-term dependencies and 2) interactions between diseases and interventions. Bidirectional transformers have effectively addressed the first challenge. Here we tackled the latter challenge by masking one source (e.g., ICD10 codes) and training the transformer to predict it using other sources (e.g., ATC codes).
  •  
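The cross-source masking scheme described in the abstract above can be sketched at the data-preparation level (a minimal illustration with a hypothetical visit mixing diagnosis and drug codes; the actual model is a bidirectional transformer trained to predict the masked codes from the remaining sources):

```python
def mask_one_source(visit_tokens, target_source, mask_token="[MASK]"):
    """Mask every token from one coding source (e.g. ICD10); the model must
    reconstruct them from the other sources (e.g. ATC) at the masked positions."""
    inputs, targets = [], []
    for source, code in visit_tokens:
        if source == target_source:
            inputs.append((source, mask_token))
            targets.append(code)   # prediction target at this position
        else:
            inputs.append((source, code))
            targets.append(None)   # position is not predicted
    return inputs, targets

# Hypothetical visit: two diagnoses (ICD10) and two prescriptions (ATC).
visit = [("ICD10", "I21"), ("ATC", "B01AC06"), ("ICD10", "E11"), ("ATC", "C07AB02")]
inputs, targets = mask_one_source(visit, "ICD10")
print(inputs)   # ICD10 codes replaced by [MASK], ATC codes kept
print(targets)  # ["I21", None, "E11", None]
```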
21.
  • Amirahmadi, Ali, 1994-, et al. (author)
  • Deep learning prediction models based on EHR trajectories : A systematic review
  • 2023
  • In: Journal of Biomedical Informatics. - Maryland Heights, MO : Academic Press. - 1532-0464 .- 1532-0480. ; 144
  • Research review (peer-reviewed)abstract
    • Background: Electronic health records (EHRs) are generated at an ever-increasing rate. EHR trajectories, the temporal aspect of health records, facilitate predicting patients' future health-related risks. This enables healthcare systems to increase the quality of care through early identification and primary prevention. Deep learning techniques have shown great capacity for analyzing complex data and have been successful for prediction tasks using complex EHR trajectories. This systematic review aims to analyze recent studies to identify challenges, knowledge gaps, and ongoing research directions. Methods: For this systematic review, we searched the Scopus, PubMed, IEEE Xplore, and ACM databases from Jan 2016 to April 2022 using search terms centered around EHR, deep learning, and trajectories. The selected papers were then analyzed according to publication characteristics, objectives, and their solutions regarding existing challenges, such as the model's capacity to deal with intricate data dependencies, data insufficiency, and explainability. Results: After removing duplicates and out-of-scope papers, 63 papers were selected, showing rapid growth in the number of studies in recent years. Predicting all diseases in the next visit and the onset of cardiovascular diseases were the most common targets. Different contextual and non-contextual representation learning methods are employed to retrieve important information from the sequence of EHR trajectories. Recurrent neural networks and the time-aware attention mechanism for modeling long-term dependencies, self-attention, convolutional neural networks, graphs for representing inner-visit relations, and attention scores for explainability were frequently used among the reviewed publications. Conclusions: This systematic review demonstrated how recent breakthroughs in deep learning methods have facilitated the modeling of EHR trajectories. Research on improving the ability of graph neural networks, attention mechanisms, and cross-modal learning to analyze intricate dependencies among EHRs has shown good progress. There is a need to increase the number of publicly available EHR trajectory datasets to allow for easier comparison among different models. Also, very few developed models can handle all aspects of EHR trajectory data.
  •  
22.
  • Andersson, Bodil, et al. (author)
  • Prediction of Severe Acute Pancreatitis at Admission to Hospital Using Artificial Neural Networks.
  • 2011
  • In: Pancreatology. - : Elsevier BV. - 1424-3903. ; 11:3, s. 328-335
  • Journal article (peer-reviewed)abstract
    • Background/Aims: Artificial neural networks (ANNs) are non-linear pattern recognition techniques, which can be used as a tool in medical decision-making. The aim of this study was to construct and validate an ANN model for early prediction of the severity of acute pancreatitis (AP). Methods: Patients treated for AP from 2002 to 2005 (n = 139) and from 2007 to 2009 (n = 69) were analyzed to develop and validate the ANN model. Severe AP was defined according to the Atlanta criteria. Results: ANNs selected 6 of 23 potential risk variables as relevant for severity prediction, including duration of pain until arrival at the emergency department, creatinine, hemoglobin, alanine aminotransferase, heart rate, and white blood cell count. The discriminatory power for prediction of progression to a severe course, determined from the area under the receiver-operating characteristic curve, was 0.92 for the ANN model, 0.84 for the logistic regression model (p = 0.030), and 0.63 for the APACHE II score (p < 0.001). The numbers of correctly classified patients for a sensitivity of 50 and 75% were significantly higher for the ANN model than for logistic regression (p = 0.002) and APACHE II (p < 0.001). Conclusion: The ANN model identified 6 risk variables available at the time of admission, including duration of pain, a finding not previously presented as a risk factor. The severity classification developed proved to be superior to APACHE II.
  •  
24.
  • Andersson, Niklas, 1970, et al. (author)
  • A variant near the interleukin-6 gene is associated with fat mass in Caucasian men
  • 2010
  • In: International Journal of Obesity. - : Springer Science and Business Media LLC. - 0307-0565 .- 1476-5497. ; 34:6, s. 1011-9
  • Journal article (peer-reviewed)abstract
    • CONTEXT: Regulation of fat mass appears to be associated with immune functions. Studies of knockout mice show that endogenous interleukin (IL)-6 can suppress mature-onset obesity. OBJECTIVE: To systematically investigate associations of single nucleotide polymorphisms (SNPs) near the IL-6 (IL6) and IL-6 receptor (IL6R) genes with body fat mass, in support of our hypothesis that variants of these genes can be associated with obesity. DESIGN AND STUDY SUBJECTS: The Gothenburg Osteoporosis and Obesity Determinants (GOOD) study is a population-based cross-sectional study of 18- to 20-year-old men (n=1049) from the Gothenburg area (Sweden). Major findings were confirmed in two additional cohorts consisting of elderly men from the Osteoporotic Fractures in Men (MrOS) Sweden (n=2851) and MrOS US (n=5611) multicenter population-based studies. MAIN OUTCOME: The genotype distributions and their association with fat mass in different compartments, measured with dual-energy X-ray absorptiometry. RESULTS: Out of 18 evaluated tag SNPs near the IL6 and IL6R genes, a recently identified SNP rs10242595 G/A (minor allele frequency=29%) 3' of the IL6 gene was negatively associated with the primary outcome total body fat mass (effect size -0.11 standard deviation (s.d.) units per A allele, P=0.02). This negative association with fat mass was also confirmed in the combined MrOS Sweden and MrOS US cohorts (effect size -0.05 s.d. units per A allele, P=0.002). When all three cohorts were combined (n=8927, Caucasian subjects), rs10242595*A showed a negative association with total body fat mass (effect size -0.05 s.d. units per A allele, P<0.0002). Furthermore, rs10242595*A was associated with low body mass index (effect size -0.03, P<0.001) and smaller regional fat masses. None of the other SNPs investigated in the GOOD study were reproducibly associated with body fat. CONCLUSIONS: The IL6 gene polymorphism rs10242595*A is associated with decreased fat mass in three combined cohorts of 8927 Caucasian men.
  •  
25.
  • Andersson, Niklas, 1970, et al. (author)
  • Variants of the interleukin-1 receptor antagonist gene are associated with fat mass in men
  • 2009
  • In: International Journal of Obesity. - : Springer Science and Business Media LLC. - 0307-0565 .- 1476-5497. ; 33:5, s. 525-533
  • Journal article (peer-reviewed)abstract
    • Context: Immune functions seem to have connections to variations in body fat mass. Studies of knockout mice indicate that endogenous interleukin (IL)-1 can suppress mature-onset obesity. Objective: To systematically investigate our hypotheses that single-nucleotide polymorphisms (SNPs) and/or haplotype variants in the IL-1 gene system are associated with fat mass. Subjects: The Gothenburg osteoporosis and obesity determinants (GOOD) study is a population-based cross-sectional study of 18-20 year-old men (n = 1068) from Gothenburg, Sweden. Major findings were confirmed in elderly men (n = 3014) from the Swedish part of the osteoporotic fractures in men (MrOS) multicenter population-based study. Main Outcome Measure: The genotype distributions and their association with body fat mass in different compartments, measured with dual-energy X-ray absorptiometry (DXA). Results: Out of 15 investigated SNPs in the IL-1 receptor antagonist (IL1RN) gene, a recently identified 3' untranslated region C>T (rs4252041, minor allele frequency 4%) SNP was associated with the primary outcome total fat mass (P = 0.003) and regional fat masses, but not with lean body mass or serum IL-1 receptor antagonist (IL1RN) levels. This SNP was also associated with body fat when correcting for the earlier reported IL1RN_2018 T>C (rs419598) SNP (in linkage disequilibrium with a well-studied variable number tandem repeat of 86 bp). The association between the rs4252041 SNP and body fat was confirmed in the older MrOS population (P = 0.03). The rs4252041 SNP was part of three haplotypes consisting of five adjacent SNPs that were identified by a sliding window approach. These haplotypes had a highly significant global association with total body fat (P < 0.001). None of the other investigated members of the IL-1 gene family displayed any previously undescribed SNPs that were significantly associated with body fat. Conclusions: The IL1RN gene, shown to enhance obesity by suppressing IL-1 effects in experimental animals, contains previously undescribed gene polymorphisms and haplotypes that are associated with fat, but not lean, mass in two populations of men. International Journal of Obesity (2009) 33, 525-533; doi: 10.1038/ijo.2009.47; published online 17 March 2009
  •  
26.
  •  
27.
  • Ansari, David, et al. (author)
  • Analysis of the Influence of HLA-A Matching Relative to HLA-B and -DR Matching on Heart Transplant Outcomes
  • 2015
  • In: Transplantation direct. - 2373-8731. ; 1:9
  • Journal article (peer-reviewed)abstract
    • BACKGROUND: There are conflicting reports on the effect of donor-recipient HLA matching on outcomes in heart transplantation. The objective of this study was to investigate the effects of HLA-A matching relative to HLA-B and -DR matching on long-term survival in heart transplantation. METHODS: A total of 25 583 patients transplanted between 1988 and 2011 were identified from the International Society for Heart and Lung Transplantation registry. Transplants were divided into 2 donor-recipient matching groups: HLA-A-compatible (no HLA-A mismatches) and HLA-A-incompatible (1-2 HLA-A mismatches). Primary outcome was all-cause mortality. Secondary outcomes were graft failure-, cardiovascular-, infection-, or malignancy-related deaths. RESULTS: The risk of all-cause mortality 15 years after transplantation was higher for HLA-A-compatible (vs HLA-A-incompatible) grafts in patients who had HLA-B-, HLA-DR-, or HLA-B,DR-incompatible grafts (P = 0.027, P = 0.007, and P = 0.002, respectively) but not in HLA-B- and/or HLA-DR-compatible grafts. This was confirmed in multivariable Cox regression analysis where HLA-A compatibility (vs HLA-A incompatibility) was associated with higher mortality in transplants incompatible for HLA-DR or HLA-B and -DR (hazard ratio [HR], 1.59; 95% confidence interval [95% CI], 1.11-2.28; P = 0.012 and HR, 1.69; 95% CI, 1.17-2.43; P = 0.005, respectively). In multivariable analysis, the largest compromise in survival for HLA-A compatibility (vs HLA-A incompatibility) was for chronic rejection in HLA-B- and -DR-incompatible grafts (HR, 1.91; 95% CI, 1.22-3.01; P = 0.005). CONCLUSIONS: Decreased long-term survival in heart transplantation was associated with HLA-A compatibility in HLA-B,DR-incompatible grafts.
  •  
28.
  • Ansari, Daniel, et al. (author)
  • CODUSA - Customize Optimal Donor Using Simulated Annealing In Heart Transplantation.
  • 2013
  • In: Scientific Reports. - : Springer Science and Business Media LLC. - 2045-2322. ; 3:May,30
  • Journal article (peer-reviewed)abstract
    • In heart transplantation, selection of an optimal recipient-donor match has been constrained by the lack of individualized prediction models. Here we developed a customized donor-matching model (CODUSA) for patients requiring heart transplantation, by combining simulated annealing and artificial neural networks. Using this approach, analyzing 59,698 adult heart transplant patients, we found that donor age matching was the variable most strongly associated with long-term survival. Female hearts were given to 21% of the women and 0% of the men, and recipients with blood group B received an identically matched blood group in only 18% of best-case matches, compared with 73% for the original match. By optimizing the donor profile, survival could be improved by 33 months. These findings strongly suggest that the CODUSA model can improve the ability to select an optimal match and avoid a worst-case match in the clinical setting. This is an important step towards personalized medicine.
  •  
29.
  • Arvola, Mattias, 1975- (author)
  • Good to use! : Use quality of multi-user applications in the home
  • 2003
  • Licentiate thesis (other academic/artistic)abstract
    • Traditional models of usability are not sufficient for software in the home, since they are built with office software in mind. Previous research suggests that social issues, among other things, separate software in homes from software in offices. In order to explore that further, the use qualities to design for in software for face-to-face meetings at home were contrasted with those for such systems at offices. They were studied using a pluralistic model of use quality with roots in socio-cultural theory, cognitive systems engineering, and architecture. The research approach was interpretative design cases. Observations, situated interviews, and workshops were conducted at a Swedish bank, and three interactive television appliances were designed and studied in simulated home environments. It is concluded that the use qualities to design for in infotainment services on interactive television are laidback interaction, togetherness among users, and entertainment. This is quite different from bank office software, which is usually characterised not only by traditional usability criteria such as learnability, flexibility, effectiveness, efficiency, and satisfaction, but also by professional face management and ante-use. Ante-use is the events and activities that precede the actual use and that set the ground for whether the software will have quality in use or not. Furthermore, practices for how to work with use quality values, use quality objectives, and use quality criteria in the interaction design process are suggested. Finally, future research in design of software for several co-present users is proposed.
  •  
30.
  • Arvola, Mattias, 1975- (author)
  • Shades of Use : The Dynamics of Interaction Design for Sociable Use
  • 2005
  • Doctoral thesis (other academic/artistic)abstract
    • Computers are used in sociable situations, for example during customer meetings. This is seldom recognized in design, which means that computers often become a hindrance in the meeting. Based on empirical studies and socio-cultural theory, this thesis provides perspectives on sociable use and identifies appropriate units of analysis that serve as critical tools for understanding and solving interaction design problems. Three sociable situations have been studied: customer meetings, design studios and domestic environments. In total, 49 informants were met with during 41 observation and interview sessions and 17 workshops; in addition, three multimedia platforms were also designed. The empirical results show that people need to perform individual actions while participating in joint action, in a spontaneous fashion and in consideration of each other. The consequence for design is that people must be able to use computers in different manners to control who has what information. Based on the empirical results, five design patterns were developed to guide interaction design for sociable use. The thesis demonstrates that field studies can be used to identify desirable use qualities that in turn can be used as design objectives and forces in design patterns. Re-considering instrumental, communicational, aesthetical, constructional and ethical aspects can furthermore enrich the understanding of identified use qualities. With a foundation in the field studies, it is argued that the deliberation of dynamic characters and use qualities is an essential component of interaction design. Designers of interaction are required to work on three levels: the user interface, the mediating artefact and the activity of use. It is concluded that doing interaction design is to provide users with perspectives, resources and constraints on their space for actions; the complete design is not finalized until the users engage in action. This is where the fine distinctions, and what I call 'shades of use', appear.
  •  
31.
  • Atabaki-Pasdar, Naeimeh, et al. (author)
  • Inferring causal pathways between metabolic processes and liver fat accumulation: an IMI DIRECT study
  • 2021
  • Other publication (other academic/artistic)abstract
    • Type 2 diabetes (T2D) and non-alcoholic fatty liver disease (NAFLD) often co-occur. Defining causal pathways underlying this relationship may help optimize the prevention and treatment of both diseases. Thus, we assessed the strength and magnitude of the putative causal pathways linking dysglycemia and fatty liver, using a combination of causal inference methods. Measures of glycemia, insulin dynamics, magnetic resonance imaging (MRI)-derived abdominal and liver fat content, serological biomarkers, lifestyle, and anthropometry were obtained in participants from the IMI DIRECT cohorts (n=795 with new onset T2D and 2234 individuals free from diabetes). UK Biobank (n=3641) was used for modelling and replication purposes. Bayesian networks were employed to infer causal pathways, with causal validation using two-sample Mendelian randomization. Bayesian networks fitted to IMI DIRECT data identified higher basal insulin secretion rate (BasalISR) and MRI-derived excess visceral fat (VAT) accumulation as the features of dysmetabolism most likely to cause liver fat accumulation; the unconditional probability of fatty liver (>5%) increased significantly when conditioning on high levels of BasalISR and VAT (by 23%, 32% respectively; 40% for both). Analyses in UK Biobank yielded comparable results. MR confirmed most causal pathways predicted by the Bayesian networks. Here, BasalISR had the highest causal effect on fatty liver predisposition, providing mechanistic evidence underpinning the established association of NAFLD and T2D. BasalISR may represent a pragmatic biomarker for NAFLD prediction in clinical practice.
  •  
32.
  • Atabaki Pasdar, Naeimeh, et al. (author)
  • Predicting and elucidating the etiology of fatty liver disease: A machine learning modeling and validation study in the IMI DIRECT cohorts
  • 2020
  • In: PLoS Medicine. - San Francisco : Public Library of Science (PLoS). - 1549-1676 .- 1549-1277. ; 17:6, s. 1003149-1003149
  • Journal article (peer-reviewed)abstract
    • BACKGROUND: Non-alcoholic fatty liver disease (NAFLD) is highly prevalent and causes serious health complications in individuals with and without type 2 diabetes (T2D). Early diagnosis of NAFLD is important, as this can help prevent irreversible damage to the liver and, ultimately, hepatocellular carcinomas. We sought to expand etiological understanding and develop a diagnostic tool for NAFLD using machine learning. METHODS AND FINDINGS: We utilized the baseline data from IMI DIRECT, a multicenter prospective cohort study of 3,029 European-ancestry adults recently diagnosed with T2D (n = 795) or at high risk of developing the disease (n = 2,234). Multi-omics (genetic, transcriptomic, proteomic, and metabolomic) and clinical (liver enzymes and other serological biomarkers, anthropometry, measures of beta-cell function, insulin sensitivity, and lifestyle) data comprised the key input variables. The models were trained on MRI-image-derived liver fat content (
  •  
33.
  • Atabaki-Pasdar, Naeimeh, et al. (author)
  • Statistical power considerations in genotype-based recall randomized controlled trials
  • 2016
  • In: Scientific Reports. - : Springer Science and Business Media LLC. - 2045-2322. ; 6
  • Journal article (peer-reviewed)abstract
    • Randomized controlled trials (RCT) are often underpowered for validating gene-treatment interactions. Using published data from the Diabetes Prevention Program (DPP), we examined power in conventional and genotype-based recall (GBR) trials. We calculated sample size and statistical power for gene-metformin interactions (vs. placebo) using incidence rates, gene-drug interaction effect estimates and allele frequencies reported in the DPP for the rs8065082 SLC47A1 variant, a metformin transporter-encoding locus. We then calculated statistical power for interactions between genetic risk scores (GRS), metformin treatment and intensive lifestyle intervention (ILI) given a range of sampling frames, clinical trial sample sizes, interaction effect estimates, and allele frequencies; outcomes were type 2 diabetes incidence (time-to-event) and change in small LDL particles (continuous outcome). Thereafter, we compared two recruitment frameworks: GBR (participants recruited from the extremes of a GRS distribution) and conventional sampling (participants recruited without explicit emphasis on genetic characteristics). We further examined the influence of outcome measurement error on statistical power. Under most simulated scenarios, GBR trials have substantially higher power to observe gene-drug and gene-lifestyle interactions than same-sized conventional RCTs. GBR trials are becoming popular for validation of gene-treatment interactions; our analyses illustrate the strengths and weaknesses of this design.
  •  
34.
  • Baussan, E., et al. (author)
  • A very intense neutrino super beam experiment for leptonic CP violation discovery based on the European spallation source linac
  • 2014
  • In: Nuclear Physics B. - : Elsevier BV. - 0550-3213 .- 1873-1562. ; 885, s. 127-149
  • Journal article (peer-reviewed)abstract
    • Very intense neutrino beams and large neutrino detectors will be needed in order to enable the discovery of CP violation in the leptonic sector. We propose to use the proton linac of the European Spallation Source, currently under construction in Lund, Sweden, to deliver, in parallel with the spallation neutron production, a very intense, cost-effective and high-performance neutrino beam. The baseline program for the European Spallation Source linac is that it will be fully operational at 5 MW average power by 2022, producing 2 GeV, 2.86 ms long proton pulses at a rate of 14 Hz. Our proposal is to upgrade the linac to 10 MW average power and 28 Hz, producing 14 pulses/s for neutron production and 14 pulses/s for neutrino production. Furthermore, because of the high current required in the pulsed neutrino horn, the length of the pulses used for neutrino production needs to be compressed to a few μs with the aid of an accumulator ring. A long baseline experiment using this Super Beam and a megaton underground Water Cherenkov detector located in existing mines 300-600 km from Lund will make it possible to discover leptonic CP violation at 5 sigma significance level in up to 50% of the leptonic Dirac CP-violating phase range. This experiment could also determine the neutrino mass hierarchy at a significance level of more than 3 sigma if this issue has not already been settled by other experiments by then. The mass hierarchy performance could be increased by combining the neutrino beam results with those obtained from atmospheric neutrinos detected by the same large volume detector. This detector will also be used to measure the proton lifetime, detect cosmological neutrinos and neutrinos from supernova explosions. Results on the sensitivity to leptonic CP violation and the neutrino mass hierarchy are presented.
  •  
35.
  • Bergenfeldt, Henrik, et al. (author)
  • ABO-Identical Blood Group Matching Has No Survival Benefit for AB Heart Transplant Recipients.
  • 2015
  • In: Annals of Thoracic Surgery. - : Elsevier BV. - 1552-6259 .- 0003-4975. ; 99:3, s. 762-769
  • Journal article (peer-reviewed)abstract
    • Although identical blood group matching is preferred, it is uncertain if this results in improved survival and, if so, how large the survival benefits are. Earlier studies have yielded conflicting results and are mostly based on single-center cohorts with few long-term results. Recipients with blood group AB are of particular interest regarding nonidentical blood group matching because they may receive organs from all blood groups. We wanted to test the hypothesis that ABO-identical matching results in superior survival in recipients with blood group AB.
  •  
36.
  • Berndt, Sonja I., et al. (author)
  • Genome-wide meta-analysis identifies 11 new loci for anthropometric traits and provides insights into genetic architecture
  • 2013
  • In: Nature Genetics. - : Springer Science and Business Media LLC. - 1061-4036 .- 1546-1718. ; 45:5, s. 501-U69
  • Journal article (peer-reviewed)abstract
    • Approaches exploiting trait distribution extremes may be used to identify loci associated with common traits, but it is unknown whether these loci are generalizable to the broader population. In a genome-wide search for loci associated with the upper versus the lower 5th percentiles of body mass index, height and waist-to-hip ratio, as well as clinical classes of obesity, including up to 263,407 individuals of European ancestry, we identified 4 new loci (IGFBP4, H6PD, RSRC1 and PPP2R2A) influencing height detected in the distribution tails and 7 new loci (HNF4G, RPTOR, GNAT2, MRPS33P4, ADCY9, HS6ST3 and ZZZ3) for clinical classes of obesity. Further, we find a large overlap in genetic structure and the distribution of variants between traits based on extremes and the general population and little etiological heterogeneity between obesity subgroups.
  •  
37.
  • Bian, Li, et al. (author)
  • Dichloroacetate alleviates development of collagen II-induced arthritis in female DBA/1 mice
  • 2009
  • In: ARTHRITIS RESEARCH and THERAPY. - : BioMed Central. - 1478-6354 .- 1478-6362. ; 11:5
  • Journal article (peer-reviewed)abstract
    • Introduction Dichloroacetate (DCA) has been in clinical use for the treatment of lactacidosis and inherited mitochondrial disorders. It has potent anti-tumor effects both in vivo and in vitro, facilitating apoptosis and inhibiting proliferation. The proapoptotic and anti-proliferative properties of DCA prompted us to investigate the effects of this compound in arthritis. Methods In the present study, we used DCA to treat murine collagen type II (CII)-induced arthritis (CIA), an experimental model of rheumatoid arthritis. DBA/1 mice were treated with DCA given in drinking water. Results Mice treated with DCA displayed much slower onset of CIA and significantly lower severity (P < 0.0001) and much lower frequency (36% in the DCA group vs. 86% in the control group) of arthritis. Also, cartilage and joint destruction was significantly decreased following DCA treatment (P = 0.005). Moreover, DCA prevented arthritis-induced cortical bone mineral loss. This clinical picture was also reflected by lower levels of anti-CII antibodies in DCA-treated versus control mice, indicating that DCA affected the humoral response. In contrast, DCA had no effect on T cell- or granulocyte-mediated responses. The beneficial effect of DCA was present in female DBA/1 mice only. This was due in part to the effect of estrogen, since ovariectomized mice did not benefit from DCA treatment to the same extent as sham-operated controls (day 30, 38.7% of ovariectomized mice had arthritis vs. only 3.4% in the sham-operated group). Conclusion Our results indicate that DCA delays the onset and alleviates the progression of CIA in an estrogen-dependent manner.
  •  
38.
  • Björk, Jonas, et al. (author)
  • A simple statistical model for prediction of acute coronary syndrome in chest pain patients in the emergency department
  • 2006
  • In: BMC Medical Informatics and Decision Making. - : Springer Science and Business Media LLC. - 1472-6947. ; 6:28
  • Journal article (peer-reviewed)abstract
    • Background Several models for prediction of acute coronary syndrome (ACS) among chest pain patients in the emergency department (ED) have been presented, but many models predict only the likelihood of acute myocardial infarction, or include a large number of variables, which make them less than optimal for implementation at a busy ED. We report here a simple statistical model for ACS prediction that could be used in routine care at a busy ED. Methods Multivariable analysis and logistic regression were used on data from 634 ED visits for chest pain. Only data immediately available at patient presentation were used. To make ACS prediction stable and the model useful for personnel inexperienced in electrocardiogram (ECG) reading, simple ECG data suitable for computerized reading were included. Results Besides ECG, eight variables were found to be important for ACS prediction, and included in the model: age, chest discomfort at presentation, symptom duration and previous hypertension, angina pectoris, AMI, congestive heart failure or PCI/CABG. At an ACS prevalence of 21% and a set sensitivity of 95%, the negative predictive value of the model was 96%. Conclusions The present prediction model, combined with the clinical judgment of ED personnel, could be useful for the early discharge of chest pain patients in populations with a low prevalence of ACS.
  •  
39.
  • Björk, Jonas, et al. (author)
  • Risk predictions for individual patients from logistic regression were visualized with bar-line charts.
  • 2012
  • In: Journal of Clinical Epidemiology. - : Elsevier BV. - 1878-5921 .- 0895-4356. ; 65, s. 335-342
  • Journal article (peer-reviewed)abstract
    • OBJECTIVE: The interface of a computerized decision support system is crucial for its acceptance among end users. We demonstrate how combined bar-line charts can be used to visualize predictions for individual patients from logistic regression models. STUDY DESIGN AND SETTING: Data from a previous diagnostic study aiming at predicting the immediate risk of acute coronary syndrome (ACS) among 634 patients presenting to an emergency department with chest pain were used. Risk predictions from the logistic regression model were presented for four hypothetical patients in bar-line charts with bars representing empirical Bayes adjusted likelihood ratios (LRs) and the line representing the estimated probability of ACS, sequentially updated from left to right after assessment of each risk factor. RESULTS: Two patients had similar low risk for ACS but quite different risk profiles according to the bar-line charts. Such differences in risk profiles could not be detected from the estimated ACS risk alone. The bar-line charts also highlighted important but counteracted risk factors in cases where the overall LR was less informative (close to one). CONCLUSION: The proposed graphical technique conveys additional information from the logistic model that can be important for correct diagnosis and classification of patients and appropriate medical management.
  •  
40.
  • Björkelund, Anders, et al. (author)
  • Machine learning compared with rule‐in/rule‐out algorithms and logistic regression to predict acute myocardial infarction based on troponin T concentrations
  • 2021
  • In: Journal of the American College of Emergency Physicians Open. - Hoboken, NJ : John Wiley & Sons. - 2688-1152. ; 2:2
  • Journal article (peer-reviewed)abstract
    • Objective: Computerized decision-support tools may improve diagnosis of acute myocardial infarction (AMI) among patients presenting with chest pain at the emergency department (ED). The primary aim was to assess the predictive accuracy of machine learning algorithms based on paired high-sensitivity cardiac troponin T (hs-cTnT) concentrations with varying sampling times, age, and sex in order to rule in or out AMI. Methods: In this register-based, cross-sectional diagnostic study conducted retrospectively based on 5695 chest pain patients at 2 hospitals in Sweden 2013–2014, we used 5-fold cross-validation 200 times in order to compare the performance of an artificial neural network (ANN) with European guideline-recommended 0/1- and 0/3-hour algorithms for hs-cTnT and with logistic regression without interaction terms. Primary outcome was the size of the intermediate risk group where AMI could not be ruled in or out, while holding the sensitivity (rule-out) and specificity (rule-in) constant across models. Results: ANN and logistic regression had similar (95%) areas under the receiver operating characteristic curve. In patients (n = 4171) where the timing requirements (0/1 or 0/3 hour) for the sampling were met, using the ANN led to a relative decrease of 9.2% (95% confidence interval 4.4% to 13.8%; from 24.5% to 22.2% of all tested patients) in the size of the intermediate group compared to the recommended algorithms. By contrast, using logistic regression did not substantially decrease the size of the intermediate group. Conclusion: Machine learning algorithms allow for flexibility in sampling and have the potential to improve risk assessment among chest pain patients at the ED.
  •  
41.
  • Björkqvist, Maria, et al. (author)
  • Evaluation of a previously suggested plasma biomarker panel to identify Alzheimer's disease.
  • 2012
  • In: PLoS ONE. - : Public Library of Science (PLoS). - 1932-6203. ; 7:1
  • Journal article (peer-reviewed)abstract
    • There is an urgent need for biomarkers in plasma to identify Alzheimer's disease (AD). It has previously been shown that a signature of 18 plasma proteins can identify AD during pre-dementia and dementia stages (Ray et al., Nature Medicine, 2007). We quantified the same 18 proteins in plasma from 174 controls, 142 patients with AD, and 88 patients with other dementias. Only three of these proteins (EGF, PDGF-BB and MIP-1δ) differed significantly in plasma between controls and AD. The 18 proteins could classify patients with AD from controls with low diagnostic precision (area under the ROC curve was 63%). Moreover, they could not distinguish AD from other dementias. In conclusion, independent validation of results is important in explorative biomarker studies.
  •  
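A univariate screen of the kind summarized above ("only three of 18 proteins differed significantly") can be sketched with a per-protein Mann–Whitney U test and a Bonferroni correction. The data below are synthetic, and the effect size, group sizes and test choice are illustrative assumptions, not the study's pipeline.

```python
# Hedged sketch: per-"protein" two-group comparison with Bonferroni correction.
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(0)
n_proteins, n_per_group = 18, 100
controls = rng.normal(0.0, 1.0, size=(n_per_group, n_proteins))
patients = rng.normal(0.0, 1.0, size=(n_per_group, n_proteins))
patients[:, :3] += 0.8            # only the first 3 "proteins" truly differ

alpha = 0.05 / n_proteins         # Bonferroni-adjusted threshold
pvals = np.array([
    mannwhitneyu(controls[:, j], patients[:, j], alternative="two-sided").pvalue
    for j in range(n_proteins)
])
significant = np.flatnonzero(pvals < alpha)
```

With 18 parallel tests, tightening the per-test threshold keeps the family-wise false-positive rate near 5%, which matters when claiming that only a few markers replicate.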
42.
  • Blankenbecler, R, et al. (author)
  • Matching protein structures with fuzzy alignments
  • 2003
  • In: Proceedings of the National Academy of Sciences. - : Proceedings of the National Academy of Sciences. - 1091-6490 .- 0027-8424. ; 100:21, s. 11936-11940
  • Journal article (peer-reviewed) abstract
    • Unraveling functional and ancestral relationships between proteins, as well as structure-prediction procedures, requires powerful protein-alignment methods. A structure-alignment method is presented where the problem is mapped onto a cost function containing both fuzzy (Potts) assignment variables and atomic coordinates. The cost function is minimized by using an iterative scheme, where at each step mean field theory methods at finite "temperatures" are used for determining fuzzy assignment variables, followed by exact translation and rotation of atomic coordinates weighted by their corresponding fuzzy assignment variables. The approach performs very well when compared with other methods, requires modest central processing unit consumption, and is robust with respect to the choice of iteration parameters for a wide range of proteins.
  •  
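The alternation between a fuzzy (Potts-style) assignment at a finite temperature and an exact rigid refit can be illustrated on plain point sets. This is a simplified sketch of the general soft-assign/annealing idea, not the paper's protein-alignment code; the cooling schedule and point data below are made up.

```python
# Hedged sketch: anneal fuzzy assignments while rigidly refitting one point set.
import numpy as np

def soft_assign(A, B, T):
    """Fuzzy assignment matrix: row-wise softmax of -dist^2 / T."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    W = np.exp(-(d2 - d2.min(axis=1, keepdims=True)) / T)
    return W / W.sum(axis=1, keepdims=True)

def rigid_fit(A, targets):
    """Optimal rotation + translation of A onto targets (Kabsch algorithm)."""
    ca, ct = A.mean(0), targets.mean(0)
    U, _, Vt = np.linalg.svd((A - ca).T @ (targets - ct))
    d = np.sign(np.linalg.det(U @ Vt))
    R = U @ np.diag([1.0] * (A.shape[1] - 1) + [d]) @ Vt
    return (A - ca) @ R + ct

def fuzzy_align(A, B, T0=1.0, cooling=0.8, steps=25):
    """Alternate fuzzy assignment and rigid refit while lowering temperature T."""
    T = T0
    for _ in range(steps):
        W = soft_assign(A, B, T)
        A = rigid_fit(A, W @ B)      # each point's target is its fuzzy match
        T *= cooling
    return A, soft_assign(A, B, T)

def nn_rms(A, B):
    """RMS nearest-neighbor distance from A to B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return float(np.sqrt(d2.min(axis=1).mean()))

rng = np.random.default_rng(1)
B = rng.normal(size=(20, 3))
th = 0.3                                            # rotate B by 0.3 rad about z
R_true = np.array([[np.cos(th), -np.sin(th), 0.0],
                   [np.sin(th),  np.cos(th), 0.0],
                   [0.0, 0.0, 1.0]])
A0 = B @ R_true + np.array([0.5, -0.3, 0.2])        # rotated + translated copy

aligned, W = fuzzy_align(A0, B)
rms_before, rms_after = nn_rms(A0, B), nn_rms(aligned, B)
```

At high temperature the assignments are nearly uniform and the fit mostly matches centroids; as T is lowered the assignments sharpen toward a one-to-one matching, which is the annealing idea behind the mean-field scheme.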
43.
  • Blennow, Mattias, 1980-, et al. (author)
  • A combined study of source, detector and matter non-standard neutrino interactions at DUNE
  • 2016
  • In: Journal of High Energy Physics (JHEP). - : Springer. - 1126-6708 .- 1029-8479. ; 2016:8
  • Journal article (peer-reviewed) abstract
    • We simultaneously investigate source, detector and matter non-standard neutrino interactions at the proposed DUNE experiment. Our analysis is performed using a Markov Chain Monte Carlo exploring the full parameter space. We find that the sensitivity of DUNE to the standard oscillation parameters is worsened due to the presence of non-standard neutrino interactions. In particular, there are degenerate solutions in the leptonic mixing angle θ23 and the Dirac CP-violating phase δ. We also compute the expected sensitivities at DUNE to the non-standard interaction parameters. We find that the sensitivities to the matter non-standard interaction parameters are substantially stronger than the current bounds (up to a factor of about 15). Furthermore, we discuss correlations between the source/detector and matter non-standard interaction parameters and find a degenerate solution in θ23. Finally, we explore the effect of statistics on our results.
  •  
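The Markov Chain Monte Carlo exploration mentioned above rests on the generic Metropolis accept/reject rule, which a toy one-parameter example makes concrete. The target below is a unit Gaussian posterior; nothing here is the experiment's analysis code.

```python
# Hedged sketch: random-walk Metropolis sampling of a 1D log-posterior.
import numpy as np

def metropolis(log_post, x0, n_steps, step=0.5, seed=0):
    """Propose x' = x + N(0, step^2); accept with probability min(1, p'/p)."""
    rng = np.random.default_rng(seed)
    chain, x, lp = [], x0, log_post(x0)
    for _ in range(n_steps):
        x_new = x + step * rng.normal()
        lp_new = log_post(x_new)
        if np.log(rng.uniform()) < lp_new - lp:     # accept/reject step
            x, lp = x_new, lp_new
        chain.append(x)
    return np.array(chain)

# Toy target: Gaussian posterior centered at 1 with unit width.
chain = metropolis(lambda x: -0.5 * (x - 1.0) ** 2, x0=0.0, n_steps=20000)
post_mean = float(chain[5000:].mean())              # discard burn-in
```

In the multi-parameter oscillation fits, the same mechanism explores the full parameter space at once, which is what exposes degenerate solutions such as those in θ23 and δ.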
44.
  • Blennow, Mattias, et al. (author)
  • Approximative two-flavor framework for neutrino oscillations with nonstandard interactions
  • 2008
  • In: Physical Review D. - : The American Physical Society. - 1550-7998. ; 78:9, s. 093002-1-093002-9
  • Journal article (peer-reviewed) abstract
    • In this paper, we develop approximative two-flavor neutrino oscillation formulas including subleading nonstandard interaction effects. In particular, we investigate the limit in which the small mass-squared difference approaches zero. The approximate formulas are tested against numerical simulations in order to determine their accuracy; they should be most useful in the GeV energy region, where most upcoming neutrino oscillation experiments will operate. Analytical formulas of this kind are important for interpreting the physics behind the degeneracies between standard and nonstandard parameters.
  •  
45.
  • Blennow, Mattias, et al. (author)
  • Damping signatures in future neutrino oscillation experiments
  • 2005
  • In: Journal of High Energy Physics (JHEP). - : IOP Publishing. - 1126-6708 .- 1029-8479. ; 2005:06, s. 049-
  • Journal article (peer-reviewed) abstract
    • We discuss the phenomenology of damping signatures in the neutrino oscillation probabilities, where either the oscillating terms or the probabilities can be damped. This approach is a possibility for tests of damping effects in future neutrino oscillation experiments, where we mainly focus on reactor and long-baseline experiments. We extensively motivate different damping signatures due to small corrections by neutrino decoherence, neutrino decay, oscillations into sterile neutrinos, or other mechanisms, and classify these signatures according to their energy (spectral) dependencies. We demonstrate, using the example of short-baseline reactor experiments, that damping can severely alter the interpretation of results, e.g., it could fake a value of sin²(2θ13) smaller than the one provided by Nature. In addition, we demonstrate how a neutrino factory could constrain different damping models, with emphasis on how these different models could be distinguished, i.e., how easily the actual type of effect could be identified. We find that the damping models cluster in different categories, which can be much better distinguished from each other than models within the same cluster.
  •  
46.
  • Blennow, Mattias, et al. (author)
  • Day-night effect in solar neutrino oscillations with three flavors
  • 2004
  • In: Physical Review D. Particles and fields. - : The American Physical Society. - 0556-2821 .- 1089-4918. ; 69:7, s. 073006-1-073006-9
  • Journal article (peer-reviewed) abstract
    • We investigate the effects of a nonzero leptonic mixing angle θ13 on the solar neutrino day-night asymmetry. Using a constant matter density profile for the Earth and well-motivated approximations, we derive analytical expressions for the νe survival probabilities for solar neutrinos arriving directly at the detector and for solar neutrinos which have passed through the Earth. Furthermore, we numerically study the effects of a nonzero θ13 on the day-night asymmetry at detectors and find that they are small. Finally, we show that if the uncertainties in the parameters θ12 and Δm² as well as the uncertainty in the day-night asymmetry itself were much smaller than they are today, this effect could, in principle, be used to determine θ13.
  •  
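For a constant matter density, the two-flavor νe survival probability has a closed form via the matter-modified mixing angle — the kind of expression the constant-density approximation above leads to. A numerical sketch with illustrative parameter values (θ is taken below 45°, so cos 2θ ≥ 0):

```python
# Hedged sketch: two-flavor MSW survival probability in constant-density matter.
# Units: energy in GeV, baseline in km, mass splitting in eV^2; rho_Ye is the
# electron-weighted density in g/cm^3 (illustrative values throughout).
import math

def pee_constant_matter(E_GeV, L_km, dm2_eV2, sin2_2theta, rho_Ye=0.0):
    """P(nu_e -> nu_e) for a constant matter density profile."""
    cos_2t = math.sqrt(max(0.0, 1.0 - sin2_2theta))   # assumes theta < 45 deg
    A = 7.63e-5 * rho_Ye * E_GeV                      # matter potential, eV^2
    denom = sin2_2theta + (cos_2t - A / dm2_eV2) ** 2
    s2_2tm = sin2_2theta / denom                      # effective sin^2(2 theta_m)
    dm2_m = dm2_eV2 * math.sqrt(denom)                # effective mass splitting
    phase = 1.267 * dm2_m * L_km / E_GeV              # oscillation phase
    return 1.0 - s2_2tm * math.sin(phase) ** 2

p_vac = pee_constant_matter(0.01, 180.0, 7.5e-5, 0.85)            # vacuum limit
p_mat = pee_constant_matter(0.01, 180.0, 7.5e-5, 0.85, rho_Ye=2.7)
```

Setting rho_Ye = 0 recovers the vacuum formula, so the same function covers both the "day" (direct) and constant-density "night" (Earth-crossing) cases in this approximation.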
47.
  • Blennow, Mattias, et al. (author)
  • Effective neutrino mixing and oscillations in dense matter
  • 2005
  • In: Physics Letters B. - : Elsevier B.V.. - 0370-2693 .- 1873-2445. ; 609:3-4, s. 330-338
  • Journal article (peer-reviewed) abstract
    • We investigate the effective case of two-flavor neutrino oscillations in infinitely dense matter by using a perturbative approach. We begin by briefly summarizing the conditions for the three-flavor neutrino oscillation probabilities to take on the same form as the corresponding two-flavor probabilities. Then, we proceed with the infinitely dense matter calculations. Finally, we study the validity of the approximation of infinitely dense matter when the effective matter potential is large but not infinite; this is done by using both analytic and numeric methods.
  •  
48.
  • Blennow, Mattias, et al. (author)
  • Effects of non-standard interactions in the MINOS experiment
  • 2008
  • In: Physics Letters B. - : Elsevier B.V.. - 0370-2693 .- 1873-2445. ; 660:5, s. 522-528
  • Journal article (peer-reviewed) abstract
    • We investigate the effects of non-standard interactions on the determination of the neutrino oscillation parameters Δm²31, θ23, and θ13 in the MINOS experiment. We show that adding non-standard interactions to the analysis leads to an extension of the allowed parameter space to larger values of Δm²31 and smaller θ23, and basically removes all predictability for θ13. In addition, we discuss the sensitivities of the MINOS experiment alone to the non-standard interaction parameters. In particular, we examine the degeneracy between θ13 and the non-standard interaction parameter εeτ. We find that this degeneracy is responsible for the removal of the θ13 predictability and that the possible bound on |εeτ| is competitive with direct bounds only if a more stringent external bound on θ13 is applied.
  •  
49.
  • Blennow, Mattias, et al. (author)
  • Exact series solution to the two flavor neutrino oscillation problem in matter
  • 2004
  • In: Journal of Mathematical Physics. - : American Institute of Physics. - 0022-2488 .- 1089-7658. ; 45:11, s. 4053-4063
  • Journal article (peer-reviewed) abstract
    • In this paper, we present a real nonlinear differential equation for the two flavor neutrino oscillation problem in matter with an arbitrary density profile. We also present an exact series solution to this nonlinear differential equation. In addition, we investigate numerically the convergence of this solution for different matter density profiles such as constant and linear profiles as well as the Preliminary Reference Earth Model describing the Earth's matter density profile. Finally, we discuss other methods used for solving the neutrino flavor evolution problem.
  •  
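A series solution for an arbitrary density profile is typically checked against direct numerical integration of the two-flavor flavor-evolution equation, which can be sketched as follows (units eV², GeV, km; parameter values illustrative and not taken from the paper):

```python
# Hedged sketch: integrate i d(psi)/dx = H(x) psi for two flavors in matter.
import numpy as np
from scipy.integrate import solve_ivp

def evolve(E, L, dm2, theta, profile):
    """P(nu_e -> nu_e) after L km; profile(x) returns rho*Ye in g/cm^3."""
    s, c = np.sin(2 * theta), np.cos(2 * theta)
    def rhs(x, psi):
        A = 7.63e-5 * profile(x) * E                 # matter potential, eV^2
        k = 1.267 / E                                 # (Delta m^2 / 4E) -> 1/km
        H = k * np.array([[-dm2 * c + A, dm2 * s],
                          [dm2 * s, dm2 * c - A]])
        y = psi[:2] + 1j * psi[2:]                    # rebuild complex amplitudes
        dy = -1j * H @ y
        return np.concatenate([dy.real, dy.imag])
    y0 = np.array([1.0, 0.0, 0.0, 0.0])              # start as a pure nu_e
    sol = solve_ivp(rhs, (0.0, L), y0, rtol=1e-8, atol=1e-10)
    yf = sol.y[:2, -1] + 1j * sol.y[2:, -1]
    return float(abs(yf[0]) ** 2)

p_vac = evolve(1.0, 500.0, 2.5e-3, 0.6, lambda x: 0.0)   # vacuum cross-check
```

For a nontrivial profile one passes any callable, e.g. a constant crust density or an interpolation of the Preliminary Reference Earth Model; the vacuum case has a closed form and serves as the correctness check here.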
50.
  • Blennow, Mattias, et al. (author)
  • Exploring source and detector non-standard neutrino interactions at ESSνSB
  • 2015
  • In: Journal of High Energy Physics (JHEP). - 1126-6708 .- 1029-8479. ; :9
  • Journal article (peer-reviewed) abstract
    • We investigate source and detector non-standard neutrino interactions at the proposed ESSνSB experiment. We analyze the effect of non-standard physics at the probability level, the event-rate level and by a full computation of the ESSνSB setup. We find that the precision measurement of the leptonic mixing angle θ23 at ESSνSB is robust in the presence of non-standard interactions, whereas that of the leptonic CP-violating phase δ is worsened at most by a factor of two. We compute sensitivities to all the relevant source and detector non-standard interaction parameters and find that the sensitivities to the parameters ε^s_μe and ε^d_μe are comparable to the existing limits in a realistic scenario, while they improve by a factor of two in an optimistic scenario. Finally, we show that the absence of a near detector compromises the sensitivity of ESSνSB to non-standard interactions.
  •  