SwePub
Search the SwePub database


Hit list for the search "WFRF:(Schön Thomas B. Professor 1977 ) "


  • Results 1-10 of 98
1.
  • Gustafsson, Stefan, et al. (author)
  • Development and validation of deep learning ECG-based prediction of myocardial infarction in emergency department patients
  • 2022
  • In: Scientific Reports. - : Springer Nature. - 2045-2322. ; 12
  • Journal article (peer-reviewed), abstract:
    • Myocardial infarction diagnosis is a common challenge in the emergency department. In managed settings, deep learning-based models and especially convolutional deep models have shown promise in electrocardiogram (ECG) classification, but there is a lack of high-performing models for the diagnosis of myocardial infarction in real-world scenarios. We aimed to train and validate a deep learning model using ECGs to predict myocardial infarction in real-world emergency department patients. We studied emergency department patients in the Stockholm region between 2007 and 2016 that had an ECG obtained because of their presenting complaint. We developed a deep neural network based on convolutional layers similar to a residual network. Inputs to the model were ECG tracing, age, and sex; and outputs were the probabilities of three mutually exclusive classes: non-ST-elevation myocardial infarction (NSTEMI), ST-elevation myocardial infarction (STEMI), and control status, as registered in the SWEDEHEART and other registries. We used an ensemble of five models. Among 492,226 ECGs in 214,250 patients, 5,416 were recorded with an NSTEMI, 1,818 a STEMI, and 485,207 without a myocardial infarction. In a random test set, our model could discriminate STEMIs/NSTEMIs from controls with a C-statistic of 0.991/0.832 and had a Brier score of 0.001/0.008. The model obtained a similar performance in a temporally separated test set of the study sample, and achieved a C-statistic of 0.985 and a Brier score of 0.002 in discriminating STEMIs from controls in an external test set. We developed and validated a deep learning model with excellent performance in discriminating between control, STEMI, and NSTEMI on the presenting ECG of a real-world sample of the important population of all-comers to the emergency department. Hence, deep learning models for ECG decision support could be valuable in the emergency department.
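Entry 1 describes a residual convolutional network that takes the ECG tracing together with age and sex as inputs and outputs probabilities for the three classes control, NSTEMI and STEMI. The sketch below is only an illustration of that kind of architecture in PyTorch; the layer sizes, kernel widths, two-block depth and 4000-sample input length are assumptions, not the authors' implementation.

    # Illustrative sketch only (not the published model): a small residual 1-D CNN
    # mapping a 12-lead ECG plus age and sex to logits for {control, NSTEMI, STEMI}.
    import torch
    import torch.nn as nn

    class ResBlock1d(nn.Module):
        def __init__(self, channels):
            super().__init__()
            self.conv1 = nn.Conv1d(channels, channels, kernel_size=15, padding=7)
            self.conv2 = nn.Conv1d(channels, channels, kernel_size=15, padding=7)
            self.bn1, self.bn2 = nn.BatchNorm1d(channels), nn.BatchNorm1d(channels)
            self.relu = nn.ReLU()

        def forward(self, x):
            h = self.relu(self.bn1(self.conv1(x)))
            return self.relu(self.bn2(self.conv2(h)) + x)   # residual connection

    class EcgClassifier(nn.Module):
        def __init__(self, leads=12, channels=64, n_classes=3):
            super().__init__()
            self.stem = nn.Conv1d(leads, channels, kernel_size=15, padding=7)
            self.blocks = nn.Sequential(ResBlock1d(channels), ResBlock1d(channels))
            self.pool = nn.AdaptiveAvgPool1d(1)
            self.head = nn.Linear(channels + 2, n_classes)  # +2 for age and sex

        def forward(self, ecg, age, sex):
            h = self.pool(self.blocks(self.stem(ecg))).squeeze(-1)
            h = torch.cat([h, age.unsqueeze(-1), sex.unsqueeze(-1)], dim=-1)
            return self.head(h)

    # Toy usage: a batch of 4 ECGs with 12 leads and 4000 samples each.
    model = EcgClassifier()
    logits = model(torch.randn(4, 12, 4000), torch.rand(4) * 90,
                   torch.randint(0, 2, (4,)).float())
    probs = torch.softmax(logits, dim=-1)   # mutually exclusive class probabilities
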
2.
  •  
3.
  • Andersson, Carl (author)
  • Deep probabilistic models for sequential and hierarchical data
  • 2022
  • Doctoral thesis (other academic/artistic), abstract:
    • Consider the problem where we want a computer program capable of recognizing a pedestrian on the road. This could be employed in a car to automatically apply the brakes to avoid an accident. Writing such a program is immensely difficult, but what if we could instead use examples and let the program learn what characterizes a pedestrian from the examples. Machine learning can be described as the process of teaching a model (computer program) to predict something (the presence of a pedestrian) with the help of data (examples) instead of through explicit programming. This thesis focuses on a specific method in machine learning, called deep learning. This method can arguably be seen as solely responsible for the recent upswing of machine learning in academia as well as in society at large. However, deep learning requires, by human standards, a huge amount of data to perform well, which can be a limiting factor. In this thesis we describe different approaches to reduce the amount of data that is needed by encoding some of our prior knowledge about the problem into the model. To this end we focus on sequential and hierarchical data, such as speech and written language. Representing sequential output is in general difficult due to the complexity of the output space. Here, we make use of a probabilistic approach focusing on sequential models in combination with a deep learning structure called the variational autoencoder. This is applied to a range of different problem settings, from system identification to speech modeling. The results come in three parts. The first contribution focuses on applications of deep learning to typical system identification problems, the intersection between the two areas, and how they can benefit from each other. The second contribution is on hierarchical data, where we promote a multiscale variational autoencoder inspired by image modeling. The final contribution is on verification of probabilistic models, in particular how to evaluate the validity of a probabilistic output, also known as calibration.
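The thesis in entry 3 builds on the variational autoencoder. Below is a minimal, generic VAE sketch in PyTorch for reference; the dimensions, the Gaussian likelihood and the fully connected layers are illustrative assumptions and do not reproduce the sequential or multiscale models of the thesis.

    # Minimal generic VAE sketch (illustrative only): Gaussian latent variable,
    # reparameterization trick, and the evidence lower bound (ELBO).
    import torch
    import torch.nn as nn

    class VAE(nn.Module):
        def __init__(self, x_dim=28 * 28, z_dim=16, h_dim=128):
            super().__init__()
            self.enc = nn.Sequential(nn.Linear(x_dim, h_dim), nn.ReLU(),
                                     nn.Linear(h_dim, 2 * z_dim))  # mean and log-variance
            self.dec = nn.Sequential(nn.Linear(z_dim, h_dim), nn.ReLU(),
                                     nn.Linear(h_dim, x_dim))

        def elbo(self, x):
            mu, logvar = self.enc(x).chunk(2, dim=-1)
            z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization
            x_hat = self.dec(z)
            rec = -((x - x_hat) ** 2).sum(-1)                        # Gaussian log-likelihood up to a constant
            kl = 0.5 * (mu ** 2 + logvar.exp() - 1.0 - logvar).sum(-1)  # KL(q(z|x) || N(0, I))
            return (rec - kl).mean()

    vae = VAE()
    loss = -vae.elbo(torch.rand(32, 28 * 28))   # maximize the ELBO = minimize its negation
    loss.backward()
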
4.
  • Gustafsson, Fredrik K., 1993- (author)
  • Towards Accurate and Reliable Deep Regression Models
  • 2023
  • Doctoral thesis (other academic/artistic), abstract:
    • Regression is a fundamental machine learning task with many important applications within computer vision and other domains. In general, it entails predicting continuous targets from given inputs. Deep learning has become the dominant paradigm within machine learning in recent years, and a wide variety of different techniques have been employed to solve regression problems using deep models. There is, however, no broad consensus on how deep regression models should be constructed for best possible accuracy, or how the uncertainty in their predictions should be represented and estimated. These open questions are studied in this thesis, aiming to help take steps towards an ultimate goal of developing deep regression models which are both accurate and reliable enough for real-world deployment within medical applications and other safety-critical domains. The first main contribution of the thesis is the formulation and development of energy-based probabilistic regression. This is a general and conceptually simple regression framework with a clear probabilistic interpretation, using energy-based models to represent the true conditional target distribution. The framework is applied to a number of regression problems and demonstrates particularly strong performance for 2D bounding box regression, improving the state-of-the-art when applied to the task of visual tracking. The second main contribution is a critical evaluation of various uncertainty estimation methods. A general introduction to the problem of estimating the predictive uncertainty of deep models is first provided, together with an extensive comparison of the two popular methods ensembling and MC-dropout. A number of regression uncertainty estimation methods are then further evaluated, specifically examining their reliability under real-world distribution shifts. This evaluation uncovers important limitations of current methods and serves as a challenge to the research community. It demonstrates that more work is required in order to develop truly reliable uncertainty estimation methods for regression.
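Entry 4 compares ensembling and MC-dropout for estimating predictive uncertainty in regression. The sketch below shows the ensembling side only, under the common assumption of Gaussian predictive distributions; the toy one-dimensional network and the ensemble size of five are illustrative choices, not the thesis code.

    # Illustrative sketch: an ensemble of Gaussian regressors whose predictions are
    # moment-matched into a single mean and variance (aleatoric + epistemic).
    import torch
    import torch.nn as nn

    class GaussianRegressor(nn.Module):
        def __init__(self, h=64):
            super().__init__()
            self.net = nn.Sequential(nn.Linear(1, h), nn.ReLU(), nn.Linear(h, 2))

        def forward(self, x):
            mu, log_var = self.net(x).chunk(2, dim=-1)
            return mu, log_var.exp()

    def ensemble_predict(models, x):
        """Moment-match the mixture of Gaussians produced by the ensemble."""
        mus, vars_ = zip(*(m(x) for m in models))
        mus, vars_ = torch.stack(mus), torch.stack(vars_)
        mean = mus.mean(0)
        # total variance = mean aleatoric variance + spread of the means (epistemic)
        var = vars_.mean(0) + mus.var(0, unbiased=False)
        return mean, var

    models = [GaussianRegressor() for _ in range(5)]   # each would be trained independently
    mean, var = ensemble_predict(models, torch.linspace(-1, 1, 10).unsqueeze(-1))
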
5.
  • Jidling, Carl, 1992- (author)
  • Tailoring Gaussian processes and large-scale optimisation
  • 2022
  • Doctoral thesis (other academic/artistic), abstract:
    • This thesis is centred around Gaussian processes and large-scale optimisation, where the main contributions are presented in the included papers. Provided access to linear constraints (e.g. equilibrium conditions), we propose a constructive procedure to design the covariance function in a Gaussian process. The constraints are thereby explicitly incorporated with guaranteed fulfilment. One such construction is successfully applied to strain field reconstruction, where the goal is to describe the interior of a deformed object. Furthermore, we analyse the Gaussian process as a tool for X-ray computed tomography, a field of high importance primarily due to its central role in medical treatments. This provides insightful interpretations of traditional reconstruction algorithms. Large-scale optimisation is considered in two different contexts. First, we consider a stochastic environment, for which we suggest a new method inspired by the quasi-Newton framework. Promising results are demonstrated on real-world benchmark problems. Secondly, we suggest an approach to solve an applied deterministic optimisation problem that arises within the design of electrical circuit boards. We reduce the memory requirements through a tailored algorithm, while also benefiting from other parts of the setting to ensure high computational efficiency. The final paper scrutinises a publication from the early phase of the COVID-19 pandemic, in which the aim was to assess the effectiveness of different governmental interventions. We show that minor modifications in the input data have an important impact on the results, and we argue that great caution is necessary when such models are used as a support for decision making.
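Entry 5 tailors Gaussian process covariance functions so that linear constraints are fulfilled by construction. As background, the numpy sketch below implements plain GP regression with a squared-exponential kernel; the hyperparameters and noise level are arbitrary, and the constraint-encoding constructions of the included papers are not reproduced here.

    # Plain Gaussian process regression (background sketch, not the thesis method).
    import numpy as np

    def rbf(x1, x2, ell=0.3, sf=1.0):
        d = x1[:, None] - x2[None, :]
        return sf**2 * np.exp(-0.5 * (d / ell) ** 2)   # squared-exponential kernel

    def gp_posterior(x_train, y_train, x_test, noise=0.1):
        K = rbf(x_train, x_train) + noise**2 * np.eye(len(x_train))
        Ks = rbf(x_train, x_test)
        Kss = rbf(x_test, x_test)
        mean = Ks.T @ np.linalg.solve(K, y_train)
        cov = Kss - Ks.T @ np.linalg.solve(K, Ks)
        return mean, cov

    x = np.linspace(0, 1, 20)
    y = np.sin(2 * np.pi * x) + 0.1 * np.random.randn(20)
    mean, cov = gp_posterior(x, y, np.linspace(0, 1, 100))
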
6.
  • Kudlicka, Jan (author)
  • Probabilistic Programming for Birth-Death Models of Evolution
  • 2021
  • Doctoral thesis (other academic/artistic), abstract:
    • Phylogenetic birth-death models constitute a family of generative models of evolution. In these models an evolutionary process starts with a single species at a certain time in the past, and the speciations—splitting one species into two descendant species—and extinctions are modeled as events of non-homogeneous Poisson processes. Different birth-death models admit different types of changes to the speciation and extinction rates. The result of an evolutionary process is a binary tree called a phylogenetic tree, or phylogeny, with the root representing the single species at the origin, internal nodes representing speciation events, and leaves representing currently living—extant—species (in the present time) and extinction events (in the past). Usually only a part of this tree, corresponding to the evolution of the extant species and their ancestors, is known via reconstruction from e.g. genomic sequences of these extant species. The task of our interest is to estimate the parameters of birth-death models given this reconstructed tree as the observation. While encoding the generative birth-death models as computer programs is easy and straightforward, developing and implementing bespoke inference algorithms is not. This complicates prototyping, development, and deployment of new birth-death models. Probabilistic programming is a new approach in which the generative models are encoded as computer programs in languages that include support for random variables, conditioning on the observed data, as well as automatic inference. This thesis is based on a collection of papers in which we demonstrate how to use probabilistic programming to solve the above-mentioned task of parameter inference in birth-death models. We show how these models can be implemented as simple programs in probabilistic programming languages. Our contribution also includes general improvements of the automatic inference methods.
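Entry 6 treats phylogenetic birth-death models, in which speciation and extinction are events of Poisson processes. The sketch below forward-simulates a constant-rate birth-death process, tracking only the number of living species, as a minimal picture of the generative model; the rates and time horizon are made-up values, and this is plain Python rather than a probabilistic program with inference.

    # Illustrative forward simulation of a constant-rate birth-death process.
    import random

    def simulate_birth_death(lam=1.0, mu=0.5, t_max=5.0):
        """Gillespie-style simulation; returns the number of extant species at t_max."""
        t, n = 0.0, 1                        # start with a single species
        while n > 0:
            rate = n * (lam + mu)            # total event rate with n lineages
            t += random.expovariate(rate)    # waiting time to the next event
            if t >= t_max:
                break
            # speciation with probability lam/(lam+mu), extinction otherwise
            n += 1 if random.random() < lam / (lam + mu) else -1
        return n

    extant = [simulate_birth_death() for _ in range(1000)]
    print(sum(e == 0 for e in extant) / len(extant), "of simulated clades went extinct")
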
7.
  • Lima, Emilly M., et al. (author)
  • Deep neural network-estimated electrocardiographic age as a mortality predictor
  • 2021
  • In: Nature Communications. - : Springer Nature. - 2041-1723. ; 12:1
  • Journal article (peer-reviewed), abstract:
    • The electrocardiogram (ECG) is the most commonly used exam for the screening and evaluation of cardiovascular diseases. Here we propose that the age predicted by artificial intelligence (AI) from the raw ECG (ECG-age) can be a measure of cardiovascular health and provide prognostic information. A deep neural network is trained to predict a patient's age from the 12-lead ECG in the CODE study cohort (n = 1,558,415 patients). On a 15% hold-out split, patients with ECG-age more than 8 years greater than the chronological age have a higher mortality rate (hazard ratio (HR) 1.79, p < 0.001), whereas those with ECG-age more than 8 years smaller have a lower mortality rate (HR 0.78, p < 0.001). Similar results are obtained in the external cohorts ELSA-Brasil (n = 14,236) and SaMi-Trop (n = 1,631). Moreover, even for apparently normal ECGs, the predicted ECG-age gap from the chronological age remains a statistically significant risk predictor. These results show that the AI-enabled analysis of the ECG can add prognostic information.
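Entry 7 defines ECG-age as the age a deep network predicts from the ECG and relates the gap to chronological age to mortality. The numpy sketch below only illustrates the bookkeeping around that 8-year gap on synthetic, made-up data; the actual study uses a trained network and Cox proportional-hazards models, neither of which is shown.

    # Toy illustration on synthetic data: group patients by the ECG-age gap and
    # compare crude mortality (the paper reports hazard ratios from Cox models).
    import numpy as np

    rng = np.random.default_rng(0)
    chron_age = rng.uniform(30, 80, size=10_000)
    ecg_age = chron_age + rng.normal(0, 6, size=10_000)   # stand-in for the network's prediction
    died = rng.random(10_000) < 0.05                       # stand-in outcome indicator

    gap = ecg_age - chron_age
    for label, mask in [("ECG-age more than 8 years above chronological age", gap > 8),
                        ("ECG-age more than 8 years below chronological age", gap < -8)]:
        print(label, "- crude mortality:", float(died[mask].mean()))
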
8.
  • Murray, Lawrence, et al. (author)
  • Delayed sampling and automatic Rao-Blackwellization of probabilistic programs
  • 2018
  • In: Proceedings of the 21st International Conference on Artificial Intelligence and Statistics (AISTATS), Lanzarote, Spain, April 2018. - : PMLR.
  • Conference paper (peer-reviewed), abstract:
    • We introduce a dynamic mechanism for the solution of analytically tractable substructure in probabilistic programs, using conjugate priors and affine transformations to reduce variance in Monte Carlo estimators. For inference with Sequential Monte Carlo, this automatically yields improvements such as locally optimal proposals and Rao–Blackwellization. The mechanism maintains a directed graph alongside the running program that evolves dynamically as operations are triggered upon it. Nodes of the graph represent random variables, edges the analytically tractable relationships between them. Random variables remain in the graph for as long as possible, to be sampled only when they are used by the program in a way that cannot be resolved analytically. In the meantime, they are conditioned on as many observations as possible. We demonstrate the mechanism with a few pedagogical examples, as well as a linear-nonlinear state-space model with simulated data, and an epidemiological model with real data of a dengue outbreak in Micronesia. In all cases one or more variables are automatically marginalized out to significantly reduce variance in estimates of the marginal likelihood, in the final case facilitating a random-weight or pseudo-marginal-type importance sampler for parameter estimation. We have implemented the approach in Anglican and a new probabilistic programming language called Birch.
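Entry 8 exploits conjugacy to marginalize random variables analytically instead of sampling them, which is what reduces Monte Carlo variance. The toy below contrasts the two on a Beta-Bernoulli model; it is a plain-Python illustration of the underlying idea, not the delayed-sampling graph machinery or the Birch implementation.

    # Toy Beta-Bernoulli example of the variance-reduction idea behind delayed sampling.
    import math
    import random

    data = [1, 1, 0, 1, 0, 1, 1, 1]           # observed coin flips
    a, b = 1.0, 1.0                           # Beta prior parameters

    # Plain Monte Carlo: sample the rate, average the likelihood (high variance).
    def mc_marginal(n_samples=10_000):
        total = 0.0
        for _ in range(n_samples):
            p = random.betavariate(a, b)
            total += math.prod(p if x else 1 - p for x in data)
        return total / n_samples

    # Conjugate ("Rao-Blackwellized") answer: integrate the rate out analytically,
    # giving the exact marginal likelihood B(a+k, b+n-k) / B(a, b).
    def exact_marginal():
        k, n = sum(data), len(data)
        return math.exp(math.lgamma(a + b) - math.lgamma(a) - math.lgamma(b)
                        + math.lgamma(a + k) + math.lgamma(b + n - k) - math.lgamma(a + b + n))

    print(mc_marginal(), exact_marginal())    # the estimate fluctuates, the exact value does not
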
9.
  • Osama, Muhammad (author)
  • Robust machine learning methods
  • 2022
  • Doctoral thesis (other academic/artistic), abstract:
    • We are surrounded by data in our daily lives. The rent of our houses, the amount of electricity units consumed, the prices of different products at a supermarket, the daily temperature, our medicine prescriptions, and our internet search history are all different forms of data. Data can be used in a wide range of applications. For example, one can use data to predict product prices in the future; to predict tomorrow's temperature; to recommend videos; or to suggest better prescriptions. However, in order to do the above, one is required to learn a model from data. A model is a mathematical description of how the phenomenon we are interested in behaves, e.g. how does the temperature vary? Is it periodic? What kinds of patterns does it have? Machine learning is about this process of learning models from data by building on disciplines such as statistics and optimization. Learning models comes with many different challenges. Some challenges are related to how flexible the model is, some are related to the size of data, some are related to computational efficiency, etc. One of the challenges is that of data outliers. For instance, due to a war in a country, exports could stop and there could be a sudden spike in the prices of different products. This sudden jump in prices is an outlier or corruption relative to the normal situation and must be accounted for when learning the model. Another challenge could be that data is collected in one situation but the model is to be used in another situation. For example, one might have data on vaccine trials where the participants were mostly old people, but one might want to make a decision on whether to use the vaccine or not for the whole population that contains people of all age groups. So one must also account for this difference when learning models, because the conclusions drawn may not be valid for the young people in the population. Yet another challenge could arise when data is collected from different sources or contexts. For example, a shopkeeper might have data on sales of paracetamol when there was flu and when there was no flu, and she might want to decide how much paracetamol to stock for the next month. In this situation, it is difficult to know whether there will be a flu next month or not, and so deciding on how much to stock is a challenge. This thesis tries to address these and other similar challenges. In paper I, we address the challenge of data corruption, i.e., learning models in a robust way when some fraction of the data is corrupted. In paper II, we apply the methodology of paper I to the problem of localization in wireless networks. Paper III addresses the challenge of estimating the causal effect between an exposure and an outcome variable from spatially collected data (e.g. whether increasing the number of police personnel in an area reduces the number of crimes there). Paper IV addresses the challenge of learning improved decision policies, e.g. which treatment to assign to which patient given past data on treatment assignments. In paper V, we look at the challenge of learning models when data is acquired from different contexts and the future context is unknown. In paper VI, we address the challenge of predicting count data across space, e.g. the number of crimes in an area, and quantify its uncertainty. In paper VII, we address the challenge of learning models when data points arrive in a streaming fashion, i.e., point by point. The proposed method enables online training and also yields some robustness properties.
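Entry 9 is about learning models when a fraction of the data is corrupted. As a generic illustration of that challenge (not the estimators proposed in the thesis), the numpy sketch below fits a line by gradient descent with a squared loss and with the standard Huber loss, whose clipped gradient limits the influence of a few corrupted points.

    # Generic robustness illustration: ordinary least squares vs. the Huber loss.
    import numpy as np

    rng = np.random.default_rng(1)
    x = np.linspace(0, 1, 50)
    y = 2.0 * x + 0.1 * rng.normal(size=50)
    y[:5] += 5.0                               # a handful of corrupted observations

    X = np.column_stack([x, np.ones_like(x)])  # slope and intercept

    def fit(loss_grad, iters=500, lr=0.5):
        w = np.zeros(2)
        for _ in range(iters):
            r = X @ w - y                      # residuals
            w -= lr * X.T @ loss_grad(r) / len(y)
        return w

    squared_grad = lambda r: r                          # gradient of 0.5 * r**2
    huber_grad = lambda r, d=0.5: np.clip(r, -d, d)     # gradient of the Huber loss
    print("least squares slope:", fit(squared_grad)[0])
    print("huber slope:        ", fit(huber_grad)[0])   # closer to the true slope of 2
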
10.
  • Sundström, Johan, Professor, 1971-, et al. (author)
  • Machine Learning in Risk Prediction
  • 2020
  • In: Hypertension. - 0194-911X .- 1524-4563. ; 75:5, pp. 1165-1166
  • Journal article (other academic/artistic)
Type of publication
conference paper (40)
journal article (39)
doctoral thesis (7)
other publication (6)
licentiate thesis (3)
research review (2)
book (1)
Type of content
peer-reviewed (77)
other academic/artistic (21)
Author/editor
Schön, Thomas B., Pr ... (97)
Gustafsson, Fredrik ... (13)
Horta Ribeiro, Antôn ... (10)
Gedon, Daniel, 1994- (10)
Wahlström, Niklas, 1 ... (9)
Wills, Adrian G. (9)
Sjölund, Jens, Biträ ... (7)
Danelljan, Martin (7)
Lindsten, Fredrik (6)
Zhao, Zheng (6)
Umenberger, Jack (6)
Ribeiro, Antônio H. (6)
Luo, Ziwei (5)
Wills, Adrian (5)
Jidling, Carl (5)
Kudlicka, Jan (5)
Hjalmarsson, Håkan, ... (4)
Zachariah, Dave (4)
Andersson, Carl (4)
Ferizbegovic, Mina (4)
Sjölund, Jens (4)
Gunnarsson, Niklas (4)
Murray, Lawrence M. (4)
Sundström, Johan, Pr ... (3)
Tiels, Koen (3)
Ninness, Brett (3)
Courts, Jarrad (3)
Hendriks, Johannes (3)
Hendriks, Johannes N ... (3)
Murray, Lawrence (3)
Lindholm, Andreas (3)
Lampa, Erik, 1977- (2)
Ljung, Lennart (2)
Lindsten, Fredrik, 1 ... (2)
Gustafsson, Stefan (2)
Ronquist, Fredrik (2)
Ribeiro, Antonio Lui ... (2)
Mattsson, Per (2)
Borgström, Johannes (2)
Bijl, Hildo (2)
Taghia, Jalil (2)
Särkkä, Simo (2)
Giatti, Luana (2)
Kimstrand, Peter (2)
Aguirre, Luis A. (2)
Paixao, Gabriela M. ... (2)
Horta Ribeiro, Manoe ... (2)
Gomes, Paulo R. (2)
Oliveira, Derick M. (2)
Meira Jr, Wagner (2)
University
Uppsala universitet (96)
Linköpings universitet (13)
Kungliga Tekniska Högskolan (7)
Lunds universitet (1)
Karolinska Institutet (1)
Language
English (98)
Research subject (UKÄ/SCB)
Engineering and technology (59)
Natural sciences (41)
Medical and health sciences (8)
Social sciences (2)
