SwePub
Search the SwePub database


Result list for search "WFRF:(Schön Thomas B. Professor 1977 ) "


  • Result 1-25 of 99
1.
  • Gustafsson, Stefan, et al. (author)
  • Development and validation of deep learning ECG-based prediction of myocardial infarction in emergency department patients
  • 2022
  • In: Scientific Reports. - : Springer Nature. - 2045-2322. ; 12
  • Journal article (peer-reviewed)abstract
    • Myocardial infarction diagnosis is a common challenge in the emergency department. In managed settings, deep learning-based models and especially convolutional deep models have shown promise in electrocardiogram (ECG) classification, but there is a lack of high-performing models for the diagnosis of myocardial infarction in real-world scenarios. We aimed to train and validate a deep learning model using ECGs to predict myocardial infarction in real-world emergency department patients. We studied emergency department patients in the Stockholm region between 2007 and 2016 who had an ECG obtained because of their presenting complaint. We developed a deep neural network based on convolutional layers similar to a residual network. Inputs to the model were ECG tracing, age, and sex; and outputs were the probabilities of three mutually exclusive classes: non-ST-elevation myocardial infarction (NSTEMI), ST-elevation myocardial infarction (STEMI), and control status, as registered in the SWEDEHEART and other registries. We used an ensemble of five models. Among 492,226 ECGs in 214,250 patients, 5,416 were recorded with an NSTEMI, 1,818 a STEMI, and 485,207 without a myocardial infarction. In a random test set, our model could discriminate STEMIs/NSTEMIs from controls with a C-statistic of 0.991/0.832 and had a Brier score of 0.001/0.008. The model obtained a similar performance in a temporally separated test set of the study sample, and achieved a C-statistic of 0.985 and a Brier score of 0.002 in discriminating STEMIs from controls in an external test set. We developed and validated a deep learning model with excellent performance in discriminating between control, STEMI, and NSTEMI on the presenting ECG of a real-world sample of the important population of all-comers to the emergency department. Hence, deep learning models for ECG decision support could be valuable in the emergency department.
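The evaluation above rests on two generic ingredients: averaging class probabilities over the five ensemble members, and the C-statistic (pairwise concordance between scores and labels). A minimal sketch of those two steps with made-up toy predictions, not the study's data or model:

```python
import numpy as np

def ensemble_probs(member_probs):
    """Average class probabilities over ensemble members (axis 0)."""
    return np.mean(member_probs, axis=0)

def c_statistic(scores, labels):
    """C-statistic (AUC): fraction of (positive, negative) pairs
    ranked concordantly, counting ties as 1/2."""
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    diff = pos[:, None] - neg[None, :]
    return (np.sum(diff > 0) + 0.5 * np.sum(diff == 0)) / diff.size

# Toy example: five "models", four samples, three classes
member_probs = np.full((5, 4, 3), 1.0 / 3)
probs = ensemble_probs(member_probs)   # uniform members stay uniform

scores = np.array([0.9, 0.8, 0.4, 0.2])
labels = np.array([1, 1, 0, 0])
print(c_statistic(scores, labels))     # perfect ranking -> 1.0
```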
2.
3.
  • Andersson, Carl (author)
  • Deep probabilistic models for sequential and hierarchical data
  • 2022
  • Doctoral thesis (other academic/artistic)abstract
    • Consider the problem where we want a computer program capable of recognizing a pedestrian on the road. This could be employed in a car to automatically apply the brakes to avoid an accident. Writing such a program is immensely difficult, but what if we could instead use examples and let the program learn what characterizes a pedestrian from the examples? Machine learning can be described as the process of teaching a model (computer program) to predict something (the presence of a pedestrian) with the help of data (examples) instead of through explicit programming. This thesis focuses on a specific method in machine learning, called deep learning. This method can arguably be seen as solely responsible for the recent upswing of machine learning in academia as well as in society at large. However, deep learning requires, by human standards, a huge amount of data to perform well, which can be a limiting factor. In this thesis we describe different approaches to reduce the amount of data that is needed by encoding some of our prior knowledge about the problem into the model. To this end we focus on sequential and hierarchical data, such as speech and written language. Representing sequential output is in general difficult due to the complexity of the output space. Here, we make use of a probabilistic approach focusing on sequential models in combination with a deep learning structure called the variational autoencoder. This is applied to a range of different problem settings, from system identification to speech modeling. The results come in three parts. The first contribution focuses on applications of deep learning to typical system identification problems, the intersection between the two areas, and how they can benefit from each other. The second contribution is on hierarchical data, where we promote a multiscale variational autoencoder inspired by image modeling. The final contribution is on verification of probabilistic models, in particular how to evaluate the validity of a probabilistic output, also known as calibration.
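The variational autoencoder mentioned in the abstract is trained by maximizing an evidence lower bound whose regularizer, for a diagonal Gaussian posterior and a standard normal prior, has a closed form. A generic textbook sketch of that term and the reparameterization trick (not code from the thesis):

```python
import numpy as np

def kl_diag_gaussian(mu, log_var):
    """KL( N(mu, diag(exp(log_var))) || N(0, I) ), summed over dimensions."""
    return 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var)

def reparameterize(mu, log_var, eps):
    """Reparameterization trick: z = mu + sigma * eps, with eps ~ N(0, I)."""
    return mu + np.exp(0.5 * log_var) * eps

# When the approximate posterior equals the prior, the KL term vanishes
print(kl_diag_gaussian(np.zeros(8), np.zeros(8)))  # -> 0.0
```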
4.
  • Gustafsson, Fredrik K., 1993- (author)
  • Towards Accurate and Reliable Deep Regression Models
  • 2023
  • Doctoral thesis (other academic/artistic)abstract
    • Regression is a fundamental machine learning task with many important applications within computer vision and other domains. In general, it entails predicting continuous targets from given inputs. Deep learning has become the dominant paradigm within machine learning in recent years, and a wide variety of different techniques have been employed to solve regression problems using deep models. There is however no broad consensus on how deep regression models should be constructed for best possible accuracy, or how the uncertainty in their predictions should be represented and estimated. These open questions are studied in this thesis, aiming to help take steps towards an ultimate goal of developing deep regression models which are both accurate and reliable enough for real-world deployment within medical applications and other safety-critical domains. The first main contribution of the thesis is the formulation and development of energy-based probabilistic regression. This is a general and conceptually simple regression framework with a clear probabilistic interpretation, using energy-based models to represent the true conditional target distribution. The framework is applied to a number of regression problems and demonstrates particularly strong performance for 2D bounding box regression, improving the state of the art when applied to the task of visual tracking. The second main contribution is a critical evaluation of various uncertainty estimation methods. A general introduction to the problem of estimating the predictive uncertainty of deep models is first provided, together with an extensive comparison of the two popular methods, ensembling and MC-dropout. A number of regression uncertainty estimation methods are then further evaluated, specifically examining their reliability under real-world distribution shifts. This evaluation uncovers important limitations of current methods and serves as a challenge to the research community. It demonstrates that more work is required in order to develop truly reliable uncertainty estimation methods for regression.
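Ensembling, one of the two uncertainty estimation methods compared in the thesis, combines per-member Gaussian predictions by the law of total variance, so member disagreement appears as extra (epistemic) variance. A toy sketch under those standard assumptions, with made-up numbers:

```python
import numpy as np

def ensemble_predictive(means, variances):
    """Combine M Gaussian members N(mu_m, var_m) into a single
    mixture mean/variance via the law of total variance."""
    mu = np.mean(means, axis=0)
    var = np.mean(variances + means**2, axis=0) - mu**2
    return mu, var

means = np.array([1.0, 3.0])        # two members disagree on the mean
variances = np.array([0.5, 0.5])    # equal aleatoric estimates
mu, var = ensemble_predictive(means, variances)
print(mu, var)  # 2.0 and 0.5 + 1.0 = 1.5: disagreement adds variance
```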
5.
  • Jidling, Carl, 1992- (author)
  • Tailoring Gaussian processes and large-scale optimisation
  • 2022
  • Doctoral thesis (other academic/artistic)abstract
    • This thesis is centred around Gaussian processes and large-scale optimisation, where the main contributions are presented in the included papers. Provided access to linear constraints (e.g. equilibrium conditions), we propose a constructive procedure to design the covariance function in a Gaussian process. The constraints are thereby explicitly incorporated with guaranteed fulfilment. One such construction is successfully applied to strain field reconstruction, where the goal is to describe the interior of a deformed object. Furthermore, we analyse the Gaussian process as a tool for X-ray computed tomography, a field of high importance primarily due to its central role in medical treatments. This provides insightful interpretations of traditional reconstruction algorithms. Large-scale optimisation is considered in two different contexts. First, we consider a stochastic environment, for which we suggest a new method inspired by the quasi-Newton framework. Promising results are demonstrated on real-world benchmark problems. Secondly, we suggest an approach to solve an applied deterministic optimisation problem that arises within the design of electrical circuit boards. We reduce the memory requirements through a tailored algorithm, while also benefiting from other parts of the setting to ensure high computational efficiency. The final paper scrutinises a publication from the early phase of the COVID-19 pandemic, in which the aim was to assess the effectiveness of different governmental interventions. We show that minor modifications in the input data have an important impact on the results, and we argue that great caution is necessary when such models are used as support for decision making.
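The covariance construction for linear constraints is specific to the included papers, but it builds on standard Gaussian process regression, which fits in a few lines. A sketch of that unconstrained baseline, with a squared-exponential kernel and toy data assumed here for illustration:

```python
import numpy as np

def se_kernel(x1, x2, ell=1.0, sf=1.0):
    """Squared-exponential covariance k(x, x') = sf^2 exp(-(x-x')^2 / (2 ell^2))."""
    d = x1[:, None] - x2[None, :]
    return sf**2 * np.exp(-0.5 * (d / ell) ** 2)

def gp_posterior_mean(x_train, y_train, x_test, noise=1e-2):
    """Posterior mean k(x*, X) (K + noise I)^{-1} y."""
    K = se_kernel(x_train, x_train) + noise * np.eye(len(x_train))
    Ks = se_kernel(x_test, x_train)
    return Ks @ np.linalg.solve(K, y_train)

x = np.array([0.0, 1.0, 2.0])
y = np.array([0.0, 1.0, 0.0])
m = gp_posterior_mean(x, y, np.array([1.0]))
print(m)  # close to the observed value 1.0 at x = 1
```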
6.
  • Kudlicka, Jan (author)
  • Probabilistic Programming for Birth-Death Models of Evolution
  • 2021
  • Doctoral thesis (other academic/artistic)abstract
    • Phylogenetic birth-death models constitute a family of generative models of evolution. In these models an evolutionary process starts with a single species at a certain time in the past, and the speciations (splitting one species into two descendant species) and extinctions are modeled as events of non-homogeneous Poisson processes. Different birth-death models admit different types of changes to the speciation and extinction rates. The result of an evolutionary process is a binary tree called a phylogenetic tree, or phylogeny, with the root representing the single species at the origin, internal nodes representing speciation events, and leaves representing extant species (in the present) and extinction events (in the past). Usually only a part of this tree, corresponding to the evolution of the extant species and their ancestors, is known via reconstruction from e.g. genomic sequences of these extant species. The task of our interest is to estimate the parameters of birth-death models given this reconstructed tree as the observation. While encoding the generative birth-death models as computer programs is easy and straightforward, developing and implementing bespoke inference algorithms are not. This complicates prototyping, development, and deployment of new birth-death models. Probabilistic programming is a new approach in which the generative models are encoded as computer programs in languages that include support for random variables, conditioning on the observed data, as well as automatic inference. This thesis is based on a collection of papers in which we demonstrate how to use probabilistic programming to solve the above-mentioned task of parameter inference in birth-death models. We show how these models can be implemented as simple programs in probabilistic programming languages. Our contribution also includes general improvements of the automatic inference methods.
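The generative side of a constant-rate birth-death model really is as simple to encode as the abstract claims. A forward simulator might look like the sketch below (toy rates, plain Python rather than any particular probabilistic programming language):

```python
import random

def simulate_birth_death(lam, mu, t_max, seed=0):
    """Simulate a constant-rate birth-death process forward in time.
    Events occur as a Poisson process with total rate n * (lam + mu);
    each event is a speciation with probability lam / (lam + mu).
    Returns the number of extant lineages at t_max (0 if extinct)."""
    rng = random.Random(seed)
    n, t = 1, 0.0
    while n > 0:
        rate = n * (lam + mu)
        t += rng.expovariate(rate)      # waiting time to the next event
        if t >= t_max:
            return n
        if rng.random() < lam / (lam + mu):
            n += 1                      # speciation: one lineage becomes two
        else:
            n -= 1                      # extinction
    return 0

print(simulate_birth_death(1.0, 0.5, 3.0))  # extant count; 0 if the clade died out
```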
7.
  • Lima, Emilly M., et al. (author)
  • Deep neural network-estimated electrocardiographic age as a mortality predictor
  • 2021
  • In: Nature Communications. - : Springer Nature. - 2041-1723. ; 12:1
  • Journal article (peer-reviewed)abstract
    • The electrocardiogram (ECG) is the most commonly used exam for the screening and evaluation of cardiovascular diseases. Here we propose that the age predicted by artificial intelligence (AI) from the raw ECG (ECG-age) can be a measure of cardiovascular health and provide prognostic information. A deep neural network is trained to predict a patient's age from the 12-lead ECG in the CODE study cohort (n = 1,558,415 patients). On a 15% hold-out split, patients with ECG-age more than 8 years greater than the chronological age have a higher mortality rate (hazard ratio (HR) 1.79, p < 0.001), whereas those with ECG-age more than 8 years smaller have a lower mortality rate (HR 0.78, p < 0.001). Similar results are obtained in the external cohorts ELSA-Brasil (n = 14,236) and SaMi-Trop (n = 1,631). Moreover, even for apparently normal ECGs, the predicted ECG-age gap from the chronological age remains a statistically significant risk predictor. These results show that AI-enabled analysis of the ECG can add prognostic information.
8.
  • Murray, Lawrence, et al. (author)
  • Delayed sampling and automatic Rao-Blackwellization of probabilistic programs
  • 2018
  • In: Proceedings of the 21st International Conference on Artificial Intelligence and Statistics (AISTATS), Lanzarote, Spain, April, 2018. - : PMLR.
  • Conference paper (peer-reviewed)abstract
    • We introduce a dynamic mechanism for the solution of analytically-tractable substructure in probabilistic programs, using conjugate priors and affine transformations to reduce variance in Monte Carlo estimators. For inference with Sequential Monte Carlo, this automatically yields improvements such as locally optimal proposals and Rao-Blackwellization. The mechanism maintains a directed graph alongside the running program that evolves dynamically as operations are triggered upon it. Nodes of the graph represent random variables, edges the analytically-tractable relationships between them. Random variables remain in the graph for as long as possible, to be sampled only when they are used by the program in a way that cannot be resolved analytically. In the meantime, they are conditioned on as many observations as possible. We demonstrate the mechanism with a few pedagogical examples, as well as a linear-nonlinear state-space model with simulated data, and an epidemiological model with real data of a dengue outbreak in Micronesia. In all cases one or more variables are automatically marginalized out to significantly reduce variance in estimates of the marginal likelihood, in the final case facilitating a random-weight or pseudo-marginal-type importance sampler for parameter estimation. We have implemented the approach in Anglican and a new probabilistic programming language called Birch.
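Delayed sampling exploits conjugate relationships so that variables are conditioned on observations analytically instead of being sampled. The smallest instance of such a relationship is the Normal-Normal pair; a sketch of that closed-form update (illustrative only, not Birch or Anglican code):

```python
def normal_normal_update(prior_mean, prior_var, obs, obs_var):
    """Conjugate update: posterior of mu ~ N(prior_mean, prior_var)
    after observing y ~ N(mu, obs_var); a precision-weighted average."""
    post_prec = 1.0 / prior_var + 1.0 / obs_var
    post_var = 1.0 / post_prec
    post_mean = post_var * (prior_mean / prior_var + obs / obs_var)
    return post_mean, post_var

# Two observations absorbed analytically, one at a time
m, v = normal_normal_update(0.0, 1.0, 2.0, 1.0)   # -> (1.0, 0.5)
m, v = normal_normal_update(m, v, 2.0, 1.0)       # -> (4/3, 1/3)
print(m, v)
```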
9.
  • Osama, Muhammad (author)
  • Robust machine learning methods
  • 2022
  • Doctoral thesis (other academic/artistic)abstract
    • We are surrounded by data in our daily lives. The rent of our houses, the amount of electricity consumed, the prices of different products at a supermarket, the daily temperature, our medicine prescriptions, and our internet search history are all different forms of data. Data can be used in a wide range of applications. For example, one can use data to predict product prices in the future; to predict tomorrow's temperature; to recommend videos; or to suggest better prescriptions. However, in order to do the above, one is required to learn a model from data. A model is a mathematical description of how the phenomenon we are interested in behaves, e.g., how does the temperature vary? Is it periodic? What kinds of patterns does it have? Machine learning is about this process of learning models from data by building on disciplines such as statistics and optimization. Learning models comes with many different challenges. Some challenges are related to how flexible the model is, some are related to the size of data, some are related to computational efficiency, etc. One of the challenges is that of data outliers. For instance, due to war in a country exports could stop and there could be a sudden spike in prices of different products. This sudden jump in prices is an outlier or corruption to the normal situation and must be accounted for when learning the model. Another challenge could be that data is collected in one situation but the model is to be used in another situation. For example, one might have data on vaccine trials where the participants were mostly old people. But one might want to make a decision on whether to use the vaccine or not for the whole population that contains people of all age groups. So one must also account for this difference when learning models, because the conclusion drawn may not be valid for the young people in the population. Yet another challenge could arise when data is collected from different sources or contexts. For example, a shopkeeper might have data on sales of paracetamol when there was flu and when there was no flu, and she might want to decide how much paracetamol to stock for the next month. In this situation, it is difficult to know whether there will be a flu next month or not, and so deciding on how much to stock is a challenge. This thesis tries to address these and other similar challenges. In paper I, we address the challenge of data corruption, i.e., learning models in a robust way when some fraction of the data is corrupted. In paper II, we apply the methodology of paper I to the problem of localization in wireless networks. Paper III addresses the challenge of estimating the causal effect between an exposure and an outcome variable from spatially collected data (e.g., whether increasing the number of police personnel in an area reduces the number of crimes there). Paper IV addresses the challenge of learning improved decision policies, e.g., which treatment to assign to which patient, given past data on treatment assignments. In paper V, we look at the challenge of learning models when data is acquired from different contexts and the future context is unknown. In paper VI, we address the challenge of predicting count data across space, e.g., the number of crimes in an area, and quantifying its uncertainty. In paper VII, we address the challenge of learning models when data points arrive in a streaming fashion, i.e., point by point. The proposed method enables online training and also yields some robustness properties.
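As a concrete illustration of the data-corruption challenge described above, a standard robust estimator (Huber-weighted iteratively reweighted least squares, which is a textbook method and not the thesis's own) down-weights outliers that would wreck a plain average:

```python
import numpy as np

def huber_mean(x, delta=1.0, iters=50):
    """Robust location estimate via iteratively reweighted least squares
    with Huber weights: residuals beyond `delta` are down-weighted."""
    m = np.median(x)
    for _ in range(iters):
        r = x - m
        w = np.where(np.abs(r) <= delta, 1.0, delta / np.abs(r))
        m = np.sum(w * x) / np.sum(w)
    return m

data = np.array([0.9, 1.0, 1.1, 1.0, 50.0])  # one gross outlier
print(np.mean(data))     # the plain mean is pulled up to 10.8
print(huber_mean(data))  # the robust estimate stays near 1
```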
10.
  • Sundström, Johan, Professor, 1971-, et al. (author)
  • Machine Learning in Risk Prediction
  • 2020
  • In: Hypertension. - 0194-911X .- 1524-4563. ; 75:5, s. 1165-1166
  • Journal article (other academic/artistic)
11.
  • Ancuti, Codruta O., et al. (author)
  • NTIRE 2023 HR NonHomogeneous Dehazing Challenge Report
  • 2023
  • In: 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW). - Vancouver : Institute of Electrical and Electronics Engineers (IEEE).
  • Conference paper (peer-reviewed)abstract
    • This study assesses the outcomes of the NTIRE 2023 Challenge on Non-Homogeneous Dehazing, wherein novel techniques were proposed and evaluated on a new image dataset called HD-NH-HAZE. The HD-NH-HAZE dataset contains 50 high-resolution pairs of real-life outdoor images featuring nonhomogeneous hazy images and corresponding haze-free images of the same scene. The nonhomogeneous haze was simulated using a professional setup that replicated real-world conditions of hazy scenarios. The competition had 246 participants, 17 teams competed in the final testing phase, and the proposed solutions demonstrated the cutting edge in image dehazing technology.
12.
  • Andersson, Carl (author)
  • Deep learning applied to system identification : A probabilistic approach
  • 2019
  • Licentiate thesis (other academic/artistic)abstract
    • Machine learning has been applied to sequential data for a long time in the field of system identification. As deep learning grew in the late 2000s, machine learning was again applied to sequential data, but from a new angle, not utilizing much of the knowledge from system identification. Likewise, the field of system identification has yet to adopt many of the recent advancements in deep learning. This thesis is a response to that. It introduces the field of deep learning in a probabilistic machine learning setting for problems known from system identification. Our goal for sequential modeling within the scope of this thesis is to obtain a model with good predictive and/or generative capabilities. The motivation behind this is that such a model can then be used in other areas, such as control or reinforcement learning. The model could also be used as a stepping stone for machine learning problems or for pure recreational purposes. Paper I and Paper II focus on how to apply deep learning to common system identification problems. Paper I introduces a novel way of regularizing the impulse response estimator for a system. In contrast to previous methods using Gaussian processes for this regularization, we propose to parameterize the regularization with a neural network and train this using a large dataset. Paper II introduces deep learning and many of its core concepts for a system identification audience. In the paper we also evaluate several contemporary deep learning models on standard system identification benchmarks. Paper III is the odd fish in the collection in that it focuses on the mathematical formulation and evaluation of calibration in classification, especially for deep neural networks. The paper proposes a new formalized notation for calibration and some novel ideas for evaluation of calibration. It also provides some experimental results on calibration evaluation.
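Paper III concerns the evaluation of calibration. A common calibration metric (generic, not necessarily the paper's own notation) is the expected calibration error, which bins predictions by confidence and compares mean confidence against accuracy in each bin:

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """ECE: bin predictions by confidence and average the gap
    |accuracy - mean confidence|, weighted by the fraction of samples per bin."""
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            gap = abs(correct[mask].mean() - confidences[mask].mean())
            ece += mask.mean() * gap
    return ece

conf = np.array([0.95, 0.95, 0.95, 0.95])
hit = np.array([1.0, 1.0, 1.0, 0.0])  # 75% accuracy at 95% confidence
print(expected_calibration_error(conf, hit))  # ~0.2: overconfident
```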
13.
  • Andersson, Carl R., et al. (author)
  • Learning deep autoregressive models for hierarchical data
  • 2021
  • In: IFAC PapersOnLine. - : Elsevier. - 2405-8963. ; , s. 529-534
  • Conference paper (peer-reviewed)abstract
    • We propose a model for hierarchical structured data as an extension to the stochastic temporal convolutional network. The proposed model combines an autoregressive model with a hierarchical variational autoencoder and downsampling to achieve superior computational complexity. We evaluate the proposed model on two different types of sequential data: speech and handwritten text. The results are promising with the proposed model achieving state-of-the-art performance.
14.
  • Andersson Naesseth, Christian, et al. (author)
  • High-Dimensional Filtering Using Nested Sequential Monte Carlo
  • 2019
  • In: IEEE Transactions on Signal Processing. - : IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC. - 1053-587X .- 1941-0476. ; 67:16, s. 4177-4188
  • Journal article (peer-reviewed)abstract
    • Sequential Monte Carlo (SMC) methods comprise one of the most successful approaches to approximate Bayesian filtering. However, SMC without a good proposal distribution can perform poorly, in particular in high dimensions. We propose nested sequential Monte Carlo, a methodology that generalizes the SMC framework by requiring only approximate, properly weighted, samples from the SMC proposal distribution, while still resulting in a correct SMC algorithm. This way, we can compute an "exact approximation" of, e.g., the locally optimal proposal, and extend the class of models for which we can perform efficient inference using SMC. We show improved accuracy over other state-of-the-art methods on several spatio-temporal state-space models.
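Nested SMC generalizes the proposal mechanism of a standard particle filter. The bootstrap baseline it improves on, sketched for a toy linear-Gaussian model (the model and parameters here are assumed for illustration, not from the paper):

```python
import numpy as np

def bootstrap_pf(y, n_particles=500, q=1.0, r=1.0, seed=0):
    """Bootstrap particle filter for x_t = 0.9 x_{t-1} + N(0, q),
    y_t = x_t + N(0, r). Proposal = dynamics; weights = observation
    likelihood; multinomial resampling each step. Returns filtered means."""
    rng = np.random.default_rng(seed)
    x = rng.normal(0.0, 1.0, n_particles)
    means = []
    for yt in y:
        x = 0.9 * x + rng.normal(0.0, np.sqrt(q), n_particles)  # propagate
        logw = -0.5 * (yt - x) ** 2 / r                         # weight
        w = np.exp(logw - logw.max())
        w /= w.sum()
        means.append(np.sum(w * x))
        x = rng.choice(x, size=n_particles, p=w)                # resample
    return np.array(means)

obs = np.array([0.0, 0.5, 1.0, 1.5])
print(bootstrap_pf(obs))  # filtered means track the observations
```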
15.
  • Baumann, Dominik, et al. (author)
  • On the trade-off between event-based and periodic state estimation under bandwidth constraints
  • 2023
  • In: IFAC-PapersOnLine. - : Elsevier. - 2405-8963. ; 56:2, s. 5275-5280
  • Journal article (peer-reviewed)abstract
    • Event-based methods carefully select when to transmit information to enable high-performance control and estimation over resource-constrained communication networks. However, they come at a cost. For instance, event-based communication induces a higher computational load and increases the complexity of the scheduling problem. Thus, in some cases, allocating available slots to agents periodically in circular order may be superior. In this article, we discuss, for a specific example, when the additional complexity of event-based methods is beneficial. We evaluate our analysis in a synthetic example and on 20 simulated cart-pole systems.
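The simplest event-based transmission rule, send-on-delta, conveys the trade-off discussed above: fewer transmissions at the price of extra trigger logic. A sketch of that rule only (the paper's scheduling setting is richer than this):

```python
def event_triggered_sends(measurements, threshold):
    """Send-on-delta: transmit only when the measurement deviates from
    the last transmitted value by more than `threshold`."""
    sent = [measurements[0]]           # always transmit the first sample
    last = measurements[0]
    for m in measurements[1:]:
        if abs(m - last) > threshold:
            sent.append(m)
            last = m
    return sent

signal = [0.0, 0.1, 0.2, 1.5, 1.6, 3.0]
print(event_triggered_sends(signal, 1.0))  # -> [0.0, 1.5, 3.0]
```

Only three of six samples are transmitted, at the cost of evaluating the trigger at every step.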
16.
  • Baumann, Dominik, Ph.D. 1991-, et al. (author)
  • Safe Reinforcement Learning in Uncertain Contexts
  • 2024
  • In: IEEE Transactions on robotics. - : IEEE. - 1552-3098 .- 1941-0468. ; 40, s. 1828-1841
  • Journal article (peer-reviewed)abstract
    • When deploying machine learning algorithms in the real world, guaranteeing safety is an essential asset. Existing safe learning approaches typically consider continuous variables, i.e., regression tasks. However, in practice, robotic systems are also subject to discrete, external environmental changes, e.g., having to carry objects of certain weights or operating on frozen, wet, or dry surfaces. Such influences can be modeled as discrete context variables. In the existing literature, such contexts are, if considered, mostly assumed to be known. In this work, we drop this assumption and show how we can perform safe learning when we cannot directly measure the context variables. To achieve this, we derive frequentist guarantees for multiclass classification, allowing us to estimate the current context from measurements. Furthermore, we propose an approach for identifying contexts through experiments. We discuss under which conditions we can retain theoretical guarantees and demonstrate the applicability of our algorithm on a Furuta pendulum with camera measurements of different weights that serve as contexts.
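The frequentist guarantees mentioned above bound how far empirical classification accuracy can sit from the truth. A generic example of such a bound (Hoeffding's inequality, which is a standard tool and not necessarily the paper's construction):

```python
import math

def hoeffding_interval(successes, n, delta=0.05):
    """Two-sided Hoeffding bound: with probability >= 1 - delta the true
    accuracy lies within eps = sqrt(log(2/delta) / (2 n)) of the
    empirical accuracy, clipped to [0, 1]."""
    p_hat = successes / n
    eps = math.sqrt(math.log(2.0 / delta) / (2.0 * n))
    return max(0.0, p_hat - eps), min(1.0, p_hat + eps)

lo, hi = hoeffding_interval(90, 100)
print(lo, hi)  # roughly (0.764, 1.0) at 95% confidence
```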
17.
  • Bijl, Hildo, et al. (author)
  • Optimal controller/observer gains of discounted-cost LQG systems
  • 2019
  • In: Automatica. - : Elsevier. - 0005-1098 .- 1873-2836. ; 101, s. 471-474
  • Journal article (peer-reviewed)abstract
    • The linear-quadratic-Gaussian (LQG) control paradigm is well known in the literature. The strategy of minimizing the cost function is available, both for the case where the state is known and where it is estimated through an observer. The situation is different when the cost function has an exponential discount factor, also known as a prescribed degree of stability. In this case, the optimal control strategy is only available when the state is known. This paper builds on that result, deriving an optimal control strategy when working with an estimated state. Expressions for the resulting optimal expected cost are also given.
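For intuition about the discounted cost, the scalar discrete-time analogue is convenient: discounting by gamma is equivalent to scaling the dynamics by sqrt(gamma), after which the usual Riccati iteration applies. A sketch with known state (the paper's continuous-time, estimated-state setting is harder than this):

```python
def dlqr_gain(a, b, q, r, gamma=1.0, iters=500):
    """Scalar infinite-horizon LQR gain via Riccati iteration for the
    discounted cost sum_t gamma^t (q x^2 + r u^2). Discounting is
    absorbed by scaling the dynamics by sqrt(gamma)."""
    a *= gamma ** 0.5
    b *= gamma ** 0.5
    p = q
    for _ in range(iters):
        p = q + a * a * p - (a * b * p) ** 2 / (r + b * b * p)
    return a * b * p / (r + b * b * p)   # optimal feedback u = -k x

k = dlqr_gain(1.0, 1.0, 1.0, 1.0)
print(k)  # ~0.618 for a = b = q = r = 1; closed loop |a - b k| < 1 is stable
```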
18.
19.
  • Bånkestad, Maria, et al. (author)
  • Variational Elliptical Processes
  • 2023
  • In: Transactions on Machine Learning Research. - 2835-8856.
  • Journal article (peer-reviewed)abstract
    • We present elliptical processes—a family of non-parametric probabilistic models that subsumes Gaussian processes and Student's t processes. This generalization includes a range of new heavy-tailed behaviors while retaining computational tractability. Elliptical processes are based on a representation of elliptical distributions as a continuous mixture of Gaussian distributions. We parameterize this mixture distribution as a spline normalizing flow, which we train using variational inference. The proposed form of the variational posterior enables a sparse variational elliptical process applicable to large-scale problems. We highlight advantages compared to Gaussian processes through regression and classification experiments. Elliptical processes can supersede Gaussian processes in several settings, including cases where the likelihood is non-Gaussian or when accurate tail modeling is essential.
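The scale-mixture representation underlying elliptical processes can be checked numerically in its best-known special case: a Student's t density is a continuous mixture of Gaussians with inverse-gamma mixing over the scale. A sketch of that check (midpoint-rule integration; parameters chosen for illustration):

```python
import math

def t_density(x, nu):
    """Closed-form Student's t density with nu degrees of freedom."""
    c = math.gamma((nu + 1) / 2) / (math.gamma(nu / 2) * math.sqrt(nu * math.pi))
    return c * (1 + x * x / nu) ** (-(nu + 1) / 2)

def t_density_as_mixture(x, nu, n=200000, smax=400.0):
    """Same density via the scale-mixture representation: integrate
    N(x | 0, s) against an inverse-gamma(nu/2, nu/2) density over the
    scale s, using the midpoint rule on [0, smax]."""
    a = b = nu / 2.0
    h = smax / n
    total = 0.0
    for i in range(n):
        s = (i + 0.5) * h
        ig = b ** a / math.gamma(a) * s ** (-a - 1) * math.exp(-b / s)
        gauss = math.exp(-x * x / (2 * s)) / math.sqrt(2 * math.pi * s)
        total += ig * gauss * h
    return total

print(t_density(1.0, 3.0), t_density_as_mixture(1.0, 3.0))  # agree to ~3 decimals
```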
20.
  • Carlsson, Håkan, et al. (author)
  • Quantifying the Uncertainty of the Relative Geometry in Inertial Sensors Arrays
  • 2021
  • In: IEEE Sensors Journal. - : Institute of Electrical and Electronics Engineers (IEEE). - 1530-437X .- 1558-1748. ; 21:17, s. 19362-19373
  • Journal article (peer-reviewed)abstract
    • We present an algorithm to estimate and quantify the uncertainty of the accelerometers' relative geometry in an inertial sensor array. We formulate the calibration problem as a Bayesian estimation problem and propose an algorithm that samples the posterior distribution of the accelerometer positions using Markov chain Monte Carlo. By identifying linear substructures of the measurement model, the unknown linear motion parameters are analytically marginalized, and the remaining non-linear motion parameters are numerically marginalized. The numerical marginalization occurs in a low-dimensional space where the gyroscopes give information about the motion. This combination of information from gyroscopes and analytical marginalization allows the user to make no assumptions about the motion before the calibration. It thus enables the user to estimate the relative geometry of the accelerometer positions by simply exposing the array to arbitrary twisting motion. We show that the calibration algorithm gives good results on both simulated and experimental data, despite sampling a high-dimensional space.
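The paper samples a posterior over accelerometer positions with Markov chain Monte Carlo. Stripped of the application, the core sampler is random-walk Metropolis; a generic one-dimensional sketch with a standard normal target assumed for illustration:

```python
import math
import random

def metropolis(log_target, x0, n, step=1.0, seed=0):
    """Random-walk Metropolis: propose x' = x + N(0, step^2) and accept
    with probability min(1, target(x') / target(x))."""
    rng = random.Random(seed)
    x, samples = x0, []
    for _ in range(n):
        prop = x + rng.gauss(0.0, step)
        if rng.random() < math.exp(min(0.0, log_target(prop) - log_target(x))):
            x = prop
        samples.append(x)
    return samples

# Target: standard normal (log density up to an additive constant)
draws = metropolis(lambda z: -0.5 * z * z, x0=3.0, n=20000)
print(sum(draws) / len(draws))  # sample mean, near 0
```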
21.
  • Conde, Marcus V., et al. (author)
  • Lens-to-Lens Bokeh Effect Transformation : NTIRE 2023 Challenge Report
  • 2023
  • In: 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops. - Vancouver : Institute of Electrical and Electronics Engineers (IEEE). ; , s. 1643-1659
  • Conference paper (other academic/artistic)abstract
    • We present the new Bokeh Effect Transformation Dataset (BETD), and review the proposed solutions for this novel task at the NTIRE 2023 Bokeh Effect Transformation Challenge. Recent advancements in mobile photography aim to reach the visual quality of full-frame cameras. Now, a goal in computational photography is to optimize the Bokeh effect itself, which is the aesthetic quality of the blur in out-of-focus areas of an image. Photographers create this aesthetic effect by exploiting the optical properties of the lens. The aim of this work is to design a neural network capable of converting the Bokeh effect of one lens to the effect of another lens without harming the sharp foreground regions in the image. For a given input image, knowing the target lens type, we render or transform the Bokeh effect according to the lens properties. We build the BETD using two full-frame Sony cameras and diverse lens setups. To the best of our knowledge, this is the first attempt to solve this novel task, and we provide the first BETD dataset and benchmark for it. The challenge had 99 registered participants. The submitted methods gauge the state of the art in Bokeh effect rendering and transformation.
22.
  • Courts, Jarrad, et al. (author)
  • Gaussian Variational State Estimation for Nonlinear State-Space Models
  • 2021
  • In: IEEE Transactions on Signal Processing. - : Institute of Electrical and Electronics Engineers (IEEE). - 1053-587X .- 1941-0476. ; 69, s. 5979-5993
  • Journal article (peer-reviewed)abstract
    • In this paper, the problem of state estimation, in the context of both filtering and smoothing, for nonlinear state-space models is considered. Due to the nonlinear nature of the models, the state estimation problem is generally intractable, as it involves integrals of general nonlinear functions and the filtered and smoothed state distributions lack closed-form solutions. As such, it is common to approximate the state estimation problem. In this paper, we develop an assumed Gaussian solution based on variational inference, which offers the key advantage of a flexible, but principled, mechanism for approximating the required distributions. Our main contribution lies in a new formulation of the state estimation problem as an optimisation problem, which can then be solved using standard optimisation routines that employ exact first- and second-order derivatives. The resulting state estimation approach involves a minimal number of assumptions and applies directly to nonlinear systems with both Gaussian and non-Gaussian probabilistic models. The performance of our approach is demonstrated on several examples: a challenging scalar system, a model of a simple robotic system, and a target tracking problem using a von Mises-Fisher distribution; it outperforms alternative assumed Gaussian approaches to state estimation.
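The intractability described above disappears in the linear-Gaussian special case, where the filtered distributions that the paper must approximate are available exactly via the Kalman filter. A scalar sketch of that baseline (toy model parameters assumed here):

```python
def kalman_filter(y, a=0.9, c=1.0, q=1.0, r=1.0, m0=0.0, p0=1.0):
    """Scalar Kalman filter for x_t = a x_{t-1} + N(0, q),
    y_t = c x_t + N(0, r): the closed-form Gaussian solution
    that nonlinear models lack. Returns the filtered means."""
    m, p, means = m0, p0, []
    for yt in y:
        m, p = a * m, a * a * p + q                    # predict
        k = p * c / (c * c * p + r)                    # Kalman gain
        m, p = m + k * (yt - c * m), (1 - k * c) * p   # update
        means.append(m)
    return means

print(kalman_filter([1.0, 1.0, 1.0]))  # means climb toward the constant observation
```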
23.
  • Courts, Jarrad, et al. (author)
  • Variational State and Parameter Estimation
  • 2021
  • In: IFAC PapersOnLine. - : Elsevier. - 2405-8963. ; , s. 732-737
  • Conference paper (peer-reviewed)abstract
    • This paper considers the problem of computing Bayesian estimates of both states and model parameters for nonlinear state-space models. Generally, this problem does not have a tractable solution and approximations must be utilised. In this work, a variational approach is used to provide an assumed density which approximates the desired, intractable, distribution. The approach is deterministic and results in an optimisation problem of a standard form. Due to the parametrisation selected for the assumed density, first- and second-order derivatives are readily available, which allows for efficient solutions. The proposed method is compared against state-of-the-art Hamiltonian Monte Carlo in two numerical examples.
  •  
24.
  • Courts, Jarrad, et al. (author)
  • Variational system identification for nonlinear state-space models
  • 2023
  • In: Automatica. - : Elsevier. - 0005-1098 .- 1873-2836. ; 147
  • Journal article (peer-reviewed)abstract
    • This paper considers parameter estimation for nonlinear state-space models, which is an important but challenging problem. We address this challenge by employing a variational inference (VI) approach, which is a principled method that has deep connections to maximum likelihood estimation. This VI approach ultimately provides estimates of the model as solutions to an optimisation problem, which is deterministic, tractable and can be solved using standard optimisation tools. A specialisation of this approach for systems with additive Gaussian noise is also detailed. The proposed method is examined numerically on a range of simulated and real examples focusing on the robustness to parameter initialisation; additionally, favourable comparisons are performed against state-of-the-art alternatives.
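The abstract's point that variational inference has "deep connections to maximum likelihood estimation" can be made concrete on a toy conjugate model, where the ELBO is a tight lower bound on log p(y | θ). The sketch below is an illustration under simplified assumptions, not the paper's method: latent x ~ N(θ, 1), observation y | x ~ N(x, r), and we jointly maximise the ELBO over the model parameter θ and the variational parameters (m, log s); since y ~ N(θ, 1 + r) marginally, the maximum-likelihood answer θ* = y is known in advance.

```python
import math

def fit_joint(y, r, steps=4000, lr=0.05):
    """Jointly maximise the ELBO over model parameter theta and
    variational parameters (m, log s) for q(x) = N(m, s^2)."""
    theta, m, log_s = 0.0, 0.0, 0.0
    for _ in range(steps):
        s = math.exp(log_s)
        grad_theta = m - theta                        # d ELBO / d theta
        grad_m = (y - m) / r - (m - theta)            # d ELBO / d m
        grad_log_s = 1.0 - s * s * (1.0 / r + 1.0)    # d ELBO / d log s
        theta += lr * grad_theta
        m += lr * grad_m
        log_s += lr * grad_log_s
    return theta, m, math.exp(log_s)

# Because the model is conjugate, the bound is tight at the optimum and the
# variational estimate of theta coincides with the maximum-likelihood one.
theta, m, s = fit_joint(y=2.0, r=0.5)
```

The paper replaces this toy setup with general nonlinear dynamics and a standard optimisation routine with exact derivatives, but the structure, one deterministic optimisation over model and variational parameters together, is the same.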
  •  
25.
  • Dahlin, Johan, 1986-, et al. (author)
  • Getting started with particle Metropolis-Hastings for inference in nonlinear dynamical models
  • 2019
  • In: Journal of Statistical Software. - Alexandria, VA, United States : American Statistical Association. - 1548-7660. ; 88:CN2, s. 1-41
  • Journal article (peer-reviewed)abstract
    • This tutorial provides a gentle introduction to the particle Metropolis-Hastings (PMH) algorithm for parameter inference in nonlinear state-space models together with a software implementation in the statistical programming language R. We employ a step-by-step approach to develop an implementation of the PMH algorithm (and the particle filter within) together with the reader. The final implementation is also available as the pmhtutorial package in the CRAN repository. Throughout the tutorial, we provide some intuition as to how the algorithm operates and discuss some solutions to problems that might occur in practice. To illustrate the use of PMH, we consider parameter inference in a linear Gaussian state-space model with synthetic data and a nonlinear stochastic volatility model with real-world data.
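The tutorial's reference implementation is in R; the sketch below is a minimal Python rendering of the same two-level idea under simplified assumptions (not the tutorial's code): a bootstrap particle filter supplies an unbiased estimate of the likelihood log p(y | φ), and a random-walk Metropolis-Hastings chain over the AR coefficient φ accepts or rejects using those noisy estimates. The model, prior (uniform on (-1, 1)), and tuning constants are all illustrative choices.

```python
import numpy as np

def simulate(phi, q, r, T, rng):
    """Linear Gaussian SSM: x_t = phi*x_{t-1} + N(0, q), y_t = x_t + N(0, r)."""
    x, y = 0.0, np.empty(T)
    for t in range(T):
        x = phi * x + rng.normal(0.0, np.sqrt(q))
        y[t] = x + rng.normal(0.0, np.sqrt(r))
    return y

def log_lik_pf(y, phi, q, r, n, rng):
    """Bootstrap particle filter estimate of log p(y | phi)."""
    x = rng.normal(0.0, 1.0, n)
    ll = 0.0
    for obs in y:
        x = phi * x + rng.normal(0.0, np.sqrt(q), n)   # propagate
        logw = -0.5 * np.log(2 * np.pi * r) - (obs - x) ** 2 / (2 * r)
        c = logw.max()                                  # log-sum-exp trick
        w = np.exp(logw - c)
        ll += c + np.log(w.mean())
        x = rng.choice(x, size=n, p=w / w.sum())        # resample
    return ll

def pmh(y, q=0.1, r=0.1, n_iter=300, n_part=100, step=0.1, seed=0):
    """Particle Metropolis-Hastings over phi with a uniform prior on (-1, 1)."""
    rng = np.random.default_rng(seed)
    phi = 0.2
    ll = log_lik_pf(y, phi, q, r, n_part, rng)
    chain = np.empty(n_iter)
    for i in range(n_iter):
        prop = phi + step * rng.normal()                # random-walk proposal
        if abs(prop) < 1.0:                             # reject outside prior support
            ll_prop = log_lik_pf(y, prop, q, r, n_part, rng)
            if np.log(rng.uniform()) < ll_prop - ll:    # MH accept/reject
                phi, ll = prop, ll_prop
        chain[i] = phi
    return chain

rng = np.random.default_rng(1)
y = simulate(phi=0.5, q=0.1, r=0.1, T=50, rng=rng)
chain = pmh(y)
```

As the tutorial stresses, the noisy likelihood estimate does not bias the chain: PMH targets the exact posterior over φ despite the particle-filter approximation, provided the estimate is unbiased.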
  •  
Type of publication
conference paper (40)
journal article (40)
doctoral thesis (7)
other publication (6)
licentiate thesis (3)
research review (2)
book (1)
Type of content
peer-reviewed (78)
other academic/artistic (21)
Author/Editor
Schön, Thomas B., Pr ... (98)
Gustafsson, Fredrik ... (13)
Horta Ribeiro, Antôn ... (10)
Gedon, Daniel, 1994- (10)
Wahlström, Niklas, 1 ... (9)
Wills, Adrian G. (9)
Sjölund, Jens, Biträ ... (7)
Danelljan, Martin (7)
Lindsten, Fredrik (6)
Zhao, Zheng (6)
Umenberger, Jack (6)
Ribeiro, Antônio H. (6)
Luo, Ziwei (5)
Wills, Adrian (5)
Jidling, Carl (5)
Kudlicka, Jan (5)
Hjalmarsson, Håkan, ... (4)
Zachariah, Dave (4)
Andersson, Carl (4)
Ferizbegovic, Mina (4)
Sjölund, Jens (4)
Gunnarsson, Niklas (4)
Murray, Lawrence M. (4)
Sundström, Johan, Pr ... (3)
Tiels, Koen (3)
Ninness, Brett (3)
Courts, Jarrad (3)
Hendriks, Johannes (3)
Hendriks, Johannes N ... (3)
Murray, Lawrence (3)
Lindholm, Andreas (3)
Lampa, Erik, 1977- (2)
Ljung, Lennart (2)
Lindsten, Fredrik, 1 ... (2)
Gustafsson, Stefan (2)
Ronquist, Fredrik (2)
Ribeiro, Antonio Lui ... (2)
Mattsson, Per (2)
Borgström, Johannes (2)
Bijl, Hildo (2)
Taghia, Jalil (2)
Särkkä, Simo (2)
Giatti, Luana (2)
Kimstrand, Peter (2)
Aguirre, Luis A. (2)
Paixao, Gabriela M. ... (2)
Horta Ribeiro, Manoe ... (2)
Gomes, Paulo R. (2)
Oliveira, Derick M. (2)
Meira Jr, Wagner (2)
University
Uppsala University (97)
Linköping University (13)
Royal Institute of Technology (7)
Lund University (1)
Karolinska Institutet (1)
Language
English (99)
Research subject (UKÄ/SCB)
Engineering and Technology (60)
Natural sciences (41)
Medical and Health Sciences (8)
Social Sciences (2)
