SwePub
Search the SwePub database


Hit list for the search "L4X0:1653 0829"


  • Results 1-10 of 16
1.
  • Bayisa, Fekadu Lemessa, 1984- (author)
  • Statistical methods in medical image estimation and sparse signal recovery
  • 2018
  • Doctoral thesis (other academic/artistic), abstract:
    • This thesis presents work on methods for the estimation of computed tomography (CT) images from magnetic resonance (MR) images for a number of diagnostic and therapeutic workflows. The study also demonstrates a sparse signal recovery method, which is an intermediate method for magnetic resonance image reconstruction. The thesis consists of four articles. The first three articles are concerned with developing statistical methods for the estimation of CT images from MR images. We formulated spatial and non-spatial models for CT image estimation from MR images, where the spatial models include a hidden Markov model (HMM) and a hidden Markov random field model (HMRF), while the non-spatial models incorporate a Gaussian mixture model (GMM) and a skewed-Gaussian mixture model (SGMM). The statistical models are estimated via a maximum likelihood approach, using the EM algorithm in GMM and SGMM, the EM gradient algorithm in HMRF, and the Baum–Welch algorithm in HMM. We have also examined CT image estimation using GMM combined with supervised statistical learning methods. The performance of the models is evaluated using cross-validation on real data. Comparing the CT image estimation performance of the models, we observed that GMM combined with a supervised statistical learning method performs best, especially on bone tissues. The fourth article deals with sparse modeling in signal recovery. Using spike-and-slab priors on the signal, we formulated a sparse signal recovery problem and developed an adaptive algorithm for sparse signal recovery. The developed algorithm performs better than the recent iterative convex refinement (ICR) algorithm. The methods introduced in this work are contributions to the lattice process and signal processing literature. The results are an input for research on replacing CT images with synthetic or pseudo-CT images, and for efficient recovery of sparse signals.
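
As an editorial illustration of the GMM route mentioned in the abstract above (not code from the thesis), the sketch below fits a joint Gaussian mixture to paired MR/CT intensities by EM and predicts CT as the conditional mean given MR. All data and variable names are invented.

# Editorial sketch (not from the thesis): fit a joint (MR, CT) Gaussian mixture
# by EM, then predict CT as the conditional mean E[CT | MR].  Invented data.
import numpy as np
from scipy.stats import norm
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Fake paired training intensities: two "tissue classes" with different MR/CT relations.
mr = np.concatenate([rng.normal(1.0, 0.2, 500), rng.normal(3.0, 0.3, 500)])
ct = np.concatenate([rng.normal(0.0, 0.1, 500), rng.normal(2.0, 0.2, 500)]) + 0.5 * mr

gmm = GaussianMixture(n_components=2, covariance_type="full", random_state=0)
gmm.fit(np.column_stack([mr, ct]))            # EM on the joint (MR, CT) distribution

def predict_ct(mr_new):
    """Conditional mean of CT given MR under the fitted joint GMM."""
    mr_new = np.atleast_1d(mr_new)
    means, covs, w = gmm.means_, gmm.covariances_, gmm.weights_
    # Posterior component probabilities given the MR intensity only.
    dens = np.array([w[k] * norm.pdf(mr_new, means[k, 0], np.sqrt(covs[k, 0, 0]))
                     for k in range(len(w))])
    resp = dens / dens.sum(axis=0)
    # Component-wise conditional means of CT given MR (Gaussian conditioning).
    cond = np.array([means[k, 1] + covs[k, 1, 0] / covs[k, 0, 0] * (mr_new - means[k, 0])
                     for k in range(len(w))])
    return (resp * cond).sum(axis=0)

print(predict_ct([1.0, 3.0]))
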
2.
  • Belyaev, Yu.K., et al. (authors)
  • On Non-Parametric Estimation of Poisson Point Processes Related to Failure Stresses of Fibres
  • 2000
  • Report (other academic/artistic), abstract:
    • We consider statistical analysis of the reliability of fibres. The problem is to estimate the distribution law of the random failure stresses of fibres (i.e. the critical level of stress that destroys a fibre) using data obtained in a special kind of test, where several fibres are tested until they break. All new pieces resulting from this test will also be tested, if they are long enough. The test ends when all the remaining pieces are too short to be tested further. We refer to these as binary tree structured tests. We assume that the cumulative hazard function (c.h.f.) of the failure stresses of these fibres is continuous, and that the fibres are statistically identical. Under these assumptions we obtain, as the number of tested fibres increases, a strongly consistent Nelson-Aalen type estimator of the c.h.f. A functional central limit resampling theorem in Skorohod space is proved; it justifies the use of resampling for estimating the accuracy of these estimators. The theorem shows that resampling can be used to consistently estimate, asymptotically, the distribution laws of continuous functionals of the random deviations between the estimator and the true c.h.f. For example, resampling can be used to estimate the distribution law of the maximum distance between the estimator and the estimand. Numerical examples suggest that resampling works well for a moderate number of tested fibres.
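
The following editorial sketch shows the ordinary Nelson-Aalen estimator of a cumulative hazard function computed from right-censored failure stresses. The report's estimator is a Nelson-Aalen type estimator adapted to binary tree structured tests, which this simplified version does not reproduce; the data below are invented.

# Editorial sketch: ordinary Nelson-Aalen estimate of a cumulative hazard
# function from right-censored failure stresses.  Invented data.
import numpy as np

def nelson_aalen(stress, observed):
    """Return (sorted event stresses, cumulative hazard estimate at those stresses)."""
    order = np.argsort(stress)
    stress, observed = stress[order], observed[order]
    n = len(stress)
    at_risk = n - np.arange(n)            # items still under test just before each value
    increments = observed / at_risk       # dN_i / Y_i
    events = observed.astype(bool)
    return stress[events], np.cumsum(increments)[events]

rng = np.random.default_rng(1)
true_stress = rng.weibull(2.0, 200) * 3.0     # fictitious failure stresses
censor = rng.uniform(0.0, 6.0, 200)           # fictitious censoring levels
stress = np.minimum(true_stress, censor)
observed = (true_stress <= censor).astype(float)
x, H = nelson_aalen(stress, observed)
print(x[:5], H[:5])
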
3.
  •  
4.
  • Bondesson, Lennart, 1944-, et al. (authors)
  • Probability calculus for silent elimination : A method for medium access control
  • 2007
  • Report (other academic/artistic), abstract:
    • A probability problem arising in the context of medium access control in wireless networks is considered. It is described as a problem with n urns, each containing one ball at time 0. Each ball leaves its urn after a geometrically distributed time. There is then a first time T such that no departures take place at the times T+1, T+2, ..., T+k, where k is fixed. The focus is on the probability distribution of (X_T, S_T, T), where X_T is the number of balls that leave their urns at time T and S_T is the number of balls remaining in their urns at that time. Efficient recursion formulas are derived. Asymptotics and continuous-time approximations are considered. For k = ∞, T is the maximum of n geometrically distributed variables; this case already has a large literature.
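
A minimal Monte Carlo sketch of the urn process described above (an editorial illustration; the report derives exact recursion formulas instead): each ball departs after a geometric time, and T is the first time followed by k silent slots. All parameters are invented.

# Editorial sketch: simulate the silent-elimination urn process and estimate
# the joint distribution of (X_T, S_T, T) by Monte Carlo.  Invented parameters.
import numpy as np
from collections import Counter

def simulate(n=10, p=0.3, k=3, reps=20_000, seed=2):
    rng = np.random.default_rng(seed)
    results = Counter()
    for _ in range(reps):
        departures = rng.geometric(p, size=n)          # departure time of each ball
        times, counts = np.unique(departures, return_counts=True)
        T, X_T = 0, 0                                  # start from time 0
        for t, c in zip(times, counts):
            if t - T > k:                              # k silent slots already followed T
                break
            T, X_T = int(t), int(c)
        S_T = int((departures > T).sum())              # balls still in their urns at time T
        results[(X_T, S_T, T)] += 1
    return {key: count / reps for key, count in results.items()}

dist = simulate()
print(sorted(dist.items(), key=lambda kv: -kv[1])[:5])  # five most probable outcomes
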
5.
  • Fries, Niklas, 1991- (author)
  • Data-driven quality management using explainable machine learning and adaptive control limits
  • 2023
  • Doctoral thesis (other academic/artistic), abstract:
    • In industrial applications, the objective of statistical quality management is to achieve quality guarantees through the efficient and effective application of statistical methods. Historically, quality management has been characterized by systematic monitoring of critical quality characteristics, accompanied by manual and experience-based root cause analysis in case of an observed decline in quality. Machine learning researchers have suggested that recent improvements in digitization, including sensor technology, computational power, and algorithmic developments, should enable more systematic approaches to root cause analysis. In this thesis, we explore the potential of data-driven approaches to quality management. This exploration is performed with consideration of an envisioned end product, which consists of an automated data collection and curation system, a predictive and explanatory model trained on historical process and quality data, and an automated alarm system that predicts a decline in quality and suggests worthwhile interventions. The research questions investigated in this thesis relate to which statistical methods are relevant for the implementation of the product, how their reliability can be assessed, and whether there are knowledge gaps that prevent this implementation. This thesis consists of four papers. In Paper I, we simulated various types of process-like data in order to investigate how several dataset properties affect the choice of methods for quality prediction. These properties include the number of predictors, their distribution and correlation structure, and their relationships with the response. In Paper II, we reused the simulation method from Paper I to simulate multiple types of datasets, and used them to compare local explanation methods by evaluating them against a ground truth. In Paper III, we outlined a framework for an automated process adjustment system based on a predictive and explanatory model trained on historical data. Next, given a relative cost between reduced quality and process adjustments, we described a method for searching for a worthwhile adjustment policy. Several simulation experiments were performed to demonstrate how to evaluate such a policy. In Paper IV, we described three ways to evaluate local explanation methods on real-world data, where no ground truth is available for comparison. Additionally, we described four methods for decorrelation and dimension reduction, and discussed their respective tradeoffs. These methods were evaluated on real-world process and quality data from the paint shop of the Volvo Trucks cab factory in Umeå, Sweden. During the work on this thesis, two significant knowledge gaps were identified. The first gap is a lack of best practices for data collection and quality control, preprocessing, and model selection. The other gap is that although there are many promising leads for how to explain the predictions of machine learning models, there is still an absence of generally accepted definitions of what constitutes an explanation, and a lack of methods for evaluating the reliability of such explanations.
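
As an editorial illustration of the kind of pipeline the thesis envisions, the sketch below trains a quality-prediction model on simulated process data and computes a naive leave-one-feature-out local explanation for a single prediction. It is not one of the thesis's methods; the data, the model choice, and the explanation rule are invented for illustration only.

# Editorial sketch: quality prediction from process data plus a naive local
# explanation (leave-one-feature-out effect).  Invented data and names.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(3)
n, p = 2000, 5
X = rng.normal(size=(n, p))                              # simulated process measurements
quality = 2.0 * X[:, 0] - 1.5 * X[:, 2] ** 2 + rng.normal(0, 0.3, n)

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, quality)

def local_explanation(x, background):
    """Effect of replacing each feature by its background mean, one at a time."""
    base = model.predict(x.reshape(1, -1))[0]
    effects = []
    for j in range(len(x)):
        x_ref = x.copy()
        x_ref[j] = background[:, j].mean()
        effects.append(base - model.predict(x_ref.reshape(1, -1))[0])
    return base, np.array(effects)

pred, effects = local_explanation(X[0], X)
print("prediction:", round(pred, 2), "feature effects:", np.round(effects, 2))
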
6.
  • Kellgren, Therese, 1983- (author)
  • Hidden patterns that matter : statistical methods for analysis of DNA and RNA data
  • 2020
  • Doctoral thesis (other academic/artistic), abstract:
    • Understanding how genetic variation can affect the characteristics and function of organisms can help researchers and medical doctors detect genetic alterations that cause disease and reveal genes that cause antibiotic resistance. The opportunities and progress associated with such data, however, come with challenges related to statistical analysis. Only by using properly designed and employed tools can we extract the information about hidden patterns. In this thesis we present three types of such analysis. First, the genetic variant in the gene COL17A1 that causes corneal dystrophy with recurrent erosions is revealed. By studying next-generation sequencing data, the order of the nucleotides in the DNA sequence could be obtained, which enabled us to detect interesting variants in the genome. Further, we present the results of an experimental design study with the aim of making the best selection from a family affected by an inherited disease. In the second part of the work, we analyzed a novel antibiotic-resistant Staphylococcus epidermidis clone that is found only in northern Europe. By investigating its genetic data, we revealed similarities to a well-known antibiotic-resistant clone. As a result, the antibiotic resistance profile was established from the DNA sequences. Finally, we also focus on the challenges related to the abundance of genetic data from different sources. The increasing number of public gene expression datasets gives us the opportunity to increase our understanding by using information from multiple sources simultaneously. Naturally, this requires merging independent datasets together. However, when doing so, the technical and biological variation in the joined data increases. We present a pre-processing method to construct gene co-expression networks from a large and diverse gene-expression dataset.
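
The sketch below is an editorial illustration of the idea in the last part of the abstract, not the thesis's pre-processing method: simulated datasets sharing a common factor structure are standardized separately before merging, and a gene co-expression network is obtained by thresholding gene-gene correlations. The data and the threshold are invented.

# Editorial sketch: per-dataset standardization before merging, then a simple
# correlation-thresholded co-expression network.  Invented data and threshold.
import numpy as np

rng = np.random.default_rng(4)
genes, factors = 40, 3
loadings = rng.normal(size=(factors, genes))             # shared "biological" structure

def make_dataset(n_samples):
    """One study: shared factor structure plus dataset-specific shift and scale."""
    latent = rng.normal(size=(n_samples, factors))
    offset, scale = rng.normal(0, 2), rng.uniform(0.5, 2.0)
    return offset + scale * (latent @ loadings + 0.5 * rng.normal(size=(n_samples, genes)))

def standardize(d):
    """Per-dataset z-scoring, a crude way to reduce technical variation before merging."""
    return (d - d.mean(axis=0)) / d.std(axis=0)

datasets = [make_dataset(int(rng.integers(30, 60))) for _ in range(3)]
merged = np.vstack([standardize(d) for d in datasets])   # samples x genes
corr = np.corrcoef(merged, rowvar=False)                 # gene-gene correlation matrix
network = (np.abs(corr) > 0.7) & ~np.eye(genes, dtype=bool)
print("edges in the co-expression network:", int(network.sum()) // 2)
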
7.
  • Larson, Kajsa (author)
  • Estimation of the passage time distribution on a graph via the EM algorithm
  • 2010
  • Report (popular science, debate, etc.), abstract:
    • We propose EM algorithms to estimate the passage time distribution on a graph. Data are obtained by observing a flow only at the nodes; what happens on the edges is unknown. Therefore the sample of passage times, i.e. the times it takes for the flow to stream between two neighbors, consists of right-censored and uncensored observations, where it is sometimes unknown which is which. For discrete passage time distributions, we show that the maximum likelihood (ML) estimate is strongly consistent under certain weak conditions. We also show that the EM algorithm converges to the ML estimate if the sample size is sufficiently large and the starting value is sufficiently close to the true parameter. In a special case we show that it always converges. In the continuous case, we propose an EM algorithm for fitting phase-type distributions to data.
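
A minimal editorial sketch of the E-step/M-step idea for censored observations, using a continuous exponential passage-time model with known censoring status. The report's algorithms handle the harder discrete and phase-type cases, including observations whose censoring status is unknown; the data below are invented.

# Editorial sketch: EM for an exponential passage-time distribution with exact
# and right-censored observations.  Invented data.
import numpy as np

rng = np.random.default_rng(5)
true_rate = 0.8
t = rng.exponential(1 / true_rate, 300)       # true passage times
c = rng.exponential(2.0, 300)                 # censoring times
obs = np.minimum(t, c)
exact = t <= c                                # True where the passage time was observed

rate = 1.0                                    # starting value
for _ in range(200):
    # E-step: expected passage time; for a value censored at c it is c + 1/rate.
    expected = np.where(exact, obs, obs + 1 / rate)
    # M-step: maximum likelihood update of the exponential rate.
    rate = len(obs) / expected.sum()

print("estimated rate:", round(rate, 3), "true rate:", true_rate)
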
8.
  •  
9.
  • Lundquist, Anders, 1978-, et al. (authors)
  • On sampling with desired inclusion probabilities of first and second order
  • 2005
  • Report (other academic/artistic), abstract:
    • We present a new, simple approximation of the target probabilities pi for conditional Poisson sampling to obtain given inclusion probabilities. The approximation is based on the fact that the Sampford design gives inclusion probabilities as desired. Some alternative routines for calculating exact pi-values are presented and compared numerically. Further, we derive two methods for achieving prescribed second-order inclusion probabilities. First, we use a probability function belonging to the exponential family; its parameters are determined by an iterative proportional fitting algorithm. Second, we modify the conditional Poisson probability function with an additional quadratic factor.
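
An editorial sketch of the underlying issue: for conditional Poisson sampling of fixed size, the realized first-order inclusion probabilities differ from the working probabilities p_i. For a small, invented population they can be computed exactly by enumerating all samples, as below. The report's contribution, an approximation that chooses the p_i so that desired inclusion probabilities are attained, is not implemented here.

# Editorial sketch: exact first-order inclusion probabilities for conditional
# Poisson sampling of fixed size n, by brute-force enumeration (small N only).
from itertools import combinations
import numpy as np

p = np.array([0.1, 0.2, 0.3, 0.6, 0.8])       # invented working probabilities
N, n = len(p), 2

weights = {}
for s in combinations(range(N), n):           # every possible sample of size n
    weights[s] = np.prod([p[i] if i in s else 1 - p[i] for i in range(N)])
total = sum(weights.values())

pi = np.zeros(N)
for s, w in weights.items():
    for i in s:
        pi[i] += w / total                    # P(unit i is in the sample)

print("working p_i :", p)
print("inclusion pi:", np.round(pi, 4), " sum =", round(float(pi.sum()), 4))
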
10.
  • Rydén, Patrik (author)
  • Non-Parametric Estimators Related to Local Load-Sharing Models
  • 1999
  • Report (other academic/artistic), abstract:
    • We consider the problem of estimating the cumulative distribution function of the failure stresses of bundles (i.e. the tensile forces that destroy bundles), constructed of several statistically similar fibres, given a particular kind of censored data. Each bundle consists of several fibres, which have their own independent identically distributed failure stresses, and the force applied on a bundle at any moment is distributed between the fibres in the bundle according to the local load-sharing model. The testing of several bundles generates a special kind of complexly structured censored data. Consistent non-parametric estimators of the distribution laws of bundles are obtained by applying the theory of martingales and using the observed data. It is shown that random sampling with replacement from the statistical data related to each tested bundle can be used to estimate the accuracy of our non-parametric estimators. Numerical examples illustrate the behavior of the obtained estimators.
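
An editorial Monte Carlo sketch of the local load-sharing bundle model itself, not of the report's censored-data estimators: fibres with i.i.d. strengths fail in cascades as load concentrates on the neighbours of failed fibres, and the empirical distribution of the simulated bundle failure stresses is evaluated at a few levels. The load-sharing variant, distributions, and parameters are all invented for illustration.

# Editorial sketch: bundle failure under one common variant of local load
# sharing, plus the empirical CDF of simulated failure stresses.  Invented
# model details and parameters.
import numpy as np

def load_factors(alive):
    """A surviving fibre with r consecutive failed fibres on its left and s on its
    right carries (1 + (r + s) / 2) times the nominal load; failed fibres carry none."""
    n = len(alive)
    k = np.ones(n)
    for i in range(n):
        if not alive[i]:
            continue
        r, j = 0, i - 1
        while j >= 0 and not alive[j]:
            r, j = r + 1, j - 1
        s, j = 0, i + 1
        while j < n and not alive[j]:
            s, j = s + 1, j + 1
        k[i] = 1 + (r + s) / 2
    return k

def bundle_failure_stress(strengths, grid):
    """Smallest nominal per-fibre stress on the grid at which every fibre has failed."""
    for x in grid:
        alive = np.ones(len(strengths), dtype=bool)
        while True:                                   # failure cascade at stress x
            failing = alive & (x * load_factors(alive) >= strengths)
            if not failing.any():
                break
            alive &= ~failing
        if not alive.any():
            return x
    return np.inf

rng = np.random.default_rng(6)
grid = np.linspace(0.05, 3.0, 60)
samples = np.array([bundle_failure_stress(rng.weibull(5.0, 15) * 2.0, grid)
                    for _ in range(200)])
for level in (0.8, 1.0, 1.2):                         # empirical CDF at a few stresses
    print("P(bundle failure stress <=", level, ") ~", round(float((samples <= level).mean()), 3))
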
