SwePub
Search the SwePub database


Hit list for the search "hsv:(NATURVETENSKAP) hsv:(Data och informationsvetenskap) hsv:(Annan data och informationsvetenskap) ;pers:(Ohlsson Mattias)"


  • Result 1-10 of 11
1.
  • Abiri, Najmeh, et al. (author)
  • Establishing strong imputation performance of a denoising autoencoder in a wide range of missing data problems
  • 2019
  • In: Neurocomputing. - Amsterdam : Elsevier BV. - 0925-2312 .- 1872-8286. ; 365, pp. 137-146
  • Journal article (peer-reviewed), abstract:
    • Dealing with missing data in data analysis is inevitable. Although powerful imputation methods that address this problem exist, there is still much room for improvement. In this study, we examined single imputation based on deep autoencoders, motivated by the apparent success of deep learning to efficiently extract useful dataset features. We have developed a consistent framework for both training and imputation. Moreover, we benchmarked the results against state-of-the-art imputation methods on different data sizes and characteristics. The work was not limited to the one-type variable dataset; we also imputed missing data with multi-type variables, e.g., a combination of binary, categorical, and continuous attributes. To evaluate the imputation methods, we randomly corrupted the complete data, with varying degrees of corruption, and then compared the imputed and original values. In all experiments, the developed autoencoder obtained the smallest error for all ranges of initial data corruption.
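A minimal sketch of the denoising-autoencoder imputation described in entry 1 above. The abstract does not specify the architecture or training protocol, so the layer sizes, zero-masking corruption, MSE loss, and Adam optimizer here are assumptions; the principle is to train the network to reconstruct complete rows from artificially corrupted ones, then fill genuinely missing cells from the reconstruction:

    import torch
    import torch.nn as nn

    class DenoisingAE(nn.Module):
        def __init__(self, d, h=64):
            super().__init__()
            self.enc = nn.Sequential(nn.Linear(d, h), nn.ReLU(),
                                     nn.Linear(h, h // 2), nn.ReLU())
            self.dec = nn.Sequential(nn.Linear(h // 2, h), nn.ReLU(),
                                     nn.Linear(h, d))

        def forward(self, x):
            return self.dec(self.enc(x))

    def train(model, X, epochs=200, p=0.2, lr=1e-3):
        # Corrupt a random subset of entries each epoch (zero-masking)
        # and train the network to reconstruct the uncorrupted rows.
        opt = torch.optim.Adam(model.parameters(), lr=lr)
        for _ in range(epochs):
            keep = (torch.rand_like(X) > p).float()
            loss = ((model(X * keep) - X) ** 2).mean()
            opt.zero_grad(); loss.backward(); opt.step()

    def impute(model, X, missing):
        # Fill missing cells (boolean mask) with the reconstruction,
        # keeping observed values untouched.
        with torch.no_grad():
            recon = model(torch.where(missing, torch.zeros_like(X), X))
        return torch.where(missing, recon, X)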
2.
  • Abiri, Najmeh, et al. (author)
  • Variational auto-encoders with Student’s t-prior
  • 2019
  • In: ESANN 2019 - Proceedings : The 27th European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning. - Bruges : ESANN. - 9782875870650
  • Conference paper (peer-reviewed), abstract:
    • We propose a new structure for the variational auto-encoders (VAEs) prior, with the weakly informative multivariate Student’s t-distribution. In the proposed model all distribution parameters are trained, thereby allowing for a more robust approximation of the underlying data distribution. We used Fashion-MNIST data in two experiments to compare the proposed VAEs with the standard Gaussian priors. Both experiments showed a better reconstruction of the images with VAEs using Student’s t-prior distribution.
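A rough sketch of the idea in entry 2: a VAE whose prior is a diagonal multivariate Student's t-distribution with trainable degrees of freedom and scale. The KL term between the Gaussian posterior and the t-prior has no closed form, so it is estimated by Monte Carlo; the layer sizes, squared-error reconstruction term, and df parametrization below are assumptions rather than the paper's exact setup:

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class TPriorVAE(nn.Module):
        def __init__(self, d=784, h=256, z=20):
            super().__init__()
            self.enc = nn.Sequential(nn.Linear(d, h), nn.ReLU())
            self.mu, self.logvar = nn.Linear(h, z), nn.Linear(h, z)
            self.dec = nn.Sequential(nn.Linear(z, h), nn.ReLU(), nn.Linear(h, d))
            # Trainable prior parameters: degrees of freedom and scale per dim.
            self.df_raw = nn.Parameter(torch.zeros(z))
            self.log_scale = nn.Parameter(torch.zeros(z))

        def loss(self, x):
            hid = self.enc(x)
            q = torch.distributions.Normal(self.mu(hid),
                                           (0.5 * self.logvar(hid)).exp())
            zs = q.rsample()                       # reparameterized sample
            prior = torch.distributions.StudentT(F.softplus(self.df_raw) + 1.0,
                                                 torch.zeros_like(self.df_raw),
                                                 self.log_scale.exp())
            kl = (q.log_prob(zs) - prior.log_prob(zs)).sum(-1)  # 1-sample MC KL
            recon = ((self.dec(zs) - x) ** 2).sum(-1)
            return (recon + kl).mean()             # negative ELBO (up to consts)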
3.
  • Hall, Ola, et al. (author)
  • A review of explainable AI in the satellite data, deep machine learning, and human poverty domain
  • 2022
  • In: Patterns. - Cambridge : Cell Press. - 2666-3899. ; 3:10
  • Research review (peer-reviewed), abstract:
    • Recent advances in artificial intelligence and deep machine learning have created a step change in how to measure human development indicators, in particular asset-based poverty. The combination of satellite imagery and deep machine learning now has the capability to estimate some types of poverty at a level close to what is achieved with traditional household surveys. An increasingly important issue beyond static estimations is whether this technology can contribute to scientific discovery and, consequently, new knowledge in the poverty and welfare domain. A foundation for achieving scientific insights is domain knowledge, which in turn translates into explainability and scientific consistency. We perform an integrative literature review focusing on three core elements relevant in this context (transparency, interpretability, and explainability) and investigate how they relate to the poverty, machine learning, and satellite imagery nexus. Our inclusion criteria are that a paper covers poverty/wealth prediction with survey data as the basis for the ground-truth poverty/wealth estimates, is applicable to both urban and rural settings, uses satellite images as the basis for at least some of the inputs (features), and includes deep neural networks in its method. Our review of 32 papers shows that the status of the three core elements of explainable machine learning (transparency, interpretability, and domain knowledge) is varied and does not completely fulfill the requirements set up for scientific insights and discoveries. We argue that explainability is essential to support wider dissemination and acceptance of this research in the development community, and that explainability means more than just interpretability.
4.
  • Björkelund, Anders, et al. (author)
  • Machine learning compared with rule-in/rule-out algorithms and logistic regression to predict acute myocardial infarction based on troponin T concentrations
  • 2021
  • In: Journal of the American College of Emergency Physicians Open. - Hoboken, NJ : John Wiley & Sons. - 2688-1152. ; 2:2
  • Journal article (peer-reviewed), abstract:
    • Objective: Computerized decision-support tools may improve diagnosis of acute myocardial infarction (AMI) among patients presenting with chest pain at the emergency department (ED). The primary aim was to assess the predictive accuracy of machine learning algorithms based on paired high-sensitivity cardiac troponin T (hs-cTnT) concentrations with varying sampling times, age, and sex in order to rule in or rule out AMI. Methods: In this register-based, cross-sectional diagnostic study, conducted retrospectively on 5695 chest pain patients at 2 hospitals in Sweden in 2013–2014, we used 5-fold cross-validation 200 times to compare the performance of an artificial neural network (ANN) with European guideline-recommended 0/1- and 0/3-hour algorithms for hs-cTnT and with logistic regression without interaction terms. The primary outcome was the size of the intermediate risk group in which AMI could be neither ruled in nor ruled out, while holding the sensitivity (rule-out) and specificity (rule-in) constant across models. Results: ANN and logistic regression had similar (95%) areas under the receiver operating characteristic curve. In patients (n = 4171) where the timing requirements (0/1 or 0/3 hour) for the sampling were met, using the ANN led to a relative decrease of 9.2% (95% confidence interval 4.4% to 13.8%; from 24.5% to 22.2% of all tested patients) in the size of the intermediate group compared to the recommended algorithms. By contrast, using logistic regression did not substantially decrease the size of the intermediate group. Conclusion: Machine learning algorithms allow for flexibility in sampling and have the potential to improve risk assessment among chest pain patients at the ED.
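Entry 4's primary outcome, the size of the intermediate group at fixed sensitivity (rule-out) and specificity (rule-in), can be computed from any model's predicted probabilities roughly as below; the target values and function name are illustrative assumptions:

    import numpy as np

    def intermediate_fraction(p, y, target_sens=0.99, target_spec=0.95):
        # p: predicted AMI probabilities, y: observed AMI labels (0/1).
        ts = np.unique(p)
        # Rule-out cutoff: highest threshold whose sensitivity
        # (for the rule "AMI if p >= t") still meets the target.
        t_lo = max(t for t in ts if (p[y == 1] >= t).mean() >= target_sens)
        # Rule-in cutoff: lowest threshold whose specificity meets the target.
        t_hi = min(t for t in ts if (p[y == 0] < t).mean() >= target_spec)
        # Patients between the cutoffs can be neither ruled out nor ruled in.
        return float(((p >= t_lo) & (p < t_hi)).mean())

Holding both cutoffs to the same operating points across models makes the intermediate fraction the single number being compared, as in the paper.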
5.
  • Gummeson, Anna, et al. (author)
  • Automatic Gleason grading of H&E stained microscopic prostate images using deep convolutional neural networks
  • 2017
  • In: Medical Imaging 2017: Digital Pathology. - : SPIE. - 9781510607255 ; 10140
  • Conference paper (peer-reviewed), abstract:
    • Prostate cancer is the most commonly diagnosed cancer in men. The diagnosis is confirmed by pathologists based on ocular inspection of prostate biopsies in order to classify them according to Gleason score. The main goal of this paper is to automate the classification using convolutional neural networks (CNNs). The introduction of CNNs has broadened the field of pattern recognition. It replaces the classical way of designing and extracting hand-crafted features used for classification with the substantially different strategy of letting the computer itself decide which features are of importance. For automated prostate cancer classification into the classes benign and Gleason grade 3, 4, and 5, we propose a CNN with small convolutional filters that has been trained from scratch using stochastic gradient descent with momentum. The input consists of microscopic images of haematoxylin and eosin stained tissue; the output is a coarse segmentation into regions of the four different classes. The dataset used consists of 213 images, each considered to be of one class only. Using four-fold cross-validation we obtained an error rate of 7.3%, which is significantly better than previous state of the art using the same dataset. Although the dataset was rather small, good results were obtained. From this we conclude that CNNs are a promising method for this problem. Future work includes obtaining a larger dataset, which could potentially diminish the error margin.
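Entry 5 describes a CNN with small convolutional filters trained from scratch with SGD plus momentum on four classes. A compact sketch under assumed layer sizes and input resolution (the paper's exact architecture is not given in the abstract):

    import torch
    import torch.nn as nn

    class GleasonCNN(nn.Module):
        def __init__(self, n_classes=4):   # benign + Gleason grades 3, 4, 5
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1))
            self.classifier = nn.Linear(64, n_classes)

        def forward(self, x):
            return self.classifier(self.features(x).flatten(1))

    model = GleasonCNN()
    opt = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
    # One training step on a dummy batch standing in for H&E image patches.
    x, y = torch.randn(8, 3, 64, 64), torch.randint(0, 4, (8,))
    loss = nn.CrossEntropyLoss()(model(x), y)
    opt.zero_grad(); loss.backward(); opt.step()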
6.
  • Kalderstam, Jonas, et al. (author)
  • Ensembles of genetically trained artificial neural networks for survival analysis
  • 2013
  • In: ESANN 2013 proceedings, 21st European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning. - 9782874190810 ; pp. 333-338
  • Conference paper (peer-reviewed), abstract:
    • We have developed a prognostic index model for survival data based on an ensemble of artificial neural networks that optimizes directly on the concordance index. Approximations of the c-index are avoided with the use of a genetic algorithm, which does not require gradient information. The model is compared with Cox proportional hazards (COX) and three support vector machine (SVM) models by Van Belle et al. [10] on two clinical data sets, and only with COX on one artificial data set. Results indicate comparable performance to COX and SVM models on clinical data and superior performance compared to COX on non-linear data.
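Entry 6 optimizes the concordance index directly, which works because a genetic algorithm needs no gradients. A rough sketch under simplifying assumptions (a single linear prognostic index instead of the paper's ensemble of ANNs, and a basic mutate-and-truncate GA):

    import numpy as np

    def c_index(risk, time, event):
        # Harrell's c-index: the fraction of comparable pairs in which
        # the patient with the higher risk score fails first.
        conc, total = 0.0, 0
        for i in range(len(time)):
            if not event[i]:
                continue                    # censored cases cannot anchor a pair
            for j in range(len(time)):
                if time[i] < time[j]:
                    total += 1
                    conc += (risk[i] > risk[j]) + 0.5 * (risk[i] == risk[j])
        return conc / total

    def ga_train(X, time, event, pop=50, gens=200, sigma=0.1, seed=0):
        rng = np.random.default_rng(seed)
        P = rng.normal(size=(pop, X.shape[1]))  # population of weight vectors
        for _ in range(gens):
            fit = np.array([c_index(X @ w, time, event) for w in P])
            parents = P[np.argsort(fit)[-pop // 2:]]
            children = parents + sigma * rng.normal(size=parents.shape)
            P = np.vstack([parents, children])  # keep best half, add mutants
        fit = [c_index(X @ w, time, event) for w in P]
        return P[int(np.argmax(fit))]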
7.
  • Medved, Dennis, et al. (author)
  • Improving prediction of heart transplantation outcome using deep learning techniques
  • 2018
  • In: Scientific Reports. - : Springer Science and Business Media LLC. - 2045-2322. ; 8
  • Journal article (peer-reviewed), abstract:
    • The primary objective of this study is to compare the accuracy of two risk models for predicting survival after heart transplantation: the International Heart Transplantation Survival Algorithm (IHTSA), developed using deep learning, and the Index for Mortality Prediction After Cardiac Transplantation (IMPACT). Data on adult heart-transplanted patients between January 1997 and December 2011 were collected from the UNOS registry. The study included 27,860 heart transplantations, corresponding to 27,705 patients. The study cohorts were divided into patients transplanted before 2009 (derivation cohort) and from 2009 (test cohort). The receiver operating characteristic (ROC) values for the validation cohort, computed for one-year mortality, were 0.654 (95% CI: 0.629–0.679) for IHTSA and 0.608 (0.583–0.634) for the IMPACT model. The discrimination reached a C-index for long-term survival of 0.627 (0.608–0.646) for IHTSA, compared with 0.584 (0.564–0.605) for the IMPACT model. These figures correspond to an error reduction of 12% for ROC and 10% for C-index from using the deep learning technique. The predicted one-year mortality rates were 12% and 22% for IHTSA and IMPACT, respectively, versus an actual mortality rate of 10%. The IHTSA model showed superior discriminatory power in predicting one-year mortality and survival over time after heart transplantation compared to the IMPACT model.
8.
  • Ohlsson, Mattias, et al. (author)
  • A study of the mean field approach to knapsack problems
  • 1997
  • In: Neural Networks. - 0893-6080. ; 10:2, pp. 263-271
  • Journal article (peer-reviewed), abstract:
    • The mean field theory approach to knapsack problems is extended to multiple knapsacks and generalized assignment problems, with Potts mean field equations governing the dynamics. Numerical tests against 'state of the art' conventional algorithms show good performance for the mean field approach. The inherent parallelism of the mean field equations makes them suitable for direct implementation in microchips. It is demonstrated numerically that the performance is essentially unaffected when only a limited number of bits is used in the mean field equations. Also, a hybrid algorithm with linear programming and mean field components is shown to further improve the performance for the difficult homogeneous N x M knapsack problem.
9.
  • Ohlsson, Mattias (author)
  • Extensions and explorations of the elastic arms algorithm
  • 1993
  • In: Computer Physics Communications. - : Elsevier BV. - 0010-4655. ; 77:1, pp. 19-32
  • Journal article (peer-reviewed), abstract:
    • The deformable templates method for track finding in high energy physics is reviewed and extended to handle multiple and secondary vertex positions. An automated minimization method that handles different types of parametrizations is derived. It is based on the gradient descent method, but modified with an explicit calculation of the natural metric. Also, a simplified and more intuitive derivation of the algorithm using Potts mean field theory equations is given.
10.
  • Ohlsson, Mattias, et al. (author)
  • Neural Networks for Optimization Problems with Inequality Constraints: The Knapsack Problem
  • 1993
  • In: Neural Computation. - : MIT Press - Journals. - 0899-7667 .- 1530-888X. ; 5:2, pp. 331-339
  • Journal article (peer-reviewed), abstract:
    • A strategy for finding approximate solutions to discrete optimization problems with inequality constraints using mean field neural networks is presented. The constraints x ≤ 0 are encoded by xΘ(x) terms in the energy function. A careful treatment of the mean field approximation for the self-coupling parts of the energy is crucial, and results in an essentially parameter-free algorithm. This methodology is extensively tested on knapsack problems of sizes up to 10^3 items. The algorithm scales like NM for problems with N items and M constraints. Comparisons are made with an exact branch and bound algorithm when this is computationally possible (N ≤ 30). The quality of the neural network solutions consistently lies above 95% of the optimal ones at a significantly lower CPU expense. For the larger problem sizes the algorithm is compared with simulated annealing and a modified linear programming approach. For "nonhomogeneous" problems these produce good solutions, whereas for the more difficult "homogeneous" problems the neural approach is a winner with respect to solution quality and/or CPU time consumption. The approach is of course also applicable to other problems of similar structure, like set covering.
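Entries 8 and 10 both rest on mean field annealing with the xΘ(x) penalty for inequality constraints. A compact sketch for a single 0/1 knapsack; the penalty weight, annealing schedule, and greedy repair step are assumptions, and the papers' multiple-knapsack Potts formulation is more general:

    import numpy as np

    def mf_knapsack(c, a, b, alpha=2.0, T0=10.0, Tmin=1e-2, cool=0.95):
        # Maximize c.v subject to a.v <= b, v in {0,1}^n, by minimizing
        # E(v) = -c.v + alpha * Phi(a.v - b) with Phi(x) = x * Theta(x).
        phi = lambda x: x * (x > 0)
        v = np.full(len(c), 0.5)
        rng = np.random.default_rng(0)
        T = T0
        while T > Tmin:
            for i in rng.permutation(len(c)):
                load = a @ v - a[i] * v[i]    # field from the other units only
                dE = -c[i] + alpha * (phi(load + a[i] - b) - phi(load - b))
                v[i] = 1.0 / (1.0 + np.exp(np.clip(dE / T, -50, 50)))
            T *= cool                          # anneal the temperature
        sol = (v > 0.5).astype(int)
        while a @ sol > b:                     # greedy repair if infeasible
            sol[np.argmin(np.where(sol == 1, c / a, np.inf))] = 0
        return sol

Excluding unit i from its own input field is the self-coupling correction that the abstract of entry 10 calls crucial.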