SwePub
Search the SwePub database


Hit list for the search "WAKA:kon ;lar1:(hb);pers:(Boström Henrik)"

Search: WAKA:kon > Högskolan i Borås > Boström Henrik

  • Results 1-5 of 5
1.
  • Jansson, Karl, et al. (author)
  • gpuRF and gpuERT : Efficient and Scalable GPU Algorithms for Decision Tree Ensembles
  • 2014
  • Conference paper (peer-reviewed), abstract:
    • We present two new parallel implementations of the ensemble learning methods Random Forests (RF) and Extremely Randomized Trees (ERT), called gpuRF and gpuERT, for emerging many-core platforms, e.g., contemporary graphics cards suitable for general-purpose computing (GPGPU). RF and ERT are two ensemble methods for generating predictive models that are of high importance within machine learning. They operate by constructing a multitude of decision trees at training time and outputting a prediction by combining the outputs of the individual trees. Thanks to the inherent parallelism of the task, an obvious platform for its computation is a contemporary GPU with a large number of processing cores. Previous parallel algorithms for RF in the literature are either designed for traditional multi-core CPU platforms or for early-generation GPUs with simpler architectures and relatively few cores. For ERT, only briefly sketched parallelization attempts exist in the literature. The new parallel algorithms are designed for contemporary GPUs with a large number of cores and take into account aspects of the newer hardware architectures, such as memory hierarchy and thread scheduling. They are implemented in C/C++ using the CUDA interface to attain the best possible performance on NVidia-based GPUs. An experimental study comparing the most important previous solutions for CPU and GPU platforms to the novel implementations shows that the latter are significantly more efficient, often by several orders of magnitude.
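The parallelism this abstract relies on (trees in an ensemble are built independently and combined by voting) can be illustrated with a minimal CPU-side sketch in Python. This is purely illustrative and is not the authors' gpuRF/gpuERT CUDA code; the dataset, tree settings and process-pool parallelism are assumptions made for the example.

    # Illustrative only: independent trees trained in parallel, majority-vote prediction.
    # The papers implement this on GPUs in C/C++/CUDA; here a process pool stands in
    # for the many-core device.
    from concurrent.futures import ProcessPoolExecutor
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.tree import DecisionTreeClassifier

    def train_tree(args):
        X, y, seed = args
        rng = np.random.default_rng(seed)
        idx = rng.integers(0, len(X), len(X))            # bootstrap sample (RF-style)
        tree = DecisionTreeClassifier(max_features="sqrt", random_state=seed)
        return tree.fit(X[idx], y[idx])

    def predict_vote(trees, X):
        votes = np.stack([t.predict(X) for t in trees])  # shape: (n_trees, n_samples)
        # majority vote across trees for every instance
        return np.apply_along_axis(lambda v: np.bincount(v).argmax(), 0, votes)

    if __name__ == "__main__":
        X, y = make_classification(n_samples=1000, random_state=0)
        with ProcessPoolExecutor() as pool:              # trees are independent, so train in parallel
            trees = list(pool.map(train_tree, [(X, y, s) for s in range(100)]))
        print("training accuracy:", (predict_vote(trees, X) == y).mean())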
2.
  • Jansson, Karl, et al. (author)
  • Parallel tree-ensemble algorithms for GPUs using CUDA
  • 2013
  • Conference paper (peer-reviewed), abstract:
    • We present two new parallel implementations of the tree-ensemble algorithms Random Forest (RF) and Extremely randomized trees (ERT) for emerging many-core platforms, e.g., contemporary graphics cards suitable for general-purpose computing (GPGPU). Random Forest and Extremely randomized trees are ensemble learners for classification and regression. They operate by constructing a multitude of decision trees at training time and outputting a prediction by combining the outputs of the individual trees. Thanks to the inherent parallelism of the task, an obvious platform for its computation is a contemporary GPU with a large number of processing cores. Previous parallel algorithms for Random Forests in the literature are either designed for traditional multi-core CPU platforms or for early-generation GPUs with simpler hardware architectures and relatively few cores. The new parallel algorithms are designed for contemporary GPUs with a large number of cores and take into account aspects of the newer hardware architectures, such as memory hierarchy and thread scheduling. They are implemented in C/C++ using the CUDA interface for the best possible performance on NVidia-based GPUs. An experimental study comparing against the most important previous solutions for CPU and GPU platforms shows significant improvements for the new implementations, often by several orders of magnitude.
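As a complement to the sketch under the previous record, the node-splitting step that distinguishes the two ensemble methods can be written out directly. The impurity measure (Gini), the synthetic data and the function names are assumptions made for this illustration; nothing here corresponds to the authors' CUDA kernels.

    # Illustrative only: Random Forest searches observed values of each candidate
    # feature for the best cut point, while Extremely Randomized Trees draws one
    # random cut point per candidate feature, which removes the per-value search.
    import numpy as np

    def gini(labels):
        """Gini impurity of a set of class labels."""
        if len(labels) == 0:
            return 0.0
        _, counts = np.unique(labels, return_counts=True)
        p = counts / counts.sum()
        return 1.0 - np.sum(p ** 2)

    def split_quality(X, y, feature, threshold):
        """Negative weighted impurity of the two children (higher is better)."""
        left, right = y[X[:, feature] <= threshold], y[X[:, feature] > threshold]
        return -(len(left) * gini(left) + len(right) * gini(right)) / len(y)

    def best_split_rf(X, y, features):
        """RF-style: evaluate every observed value of each candidate feature."""
        candidates = [(f, t) for f in features for t in np.unique(X[:, f])]
        return max(candidates, key=lambda ft: split_quality(X, y, *ft))

    def best_split_ert(X, y, features, rng):
        """ERT-style: one uniformly drawn threshold per candidate feature."""
        candidates = [(f, rng.uniform(X[:, f].min(), X[:, f].max())) for f in features]
        return max(candidates, key=lambda ft: split_quality(X, y, *ft))

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 5))
    y = (X[:, 0] + 0.3 * rng.normal(size=200) > 0).astype(int)
    print(best_split_rf(X, y, [0, 1, 2]), best_split_ert(X, y, [0, 1, 2], rng))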
3.
  • Johansson, Ulf, et al. (author)
  • Venn predictors for well-calibrated probability estimation trees
  • 2018
  • In: 7th Symposium on Conformal and Probabilistic Prediction and Applications, pp. 3-14.
  • Conference paper (peer-reviewed), abstract:
    • Successful use of probabilistic classification requires well-calibrated probability estimates, i.e., the predicted class probabilities must correspond to the true probabilities. The standard solution is to employ an additional step, transforming the outputs from a classifier into probability estimates. In this paper, Venn predictors are compared to Platt scaling and isotonic regression, for the purpose of producing well-calibrated probabilistic predictions from decision trees. The empirical investigation, using 22 publicly available datasets, showed that the probability estimates from the Venn predictor were extremely well-calibrated. In fact, in a direct comparison using the accepted reliability metric, the Venn predictor estimates were the most exact on every data set.
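A minimal sketch of the calibration setting described in the abstract above, assuming a held-out calibration set, scikit-learn's LogisticRegression and IsotonicRegression for the Platt-scaling and isotonic baselines, and a deliberately simplified Venn predictor whose categories are the tree's predicted labels. None of this is the authors' experimental code.

    # Illustrative only: a decision tree's raw probability estimates are
    # post-processed on a held-out calibration set.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.isotonic import IsotonicRegression
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier

    X, y = make_classification(n_samples=2000, random_state=0)
    X_tr, X_cal, y_tr, y_cal = train_test_split(X, y, test_size=0.3, random_state=0)

    tree = DecisionTreeClassifier(min_samples_leaf=20, random_state=0).fit(X_tr, y_tr)
    s_cal = tree.predict_proba(X_cal)[:, 1]                 # raw scores to be calibrated

    # Platt scaling: a sigmoid (logistic) mapping fitted to the raw scores
    platt = LogisticRegression().fit(s_cal.reshape(-1, 1), y_cal)
    # Isotonic regression: a monotone, piecewise-constant mapping
    iso = IsotonicRegression(out_of_bounds="clip").fit(s_cal, y_cal)

    def venn_interval(x):
        """Simplified Venn predictor: the category is the label the tree predicts;
        for each tentative label of the test object, report the frequency of class 1
        in that category with the test object included."""
        cat = tree.predict(x.reshape(1, -1))[0]
        in_cat = y_cal[tree.predict(X_cal) == cat]
        probs = [(np.sum(in_cat == 1) + t) / (len(in_cat) + 1) for t in (0, 1)]
        return min(probs), max(probs)                       # lower/upper bound on P(y=1)

    x0 = X_cal[0]
    s0 = tree.predict_proba(x0.reshape(1, -1))[:, 1]
    print(platt.predict_proba(s0.reshape(-1, 1))[0, 1],
          iso.predict(s0)[0],
          venn_interval(x0))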
4.
  • Löfström, Tuve, et al. (author)
  • Effective Utilization of Data in Inductive Conformal Prediction
  • 2013
  • In: Proceedings of the International Joint Conference on Neural Networks 2013. IEEE.
  • Conference paper (peer-reviewed), abstract:
    • Conformal prediction is a new framework producing region predictions with a guaranteed error rate. Inductive conformal prediction (ICP) was designed to significantly reduce the computational cost associated with the original transductive online approach. The drawback of inductive conformal prediction is that it is not possible to use all data for training, since it sets aside some data as a separate calibration set. Recently, cross-conformal prediction (CCP) and bootstrap conformal prediction (BCP) were proposed to overcome that drawback of inductive conformal prediction. Unfortunately, CCP and BCP both need to build several models for the calibration, making them less attractive. In this study, focusing on bagged neural network ensembles as conformal predictors, ICP, CCP and BCP are compared to the very straightforward and cost-effective method of using the out-of-bag estimates for the necessary calibration. Experiments on 34 publicly available data sets conclusively show that the use of out-of-bag estimates produced the most efficient conformal predictors, making it the obvious preferred choice for ensembles in the conformal prediction framework.
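A minimal sketch of the contrast the abstract above draws, assuming scikit-learn's MLPClassifier and BaggingClassifier, a simple inverse-probability nonconformity score, and synthetic data: a standard inductive conformal predictor sets data aside for calibration, while the out-of-bag variant calibrates on the bagged ensemble's OOB estimates so all data can be used for training. This is not the authors' experimental setup.

    # Illustrative only: two ways of obtaining the calibration scores for an
    # inductive conformal predictor.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.ensemble import BaggingClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier

    def prediction_region(cal_scores, test_probs, epsilon):
        """Include every label whose nonconformity score (1 - predicted probability)
        yields a conformal p-value above the significance level epsilon."""
        region, n = [], len(cal_scores)
        for label, p in enumerate(test_probs):
            p_value = (np.sum(cal_scores >= 1.0 - p) + 1) / (n + 1)
            if p_value > epsilon:
                region.append(label)
        return region

    X, y = make_classification(n_samples=1500, random_state=0)

    # Standard ICP: part of the data is set aside purely for calibration.
    X_tr, X_cal, y_tr, y_cal = train_test_split(X, y, test_size=0.25, random_state=0)
    icp_model = MLPClassifier(hidden_layer_sizes=(20,), max_iter=300,
                              random_state=0).fit(X_tr, y_tr)
    icp_cal = 1.0 - icp_model.predict_proba(X_cal)[np.arange(len(y_cal)), y_cal]

    # Out-of-bag ICP: the whole data set trains the bagged ensemble, and the
    # out-of-bag probability estimates supply the calibration scores.
    bag = BaggingClassifier(MLPClassifier(hidden_layer_sizes=(20,), max_iter=300,
                                          random_state=0),
                            n_estimators=25, oob_score=True, random_state=0).fit(X, y)
    oob_cal = 1.0 - bag.oob_decision_function_[np.arange(len(y)), y]
    oob_cal = oob_cal[~np.isnan(oob_cal)]     # drop samples that were never out-of-bag

    x_test = X[:1]
    print(prediction_region(icp_cal, icp_model.predict_proba(x_test)[0], 0.05))
    print(prediction_region(oob_cal, bag.predict_proba(x_test)[0], 0.05))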
5.
  • Sönströd, Cecilia, et al. (author)
  • Pin-Pointing Concept Descriptions
  • 2010
  • Conference paper (peer-reviewed), abstract:
    • In this study, the task of obtaining accurate and comprehensible concept descriptions of a specific set of production instances has been investigated. The suggested method, inspired by rule extraction and transductive learning, uses a highly accurate opaque model, called an oracle, to coach construction of transparent decision list models. The decision list algorithms evaluated are JRip and four different variants of Chipper, a technique specifically developed for concept description. Using 40 real-world data sets from the drug discovery domain, the results show that employing an oracle coach to label the production data resulted in significantly more accurate and smaller models for almost all techniques. Furthermore, augmenting normal training data with production data labeled by the oracle also led to significant increases in predictive performance, but with a slight increase in model size. Of the techniques evaluated, normal Chipper optimizing FOIL’s information gain and allowing conjunctive rules was clearly the best. The overall conclusion is that oracle coaching works very well for concept description.
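A minimal sketch of the oracle-coaching idea described in the abstract above, assuming a random forest as the opaque oracle and a shallow scikit-learn decision tree as a stand-in for the transparent decision-list learners (Chipper and JRip are not available in scikit-learn). The data set and parameters are invented for the example.

    # Illustrative only: an accurate but opaque "oracle" labels the production
    # instances, and a small transparent model is fitted to those labels,
    # optionally together with the original training data.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier, export_text

    X, y = make_classification(n_samples=3000, random_state=0)
    X_train, X_prod, y_train, _ = train_test_split(X, y, test_size=0.5, random_state=0)

    oracle = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_train, y_train)
    y_prod_oracle = oracle.predict(X_prod)       # the oracle labels the production set

    # Transparent model coached by the oracle on the production instances only
    coached = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_prod, y_prod_oracle)

    # Variant from the abstract: normal training data augmented with
    # oracle-labeled production data
    X_aug = np.vstack([X_train, X_prod])
    y_aug = np.concatenate([y_train, y_prod_oracle])
    coached_aug = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_aug, y_aug)

    print(export_text(coached))                  # a small, readable concept description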