SwePub
  • Johansson, Ulf, Jönköping University, Jönköping AI Lab (JAIL) (author)

Efficient Venn predictors using random forests

  • Article/chapter, English, 2019

Publisher, publication year, extent ...

  • 2018-08-20
  • Springer, 2019
  • print (rdacarrier)

Numbers

  • LIBRIS-ID: oai:DiVA.org:kth-246235
  • URI: https://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-246235
  • DOI: https://doi.org/10.1007/s10994-018-5753-x
  • URI: https://urn.kb.se/resolve?urn=urn:nbn:se:hj:diva-41127

Supplementary language notes

  • Language: English
  • Summary in: English

Classification

  • Subject category (swepub-contenttype): ref (refereed/peer-reviewed)
  • Subject category (swepub-publicationtype): art (journal article)

Notes

  • QC 20190403
  • Successful use of probabilistic classification requires well-calibrated probability estimates, i.e., the predicted class probabilities must correspond to the true probabilities. In addition, a probabilistic classifier must, of course, also be as accurate as possible. In this paper, Venn predictors, and its special case Venn-Abers predictors, are evaluated for probabilistic classification, using random forests as the underlying models. Venn predictors output multiple probabilities for each label, i.e., the predicted label is associated with a probability interval. Since all Venn predictors are valid in the long run, the size of the probability intervals is very important, with tighter intervals being more informative. The standard solution when calibrating a classifier is to employ an additional step, transforming the outputs from a classifier into probability estimates, using a labeled data set not employed for training of the models. For random forests, and other bagged ensembles, it is, however, possible to use the out-of-bag instances for calibration, making all training data available for both model learning and calibration. This procedure has previously been successfully applied to conformal prediction, but was here evaluated for the first time for Venn predictors. The empirical investigation, using 22 publicly available data sets, showed that all four versions of the Venn predictors were better calibrated than both the raw estimates from the random forest, and the standard techniques Platt scaling and isotonic regression. Regarding both informativeness and accuracy, the standard Venn predictor calibrated on out-of-bag instances was the best setup evaluated. Most importantly, calibrating on out-of-bag instances, instead of using a separate calibration set, resulted in tighter intervals and more accurate models on every data set, for both the Venn predictors and the Venn-Abers predictors.
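
The out-of-bag calibration idea summarised in the abstract can be sketched in a few lines of Python. The sketch below is not the authors' code and covers only the Venn-Abers special case mentioned in the abstract: a random forest is trained with out-of-bag estimation enabled, the out-of-bag class probabilities of the training instances serve as the calibration set (so all training data are used for both model learning and calibration), and each test score is calibrated by fitting two isotonic regressions, one per assumed label, giving a probability interval. The synthetic data, scikit-learn estimators and hyperparameters are illustrative assumptions, not the paper's experimental setup.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.isotonic import IsotonicRegression
from sklearn.model_selection import train_test_split


def venn_abers_interval(cal_scores, cal_labels, test_score):
    """Venn-Abers probability interval (p0, p1) for one test score."""
    probs = []
    for assumed_label in (0, 1):
        iso = IsotonicRegression(y_min=0.0, y_max=1.0, out_of_bounds="clip")
        # Augment the calibration set with the test object under each hypothetical label.
        iso.fit(np.append(cal_scores, test_score),
                np.append(cal_labels, assumed_label))
        probs.append(float(iso.predict([test_score])[0]))
    return probs[0], probs[1]


# Illustrative synthetic data (assumption, not one of the paper's 22 data sets).
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# oob_score=True makes the forest keep per-instance out-of-bag class estimates,
# so no separate calibration set has to be held out.
rf = RandomForestClassifier(n_estimators=300, oob_score=True, random_state=0)
rf.fit(X_train, y_train)

cal_scores = rf.oob_decision_function_[:, 1]   # OOB estimate of P(y=1) per training instance
test_scores = rf.predict_proba(X_test)[:, 1]

for s in test_scores[:5]:
    p0, p1 = venn_abers_interval(cal_scores, y_train, s)
    p = p1 / (1.0 - p0 + p1)                   # a common single-valued summary of the interval
    print(f"raw={s:.3f}  interval=[{p0:.3f}, {p1:.3f}]  calibrated={p:.3f}")

The width of the printed interval is the quantity the abstract calls informativeness: tighter intervals are more informative, and the paper reports that calibrating on out-of-bag instances rather than a held-out calibration set yields tighter intervals on every data set.
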

Added entries (persons, corporate bodies, meetings, titles ...)

  • Löfström, Tuwe, 1977-, Jönköping University, Jönköping AI Lab (JAIL), (Swepub:hj)loftuw (author)
  • Linusson, Henrik, Univ Boras, Dept Informat Technol, Boras, Sweden; Högskolan i Borås, Department of Information Technology, Borås, Sweden (author)
  • Boström, Henrik, KTH, Programvaruteknik och datorsystem, SCS; The Royal Institute of Technology (KTH), School of Electrical Engineering and Computer Science, Stockholm, Sweden, (Swepub:kth)u1r0rr47 (author)
  • Jönköping University, Jönköping AI Lab (JAIL) (creator_code:org_t)

Related titles

  • In: Machine Learning, Springer, 108:3, pp. 535-550. ISSN 0885-6125, E-ISSN 1573-0565
