SwePub
Search the SwePub database



Hit list for search "AMNE:(NATURAL SCIENCES Computer and Information Sciences) ;pers:(Boström Henrik)"

Search: AMNE:(NATURAL SCIENCES Computer and Information Sciences) > Boström Henrik

  • Results 1-10 of 173
1.
  • Johansson, Ulf, et al. (authors)
  • Overproduce-and-Select : The Grim Reality
  • 2013
  • In: 2013 IEEE Symposium on Computational Intelligence and Ensemble Learning (CIEL). - : IEEE. - 9781467358538 ; , pp. 52-59
  • Conference paper (peer-reviewed), abstract:
    • Overproduce-and-select (OPAS) is a frequently used paradigm for building ensembles. In static OPAS, a large number of base classifiers are trained, before a subset of the available models is selected to be combined into the final ensemble. In general, the selected classifiers are supposed to be accurate and diverse for the OPAS strategy to result in highly accurate ensembles, but exactly how this is enforced in the selection process is not obvious. Most often, either individual models or ensembles are evaluated, using some performance metric, on available and labeled data. Naturally, the underlying assumption is that an observed advantage for the models (or the resulting ensemble) will carry over to test data. In the experimental study, a typical static OPAS scenario, using a pool of artificial neural networks and a number of very natural and frequently used performance measures, is evaluated on 22 publicly available data sets. The discouraging result is that although a fairly large proportion of the ensembles obtained higher test set accuracies, compared to using the entire pool as the ensemble, none of the selection criteria could be used to identify these highly accurate ensembles. Despite only investigating a specific scenario, we argue that the settings used are typical for static OPAS, thus making the results general enough to question the entire paradigm.
  •  
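The static overproduce-and-select loop the abstract describes — train a pool, then pick the subset that scores best on labeled data — can be sketched in a few lines of Python. This is a minimal illustration, not the paper's code; `opas_select`, the toy pool and majority voting as the combiner are all assumptions made for the sketch:

```python
from collections import Counter
from itertools import combinations

def majority_vote(member_preds):
    # combine per-model prediction lists by plurality vote, one vote per example
    n = len(member_preds[0])
    return [Counter(p[j] for p in member_preds).most_common(1)[0][0]
            for j in range(n)]

def accuracy(pred, truth):
    return sum(p == t for p, t in zip(pred, truth)) / len(truth)

def opas_select(pool_preds, y_labeled, size):
    # static OPAS: score every subset of the given size on labeled data
    # and keep the best; the abstract's point is precisely that this
    # ranking need not carry over to unseen test data
    return max(combinations(range(len(pool_preds)), size),
               key=lambda idx: accuracy(
                   majority_vote([pool_preds[i] for i in idx]), y_labeled))

# three hypothetical base classifiers' predictions on four labeled examples
pool = [[1, 0, 0, 1], [1, 1, 0, 0], [0, 0, 1, 1]]
y = [1, 1, 0, 0]
best = opas_select(pool, y, size=1)
```

For larger pools one would enumerate subsets more cleverly (or greedily), but the evaluation-on-labeled-data step stays the same.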
2.
  • Linnusson, Henrik, et al. (authors)
  • Efficient conformal predictor ensembles
  • 2020
  • In: Neurocomputing. - : Elsevier BV. - 0925-2312 .- 1872-8286. ; 397, pp. 266-278
  • Journal article (peer-reviewed), abstract:
    • In this paper, we study a generalization of a recently developed strategy for generating conformal predictor ensembles: out-of-bag calibration. The ensemble strategy is evaluated, both theoretically and empirically, against a commonly used alternative ensemble strategy, bootstrap conformal prediction, as well as common non-ensemble strategies. A thorough analysis is provided of out-of-bag calibration, with respect to theoretical validity, empirical validity (error rate), efficiency (prediction region size) and p-value stability (the degree of variance observed over multiple predictions for the same object). Empirical results show that out-of-bag calibration displays favorable characteristics with regard to these criteria, and we propose that out-of-bag calibration be adopted as a standard method for constructing conformal predictor ensembles.
  •  
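The bootstrap bookkeeping that out-of-bag calibration relies on can be sketched as follows. This is a hypothetical minimal version in Python; `bag_with_oob` is an illustrative name, not from the paper:

```python
import random

def bag_with_oob(n_examples, n_models, rng):
    """Draw one bootstrap sample per ensemble member and record, for
    every training example, which members never saw it (its out-of-bag
    members). In out-of-bag calibration, example i's calibration score
    is computed from oob[i] only, so no separate calibration set has
    to be carved off the training data."""
    in_bag = [set(rng.choices(range(n_examples), k=n_examples))
              for _ in range(n_models)]
    oob = {i: [m for m in range(n_models) if i not in in_bag[m]]
           for i in range(n_examples)}
    return in_bag, oob

in_bag, oob = bag_with_oob(8, 5, random.Random(1))
```

Each training example is then scored by roughly a third of the members (the expected out-of-bag fraction of bootstrap sampling), which is what makes the calibration essentially free.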
3.
  • Johansson, Ulf, et al. (authors)
  • Conformal Prediction Using Decision Trees
  • 2013
  • In: IEEE 13th International Conference on Data Mining (ICDM). - : IEEE Computer Society. - 9780769551081 ; , pp. 330-339
  • Conference paper (peer-reviewed), abstract:
    • Conformal prediction is a relatively new framework in which the predictive models output sets of predictions with a bound on the error rate, i.e., in a classification context, the probability of excluding the correct class label is lower than a predefined significance level. An investigation of the use of decision trees within the conformal prediction framework is presented, with the overall purpose of determining the effect of different algorithmic choices, including split criterion, pruning scheme and way to calculate the probability estimates. Since the error rate is bounded by the framework, the most important property of conformal predictors is efficiency, which concerns minimizing the number of elements in the output prediction sets. Results from one of the largest empirical investigations to date within the conformal prediction framework are presented, showing that in order to optimize efficiency, the decision trees should be induced using no pruning and with smoothed probability estimates. The choice of split criterion to use for the actual induction of the trees did not turn out to have any major impact on the efficiency. Finally, the experimentation also showed that when using decision trees, standard inductive conformal prediction was as efficient as the recently suggested method cross-conformal prediction. This is an encouraging result since cross-conformal prediction uses several decision trees, thus sacrificing the interpretability of a single decision tree.
  •  
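The inductive conformal prediction machinery the abstract refers to, with Laplace-smoothed leaf probabilities as the source of nonconformity scores, can be sketched like this. It is an illustrative Python sketch with assumed toy numbers, not the paper's implementation:

```python
def laplace_prob(class_count, leaf_total, n_classes):
    # Laplace-smoothed class probability from a decision-tree leaf,
    # the smoothing the paper finds important for efficiency
    return (class_count + 1) / (leaf_total + n_classes)

def p_value(cal_scores, test_score):
    # inductive conformal p-value: fraction of calibration nonconformity
    # scores at least as large as the test score, counting the test
    # example itself
    n_ge = sum(1 for s in cal_scores if s >= test_score)
    return (n_ge + 1) / (len(cal_scores) + 1)

def prediction_set(cal_scores, label_scores, significance):
    # keep every candidate label whose p-value exceeds the significance level
    return sorted(y for y, s in label_scores.items()
                  if p_value(cal_scores, s) > significance)

# nonconformity scores (1 - smoothed true-class probability) from a
# held-out calibration set, plus scores for one hypothetical test example
cal = [0.05, 0.10, 0.20, 0.30, 0.40, 0.55, 0.60, 0.70, 0.80]
scores = {"A": 1 - laplace_prob(9, 10, 2),   # leaf strongly favors A
          "B": 1 - laplace_prob(3, 10, 2)}   # leaf weakly favors B
print(prediction_set(cal, scores, significance=0.35))
```

Smaller prediction sets at the same significance level are what "efficiency" means here, which is why the smoothed leaf estimates matter.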
4.
  • Johansson, Ulf, et al. (authors)
  • Evolved Decision Trees as Conformal Predictors
  • 2013
  • In: 2013 IEEE Congress on Evolutionary Computation (CEC). - : IEEE. - 9781479904532 ; , pp. 1794-1801
  • Conference paper (peer-reviewed), abstract:
    • In conformal prediction, predictive models output sets of predictions with a bound on the error rate. In classification, this means that the probability of excluding the correct class is, in the long run, lower than a predefined significance level. Since the error rate is guaranteed, the most important criterion for conformal predictors is efficiency. Efficient conformal predictors minimize the number of elements in the output prediction sets, thus producing more informative predictions. This paper presents one of the first comprehensive studies where evolutionary algorithms are used to build conformal predictors. More specifically, decision trees evolved using genetic programming are evaluated as conformal predictors. In the experiments, the evolved trees are compared to decision trees induced using standard machine learning techniques on 33 publicly available benchmark data sets, with regard to predictive performance and efficiency. The results show that the evolved trees are generally more accurate, and the corresponding conformal predictors more efficient, than their induced counterparts. One important result is that the probability estimates of decision trees when used as conformal predictors should be smoothed, here using the Laplace correction. Finally, using the more discriminating Brier score instead of accuracy as the optimization criterion produced the most efficient conformal predictions.
  •  
5.
  • Johansson, Ulf, et al. (authors)
  • Random Brains
  • 2013
  • In: The 2013 International Joint Conference on Neural Networks (IJCNN). - : IEEE. - 9781467361286 ; , pp. 1-8
  • Conference paper (peer-reviewed), abstract:
    • In this paper, we introduce and evaluate a novel method, called random brains, for producing neural network ensembles. The suggested method, which is heavily inspired by the random forest technique, produces diversity implicitly by using bootstrap training and randomized architectures. More specifically, for each base classifier multilayer perceptron, a number of randomly selected links between the input layer and the hidden layer are removed prior to training, thus resulting in potentially weaker but more diverse base classifiers. The experimental results on 20 UCI data sets show that random brains obtained significantly higher accuracy and AUC, compared to standard bagging of similar neural networks not utilizing randomized architectures. The analysis shows that the main reason for the increased ensemble performance is the ability to produce effective diversity, as indicated by the increase in the difficulty diversity measure.
  •  
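The architecture-randomization step can be sketched as a binary mask over the input-to-hidden weights of each member network. This is illustrative Python; `random_brain_mask` and the drop fraction are assumptions for the sketch, not the paper's exact procedure:

```python
import random

def random_brain_mask(n_inputs, n_hidden, drop_fraction, rng):
    """Binary mask over the input-to-hidden weight matrix of an MLP:
    a fixed fraction of randomly chosen links is removed before
    training starts, giving each ensemble member a different, weaker
    but more diverse architecture (the 'random brains' idea)."""
    links = [(i, h) for i in range(n_inputs) for h in range(n_hidden)]
    dropped = set(rng.sample(links, int(drop_fraction * len(links))))
    return [[0 if (i, h) in dropped else 1 for h in range(n_hidden)]
            for i in range(n_inputs)]

# one member's mask: 4 inputs, 3 hidden units, 25% of links removed
mask = random_brain_mask(4, 3, 0.25, random.Random(0))
```

Unlike dropout, the mask is drawn once per member and stays fixed through training, so the diversity lives in the architectures rather than in the optimization.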
6.
  • Johansson, Ulf, et al. (authors)
  • Rule Extraction with Guaranteed Fidelity
  • 2014
  • In: Artificial Intelligence Applications and Innovations. - Cham : Springer. - 9783662447215 - 9783662447222 ; , pp. 281-290
  • Conference paper (peer-reviewed), abstract:
    • This paper extends the conformal prediction framework to rule extraction, making it possible to extract interpretable models from opaque models in a setting where either the infidelity or the error rate is bounded by a predefined significance level. Experimental results on 27 publicly available data sets show that all three setups evaluated produced valid and rather efficient conformal predictors. The implication is that augmenting rule extraction with conformal prediction allows extraction of models where test set errors or test set infidelities are guaranteed to be lower than a chosen acceptable level. Clearly this is beneficial for both typical rule extraction scenarios, i.e., either when the purpose is to explain an existing opaque model, or when it is to build a predictive model that must be interpretable.
  •  
7.
  • Linusson, Henrik, et al. (authors)
  • Efficiency Comparison of Unstable Transductive and Inductive Conformal Classifiers
  • 2014
  • In: Artificial Intelligence Applications and Innovations. - Cham : Springer. - 9783662447215 ; , pp. 261-270
  • Conference paper (peer-reviewed), abstract:
    • In the conformal prediction literature, it appears axiomatic that transductive conformal classifiers possess a higher predictive efficiency than inductive conformal classifiers; however, this depends on whether or not the nonconformity function tends to overfit misclassified test examples. With the conformal prediction framework’s increasing popularity, it thus becomes necessary to clarify the settings in which this claim holds true. In this paper, the efficiency of transductive conformal classifiers based on decision tree, random forest and support vector machine classification models is compared to the efficiency of corresponding inductive conformal classifiers. The results show that the efficiency of conformal classifiers based on standard decision trees or random forests is substantially improved when used in the inductive mode, while conformal classifiers based on support vector machines are more efficient in the transductive mode. In addition, an analysis is presented that discusses the effects of calibration set size on inductive conformal classifier efficiency.
  •  
8.
  • Löfström, Tuve, et al. (authors)
  • Effective Utilization of Data in Inductive Conformal Prediction using Ensembles of Neural Networks
  • 2013
  • In: The 2013 International Joint Conference on Neural Networks (IJCNN). - : IEEE. - 9781467361286 ; , pp. 1-8
  • Conference paper (peer-reviewed), abstract:
    • Conformal prediction is a new framework producing region predictions with a guaranteed error rate. Inductive conformal prediction (ICP) was designed to significantly reduce the computational cost associated with the original transductive online approach. The drawback of inductive conformal prediction is that it is not possible to use all data for training, since it sets aside some data as a separate calibration set. Recently, cross-conformal prediction (CCP) and bootstrap conformal prediction (BCP) were proposed to overcome that drawback of inductive conformal prediction. Unfortunately, CCP and BCP both need to build several models for the calibration, making them less attractive. In this study, focusing on bagged neural network ensembles as conformal predictors, ICP, CCP and BCP are compared to the very straightforward and cost-effective method of using the out-of-bag estimates for the necessary calibration. Experiments on 34 publicly available data sets conclusively show that the use of out-of-bag estimates produced the most efficient conformal predictors, making it the obvious preferred choice for ensembles in the conformal prediction framework.
  •  
9.
  • Johansson, Ulf, et al. (authors)
  • Extending Nearest Neighbor Classification with Spheres of Confidence
  • 2008
  • In: Proceedings of the Twenty-First International FLAIRS Conference (FLAIRS 2008). - : AAAI Press. - 9781577353652 ; , pp. 282-287
  • Conference paper (peer-reviewed), abstract:
    • The standard kNN algorithm suffers from two major drawbacks: sensitivity to the parameter value k, i.e., the number of neighbors, and the use of k as a global constant that is independent of the particular region in which the example to be classified falls. Methods using weighted voting schemes only partly alleviate these problems, since they still involve choosing a fixed k. In this paper, a novel instance-based learner is introduced that does not require k as a parameter, but instead employs a flexible strategy for determining the number of neighbors to consider for the specific example to be classified, hence using a local instead of a global k. A number of variants of the algorithm are evaluated on 18 datasets from the UCI repository. The novel algorithm in its basic form is shown to significantly outperform standard kNN with respect to accuracy, and an adapted version of the algorithm is shown to be clearly ahead with respect to the area under ROC curve. Similar to standard kNN, the novel algorithm still allows for various extensions, such as weighted voting and axes scaling.
  •  
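One plausible reading of the local-k idea can be sketched as follows. This is a hypothetical simplification for illustration, not the paper's exact algorithm: order the training points by distance to the test example and grow the neighborhood until the neighbors stop agreeing.

```python
import math

def local_k_predict(train, x):
    """Hypothetical local-k rule: instead of a fixed global k, expand
    the neighborhood outward from the test point x and stop as soon as
    the neighbors are no longer unanimous; predict that unanimous
    label and report the locally chosen k."""
    by_dist = sorted(train, key=lambda p: math.dist(p[0], x))
    label, k = by_dist[0][1], 1
    for _, y in by_dist[1:]:
        if y != label:
            break
        k += 1
    return label, k

# toy 1-D training set: two "A" points, then two "B" points
train = [((0.0,), "A"), ((1.0,), "A"), ((2.0,), "B"), ((3.0,), "B")]
print(local_k_predict(train, (0.2,)))
```

The point of the sketch is only that k is decided per test example from the local neighborhood structure, rather than fixed globally.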
10.
  • Löfström, Tuve, et al. (authors)
  • The Problem with Ranking Ensembles Based on Training or Validation Performance
  • 2008
  • In: Proceedings of the International Joint Conference on Neural Networks. - : IEEE. - 9781424418213 - 9781424418206
  • Conference paper (peer-reviewed), abstract:
    • The main purpose of this study was to determine whether it is possible to somehow use results on training or validation data to estimate ensemble performance on novel data. With the specific setup evaluated, i.e., using ensembles built from a pool of independently trained neural networks and targeting diversity only implicitly, the answer is a resounding no. Experimentation, using 13 UCI datasets, shows that there is in general nothing to gain in performance on novel data by choosing an ensemble based on any of the training measures evaluated here. This is despite the fact that the measures evaluated include all the most frequently used, i.e., ensemble training and validation accuracy, base classifier training and validation accuracy, ensemble training and validation AUC and two diversity measures. The main reason is that all ensembles tend to have quite similar performance, unless we deliberately lower the accuracy of the base classifiers. The key consequence is, of course, that a data miner can do no better than picking an ensemble at random. In addition, the results indicate that it is futile to look for an algorithm aimed at optimizing ensemble performance by somehow selecting a subset of available base classifiers.
  •  
Type of publication
conference paper (120)
journal article (34)
doctoral thesis (10)
proceedings (editorship) (2)
book chapter (2)
licentiate thesis (2)
artistic work (1)
report (1)
other publication (1)
patent (1)
Type of content
peer-reviewed (151)
other academic/artistic (21)
popular science, debate etc. (1)
Author/editor
Johansson, Ulf (52)
Löfström, Tuve (21)
Asker, Lars (21)
Papapetrou, Panagiot ... (19)
Zhao, Jing (15)
Linusson, Henrik (14)
Karlsson, Isak (14)
Norinder, Ulf (13)
Löfström, Tuwe, 1977 ... (13)
Sönströd, Cecilia (13)
Henriksson, Aron (8)
Deegalla, Sampath (8)
Boström, Henrik, Pro ... (8)
Karunaratne, Thashme ... (6)
Carlsson, Lars (6)
Lindgren, Tony (6)
Dudas, Catarina (6)
Ennadir, Sofiane (5)
Vazirgiannis, Michal ... (5)
Dalianis, Hercules (5)
Alkhatib, Amr (5)
Gurung, Ram B. (5)
Ahlberg, Ernst (4)
Norinder, Ulf, 1956- (4)
Sundell, Håkan (4)
Henelius, Andreas (4)
Johansson, U (3)
Ng, Amos H. C., 1970 ... (3)
Karlsson, Alexander (3)
Johansson, Ronnie (3)
König, Rikard (3)
Walgama, Keerthi (3)
Sotomane, Constantin ... (3)
Jansson, Karl (3)
Vasiloudis, Theodore (3)
Puolamäki, Kai (3)
Abbahaddou, Yassine (2)
Lutzeyer, Johannes F ... (2)
Karunaratne, Thashme ... (2)
Ng, A (2)
Ng, Amos (2)
Löfström, Tuwe (2)
Vesterberg, Anders (2)
Dudas, C. (2)
Lidén, Per (2)
Sönströd, C. (2)
Löfström, T. (2)
Löfström, Tuve, 1977 ... (2)
Linnusson, Henrik (2)
Higher education institution
Kungliga Tekniska Högskolan (145)
Stockholms universitet (106)
Jönköping University (43)
Högskolan i Borås (30)
Högskolan i Skövde (21)
Örebro universitet (4)
Uppsala universitet (3)
Malmö universitet (1)
Mittuniversitetet (1)
Linnéuniversitetet (1)
Language
English (173)
Research subject (UKÄ/SCB)
Natural sciences (173)
Engineering and technology (3)
Social sciences (3)
Medical and health sciences (2)

