SwePub

Hit list for the search "LAR1:hb ;lar1:(his);pers:(Boström Henrik)"

Search: LAR1:hb > Högskolan i Skövde > Boström Henrik

  • Results 1-5 of 5
1.
  • Johansson, Ulf, et al. (author)
  • Chipper: A Novel Algorithm for Concept Description
  • 2008
  • In: Frontiers in Artificial Intelligence and Applications. IOS Press. ISBN 9781586038670, pp. 133-140
  • Conference paper (peer-reviewed). Abstract:
    • In this paper, several demands placed on concept description algorithms are identified and discussed. The most important criterion is the ability to produce compact rule sets that, in a natural and accurate way, describe the most important relationships in the underlying domain. An algorithm based on the identified criteria is presented and evaluated. The algorithm, named Chipper, produces decision lists, where each rule covers a maximum number of remaining instances while meeting requested accuracy requirements. In the experiments, Chipper is evaluated on nine UCI data sets. The main result is that Chipper produces compact and understandable rule sets, clearly fulfilling the overall goal of concept description. In the experiments, Chipper's accuracy is similar to standard decision tree and rule induction algorithms, while rule sets have superior comprehensibility.
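The Chipper abstract above describes a greedy covering loop: pick the rule that covers the most remaining instances while meeting a requested accuracy level, append it to the decision list, and repeat on what is left. Below is a minimal Python sketch of that loop; the rule language (single-feature threshold tests), the function names, and the min_acc default are illustrative assumptions, not the published algorithm's actual design.

```python
import numpy as np

def fit_decision_list(X, y, min_acc=0.8):
    """Greedy covering: each rule is the single-feature threshold test
    that covers the most remaining instances while its majority class
    meets the requested accuracy. Sketch only -- Chipper's actual rule
    language and search are richer than this."""
    rules = []                              # (feature, op, threshold, class)
    remaining = np.ones(len(y), dtype=bool)
    while remaining.any():
        Xr, yr = X[remaining], y[remaining]
        best = None                         # (coverage, feature, op, thr, cls)
        for f in range(X.shape[1]):
            for thr in np.unique(Xr[:, f]):
                for op, mask in (("<=", Xr[:, f] <= thr),
                                 (">",  Xr[:, f] >  thr)):
                    if not mask.any():
                        continue
                    vals, counts = np.unique(yr[mask], return_counts=True)
                    if counts.max() / mask.sum() >= min_acc:
                        cand = (mask.sum(), f, op, thr, vals[counts.argmax()])
                        if best is None or cand[0] > best[0]:
                            best = cand
        if best is None:                    # no rule meets the accuracy
            break                           # requirement; stop covering
        _, f, op, thr, cls = best
        rules.append((f, op, thr, cls))
        covered = (Xr[:, f] <= thr) if op == "<=" else (Xr[:, f] > thr)
        remaining[np.flatnonzero(remaining)[covered]] = False
    vals, counts = np.unique(y, return_counts=True)
    return rules, vals[counts.argmax()]     # decision list + default rule

def predict(rules, default, x):
    """Apply rules in order; fall through to the default class."""
    for f, op, thr, cls in rules:
        if (x[f] <= thr) if op == "<=" else (x[f] > thr):
            return cls
    return default
```

Maximizing coverage subject to an accuracy floor is what yields the compact, ordered rule sets the abstract reports.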
2.
  • Johansson, Ulf, et al. (author)
  • Extending Nearest Neighbor Classification with Spheres of Confidence
  • 2008
  • In: Proceedings of the Twenty-First International FLAIRS Conference (FLAIRS 2008). AAAI Press. ISBN 9781577353652, pp. 282-287
  • Conference paper (peer-reviewed). Abstract:
    • The standard kNN algorithm suffers from two major drawbacks: sensitivity to the parameter value k, i.e., the number of neighbors, and the use of k as a global constant that is independent of the particular region in which the example to be classified falls. Methods using weighted voting schemes only partly alleviate these problems, since they still involve choosing a fixed k. In this paper, a novel instance-based learner is introduced that does not require k as a parameter, but instead employs a flexible strategy for determining the number of neighbors to consider for the specific example to be classified, hence using a local instead of a global k. A number of variants of the algorithm are evaluated on 18 datasets from the UCI repository. The novel algorithm in its basic form is shown to significantly outperform standard kNN with respect to accuracy, and an adapted version of the algorithm is shown to be clearly ahead with respect to the area under the ROC curve. Similar to standard kNN, the novel algorithm still allows for various extensions, such as weighted voting and axes scaling.
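The key idea above is replacing the global k with a per-example neighborhood size. Here is a minimal sketch of one such local-k rule, assuming a simple stopping criterion (grow the neighborhood while it stays class-homogeneous); the paper's actual "spheres of confidence" construction is not reproduced here.

```python
import numpy as np

def local_knn_predict(X_train, y_train, x, k_max=25):
    """kNN with a per-example (local) k: grow the neighborhood outward
    from the nearest point and stop when the next neighbor would break
    class homogeneity. This stopping rule is an illustrative assumption,
    not the paper's actual criterion."""
    order = np.argsort(np.linalg.norm(X_train - x, axis=1))
    k = 1
    while (k < min(k_max, len(order))
           and y_train[order[k]] == y_train[order[0]]):
        k += 1                              # k chosen per example
    vals, counts = np.unique(y_train[order[:k]], return_counts=True)
    return vals[counts.argmax()]            # majority vote over local k
```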
3.
  • Löfström, Tuve, et al. (author)
  • Ensemble member selection using multi-objective optimization
  • 2009
  • In: IEEE Symposium on Computational Intelligence and Data Mining. IEEE conference proceedings. ISBN 9781424427659, pp. 245-251
  • Conference paper (peer-reviewed). Abstract:
    • Both theory and a wealth of empirical studies have established that ensembles are more accurate than single predictive models. Unfortunately, the problem of how to maximize ensemble accuracy is, especially for classification, far from solved. In essence, the key problem is to find a suitable criterion, typically based on training or selection set performance, highly correlated with ensemble accuracy on novel data. Several studies have, however, shown that it is difficult to come up with a single measure, such as ensemble or base classifier selection set accuracy, or some measure based on diversity, that is a good general predictor for ensemble test accuracy. This paper presents a novel technique that for each learning task searches for the most effective combination of given atomic measures, by means of a genetic algorithm. Ensembles built from either neural networks or random forests were empirically evaluated on 30 UCI datasets. The experimental results show that when using the generated combined optimization criteria to rank candidate ensembles, a higher test set accuracy for the top ranked ensemble was achieved, compared to using ensemble accuracy on selection data alone. Furthermore, when creating ensembles from a pool of neural networks, the use of the generated combined criteria was shown to generally outperform the use of estimated ensemble accuracy as the single optimization criterion.
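The abstract describes a genetic algorithm that searches for an effective combination of given atomic measures for ranking candidate ensembles. A toy sketch follows, assuming a linear weighting of the measures and correlation with validation accuracy as the fitness; both the combination form and the fitness are assumptions, not the paper's actual choices.

```python
import numpy as np

def ga_combine(measures, val_acc, pop=40, gens=60, sigma=0.2, seed=0):
    """Toy GA over linear weightings of atomic measures.
    measures: (n_candidates, n_measures) scores per candidate ensemble.
    val_acc:  validation accuracy per candidate; correlation with the
              combined score is used as fitness (an assumption)."""
    rng = np.random.default_rng(seed)
    n_m = measures.shape[1]
    P = rng.normal(size=(pop, n_m))         # population of weight vectors

    def fitness(w):
        s = measures @ w
        return -1.0 if s.std() == 0 else np.corrcoef(s, val_acc)[0, 1]

    for _ in range(gens):
        fit = np.array([fitness(w) for w in P])
        i, j = rng.integers(pop, size=(2, pop))
        parents = P[np.where(fit[i] > fit[j], i, j)]   # tournament selection
        mates = parents[rng.permutation(pop)]
        mask = rng.random((pop, n_m)) < 0.5            # uniform crossover
        P = np.where(mask, parents, mates) + rng.normal(0, sigma, (pop, n_m))
    fit = np.array([fitness(w) for w in P])
    return P[fit.argmax()]                  # best weighting found
```

Candidate ensembles would then be ranked by measures @ w rather than by selection-set accuracy alone.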
4.
  • Löfström, Tuve, et al. (author)
  • On the Use of Accuracy and Diversity Measures for Evaluating and Selecting Ensembles of Classifiers
  • 2008
  • In: 2008 Seventh International Conference on Machine Learning and Applications. IEEE. ISBN 9780769534954, pp. 127-132
  • Conference paper (peer-reviewed). Abstract:
    • The test set accuracy for ensembles of classifiers selected based on single measures of accuracy and diversity as well as combinations of such measures is investigated. It is found that by combining measures, a higher test set accuracy may be obtained than by using any single accuracy or diversity measure. It is further investigated whether a multi-criteria search for an ensemble that maximizes both accuracy and diversity leads to more accurate ensembles than by optimizing a single criterion. The results indicate that it might be more beneficial to search for ensembles that are both accurate and diverse. Furthermore, the results show that diversity measures could compete with accuracy measures as selection criterion.
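To make the accuracy/diversity combination above concrete, here is a sketch of one standard diversity measure (pairwise disagreement) and an illustrative convex combination with mean base-classifier accuracy; the alpha knob and the choice of disagreement are assumptions, as the paper evaluates several measures and combinations.

```python
import numpy as np

def disagreement(preds):
    """Pairwise disagreement diversity: the average fraction of
    instances on which two base classifiers predict differently.
    preds: (n_classifiers, n_instances) array of class predictions."""
    n = preds.shape[0]
    pairs = [(a, b) for a in range(n) for b in range(a + 1, n)]
    return np.mean([np.mean(preds[a] != preds[b]) for a, b in pairs])

def combined_score(preds, y, alpha=0.5):
    """Convex combination of mean base-classifier accuracy and
    disagreement diversity; alpha is an illustrative knob."""
    acc = np.mean([np.mean(p == y) for p in preds])
    return alpha * acc + (1 - alpha) * disagreement(preds)
```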
5.
  • Löfström, Tuve, et al. (author)
  • The Problem with Ranking Ensembles Based on Training or Validation Performance
  • 2008
  • In: Proceedings of the International Joint Conference on Neural Networks. IEEE. ISBN 9781424418213, 9781424418206
  • Conference paper (peer-reviewed). Abstract:
    • The main purpose of this study was to determine whether it is possible to somehow use results on training or validation data to estimate ensemble performance on novel data. With the specific setup evaluated, i.e., using ensembles built from a pool of independently trained neural networks and targeting diversity only implicitly, the answer is a resounding no. Experimentation, using 13 UCI datasets, shows that there is in general nothing to gain in performance on novel data by choosing an ensemble based on any of the training measures evaluated here. This is despite the fact that the measures evaluated include all the most frequently used, i.e., ensemble training and validation accuracy, base classifier training and validation accuracy, ensemble training and validation AUC, and two diversity measures. The main reason is that all ensembles tend to have quite similar performance, unless we deliberately lower the accuracy of the base classifiers. The key consequence is, of course, that a data miner can do no better than picking an ensemble at random. In addition, the results indicate that it is futile to look for an algorithm aimed at optimizing ensemble performance by somehow selecting a subset of available base classifiers.
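A compact sketch of the comparison the abstract reports: build a pool of independently trained networks, enumerate candidate ensembles, and compare the test accuracy of the ensemble with the best validation accuracy against the expected accuracy of a random pick. The dataset, pool size, and subset scheme below are stand-ins, not the study's setup (which used 13 UCI datasets).

```python
import numpy as np
from itertools import combinations
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Pool of independently trained networks; candidate ensembles are
# 5-member subsets of the pool combined by majority vote.
X, y = load_breast_cancer(return_X_y=True)
X_tr, X_rest, y_tr, y_rest = train_test_split(X, y, test_size=0.4, random_state=1)
X_val, X_te, y_val, y_te = train_test_split(X_rest, y_rest, test_size=0.5, random_state=1)

pool = [MLPClassifier(hidden_layer_sizes=(8,), max_iter=1000,
                      random_state=s).fit(X_tr, y_tr) for s in range(10)]

def vote(members, X):
    """Majority vote of binary (0/1) class predictions."""
    return (np.mean([m.predict(X) for m in members], axis=0) >= 0.5).astype(int)

cands = list(combinations(range(10), 5))
val = [np.mean(vote([pool[i] for i in c], X_val) == y_val) for c in cands]
test = [np.mean(vote([pool[i] for i in c], X_te) == y_te) for c in cands]

# Does picking by validation accuracy beat a random pick on test data?
print("picked-by-validation test acc:", test[int(np.argmax(val))])
print("mean (random-pick) test acc:  ", float(np.mean(test)))
```

If the paper's finding holds, the two printed numbers tend to be close: all candidate ensembles perform similarly, so validation-based ranking offers little over random selection.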
Type of publication
conference paper (5)
Type of content
peer-reviewed (5)
Author/editor
Johansson, Ulf (5)
Löfström, Tuve (4)
König, Rikard (1)
Sönströd, Cecilia (1)
University
Kungliga Tekniska Högskolan (5)
Högskolan i Borås (5)
Jönköping University (4)
Stockholms universitet (3)
Language
English (5)
Research subject (UKÄ/SCB)
Natural sciences (5)
