SwePub
Search the SwePub database


Hit list for the search "WFRF:(Löfström Helena)"

Search: WFRF:(Löfström Helena)

  • Results 1-12 of 12
1.
  • Johansson, Ulf, et al. (authors)
  • Conformal Prediction for Accuracy Guarantees in Classification with Reject Option
  • 2023
  • In: Modeling Decisions for Artificial Intelligence. - Springer. - 9783031334979 ; pp. 133-145
  • Conference paper (peer-reviewed), abstract:
    • A standard classifier is forced to predict the label of every test instance, even when confidence in the predictions is very low. In many scenarios it would, however, be better to avoid making these predictions, perhaps leaving them to a human expert. A classifier with that alternative is referred to as a classifier with reject option. In this paper, we propose an algorithm that, for a particular data set, automatically suggests a number of accuracy levels, which it will be able to meet perfectly, using a classifier with reject option. Since the basis of the suggested algorithm is conformal prediction, it comes with strong validity guarantees. The experimentation, using 25 publicly available two-class data sets, confirms that the algorithm obtains empirical accuracies very close to the requested levels. In addition, in an outright comparison with probabilistic predictors, including models calibrated with Platt scaling, the suggested algorithm clearly outperforms the alternatives. A minimal code sketch of the reject-option idea follows this entry.
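The reject-option mechanism described above can be illustrated with split conformal prediction: predict only when the prediction set at significance eps contains a single label, so that conformal validity bounds how often the true label falls outside an accepted prediction. The following is a minimal, self-contained sketch, not the authors' algorithm (which additionally suggests attainable accuracy levels automatically); the data set and model are stand-ins.

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    X, y = make_classification(n_samples=2000, random_state=0)
    X_train, X_rest, y_train, y_rest = train_test_split(X, y, test_size=0.5, random_state=0)
    X_cal, X_test, y_cal, y_test = train_test_split(X_rest, y_rest, test_size=0.5, random_state=0)

    model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

    # Nonconformity score: 1 - predicted probability of the true class.
    cal_scores = 1 - model.predict_proba(X_cal)[np.arange(len(y_cal)), y_cal]

    def predict_with_reject(X, eps):
        """Predict labels; -1 marks rejected (ambiguous) instances."""
        proba = model.predict_proba(X)
        preds = np.full(len(X), -1)
        for i, p in enumerate(proba):
            # Conformal p-value for each candidate label.
            pvals = [(np.sum(cal_scores >= 1 - p[c]) + 1) / (len(cal_scores) + 1)
                     for c in range(len(p))]
            region = [c for c, pv in enumerate(pvals) if pv > eps]
            if len(region) == 1:      # singleton prediction set -> accept
                preds[i] = region[0]
        return preds

    preds = predict_with_reject(X_test, eps=0.05)
    kept = preds != -1
    print(f"accepted {kept.mean():.0%}; accuracy on accepted: "
          f"{(preds[kept] == y_test[kept]).mean():.3f}")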
2.
  • Löfström, Helena, et al. (authors)
  • Calibrated explanations : With uncertainty information and counterfactuals
  • 2024
  • In: Expert Systems with Applications. - Elsevier. - 0957-4174 .- 1873-6793 ; 246
  • Journal article (peer-reviewed), abstract:
    • While local explanations for AI models can offer insights into individual predictions, such as feature importance, they are plagued by issues like instability. The unreliability of feature weights, often skewed due to poorly calibrated ML models, deepens these challenges. Moreover, the critical aspect of feature importance uncertainty remains mostly unaddressed in Explainable AI (XAI). The novel feature importance explanation method presented in this paper, called Calibrated Explanations (CE), is designed to tackle these issues head-on. Built on the foundation of Venn-Abers, CE not only calibrates the underlying model but also delivers reliable feature importance explanations with an exact definition of the feature weights. CE goes beyond conventional solutions by addressing output uncertainty: it provides uncertainty quantification for both the feature weights and the model's probability estimates. Additionally, CE is model-agnostic, featuring easily comprehensible conditional rules and the ability to generate counterfactual explanations with embedded uncertainty quantification. Results from an evaluation with 25 benchmark datasets underscore the efficacy of CE as a fast, reliable, stable, and robust solution. A sketch of the underlying Venn-Abers calibration step follows this entry.
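Calibrated Explanations is built on Venn-Abers calibration, which turns a model's score into a calibrated probability interval whose width expresses uncertainty. Below is a rough, self-contained sketch of the inductive Venn-Abers step, not the authors' released implementation: the calibration scores are fitted with isotonic regression twice, once with the test point hypothetically labelled 0 and once labelled 1, and the two fitted values at the test score form the interval [p0, p1]. The toy data is assumed for illustration only.

    import numpy as np
    from sklearn.isotonic import IsotonicRegression

    def venn_abers_interval(cal_scores, cal_labels, s):
        """Calibrated probability interval [p0, p1] for a test score s."""
        bounds = []
        for hypothetical_label in (0, 1):
            iso = IsotonicRegression(y_min=0.0, y_max=1.0, out_of_bounds="clip")
            # Fit on calibration scores plus the test point with an assumed label.
            iso.fit(np.append(cal_scores, s),
                    np.append(cal_labels, hypothetical_label))
            bounds.append(float(iso.predict([s])[0]))
        return bounds[0], bounds[1]

    # Toy usage with roughly calibrated synthetic scores (assumed data).
    rng = np.random.default_rng(0)
    scores = rng.random(500)
    labels = (rng.random(500) < scores).astype(int)
    print(venn_abers_interval(scores, labels, 0.7))  # an interval around 0.7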
3.
  • Löfström, Helena, et al. (authors)
  • Interpretable instance-based text classification for social science research projects
  • 2018
  • In: Archives of Data Science, Series A. - KIT – Die Forschungsuniversität in der Helmholtz-Gemeinschaft. - 2363-9881 ; 5:1
  • Journal article (peer-reviewed), abstract:
    • In this study, two groups of respondents evaluated explanations generated by an instance-based explanation method called WITE (Weighted Instance-based Text Explanations). One group consisted of 24 non-experts who answered a web survey about the words characterising the concepts of the classes; the other group consisted of three senior researchers and three respondents from a media house in Sweden who answered a questionnaire with open questions. The data used originates from one of the researchers' projects on media consumption in Sweden. The results from the non-experts indicate that WITE identified many words that corresponded to human understanding, but also marked some insignificant or contrary words as important. The results from the expert evaluation indicated a risk that the explanations could persuade users of the correctness of a prediction, even when it is incorrect. Consequently, the study indicates that an explanation method can be seen as a new actor that is able to persuade and interact with humans and change the results of the classification of a text.
4.
  • Löfström, Helena, et al. (authors)
  • Investigating the impact of calibration on the quality of explanations
  • 2023
  • In: Annals of Mathematics and Artificial Intelligence. - Springer. - 1012-2443 .- 1573-7470.
  • Journal article (peer-reviewed), abstract:
    • Predictive models used in Decision Support Systems (DSS) are often requested to explain their reasoning to users. Explanations of instances consist of two parts: the predicted label with an associated certainty, and a set of weights, one per feature, describing how each feature contributes to the prediction for the particular instance. In techniques like Local Interpretable Model-agnostic Explanations (LIME), the probability estimate from the underlying model is used as a measurement of certainty; consequently, the feature weights represent how each feature contributes to the probability estimate. It is, however, well known that probability estimates from classifiers are often poorly calibrated, i.e., the probability estimates do not correspond to the actual probabilities of being correct. With this in mind, explanations from techniques like LIME risk becoming misleading, since the feature weights will only describe how each feature contributes to the possibly inaccurate probability estimate. This paper investigates the impact of calibrating predictive models before applying LIME. The study includes 25 benchmark data sets, using Random Forest and Extreme Gradient Boosting (XGBoost) as learners and Venn-Abers and Platt scaling as calibration methods. Results from the study show that explanations of better calibrated models are themselves better calibrated, with the ECE and log loss of the explanations after calibration conforming more closely to the model's ECE and log loss. The conclusion is that calibration makes the models and the explanations better by representing reality more accurately. A sketch of the ECE metric follows this entry.
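The study above measures calibration with ECE and log loss. As a reference point, here is a minimal sketch of Expected Calibration Error as it is commonly defined (binning details vary between papers, so treat this as one common variant):

    import numpy as np

    def ece(y_true, proba, n_bins=10):
        """Expected Calibration Error: size-weighted average gap between
        per-bin accuracy and per-bin mean confidence."""
        conf = proba.max(axis=1)           # confidence of the predicted class
        pred = proba.argmax(axis=1)
        edges = np.linspace(0.0, 1.0, n_bins + 1)
        total = 0.0
        for lo, hi in zip(edges[:-1], edges[1:]):
            in_bin = (conf > lo) & (conf <= hi)
            if in_bin.any():
                acc = (pred[in_bin] == y_true[in_bin]).mean()
                total += in_bin.mean() * abs(acc - conf[in_bin].mean())
        return total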
5.
6.
  • Elgemark, Karin, et al. (authors)
  • The 13.5-mg, 19.5-mg, and 52-mg Levonorgestrel-Releasing Intrauterine Systems and Risk of Ectopic Pregnancy
  • 2022
  • In: Obstetrics and Gynecology. - NLM (Medline). - 0029-7844 .- 1873-233X ; 140:2, pp. 227-233
  • Journal article (peer-reviewed), abstract:
    • OBJECTIVE: To assess the Pearl Index for risk of ectopic pregnancy in women using levonorgestrel-releasing intrauterine systems (LNG-IUS) with hormonal reservoirs of 13.5 mg, 19.5 mg, or 52 mg. METHODS: This was a retrospective cohort study. Women diagnosed with an ectopic pregnancy in Stockholm County, Sweden, between January 1, 2014, and December 31, 2019, were identified through the electronic medical record system. The final analysis included 2,252 cases of ectopic pregnancy. Information on age, reproductive and medical history, as well as current use of contraception, was retrieved. The time of intrauterine device (IUD) insertion before ectopic pregnancy and the number of LNG-IUS sold during the study period were used to calculate the incidence rate of ectopic pregnancy during use per 100 woman-years (Pearl Index). RESULTS: Among women with an ectopic pregnancy diagnosis, 105 presented with a known type of hormonal IUD in situ, of whom 94 were included in the calculations of the Pearl Index. The estimated Pearl Index for ectopic pregnancy was 0.136 (95% CI 0.106-0.176) for the LNG-IUS 13.5-mg, 0.037 (95% CI 0.021-0.067) for the LNG-IUS 19.5-mg, and 0.009 (95% CI 0.006-0.014) for the LNG-IUS 52-mg. With the 52-mg LNG-IUS as referent, the relative risk (RR) of ectopic pregnancy was higher during the first year for the LNG-IUS 13.5-mg (RR 20.59, 95% CI 12.04-35.21), and for both the 13.5-mg (RR 14.49, 95% CI 9.01-23.3) and the 19.5-mg (RR 4.44, 95% CI 1.64-12.00) during the total study period. CONCLUSION: The absolute risk of ectopic pregnancy during use of an LNG-IUS at any dose was low. The results show that the lower the dose of the IUD, the higher the risk of an ectopic pregnancy. A higher-dose LNG-IUS should be considered when providing contraceptive counseling to women with known risk factors for ectopic pregnancy who are considering a hormonal IUD. A sketch of the Pearl Index calculation follows this entry.
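The Pearl Index reported above is an exposure-adjusted incidence rate: events per 100 woman-years of use. A minimal sketch with purely illustrative numbers (not the study's data):

    def pearl_index(n_events, woman_years):
        """Incidence rate per 100 woman-years of exposure."""
        return 100 * n_events / woman_years

    # Hypothetical example: 94 ectopic pregnancies over 1,000,000 woman-years
    # of use would give a Pearl Index of 0.0094.
    print(pearl_index(94, 1_000_000))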
7.
  • Lindberg, Helena, et al. (authors)
  • Risk stratification score screening for infective endocarditis in patients with Gram-positive bacteraemia
  • 2022
  • In: Infectious Diseases. - Informa UK Limited. - 2374-4235 .- 2374-4243 ; 54:7, pp. 488-496
  • Journal article (peer-reviewed), abstract:
    • Background: Infective endocarditis is a feared cause of bacteraemia with Gram-positive bacteria. Risk stratification scores can aid clinicians in determining the risk of endocarditis. Six proposed scores for use in bacteraemia with Staphylococcus aureus (PREDICT, VIRSTA, POSITIVE), non-β-haemolytic streptococci (HANDOC) and Enterococcus faecalis (NOVA, DENOVA) were validated for predictive ability, and the utilization of echocardiography was investigated. Methods: Hospitalized adult patients with Gram-positive bacteraemia during 2017–2019 were evaluated retrospectively through medical records and the Swedish Death Registry. Baseline and score-specific data, definite endocarditis, and echocardiographies performed were recorded. Sensitivity, specificity, negative and positive predictive values, and echocardiography utilization were determined. Results: 480 patients with bacteraemia were included, and definite endocarditis was diagnosed in 20 (7.5%), 10 (6.6%), and 2 (3.2%) patients with S. aureus, non-β-haemolytic streptococci, and E. faecalis, respectively. The sensitivities of the scores were 80–100% and the specificities 8–77%. Negative predictive values of the six scores were 98–100%. VIRSTA, HANDOC, NOVA and DENOVA identified all cases of endocarditis; the PREDICT5 score missed 1/20 and the POSITIVE score missed 4/20. Transoesophageal echocardiography was performed in 141 patients (29%); the risk stratification scores thus suggested an increase of 3–63 echocardiographic investigations (7–77%). Conclusions: All scores had negative predictive values over 98%; it can therefore be concluded that PREDICT5, VIRSTA, POSITIVE, HANDOC and DENOVA are reasonable screening tools for endocarditis early in Gram-positive bacteraemia. The use of risk stratification scores will lead to more echocardiographies. A sketch of these screening metrics follows this entry.
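The sensitivity, specificity and predictive values above follow directly from the 2x2 table of score outcome versus definite endocarditis. A minimal sketch, with made-up counts for illustration (not taken from the study):

    def screening_metrics(tp, fp, fn, tn):
        """Standard screening metrics from confusion-matrix counts."""
        return {
            "sensitivity": tp / (tp + fn),
            "specificity": tn / (tn + fp),
            "ppv": tp / (tp + fp),   # positive predictive value
            "npv": tn / (tn + fn),   # negative predictive value
        }

    # Hypothetical: a score flags 19 of 20 endocarditis cases and clears
    # 180 of 246 patients without endocarditis.
    print(screening_metrics(tp=19, fp=66, fn=1, tn=180))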
8.
  • Löfström, Helena, et al. (authors)
  • A Meta Survey of Quality Evaluation Criteria in Explanation Methods
  • 2022
  • In: Intelligent Information Systems. CAiSE 2022. Lecture Notes in Business Information Processing. - Cham: Springer International Publishing. - 9783031074806, 9783031074813 ; pp. 55-63
  • Conference paper (peer-reviewed), abstract:
    • The evaluation of explanation methods has become a significant issue in explainable artificial intelligence (XAI) due to the recent surge of opaque AI models in decision support systems (DSS). Explanations are essential for bias detection and control of uncertainty, since the most accurate AI models are opaque, with low transparency and comprehensibility. There are numerous criteria to choose from when evaluating explanation method quality. However, since existing criteria focus on evaluating single explanation methods, it is not obvious how to compare the quality of different methods. In this paper, we have conducted a semi-systematic meta-survey of fifteen literature surveys covering the evaluation of explainability, to identify existing criteria usable for comparative evaluations of explanation methods. The main contribution of the paper is the suggestion to use appropriate trust as a criterion for measuring the outcome of the subjective evaluation criteria, thereby making comparative evaluations possible. We also present a model of explanation quality aspects. In the model, criteria with similar definitions are grouped and related to three identified aspects of quality: model, explanation, and user. We also identify four commonly accepted criteria (groups) in the literature, covering all aspects of explanation quality: performance, appropriate trust, explanation satisfaction, and fidelity. We suggest that the model be used as a chart for comparative evaluations, to create more generalisable research in explanation quality.
9.
  • Löfström, Helena (author)
  • On the definition of appropriate trust : and the tools that come with it
  • Other publication (other academic/artistic), abstract:
    • Evaluating the efficiency of human-AI interactions is challenging, as it involves both subjective and objective quality aspects. With the focus on the human experience of the explanations, evaluations of explanation methods have become mostly subjective, making comparative evaluations almost impossible and highly linked to the individual user. However, it is commonly agreed that one aspect of explanation quality is how effectively the user can detect whether the predictions are trustworthy and correct, i.e., whether the explanations can increase the user's appropriate trust in the model. This paper starts from the definitions of appropriate trust in the literature. It compares the definitions with model performance evaluation, showing the strong similarities between appropriate trust and model performance evaluation. The paper's main contribution is a novel approach to evaluating appropriate trust that takes advantage of the similarities between the definitions. The paper offers several straightforward evaluation methods for different aspects of user performance, including a suggested method for measuring uncertainty and appropriate trust in regression.
10.
  • Löfström, Helena (author)
  • On the Definition of Appropriate Trust and the Tools that Come with it
  • 2023
  • In: 2023 Congress in Computer Science, Computer Engineering, & Applied Computing (CSCE). - Institute of Electrical and Electronics Engineers (IEEE). - 9798350327595 ; pp. 1555-1562
  • Conference paper (other academic/artistic), abstract:
    • Evaluating the efficiency of human-AI interactions is challenging, as it involves both subjective and objective quality aspects. With the focus on the human experience of the explanations, evaluations of explanation methods have become mostly subjective, making comparative evaluations almost impossible and highly linked to the individual user. However, it is commonly agreed that one aspect of explanation quality is how effectively the user can detect whether the predictions are trustworthy and correct, i.e., whether the explanations can increase the user's appropriate trust in the model. This paper starts from the definitions of appropriate trust in the literature. It compares the definitions with model performance evaluation, showing the strong similarities between appropriate trust and model performance evaluation. The paper's main contribution is a novel approach to evaluating appropriate trust that takes advantage of the similarities between the definitions. The paper offers several straightforward evaluation methods for different aspects of user performance, including a suggested method for measuring uncertainty and appropriate trust in regression. A sketch of this evaluation idea follows this entry.
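The paper's central analogy, that appropriate trust can be evaluated like model performance, can be made concrete by treating each trust/distrust decision of the user as a binary prediction of whether the model is correct and scoring those decisions like a classifier. A hedged sketch of that reading (the paper's own formalism may differ):

    import numpy as np

    def appropriate_trust(user_trusts, model_correct):
        """Fraction of instances where the user trusted a correct prediction
        or distrusted an incorrect one (accuracy of the user's reliance)."""
        user_trusts = np.asarray(user_trusts, dtype=bool)
        model_correct = np.asarray(model_correct, dtype=bool)
        return float(np.mean(user_trusts == model_correct))

    # Hypothetical session: the user trusted four predictions (three correct)
    # and distrusted two (one of which was actually correct) -> 4/6.
    print(appropriate_trust([1, 1, 1, 1, 0, 0], [1, 1, 1, 0, 1, 0]))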
11.
  • Löfström, Helena (author)
  • Trustworthy explanations : Improved decision support through well-calibrated uncertainty quantification
  • 2023
  • Doctoral thesis (other academic/artistic), abstract:
    • The use of Artificial Intelligence (AI) has transformed fields like disease diagnosis and defence. Utilising sophisticated Machine Learning (ML) models, AI predicts future events based on historical data, introducing a complexity that challenges understanding and decision-making. Previous research emphasizes users' difficulty in discerning when to trust predictions due to model complexity, underscoring that addressing model complexity and providing transparent explanations are pivotal for facilitating high-quality decisions. Many ML models offer probability estimates for predictions, commonly used in methods providing explanations to guide users on prediction confidence. However, these probabilities often do not accurately reflect the actual distribution in the data, leading to potential user misinterpretation of prediction trustworthiness. Additionally, most explanation methods fail to convey whether the model's probability is linked to any uncertainty, further diminishing the reliability of the explanations. Evaluating the quality of explanations for decision support is challenging, and although it is highlighted as essential in research, there are no benchmark criteria for comparative evaluations. This thesis introduces an innovative explanation method that generates reliable explanations, incorporating uncertainty information that supports users in determining when to trust the model's predictions. The thesis also outlines strategies for evaluating explanation quality and facilitating comparative evaluations. Through empirical evaluations and user studies, the thesis provides practical insights to support decision-making utilising complex ML models.
12.
  • Löfström, Mikael, et al. (authors)
  • Samordningsförbundet FinsamGotland : en studie av hur samverkan implementeras genom samordningsförbund [Samordningsförbundet FinsamGotland: a study of how collaboration is implemented through coordination associations]
  • 2014
  • Report (other academic/artistic), abstract:
    • The coordination association Samordningsförbundet FinsamGotland was formed in 2007 and has been formally active since 2008. Within the framework of the association, Arbetsförmedlingen (the Swedish Public Employment Service), Försäkringskassan (the Swedish Social Insurance Agency) and Region Gotland cooperate. As a coordination association, it is a legal entity in its own right, based on the act on financial coordination of rehabilitation measures in force since 1 January 2004 (SFS 2003:1210). The purpose of the coordination association is to coordinate measures directed at target groups who need interventions from two or more public agencies. In addition to trialling more effective interventions for people who have ended up outside working life, this creates opportunities to develop new working methods and new forms of organizing collaboration. The primary purpose of the coordination association is thus to increase the benefit for users of the various welfare services by promoting collaboration between the agencies' activities within the geographical area that constitutes Samordningsförbundet FinsamGotland. As a result, specific activities and forums for meeting are established through collaboration, both in operational work and between management structures.