SwePub
Search the SwePub database

Hit list for search "WFRF:(Kougia Vasiliki)"

  • Results 1-5 of 5
1.
  • Karatzas, Basil, et al. (author)
  • AUEB NLP Group at ImageCLEFmed Caption 2020
  • 2020
  • In: CEUR Workshop Proceedings.
  • Conference paper (peer-reviewed). Abstract:
    • This article concerns the participation of AUEB’s NLP Group in the ImageCLEFmed Caption task of 2020. The goal of the task was to identify medical terms that best describe each image, in order to accelerate and improve the interpretation of medical images by experts and systems. The systems we implemented extend our previous work [7,8,9] on models that employ CNN image encoders combined with an image retrieval method or a feed-forward neural network. Our systems were ranked 1st, 2nd and 6th.
2.
  • Kougia, Vasiliki, et al. (author)
  • Medical Image Tagging by Deep Learning and Retrieval
  • 2020
  • In: Experimental IR Meets Multilinguality, Multimodality, and Interaction. Cham: Springer. ISBN 9783030582180, 9783030582197. pp. 154-166
  • Conference paper (peer-reviewed). Abstract:
    • Radiologists and other qualified physicians need to examine and interpret large numbers of medical images daily. Systems that would help them spot and report abnormalities in medical images could speed up diagnostic workflows. Systems that would help exploit past diagnoses made by highly skilled physicians could also benefit their more junior colleagues. A task that systems can perform towards this end is medical image classification, which assigns medical concepts to images. This task, called Concept Detection, was part of the ImageCLEF 2019 competition. We describe the methods we implemented and submitted to the Concept Detection 2019 task, where we achieved the best performance with a deep learning method we call ConceptCXN. We also show that retrieval-based methods can perform very well in this task, when combined with deep learning image encoders. Finally, we report additional post-competition experiments we performed to shed more light on the performance of our best systems. Our systems can be installed through PyPi as part of the BioCaption package.
3.
  • Kougia, Vasiliki, et al. (author)
  • RTEX: A novel framework for ranking, tagging, and explanatory diagnostic captioning of radiography exams
  • 2021
  • In: Journal of the American Medical Informatics Association (JAMIA). Oxford University Press (OUP). ISSN 1067-5027, 1527-974X. 28(8), pp. 1651-1659
  • Journal article (peer-reviewed). Abstract:
    • Objective: The study sought to assist practitioners in identifying and prioritizing radiography exams that are more likely to contain abnormalities, and to provide them with a diagnosis, in order to manage heavy workloads more efficiently (e.g., during a pandemic) or avoid mistakes due to tiredness. Materials and Methods: This article introduces RTEx, a novel framework for (1) ranking radiography exams based on their probability of being abnormal, (2) generating abnormality tags for abnormal exams, and (3) providing a diagnostic explanation in natural language for each abnormal exam. Our framework consists of deep learning and retrieval methods and is assessed on 2 publicly available datasets. Results: For ranking, RTEx outperforms its competitors in terms of nDCG@k. The tagging component outperforms 2 strong competitor methods in terms of F1. Moreover, the diagnostic captioning component, which exploits the predicted tags to constrain the captioning process, outperforms 4 captioning competitors with respect to clinical precision and recall. Discussion: RTEx prioritizes abnormal exams toward improving the healthcare workflow by introducing a ranking method. Also, for each abnormal radiography exam, RTEx generates a set of abnormality tags alongside a diagnostic text that explains the tags and guides the medical expert. Human evaluation of the produced text shows that employing the generated tags lends consistency to the clinical correctness and that the sentences of each text have high clinical accuracy. Conclusions: This is the first framework that successfully combines 3 tasks: ranking, tagging, and diagnostic captioning, with a focus on radiography exams that contain abnormalities.
4.
  • Pavlopoulos, John, et al. (author)
  • Diagnostic captioning: a survey
  • 2022
  • In: Knowledge and Information Systems. Springer Science and Business Media LLC. ISSN 0219-1377, 0219-3116. 64(7), pp. 1691-1722
  • Journal article (peer-reviewed). Abstract:
    • Diagnostic captioning (DC) concerns the automatic generation of a diagnostic text from a set of medical images of a patient collected during an examination. DC can assist inexperienced physicians, reducing clinical errors. It can also help experienced physicians produce diagnostic reports faster. Following the advances of deep learning, especially in generic image captioning, DC has recently attracted more attention, leading to several systems and datasets. This article is an extensive overview of DC. It presents relevant datasets, evaluation measures, and up-to-date systems. It also highlights shortcomings that hinder DC’s progress and proposes future directions.
5.
  • Wang, Zhendong, et al. (author)
  • Style-transfer counterfactual explanations: An application to mortality prevention of ICU patients
  • 2023
  • In: Artificial Intelligence in Medicine. Elsevier BV. ISSN 0933-3657, 1873-2860. Vol. 135
  • Journal article (peer-reviewed). Abstract:
    • In recent years, machine learning methods have been rapidly adopted in the medical domain. However, current state-of-the-art medical mining methods usually produce opaque, black-box models. To address the lack of model transparency, substantial attention has been given to developing interpretable machine learning models. In the medical domain, counterfactuals can provide example-based explanations for predictions, and show practitioners the modifications required to change a prediction from an undesired to a desired state. In this paper, we propose a counterfactual solution MedSeqCF for preventing the mortality of three cohorts of ICU patients, by representing their electronic health records as medical event sequences, and generating counterfactuals by adopting and employing a text style-transfer technique. We propose three model augmentations for MedSeqCF to integrate additional medical knowledge for generating more trustworthy counterfactuals. Experimental results on the MIMIC-III dataset strongly suggest that augmented style-transfer methods can be effectively adapted for the problem of counterfactual explanations in healthcare applications and can further improve the model performance in terms of validity, BLEU-4, local outlier factor, and edit distance. In addition, our qualitative analysis of the results by consultation with medical experts suggests that our style-transfer solutions can generate clinically relevant and actionable counterfactual explanations.