SwePub
Search the SwePub database


Hit list for the search "WFRF:(Pavlopoulos Ioannis)"

Search: WFRF:(Pavlopoulos Ioannis)

  • Results 1-10 of 10
1.
  • Kougia, Vasiliki, et al. (author)
  • RTEx : A novel framework for ranking, tagging, and explanatory diagnostic captioning of radiography exams
  • 2021
  • In: JAMIA Journal of the American Medical Informatics Association. - : Oxford University Press (OUP). - 1067-5027 .- 1527-974X. ; 28:8, pp. 1651-1659
  • Journal article (peer-reviewed), abstract:
    • Objective: The study sought to assist practitioners in identifying and prioritizing radiography exams that are more likely to contain abnormalities, and provide them with a diagnosis in order to manage heavy workload more efficiently (eg, during a pandemic) or avoid mistakes due to tiredness. Materials and Methods: This article introduces RTEx, a novel framework for (1) ranking radiography exams based on their probability to be abnormal, (2) generating abnormality tags for abnormal exams, and (3) providing a diagnostic explanation in natural language for each abnormal exam. Our framework consists of deep learning and retrieval methods and is assessed on 2 publicly available datasets. Results: For ranking, RTEx outperforms its competitors in terms of nDCG@k. The tagging component outperforms 2 strong competitor methods in terms of F1. Moreover, the diagnostic captioning component, which exploits the predicted tags to constrain the captioning process, outperforms 4 captioning competitors with respect to clinical precision and recall. Discussion: RTEx prioritizes abnormal exams toward the improvement of the healthcare workflow by introducing a ranking method. Also, for each abnormal radiography exam RTEx generates a set of abnormality tags alongside a diagnostic text to explain the tags and guide the medical expert. Human evaluation of the produced text shows that employing the generated tags offers consistency to the clinical correctness and that the sentences of each text have high clinical accuracy. Conclusions: This is the first framework that successfully combines 3 tasks: ranking, tagging, and diagnostic captioning with focus on radiography exams that contain abnormalities.
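The ranking result above is reported in terms of nDCG@k. As a minimal illustration of that metric (not the authors' code; the relevance labels below are made up), nDCG@k can be computed as follows:

```python
import math

def dcg_at_k(relevances, k):
    """Discounted cumulative gain over the top-k positions of a ranking."""
    return sum((2 ** rel - 1) / math.log2(i + 2)
               for i, rel in enumerate(relevances[:k]))

def ndcg_at_k(ranked_relevances, k):
    """nDCG@k: the DCG of the given ranking divided by the ideal DCG."""
    ideal_dcg = dcg_at_k(sorted(ranked_relevances, reverse=True), k)
    return dcg_at_k(ranked_relevances, k) / ideal_dcg if ideal_dcg > 0 else 0.0

# Hypothetical relevance labels (1 = abnormal exam, 0 = normal exam),
# listed in the order a ranker returned the exams.
system_ranking = [1, 0, 1, 1, 0, 0]
print(f"nDCG@5 = {ndcg_at_k(system_ranking, 5):.3f}")
```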
2.
  • Olczak, Jakub, et al. (author)
  • Presenting artificial intelligence, deep learning, and machine learning studies to clinicians and healthcare stakeholders : an introductory reference with a guideline and a Clinical AI Research (CAIR) checklist proposal
  • 2021
  • In: Acta Orthopaedica. - : Taylor & Francis. - 1745-3674 .- 1745-3682. ; 92:5, pp. 513-525
  • Journal article (peer-reviewed), abstract:
    • Background and purpose - Artificial intelligence (AI), deep learning (DL), and machine learning (ML) have become common research fields in orthopedics and medicine in general. Engineers perform much of the work. While they gear the results towards healthcare professionals, the difference in competencies and goals creates challenges for collaboration and knowledge exchange. We aim to provide clinicians with a context and understanding of AI research by facilitating communication between creators, researchers, clinicians, and readers of medical AI and ML research. Methods and results - We present the common tasks, considerations, and pitfalls (both methodological and ethical) that clinicians will encounter in AI research. We discuss the following topics: labeling, missing data, training, testing, and overfitting. Common performance and outcome measures for various AI and ML tasks are presented, including accuracy, precision, recall, F1 score, Dice score, the area under the curve, and ROC curves. We also discuss ethical considerations in terms of privacy, fairness, autonomy, safety, responsibility, and liability regarding data collection or sharing. Interpretation - We have developed guidelines for reporting medical AI research to clinicians in the run-up to a broader consensus process. The proposed guidelines consist of a Clinical Artificial Intelligence Research (CAIR) checklist and specific performance metrics guidelines to present and evaluate research using AI components. Researchers, engineers, clinicians, and other stakeholders can use these proposed guidelines and the CAIR checklist to read, present, and evaluate AI research geared towards a healthcare setting.
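Since the abstract above lists several evaluation measures (accuracy, precision, recall, F1, Dice), here is a small self-contained sketch of how they relate for binary labels; it is illustrative only and not taken from the paper. Note that for binary classification the Dice score coincides with F1.

```python
def binary_metrics(y_true, y_pred):
    """Accuracy, precision, recall, and F1/Dice from two 0/1 label lists."""
    pairs = list(zip(y_true, y_pred))
    tp = sum(1 for t, p in pairs if t == 1 and p == 1)
    tn = sum(1 for t, p in pairs if t == 0 and p == 0)
    fp = sum(1 for t, p in pairs if t == 0 and p == 1)
    fn = sum(1 for t, p in pairs if t == 1 and p == 0)
    accuracy = (tp + tn) / len(pairs)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    # Dice = 2*TP / (2*TP + FP + FN), which equals F1 for binary labels.
    f1_dice = 2 * tp / (2 * tp + fp + fn) if tp + fp + fn else 0.0
    return {"accuracy": accuracy, "precision": precision,
            "recall": recall, "f1_dice": f1_dice}

# Hypothetical predictions compared against ground-truth labels.
print(binary_metrics([1, 0, 1, 1, 0, 1], [1, 0, 0, 1, 1, 1]))
```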
3.
  • Pavlopoulos, Ioannis, 1983-, et al. (author)
  • Automotive fault nowcasting with machine learning and natural language processing
  • 2024
  • In: Machine Learning. - 0885-6125 .- 1573-0565. ; 113:2, pp. 843-861
  • Journal article (peer-reviewed), abstract:
    • Automated fault diagnosis can facilitate diagnostics assistance, speedier troubleshooting, and better-organised logistics. Currently, most AI-based prognostics and health management in the automotive industry ignore textual descriptions of the experienced problems or symptoms. With this study, however, we propose an ML-assisted workflow for automotive fault nowcasting that improves on current industry standards. We show that a multilingual pre-trained Transformer model can effectively classify the textual symptom claims from a large company with vehicle fleets, despite the task’s challenging nature due to the 38 languages and 1357 classes involved. Overall, we report an accuracy of more than 80% for high-frequency classes and above 60% for classes with reasonable minimum support, bringing novel evidence that automotive troubleshooting management can benefit from multilingual symptom text classification.
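A rough sketch of the kind of multilingual symptom-text classification described above, using the Hugging Face transformers library with a multilingual BERT checkpoint. The checkpoint, the example claims, and the untrained classification head are placeholders; the paper's actual model, data, and fine-tuning setup are not reproduced here.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

MODEL_NAME = "bert-base-multilingual-cased"  # any multilingual encoder would do
NUM_CLASSES = 1357  # number of fault classes mentioned in the abstract

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
# The classification head below is randomly initialised; in practice it is
# fine-tuned on labelled symptom claims before being used for prediction.
model = AutoModelForSequenceClassification.from_pretrained(
    MODEL_NAME, num_labels=NUM_CLASSES
)
model.eval()

# Hypothetical multilingual symptom claims (not from the paper's dataset).
claims = [
    "Engine makes a rattling noise when idling",
    "Motorn startar inte vid kall väderlek",
]

inputs = tokenizer(claims, padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.argmax(dim=-1).tolist())  # meaningless until the head is trained
```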
4.
  • Korre, Katerina, et al. (author)
  • ERRANT : Assessing and Improving Grammatical Error Type Classification
  • 2020
  • In: Proceedings of the 4th Joint SIGHUM Workshop on Computational Linguistics for Cultural Heritage, Social Sciences, Humanities and Literature. - : Association for Computational Linguistics. ; pp. 85-89
  • Conference paper (peer-reviewed), abstract:
    • Grammatical Error Correction (GEC) is the task of correcting different types of errors in written texts. To manage this task, large amounts of annotated data that contain erroneous sentences are required. This data, however, is usually annotated according to each annotator’s standards, making it difficult to manage multiple sets of data at the same time. The recently introduced Error Annotation Toolkit (ERRANT) tackled this problem by presenting a way to automatically annotate data that contain grammatical errors, while also providing a standardisation for annotation. ERRANT extracts the errors and classifies them into error types, in the form of an edit that can be used in the creation of GEC systems, as well as for grammatical error analysis. However, we observe that certain errors are falsely or ambiguously classified. This could obstruct any qualitative or quantitative grammatical error type analysis, as the results would be inaccurate. In this work, we use a sample of the FCE corpus (Yannakoudakis et al., 2011) for secondary error type annotation and we show that up to 39% of the annotations of the most frequent type should be re-classified. Our corrections will be publicly released, so that they can serve as the starting point of a broader, collaborative, ongoing correction process.
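ERRANT is distributed as a Python package (pip install errant). The snippet below follows the usage documented in the toolkit's README as best recalled, so treat the exact calls as an assumption and check the package documentation; it aligns an original and a corrected sentence and prints the automatically assigned error types, which is the classification the paper re-examines.

```python
# Illustrative ERRANT usage; requires `pip install errant` plus an English
# spaCy model, and the API may differ slightly between ERRANT versions.
import errant

annotator = errant.load("en")

orig = annotator.parse("This are gramamtical sentence .")
cor = annotator.parse("This is a grammatical sentence .")

# Align the two sentences, extract edits, and show the assigned error types.
for edit in annotator.annotate(orig, cor):
    print(edit.o_str, "->", edit.c_str, "|", edit.type)
```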
5.
  • Ljungman, Jimmy, et al. (author)
  • Automated Grading of Exam Responses : An Extensive Classification Benchmark
  • 2021
  • In: Discovery Science. - Cham : Springer Nature. - 9783030889418 - 9783030889425 ; pp. 3-18
  • Conference paper (peer-reviewed), abstract:
    • Automated grading of free-text exam responses is a very challenging task due to the complex nature of the problem, such as lack of training data and biased ground-truth of the graders. In this paper, we focus on the automated grading of free-text responses. We formulate the problem as a binary classification problem of two class labels: low- and high-grade. We present a benchmark on four machine learning methods using three experiment protocols on two real-world datasets, one from Cyber-crime exams in Arabic and one from Data Mining exams in English, which is presented for the first time in this work. By providing various metrics for binary classification and answer ranking, we illustrate the benefits and drawbacks of the benchmarked methods. Our results suggest that standard models with individual word representations can in some cases achieve competitive predictive performance against deep neural language models using context-based representations on both binary classification and answer ranking for free-text response grading tasks. Lastly, we discuss the pedagogical implications of our findings by identifying potential pitfalls and challenges when building predictive models for such tasks.
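As a toy illustration of the "standard models with individual word representations" mentioned above, the sketch below trains a TF-IDF plus logistic-regression classifier on a few made-up response/grade pairs. It is not the paper's benchmark code, and the responses and grades are placeholders.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical free-text responses with binary grades (1 = high, 0 = low).
responses = [
    "Overfitting happens when a model memorises noise in the training data.",
    "Overfitting is when the computer is wrong.",
    "A decision tree splits on the feature with the highest information gain.",
    "Trees have leaves.",
]
grades = [1, 0, 1, 0]

# Bag-of-words baseline: individual word features, no contextual embeddings.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(responses, grades)

print(model.predict(["Information gain tells the tree where to split."]))
```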
6.
  • Miliou, Ioanna, et al. (author)
  • Sentiment Nowcasting during the COVID-19 Pandemic
  • 2021
  • In: Discovery Science. - Cham : Springer Nature. - 9783030889418 - 9783030889425 ; pp. 218-228
  • Conference paper (peer-reviewed), abstract:
    • In response to the COVID-19 pandemic, governments around the world are taking a wide range of measures. Previous research on COVID-19 has focused on disease spreading, epidemic curves, measures to contain it, confirmed cases, and deaths. In this work, we sought to explore another essential aspect of this pandemic: how people feel and react to this reality, and the impact on their emotional well-being. For that reason, we propose using epidemic indicators and government policy responses to estimate the sentiment as expressed on Twitter. We develop a nowcasting approach that exploits the time series of epidemic indicators and the measures taken in response to the COVID-19 outbreak in the United States of America to predict the public sentiment at a daily frequency. Using machine learning models, we improve the short-term forecasting accuracy of autoregressive models, revealing the value of incorporating the additional data in the predictive models. We then provide explanations for the indicators and measures that drive the predictions for specific dates. Our work provides evidence that data about the way COVID-19 evolves along with the measures taken in response to the COVID-19 outbreak can be used effectively to improve sentiment nowcasting and gain insights into people’s current emotional state.
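A synthetic, simplified sketch of the nowcasting setup described above: an autoregressive baseline on the sentiment series compared with the same model extended by exogenous epidemic and policy indicators. The series are randomly generated, and plain linear regression on lagged features is only one possible reading of the approach, not the paper's models.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# Synthetic daily series: sentiment plus two exogenous indicators
# (e.g. new cases and a policy-stringency index); not real data.
days = 200
cases = np.cumsum(rng.poisson(5, days)).astype(float)
stringency = np.clip(np.linspace(0, 80, days) + rng.normal(0, 5, days), 0, 100)
sentiment = -0.002 * cases - 0.01 * stringency + rng.normal(0, 0.5, days)

LAGS = 7

def lagged(series, lags):
    """Rows of [x[t-1], ..., x[t-lags]] for every t from `lags` onwards."""
    return np.array([series[t - lags:t][::-1] for t in range(lags, len(series))])

X_ar = lagged(sentiment, LAGS)                              # autoregressive part
X_exo = np.column_stack([cases[LAGS:], stringency[LAGS:]])  # exogenous indicators
y = sentiment[LAGS:]

ar_only = LinearRegression().fit(X_ar, y)
ar_plus_exo = LinearRegression().fit(np.hstack([X_ar, X_exo]), y)

print("AR only, R^2:        ", round(ar_only.score(X_ar, y), 3))
print("AR + indicators, R^2:", round(ar_plus_exo.score(np.hstack([X_ar, X_exo]), y), 3))
```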
7.
  • Pavlopoulos, Ioannis, et al. (author)
  • Clinical predictive keyboard using statistical and neural language modeling
  • 2020
  • In: 2020 IEEE 33rd International Symposium on Computer-Based Medical Systems (CBMS). - : IEEE. - 9781728194301 - 9781728194295 ; pp. 293-296
  • Conference paper (peer-reviewed), abstract:
    • A language model can be used to predict the next word during authoring, to correct spelling or to accelerate writing (e.g., in SMS or emails). Language models, however, have only been applied on a very small scale to assist physicians during authoring (e.g., of discharge summaries or radiology reports). But along with assisting the physician, computer-based systems that expedite the patient's discharge also help decrease hospital infections. We employed statistical and neural language modeling to predict the next word of a clinical text and assessed all the models in terms of accuracy and keystroke discount in two datasets with radiology reports. We show that a neural language model can achieve as high as 51.3% accuracy in radiology reports (one out of two words predicted correctly). We also show that even when the models are employed only for frequent words, the physician can save valuable time.
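The keystroke discount mentioned above credits a predictor for the characters a physician would not have to type when the suggested next word is correct. Below is a small illustrative computation with a trivial bigram predictor on made-up text; the exact normalisation used in the paper may differ, so treat this as a sketch of the idea only.

```python
from collections import Counter, defaultdict

# Tiny hypothetical report-like corpus (not real clinical data).
train_text = "no acute cardiopulmonary process . no pleural effusion . no acute fracture ."
test_text = "no acute pleural effusion ."

# Bigram "model": for each previous word, suggest its most frequent follower.
follow = defaultdict(Counter)
tokens = train_text.split()
for prev, nxt in zip(tokens, tokens[1:]):
    follow[prev][nxt] += 1
suggest = {prev: counts.most_common(1)[0][0] for prev, counts in follow.items()}

saved = typed = 0
test_tokens = test_text.split()
for prev, target in zip(test_tokens, test_tokens[1:]):
    typed += len(target)
    if suggest.get(prev) == target:
        saved += len(target) - 1  # assume one keystroke to accept the suggestion

print(f"keystroke discount: {saved / typed:.1%} of characters saved")
```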
8.
  • Pavlopoulos, Ioannis, et al. (author)
  • Computational authorship analysis of the Homeric poems
  • 2023
  • In: International Journal of Digital Humanities. - : Springer Science and Business Media LLC. - 2524-7832 .- 2524-7840. ; 5:1, pp. 45-64
  • Journal article (peer-reviewed), abstract:
    • Natural language modeling is used to predict or generate the next word or character of modern languages. Furthermore, statistical character-based language models have been found useful in authorship attribution analyses by studying the linguistic proximity of excerpts unknown to the model. In prior work, we modeled Homeric language and provided empirical findings regarding the authorship nature of the 48 Iliad and Odyssey books. Following this line of work, and considering the current philological views and trends, we break down the two poems further into smaller portions. By employing language modeling we identify outlying passages, indicating reduced linguistic affinity with the main body of the two works and, by extension, potentially different authorship. Our results show that some of the passages isolated as outliers by the language models were also identified as such by human researchers. We further test our methodology and models on texts of similar language and genre created by other authors, namely Hesiod’s “Theogony” and “Works and Days”.
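One way to read "reduced linguistic affinity" above is cross-entropy under a character-level language model: passages the model finds unusually surprising are flagged as outliers. The sketch below uses a character-bigram model with add-one smoothing on toy strings; it illustrates the idea only and is not the authors' models or the Homeric text.

```python
import math
from collections import Counter, defaultdict

def train_char_bigram(text):
    """Character-bigram counts plus the set of characters seen in training."""
    counts = defaultdict(Counter)
    for a, b in zip(text, text[1:]):
        counts[a][b] += 1
    return counts, set(text)

def bits_per_char(text, counts, alphabet):
    """Average surprisal per character under an add-one-smoothed bigram model."""
    v = len(alphabet)
    bits = 0.0
    for a, b in zip(text, text[1:]):
        prob = (counts[a][b] + 1) / (sum(counts[a].values()) + v)
        bits -= math.log2(prob)
    return bits / max(len(text) - 1, 1)

# Toy "main body" versus two candidate passages (placeholders, not Homer).
main_body = "sing o goddess the anger of achilles son of peleus " * 20
passage_a = "the anger of achilles son of peleus"
passage_b = "completely different wording and vocabulary here"

counts, alphabet = train_char_bigram(main_body)
for name, passage in [("passage_a", passage_a), ("passage_b", passage_b)]:
    print(name, round(bits_per_char(passage, counts, alphabet), 2), "bits/char")
```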
9.
  • Pavlopoulos, Ioannis, et al. (author)
  • Customized Neural Predictive Medical Text : A Use-Case on Caregivers
  • 2021
  • In: Artificial Intelligence in Medicine. - Cham : Springer. - 9783030772109 ; pp. 438-443
  • Conference paper (peer-reviewed), abstract:
    • Predictive text can speed up authoring of everyday tasks, such as writing an SMS or a URL. When deployed in a clinical setting, it can enable practitioners to compile diagnostic text reports in a speedier manner, hence allowing them to be more time-efficient when examining patients. The language used by medical practitioners when authoring clinical reports is, however, far from uniform, varying not only between practitioners but also between medical units. In this paper, we demonstrate this clinical language variation by showing that a model trained on texts written by some physicians may not work for predicting the text of others. We use a dataset created out of the clinical notes of 17 caregivers to show that language models trained on the notes of each caregiver outperform models trained on texts from several caregivers.
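A compact way to see the claim above is to train one small language model per caregiver and score every caregiver's text under every model: text from the caregiver a model was trained on should be the least surprising. The sketch below does this with add-one-smoothed unigram models over made-up notes; caregiver names and texts are placeholders, and for brevity the same snippets stand in for a proper held-out split.

```python
import math
from collections import Counter

# Placeholder notes for three hypothetical caregivers (not real clinical text).
notes = {
    "caregiver_A": "patient stable vitals stable no acute distress continue current meds",
    "caregiver_B": "pt c/o pain 7/10 analgesia administered reassess pain in 2 hrs",
    "caregiver_C": "wound dressing changed site clean dry intact no drainage noted",
}

vocab = {w for text in notes.values() for w in text.split()}

def bits_per_word(text, counts, total):
    """Add-one-smoothed unigram cross-entropy in bits per word."""
    words = text.split()
    return -sum(math.log2((counts[w] + 1) / (total + len(vocab)))
                for w in words) / len(words)

models = {name: (Counter(text.split()), len(text.split()))
          for name, text in notes.items()}

for trained_on, (counts, total) in models.items():
    scores = {name: round(bits_per_word(text, counts, total), 2)
              for name, text in notes.items()}
    print("model trained on", trained_on, "->", scores)
```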
10.
  • Pavlopoulos, Ioannis, et al. (author)
  • Distance from Unimodality for the Assessment of Opinion Polarization
  • 2023
  • In: Cognitive Computation. - : Springer Science and Business Media LLC. - 1866-9956 .- 1866-9964. ; 15:2, pp. 731-738
  • Journal article (peer-reviewed), abstract:
    • Commonsense knowledge is often approximated by the fraction of annotators who classified an item as belonging to the positive class. Instances for which this fraction is equal to or above 50% are considered positive, including, however, ones that receive polarized opinions. This is a problematic encoding convention that disregards the potentially polarized nature of opinions and which is often employed to estimate subjectivity, sentiment polarity, and toxic language. We present the distance from unimodality (DFU), a novel measure that estimates the extent of polarization on a distribution of opinions and which correlates well with human judgment. We applied DFU to two use cases. The first case concerns tweets created over 9 months during the pandemic. The second case concerns textual posts crowd-annotated for toxicity. We specified the days for which the sentiment-annotated tweets were determined as polarized based on the DFU measure, and we found that polarization occurred on different days for two different states in the USA. Regarding toxicity, we found that polarized opinions are more likely when annotators originate from different countries. Moreover, we show that DFU can be exploited as an objective function to train models to predict whether a post will provoke polarized opinions in the future.
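The sketch below is one plausible, simplified reading of the DFU idea summarised above: how far an opinion histogram departs from a unimodal shape around its highest bin. The exact definition, normalisation, and edge-case handling in the article may differ, so consult the paper before relying on this; the histograms are hypothetical.

```python
def dfu_sketch(histogram):
    """
    Illustrative distance-from-unimodality score for an opinion histogram:
    0.0 when the mass only falls away from the highest bin, larger when the
    mass rises again on either side of that peak (i.e. polarized opinions).
    NOTE: a simplified reading of DFU, not the paper's exact formula.
    """
    total = sum(histogram)
    probs = [h / total for h in histogram]
    peak = probs.index(max(probs))
    violation = 0.0
    # Left of the peak the bins should be non-decreasing towards it ...
    for i in range(peak):
        violation = max(violation, probs[i] - probs[i + 1])
    # ... and right of the peak they should be non-increasing.
    for i in range(peak, len(probs) - 1):
        violation = max(violation, probs[i + 1] - probs[i])
    return violation

# Hypothetical annotation histograms over a 5-point opinion scale.
print(dfu_sketch([1, 2, 10, 2, 1]))   # consensus around the middle -> 0.0
print(dfu_sketch([10, 2, 1, 2, 10]))  # polarized opinions -> well above 0
```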