SwePub
Search the SwePub database


Result list for search "WFRF:(Al Abasse Yosef)"


  • Results 1-3 of 3
1.
  • Danielsson, Benjamin, et al. (author)
  • Classifying Implant-Bearing Patients via their Medical Histories: a Pre-Study on Swedish EMRs with Semi-Supervised GAN-BERT
  • 2022
  • In: 2022 Language Resources and Evaluation Conference, LREC 2022. European Language Resources Association (ELRA). ISBN 9791095546726, pp. 5428-5435
  • Conference paper (peer-reviewed)
    • Abstract: In this paper, we compare the performance of two BERT-based text classifiers whose task is to classify patients (more precisely, their medical histories) as having or not having implant(s) in their body. One classifier is a fully-supervised BERT classifier. The other one is a semi-supervised GAN-BERT classifier. Both models are compared against a fully-supervised SVM classifier. Since fully-supervised classification is expensive in terms of data annotation, with the experiments presented in this paper, we investigate whether we can achieve a competitive performance with a semi-supervised classifier based only on a small amount of annotated data. Results are promising and show that the semi-supervised classifier has a competitive performance when compared with the fully-supervised classifier. Licensed under CC-BY-NC-4.0.
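The GAN-BERT setup described in this abstract can be illustrated with a short sketch. The following is a minimal, hypothetical PyTorch illustration, not the authors' code: the KB/bert-base-swedish-cased checkpoint and the label scheme are assumptions (the abstract names neither), and it shows only the core GAN-BERT idea of a discriminator over k real classes plus one "fake" class, trained on labeled and unlabeled [CLS] representations alongside a noise-fed generator.

```python
# Minimal GAN-BERT-style sketch (hypothetical; not the paper's implementation).
# Assumes the KB/bert-base-swedish-cased checkpoint and binary labels
# (1 = implant-bearing, 0 = not, -1 = unlabeled record).
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

NUM_CLASSES = 2     # implant vs. no implant
NOISE_DIM = 100
HIDDEN = 768        # BERT-base hidden size

class Generator(nn.Module):
    """Maps random noise to fake [CLS]-like representations."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(NOISE_DIM, HIDDEN), nn.LeakyReLU(0.2),
            nn.Linear(HIDDEN, HIDDEN))
    def forward(self, z):
        return self.net(z)

class Discriminator(nn.Module):
    """Scores representations over k real classes plus one 'fake' class."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(HIDDEN, HIDDEN), nn.LeakyReLU(0.2),
            nn.Linear(HIDDEN, NUM_CLASSES + 1))   # last index = fake
    def forward(self, h):
        return self.net(h)

tokenizer = AutoTokenizer.from_pretrained("KB/bert-base-swedish-cased")
bert = AutoModel.from_pretrained("KB/bert-base-swedish-cased")
G, D = Generator(), Discriminator()

def discriminator_loss(texts, labels):
    """Supervised loss on labeled records plus real/fake loss on all records."""
    enc = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    h_real = bert(**enc).last_hidden_state[:, 0]          # [CLS] vectors
    h_fake = G(torch.randn(len(texts), NOISE_DIM))
    logits_real, logits_fake = D(h_real), D(h_fake)
    # Unsupervised part: real inputs should avoid the fake class, generated
    # inputs should land in it. Unlabeled data contributes only here.
    p_fake_real = logits_real.softmax(-1)[:, -1]
    p_fake_fake = logits_fake.softmax(-1)[:, -1]
    loss_unsup = -(torch.log(1.0 - p_fake_real + 1e-8).mean() +
                   torch.log(p_fake_fake + 1e-8).mean())
    # Supervised part: ordinary cross-entropy on the labeled subset only.
    labels = torch.tensor(labels)
    mask = labels >= 0
    loss_sup = (nn.functional.cross_entropy(
        logits_real[mask][:, :NUM_CLASSES], labels[mask])
        if mask.any() else torch.tensor(0.0))
    return loss_sup + loss_unsup

# Hypothetical usage: one labeled and one unlabeled medical-history snippet.
loss = discriminator_loss(
    ["Patienten har en pacemaker.", "Ingen anmärkning vid kontrollen."],
    labels=[1, -1])
loss.backward()
```

A full GAN-BERT training loop would alternate this discriminator step with a generator step (typically a feature-matching loss, with generated representations detached during the discriminator update); only the discriminator side is sketched here.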
2.
  • Danielsson, Benjamin, et al. (author)
  • Classifying Implant-Bearing Patients via their Medical Histories: a Pre-Study on Swedish EMRs with Semi-Supervised GAN-BERT
  • 2022
  • In: LREC 2022: Thirteenth International Conference on Language Resources and Evaluation. European Language Resources Association (ELRA). ISBN 9791095546726, pp. 5428-5435
  • Conference paper (peer-reviewed)
    • Abstract: In this paper, we compare the performance of two BERT-based text classifiers whose task is to classify patients (more precisely, their medical histories) as having or not having implant(s) in their body. One classifier is a fully-supervised BERT classifier. The other one is a semi-supervised GAN-BERT classifier. Both models are compared against a fully-supervised SVM classifier. Since fully-supervised classification is expensive in terms of data annotation, with the experiments presented in this paper, we investigate whether we can achieve a competitive performance with a semi-supervised classifier based only on a small amount of annotated data. Results are promising and show that the semi-supervised classifier has a competitive performance when compared with the fully-supervised classifier.
3.
  • Jerdhaf, Oskar, et al. (author)
  • Evaluating Pre-Trained Language Models for Focused Terminology Extraction from Swedish Medical Records
  • 2022
  • In: Proceedings of the Workshop on Terminology in the 21st century. European Language Resources Association. ISBN 9791095546955, pp. 30-32
  • Conference paper (peer-reviewed)
    • Abstract: In the experiments briefly presented in this abstract, we compare the performance of a generalist Swedish pre-trained language model with a domain-specific Swedish pre-trained model on the downstream task of focused terminology extraction of implant terms, which are terms that indicate the presence of implants in the body of patients. The fine-tuning is identical for both models. For the search strategy, we rely on a KD-Tree that we feed with two different lists of term seeds, one with noise and one without noise. Results show that the use of a domain-specific pre-trained language model has a positive impact on focused terminology extraction only when using term seeds without noise.
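The KD-Tree seed-term search mentioned in this abstract can be sketched briefly. The sketch below is hypothetical: the candidate terms and seed list are invented, the KB/bert-base-swedish-cased checkpoint is an assumption, and mean pooling stands in for whatever embedding the authors actually used. Candidate terms are embedded with a pre-trained language model, indexed in a KD-Tree, and queried with embedded seed terms to retrieve nearby implant-term candidates.

```python
# Hypothetical KD-Tree seed search over language-model term embeddings.
import torch
from sklearn.neighbors import KDTree
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("KB/bert-base-swedish-cased")
model = AutoModel.from_pretrained("KB/bert-base-swedish-cased")

def embed(terms):
    """Mean-pooled hidden states as fixed-size term vectors."""
    enc = tokenizer(terms, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        out = model(**enc).last_hidden_state
    mask = enc["attention_mask"].unsqueeze(-1)
    return ((out * mask).sum(1) / mask.sum(1)).numpy()

# Candidate vocabulary and seed list are invented for illustration.
candidates = ["pacemaker", "stent", "höftprotes", "insulin", "blodtryck"]
seeds = ["pacemaker", "protes"]                # a seed list without noise

tree = KDTree(embed(candidates))               # index candidate embeddings
dist, idx = tree.query(embed(seeds), k=3)      # nearest candidates per seed
for seed, row in zip(seeds, idx):
    print(seed, "->", [candidates[i] for i in row])
```

The abstract's noisy-versus-clean comparison would then amount to running the same query with a second, noise-contaminated seed list and comparing the retrieved candidates.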
