SwePub
Search the SwePub database


Hit list for the search "WFRF:(Munkoh Buabeng Edwin)"

  • Results 1-2 of 2
1.
  • Adelani, David, et al. (authors)
  • A Few Thousand Translations Go A Long Way! Leveraging Pre-trained Models for African News Translation
  • 2022
  • In: NAACL 2022. Stroudsburg: Association for Computational Linguistics. ISBN 9781955917711, pp. 3053-3070
  • Conference paper (peer-reviewed), abstract:
    • Recent advances in the pre-training of language models leverage large-scale datasets to create multilingual models. However, low-resource languages are mostly left out in these datasets. This is primarily because many widely spoken languages are not well represented on the web and therefore excluded from the large-scale crawls used to create datasets. Furthermore, downstream users of these models are restricted to the selection of languages originally chosen for pre-training. This work investigates how to optimally leverage existing pre-trained models to create low-resource translation systems for 16 African languages. We focus on two questions: 1) How can pre-trained models be used for languages not included in the initial pre-training? and 2) How can the resulting translation models effectively transfer to new domains? To answer these questions, we create a new African news corpus covering 16 languages, of which eight languages are not part of any existing evaluation dataset. We demonstrate that the most effective strategy for transferring both to additional languages and to additional domains is to fine-tune large pre-trained models on small quantities of high-quality translation data.
2.
  • Adelani, David Ifeoluwa, et al. (authors)
  • MasakhaNER 2.0: Africa-centric Transfer Learning for Named Entity Recognition
  • 2022
  • In: Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics (ACL), pp. 4488-4508
  • Conference paper (peer-reviewed), abstract:
    • African languages are spoken by over a billion people, but are underrepresented in NLP research and development. The challenges impeding progress include the limited availability of annotated datasets, as well as a lack of understanding of the settings where current methods are effective. In this paper, we make progress towards solutions for these challenges, focusing on the task of named entity recognition (NER). We create the largest human-annotated NER dataset for 20 African languages, and we study the behavior of state-of-the-art cross-lingual transfer methods in an Africa-centric setting, demonstrating that the choice of source language significantly affects performance. We show that choosing the best transfer language improves zero-shot F1 scores by an average of 14 points across 20 languages compared to using English. Our results highlight the need for benchmark datasets and models that cover typologically-diverse African languages.
