SwePub
Search the SwePub database


Result list for search "L773:9789916219997"

  • Result 1-6 of 6
2.
  • Cerniavski, Rafal, et al. (author)
  • Multilingual Automatic Speech Recognition for Scandinavian Languages
  • 2023
  • In: Proceedings of the 24th Nordic Conference on Computational Linguistics (NoDaLiDa). Tartu: University of Tartu. ISBN 9789916219997; pp. 460–466
  • Conference paper (peer-reviewed). Abstract:
    • We investigate the effectiveness of multilingual automatic speech recognition models for Scandinavian languages by further fine-tuning a Swedish model on Swedish, Danish, and Norwegian. We first explore zero-shot models, which perform poorly across the three languages. However, we show that a multilingual model based on a strong Swedish model, further fine-tuned on all three languages, performs well for Norwegian and Danish, with a relatively low decrease in the performance for Swedish. With a language classification module, we improve the performance of the multilingual model even further.
  •  
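The language-classification routing described in the abstract can be sketched as follows. This is a toy illustration, not the paper's actual classifier or models: the character heuristic and the model names are invented stand-ins for the real language-ID module and fine-tuned recognizers.

```python
# Hypothetical sketch: a language classifier decides which fine-tuned
# ASR model should decode an utterance. Swedish orthography uses ä/ö,
# while Danish and Norwegian use æ/ø -- a crude but illustrative signal.

def classify_language(text: str) -> str:
    """Toy language ID based on Scandinavian orthography."""
    if any(ch in text for ch in "äö"):
        return "sv"
    if any(ch in text for ch in "æø"):
        return "da/no"
    return "sv"  # fall back to the base (Swedish) model

def route_to_model(text: str, models: dict) -> str:
    """Dispatch the utterance to the model for the detected language."""
    return models[classify_language(text)]

# Invented model identifiers, standing in for the fine-tuned checkpoints.
models = {"sv": "asr-sv-finetuned", "da/no": "asr-scandi-finetuned"}
```

In the paper's setup the classifier sits in front of the multilingual model; here the dictionary dispatch only illustrates the routing idea.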
3.
  • Farahani, Mehrdad, 1989, et al. (author)
  • An Empirical Study of Multitask Learning to Improve Open Domain Dialogue Systems
  • 2023
  • In: Proceedings of the 24th Nordic Conference on Computational Linguistics (NoDaLiDa), Tórshavn, Faroe Islands. University of Tartu Library. ISSN 1736-8197, 1736-6305; ISBN 9789916219997; pp. 347–357
  • Conference paper (peer-reviewed). Abstract:
    • Autoregressive models used to generate responses in open-domain dialogue systems often struggle to take long-term context into account and to maintain consistency over a dialogue. Previous research in open-domain dialogue generation has shown that the use of auxiliary tasks can introduce inductive biases that encourage the model to improve these qualities. However, most previous research has focused on encoder-only or encoder/decoder models, while the use of auxiliary tasks in decoder-only autoregressive models is under-explored. This paper describes an investigation where four different auxiliary tasks are added to small and medium-sized GPT-2 models fine-tuned on the PersonaChat and DailyDialog datasets. The results show that the introduction of the new auxiliary tasks leads to small but consistent improvement in evaluations of the investigated models.
  •  
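Multitask fine-tuning of the kind the abstract describes is commonly implemented as a weighted sum of the language-modeling loss and the auxiliary losses. A minimal sketch of that aggregation, with task names and weights invented for illustration (the paper's actual tasks and weighting are not given here):

```python
# Combine per-task losses into one training objective. The task names
# and weights below are illustrative, not the paper's configuration.

def multitask_loss(losses: dict, weights: dict) -> float:
    """Weighted sum of the LM loss and auxiliary task losses."""
    return sum(weights.get(task, 1.0) * loss for task, loss in losses.items())

step_losses = {"lm": 2.5, "next_utterance": 0.8, "persona_match": 0.4}
weights = {"lm": 1.0, "next_utterance": 0.5, "persona_match": 0.5}
total = multitask_loss(step_losses, weights)  # 2.5 + 0.4 + 0.2 = 3.1
```

A single backward pass over `total` then updates the shared GPT-2 parameters for all tasks at once.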
4.
  • Masciolini, Arianna, 1996 (author)
  • A query engine for L1-L2 parallel dependency treebanks
  • 2023
  • In: Proceedings of the 24th Nordic Conference on Computational Linguistics (NoDaLiDa), May 22–24, 2023, Tórshavn, Faroe Islands. Eds. Tanel Alumäe and Mark Fishel. Tartu, Estonia: University of Tartu Library. ISSN 1736-8197, 1736-6305; ISBN 9789916219997
  • Conference paper (peer-reviewed). Abstract:
    • L1-L2 parallel dependency treebanks are learner corpora with interoperability as their main design goal. They consist of sentences produced by learners of a second language (L2) paired with native-like (L1) correction hypotheses. Rather than explicitly labelled for errors, these are annotated following the Universal Dependencies standard. This implies relying on tree queries for error retrieval. Work in this direction is, however, limited. We present a query engine for L1-L2 treebanks and evaluate it on two corpora, one manually validated and one automatically parsed.
  •  
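The error-retrieval-by-tree-query idea can be illustrated with a small sketch: each sentence is a pair of aligned dependency trees of (token, head index, deprel) triples, and a query retrieves positions where the learner (L2) tree and the correction (L1) tree disagree on the relation. The data and function below are invented for illustration and are not the paper's query engine or its query language.

```python
# Toy L1-L2 parallel trees: (token, head index, UD deprel), token-aligned.
L2_tree = [("He", 2, "nsubj"), ("go", 0, "root"), ("school", 2, "obj")]
L1_tree = [("He", 2, "nsubj"), ("goes", 0, "root"), ("school", 2, "obl")]

def deprel_mismatches(l1, l2):
    """Return aligned positions whose dependency relations differ --
    a proxy for error retrieval without explicit error labels."""
    return [i for i, (a, b) in enumerate(zip(l1, l2)) if a[2] != b[2]]

print(deprel_mismatches(L1_tree, L2_tree))  # position 2: obl vs. obj
```

Because errors are not labelled explicitly, this kind of structural comparison is what a tree-query engine over such treebanks has to support.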
5.
  • Stymne, Sara, 1977-, et al. (author)
  • Parser Evaluation for Analyzing Swedish 19th–20th Century Literature
  • 2023
  • In: Proceedings of the 24th Nordic Conference on Computational Linguistics (NoDaLiDa). Tartu: University of Tartu. ISBN 9789916219997; pp. 335–346
  • Conference paper (peer-reviewed). Abstract:
    • In this study, we aim to find a parser for accurately identifying different types of subordinate clauses, and related phenomena, in 19th–20th-century Swedish literature. Since no test set is available for parsing from this time period, we propose a lightweight annotation scheme for annotating a single relation of interest per sentence. We train a variety of parsers for Swedish and compare evaluations on standard modern test sets and our targeted test set. We find clear trends in which parser types perform best on the standard test sets, but that performance is considerably more varied on the targeted test set. We believe that our proposed annotation scheme can be useful for complementing standard evaluations, with a low annotation effort.
  •  
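Evaluation against a targeted test set with one annotated relation per sentence, as the lightweight scheme proposes, reduces to checking whether each parse recovers that single relation. A minimal sketch, with all data made up and the matching criterion (exact head, dependent, label triple) assumed rather than taken from the paper:

```python
# Gold: one relation of interest per sentence, as (head, dependent, label).
def targeted_accuracy(gold: dict, predicted: dict) -> float:
    """Fraction of sentences whose single annotated relation is recovered."""
    hits = sum(1 for sid, rel in gold.items() if predicted.get(sid) == rel)
    return hits / len(gold)

gold = {1: (3, 5, "mark"), 2: (2, 4, "advcl"), 3: (1, 6, "acl:relcl")}
pred = {1: (3, 5, "mark"), 2: (2, 4, "advmod"), 3: (1, 6, "acl:relcl")}
print(targeted_accuracy(gold, pred))  # 2 of 3 relations recovered
```

Annotating one relation per sentence keeps the effort low while still probing exactly the phenomena (e.g. subordinate clauses) the standard test sets average away.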
6.
  • Zhou, Wei, et al. (author)
  • The Finer They Get: Combining Fine-Tuned Models For Better Semantic Change Detection
  • 2023
  • In: 24th Nordic Conference on Computational Linguistics (NoDaLiDa). Linköping University Electronic Press. ISSN 1736-8197, 1736-6305; ISBN 9789916219997
  • Conference paper (peer-reviewed). Abstract:
    • In this work we investigate the hypothesis that enriching contextualized models using fine-tuning tasks can improve their capacity to detect lexical semantic change (LSC). We include tasks aimed to capture both low-level linguistic information like part-of-speech tagging, as well as higher level (semantic) information. Through a series of analyses we demonstrate that certain combinations of fine-tuning tasks, like sentiment, syntactic information, and logical inference, bring large improvements to standard LSC models that are based only on standard language modeling. We test on the binary classification and ranking tasks of SemEval-2020 Task 1 and evaluate using both permutation tests and under transfer-learning scenarios.
  •  
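The model-combination idea can be sketched as follows: each fine-tuned model assigns a word a semantic-change score (for example a cosine distance between its time-period embeddings), and an ensemble averages those scores before producing the ranking evaluated in SemEval-2020 Task 1. The scores, words, and averaging scheme below are invented for illustration; the paper's actual combination method may differ.

```python
# Per-model semantic-change scores (invented). Higher = more change.
def combine_scores(per_model: dict) -> dict:
    """Average each word's change score across the fine-tuned models."""
    words = next(iter(per_model.values())).keys()
    return {w: sum(m[w] for m in per_model.values()) / len(per_model)
            for w in words}

scores = {
    "sentiment": {"plane": 0.9, "bank": 0.4},
    "nli":       {"plane": 0.7, "bank": 0.2},
}
combined = combine_scores(scores)
ranking = sorted(combined, key=combined.get, reverse=True)  # most changed first
```

The ranking task then compares this ordering against gold change judgments, and the binary task thresholds the combined scores.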


 