SwePub
Search the SwePub database


Result list for the search "WFRF:(Kurfalı Murathan) srt2:(2021)"


  • Results 1-4 of 4
1.
2.
  • Kurfalı, Murathan, 1990-, et al. (author)
  • Breaking the Narrative: Scene Segmentation through Sequential Sentence Classification
  • 2021
  • Conference paper (peer-reviewed), abstract:
    • In this paper, we describe our submission to the Shared Task on Scene Segmentation (STSS). The shared task requires participants to segment novels into coherent segments, called scenes. We approach this as a sequential sentence classification task and offer a BERT-based solution with a weighted cross-entropy loss. According to the results, the proposed approach performs relatively well on the task, as our model ranks first and second in the official in-domain and out-of-domain evaluations, respectively. However, the overall low performance (0.37 F1-score) suggests that there is still much room for improvement. (A hypothetical sketch of the weighted-loss setup follows below.)
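
The weighted cross-entropy setup described in the abstract can be illustrated with a minimal, hypothetical sketch. Everything concrete here is an assumption for illustration only: the encoder name, the two-label scheme (boundary vs. no boundary), and the class weights are placeholders, not the authors' actual configuration.

    # Hypothetical sketch: scene segmentation as sequential sentence
    # classification with a weighted cross-entropy loss that upweights
    # the rare scene-boundary class. All names and values are placeholders.
    import torch
    import torch.nn as nn
    from transformers import AutoTokenizer, AutoModelForSequenceClassification

    tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
    model = AutoModelForSequenceClassification.from_pretrained(
        "bert-base-cased", num_labels=2)  # 0 = no boundary, 1 = scene boundary

    # Scene boundaries are rare, so the boundary class gets a larger
    # weight (10.0 is illustrative, not the paper's setting).
    loss_fn = nn.CrossEntropyLoss(weight=torch.tensor([1.0, 10.0]))

    def training_step(sentences, labels):
        batch = tokenizer(sentences, padding=True, truncation=True,
                          return_tensors="pt")
        logits = model(**batch).logits
        return loss_fn(logits, torch.tensor(labels))

Upweighting the minority class is one standard way to keep a cross-entropy objective from collapsing to the majority "no boundary" prediction on such imbalanced data.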
3.
  • Kurfali, Murathan, 1990-, et al. (author)
  • Let’s be explicit about that: Distant supervision for implicit discourse relation classification via connective prediction
  • 2021
  • Conference paper (peer-reviewed), abstract:
    • In implicit discourse relation classification, we want to predict the relation between adjacent sentences in the absence of any overt discourse connectives. This is challenging even for humans, leading to a shortage of annotated data, a fact that makes the task even more difficult for supervised machine learning approaches. In the current study, we perform implicit discourse relation classification without relying on any labeled implicit relations. We sidestep the lack of data through explicitation of implicit relations, reducing the task to two sub-problems: language modeling and explicit discourse relation classification, a much easier problem. Our experimental results show that this method can even marginally outperform the state of the art, despite being much simpler than alternative models of comparable performance. Moreover, zero-shot experiments on a completely different domain suggest that the achieved performance is robust across domains. This indicates that recent advances in language modeling have made language models sufficiently good at capturing inter-sentence relations without the help of explicit discourse markers. (A toy sketch of the connective-prediction idea follows below.)
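
A toy version of the connective-prediction route might look as follows. For brevity, this sketch replaces the paper's second sub-problem (an explicit-relation classifier) with a direct connective-to-relation lookup; the model name and the four-entry mapping are illustrative assumptions, not the paper's setup.

    # Hypothetical sketch: let a masked language model propose a connective
    # between two sentences, then map that connective to a coarse relation.
    import torch
    from transformers import AutoTokenizer, AutoModelForMaskedLM

    tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
    model = AutoModelForMaskedLM.from_pretrained("bert-base-cased")

    CONNECTIVE_TO_RELATION = {  # illustrative subset, not the paper's mapping
        "because": "Contingency", "but": "Comparison",
        "then": "Temporal", "also": "Expansion",
    }

    def predict_relation(arg1, arg2):
        text = f"{arg1} {tokenizer.mask_token} {arg2}"
        inputs = tokenizer(text, return_tensors="pt")
        mask_pos = (inputs.input_ids == tokenizer.mask_token_id).nonzero()[0, 1]
        logits = model(**inputs).logits[0, mask_pos]
        # Score only the known connectives and pick the most likely one.
        ids = tokenizer.convert_tokens_to_ids(list(CONNECTIVE_TO_RELATION))
        best = max(ids, key=lambda i: logits[i].item())
        return CONNECTIVE_TO_RELATION[tokenizer.convert_ids_to_tokens(best)]

    print(predict_relation("It was raining heavily.", "the match was cancelled."))

The point of the sketch is only the reduction itself: once a language model fills in a plausible connective, the implicit relation problem becomes the much better-resourced explicit one.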
4.
  • Kurfali, Murathan, 1990-, et al. (author)
  • Probing Multilingual Language Models for Discourse
  • 2021
  • Conference paper (peer-reviewed), abstract:
    • Pre-trained multilingual language models have become an important building block in multilingual natural language processing. In the present paper, we investigate a range of such models to find out how well they transfer discourse-level knowledge across languages. This is done with a systematic evaluation on a broader set of discourse-level tasks than has previously been assembled. We find that the XLM-RoBERTa family of models consistently shows the best performance, by simultaneously being good monolingual models and degrading relatively little in a zero-shot setting. Our results also indicate that model distillation may hurt the cross-lingual transfer of sentence representations, while language dissimilarity has at most a modest effect. We hope that our test suite, covering 5 tasks with a total of 22 languages in 10 distinct families, will serve as a useful evaluation platform for multilingual performance at and beyond the sentence level. (A hypothetical sketch of such a zero-shot probing setup follows below.)
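
The zero-shot setting described above can be mimicked with a small probing sketch: fit a linear probe on English sentence embeddings from a multilingual encoder, then evaluate it unchanged on another language. The pooling strategy, the toy two-sentence "task", and the probe itself are placeholder assumptions, not the paper's actual test suite.

    # Hypothetical sketch: a linear probe trained on English embeddings and
    # evaluated zero-shot on German, using mean-pooled XLM-R representations.
    import torch
    from sklearn.linear_model import LogisticRegression
    from transformers import AutoTokenizer, AutoModel

    tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
    encoder = AutoModel.from_pretrained("xlm-roberta-base")

    def embed(sentences):
        # Mean-pool the last hidden states into fixed-size sentence vectors.
        batch = tokenizer(sentences, padding=True, truncation=True,
                          return_tensors="pt")
        with torch.no_grad():
            hidden = encoder(**batch).last_hidden_state
        return hidden.mean(dim=1).numpy()

    # Toy two-class data standing in for a real discourse-level task.
    train_en = ["He fell asleep.", "However, the noise continued."]
    test_de = ["Er schlief ein.", "Der Lärm ging jedoch weiter."]
    labels = [0, 1]

    probe = LogisticRegression(max_iter=1000)
    probe.fit(embed(train_en), labels)          # train on English only
    print(probe.score(embed(test_de), labels))  # evaluate zero-shot on German

Running the same harness with a distilled checkpoint in place of xlm-roberta-base is the kind of comparison the abstract's distillation finding refers to.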


 