SwePub
Search the SwePub database


Result list for the search "WFRF:(Kurfalı Murathan)"

Search: WFRF:(Kurfalı Murathan)

  • Results 1-24 of 24
2.
  • Andersson, Marta, et al. (author)
  • A sentiment-annotated dataset of English causal connectives
  • 2020
  • In: Proceedings of the 14th Linguistic Annotation Workshop. - 9781952148330 ; pp. 24-33
  • Conference paper (peer-reviewed), abstract:
    • This paper investigates the semantic prosody of three causal connectives: due to, owing to and because of in seven varieties of the English language. While research in the domain of English causality exists, we are not aware of studies that would cover the domain of causal connectives in English. Our claim is that connectives such as because of link two arguments, (at least) one of which will include a phrase that contributes to the interpretation of the relation as positive or negative, and hence define the prosody of the connective used. As our results demonstrate, the majority of the prosodies identified are negative for all three connectives; the proportions are stable across the varieties of English studied, and contrary to our expectations, we find no significant differences between the functions of the connectives and discourse preferences. Further, we investigate whether automating the sentiment annotation procedure via a simple language-model-based classifier is possible. The initial results highlight the complexity of the task and the need for more sophisticated systems, probably aided by other related datasets, to achieve reasonable performance.
4.
  • Buchanan, E. M., et al. (author)
  • The Psychological Science Accelerator's COVID-19 rapid-response dataset
  • 2023
  • In: Scientific Data. - : Springer Science and Business Media LLC. - 2052-4463. ; 10:1
  • Journal article (peer-reviewed), abstract:
    • In response to the COVID-19 pandemic, the Psychological Science Accelerator coordinated three large-scale psychological studies to examine the effects of loss-gain framing, cognitive reappraisals, and autonomy framing manipulations on behavioral intentions and affective measures. The data collected (April to October 2020) included specific measures for each experimental study, a general questionnaire examining health prevention behaviors and COVID-19 experience, geographical and cultural context characterization, and demographic information for each participant. Each participant started the study with the same general questions and then was randomized to complete either one longer experiment or two shorter experiments. Data were provided by 73,223 participants with varying completion rates. Participants completed the survey from 111 geopolitical regions in 44 unique languages/dialects. The anonymized dataset described here is provided in both raw and processed formats to facilitate re-use and further analyses. The dataset offers secondary analytic opportunities to explore coping, framing, and self-determination across a diverse, global sample obtained at the onset of the COVID-19 pandemic, which can be merged with other time-sampled or geographic data.
5.
  • Kurfali, Murathan, et al. (author)
  • A distantly supervised Grammatical Error Detection/Correction system for Swedish
  • 2023
  • In: Proceedings of the 12th Workshop on NLP for Computer Assisted Language Learning. - 9789180752503 ; pp. 35-39
  • Conference paper (peer-reviewed), abstract:
    • This paper presents our submission to the first Shared Task on Multilingual Grammatical Error Detection (MultiGED-2023). Our method utilizes a transformer-based sequence-to-sequence model, which was trained on a synthetic dataset consisting of 3.2 billion words. We adopt a distantly supervised approach, with the training process relying exclusively on the distribution of language learners' errors extracted from the annotated corpus used to construct the training data. In the Swedish track, our model ranks fourth out of seven submissions in terms of the target F0.5 metric, while achieving the highest precision. These results suggest that our model is conservative yet remarkably precise in its predictions. (An illustrative sketch of the synthetic-error idea follows below.)
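The entry above describes a sequence-to-sequence corrector trained only on synthetically corrupted text, with corruptions sampled from a learner-error distribution. The following is a minimal sketch of that idea under stated assumptions: the error types, their probabilities, and the example sentence are hypothetical stand-ins, not the distribution or corpus used in the paper.

```python
import random

# Hypothetical learner-error distribution (error type -> probability).
# The paper derives such a distribution from an annotated learner corpus;
# the types and numbers here are illustrative only.
ERROR_DIST = {"keep": 0.4, "delete_word": 0.2, "swap_adjacent": 0.2, "duplicate_word": 0.2}

def corrupt(sentence: str, rng: random.Random) -> str:
    """Inject one synthetic error into a clean sentence, sampled from ERROR_DIST."""
    tokens = sentence.split()
    op = rng.choices(list(ERROR_DIST), weights=list(ERROR_DIST.values()))[0]
    if op == "keep" or len(tokens) < 2:
        return sentence
    i = rng.randrange(len(tokens) - 1)
    if op == "delete_word":
        del tokens[i]
    elif op == "swap_adjacent":
        tokens[i], tokens[i + 1] = tokens[i + 1], tokens[i]
    else:  # duplicate_word
        tokens.insert(i, tokens[i])
    return " ".join(tokens)

# Each (corrupted, clean) pair becomes one training example for a seq2seq corrector.
rng = random.Random(0)
clean = "Jag har bott i Stockholm i tre år ."
print(corrupt(clean, rng), "->", clean)
```

Pairing each corrupted sentence with its clean original yields distant supervision without any manually corrected learner text.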
6.
  • Kurfali, Murathan, 1990-, et al. (author)
  • A Multi-Word Expression Dataset for Swedish
  • 2020
  • In: Proceedings of the 12th Conference on Language Resources and Evaluation (LREC 2020). - Marseille : European Language Resources Association (ELRA). ; pp. 4402-4409
  • Conference paper (peer-reviewed), abstract:
    • We present a new set of 96 Swedish multi-word expressions annotated with degree of (non-)compositionality. In contrast to most previous compositionality datasets, we also consider syntactically complex constructions and publish a formal specification of each expression. This allows evaluation of computational models beyond word bigrams, which have so far been the norm. Finally, we use the annotations to evaluate a system for automatic compositionality estimation based on distributional semantics. Our analysis of the disagreements between human annotators and the distributional model reveals interesting questions related to the perception of compositionality and should be informative to future work in the area. (A minimal sketch of a distributional compositionality score follows below.)
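A common distributional-semantics baseline for the compositionality estimation mentioned above compares the vector of the whole expression with a composition of its component word vectors. This is a minimal sketch with toy vectors; the actual model, dimensionality, and composition function used in the paper may differ.

```python
import numpy as np

def cosine(u: np.ndarray, v: np.ndarray) -> float:
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def compositionality_score(expr_vec: np.ndarray, word_vecs: list) -> float:
    """Similarity between the expression's own vector and the average of its
    component word vectors; higher values suggest a more compositional expression."""
    return cosine(expr_vec, np.mean(word_vecs, axis=0))

# Toy vectors purely for illustration; in practice the expression vector would come
# from embeddings trained with the multi-word expression treated as a single token.
rng = np.random.default_rng(0)
expr = rng.normal(size=8)
words = [rng.normal(size=8), rng.normal(size=8)]
print(round(compositionality_score(expr, words), 3))
```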
7.
  • Kurfalı, Murathan, 1990-, et al. (author)
  • Breaking the Narrative: Scene Segmentation through Sequential Sentence Classification
  • 2021
  • Conference paper (peer-reviewed), abstract:
    • In this paper, we describe our submission to the Shared Task on Scene Segmentation (STSS). The shared task requires participants to segment novels into coherent segments, called scenes. We approach this as a sequential sentence classification task and offer a BERT-based solution with a weighted cross-entropy loss. According to the results, the proposed approach performs relatively well on the task, as our model ranks first and second in the official in-domain and out-of-domain evaluations, respectively. However, the overall low performance (0.37 F1-score) suggests that there is still much room for improvement. (A sketch of the weighted-loss classification setup follows below.)
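The approach summarized above, sequential sentence classification with a weighted cross-entropy loss, can be sketched as follows. The encoder is omitted (the paper uses BERT); the hidden size, class weight, and random tensors are illustrative assumptions.

```python
import torch
import torch.nn as nn

# Per-sentence encodings (e.g., from a BERT encoder, not shown) are classified as
# "scene boundary" (1) or "no boundary" (0). Boundaries are rare, so the positive
# class is up-weighted in the cross-entropy loss; the weight 10.0 is illustrative.
hidden_size, num_labels = 768, 2
classifier = nn.Linear(hidden_size, num_labels)
loss_fn = nn.CrossEntropyLoss(weight=torch.tensor([1.0, 10.0]))

sentence_encodings = torch.randn(32, hidden_size)  # 32 consecutive sentences
labels = torch.randint(0, num_labels, (32,))       # gold boundary labels
logits = classifier(sentence_encodings)
loss = loss_fn(logits, labels)
loss.backward()  # backward pass for one training step
print(float(loss))
```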
8.
  • Kurfalı, Murathan, 1990- (author)
  • Contributions to Shallow Discourse Parsing : To English and beyond
  • 2022
  • Doctoral thesis (other academic/artistic), abstract:
    • Discourse is a coherent set of sentences where the sequential reading of the sentences yields a sense of accumulation and readers can easily follow why one sentence follows another. A text that lacks coherence will most certainly fail to communicate its intended message and leave the reader puzzled as to why the sentences are presented together. However, formally accounting for the differences between a coherent and a non-coherent text still remains a challenge. Various theories propose that the semantic links that are inferred between sentences/clauses, known as discourse relations, are the building blocks of the discourse that can be connected to one another in various ways to form the discourse structure. This dissertation focuses on the former problem of discovering such discourse relations without aiming to arrive at any structure, a task known as shallow discourse parsing (SDP). Unfortunately, so far, SDP has been almost exclusively performed on the available gold annotations in English, leading to only limited insight into how the existing models would perform in a low-resource scenario potentially involving any non-English language. The main objective of the current dissertation is to address these shortcomings and help extend SDP to non-English territory. This aim is pursued through three different threads: (i) investigation of what kind of supervision is minimally required to perform SDP, (ii) construction of multilingual resources annotated at the discourse level, and (iii) extension of well-known methods to (SDP-wise) low-resource languages. An additional aim is to explore the feasibility of SDP as a probing task for evaluating the discourse-level understanding abilities of modern language models. The dissertation is based on six papers grouped into three themes. The first two papers perform different subtasks of SDP through relatively understudied means. Paper I presents a simplified method to perform explicit discourse relation labeling without any feature engineering, whereas Paper II shows how implicit discourse relation recognition benefits from large amounts of unlabeled text through a novel method for distant supervision. The third and fourth papers describe two novel multilingual discourse resources, TED-MDB (Paper III) and three bilingual discourse connective lexicons (Paper IV). Notably, TED-MDB is the first parallel corpus annotated for PDTB-style discourse relations covering six non-English languages. Finally, the last two studies directly deal with multilingual discourse parsing, where Paper V reports the first results in cross-lingual implicit discourse relation recognition and Paper VI proposes a multilingual benchmark including certain discourse-level tasks that have not been explored in this context before. Overall, the dissertation allows for a more detailed understanding of what is required to extend shallow discourse parsing beyond English. The conventional aspects of traditional supervised approaches are replaced in favor of less knowledge-intensive alternatives which, nevertheless, achieve state-of-the-art performance in their respective settings. Moreover, thanks to the introduction of TED-MDB, cross-lingual SDP is explored in a zero-shot setting for the first time. In sum, the proposed methodologies and the constructed resources are among the earliest steps towards building high-performance multilingual, or non-English monolingual, shallow discourse parsers.
11.
  • Kurfali, Murathan, 1990- (author)
  • Labeling Explicit Discourse Relations Using Pre-trained Language Models
  • 2020
  • In: Text, Speech, and Dialogue. - Cham : Springer. - 9783030583231 - 9783030583224 ; pp. 79-86
  • Book chapter (peer-reviewed), abstract:
    • Labeling explicit discourse relations is one of the most challenging sub-tasks of shallow discourse parsing, where the goal is to identify the discourse connectives and the boundaries of their arguments. The state-of-the-art models achieve slightly above 45% F-score by using hand-crafted features. The current paper investigates the efficacy of pre-trained language models in this task. We find that pre-trained language models, when fine-tuned, are powerful enough to replace the linguistic features. We evaluate our model on PDTB 2.0 and report state-of-the-art results in the extraction of the full relation. This is the first time a model has outperformed the knowledge-intensive models without employing any linguistic features. (A token-classification sketch follows below.)
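The task described above, identifying connectives and their argument boundaries, is commonly cast as token classification over the output of a fine-tuned language model. The sketch below shows only a classification head with a hypothetical BIO-style label set; the encoder and the exact tag inventory used in the paper are not reproduced here.

```python
import torch
import torch.nn as nn

# Hypothetical BIO-style label set for connective and argument spans.
LABELS = ["O", "B-Conn", "I-Conn", "B-Arg1", "I-Arg1", "B-Arg2", "I-Arg2"]
hidden_size = 768
head = nn.Linear(hidden_size, len(LABELS))

# Token encodings would come from a fine-tuned pre-trained LM (not shown here).
token_encodings = torch.randn(1, 20, hidden_size)  # batch of 1, 20 tokens
logits = head(token_encodings)                      # shape [1, 20, len(LABELS)]
predicted = [LABELS[i] for i in logits.argmax(dim=-1)[0].tolist()]
print(predicted)
```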
12.
  • Kurfali, Murathan, 1990-, et al. (author)
  • Let’s be explicit about that : Distant supervision for implicit discourse relation classification via connective prediction
  • 2021
  • Conference paper (peer-reviewed), abstract:
    • In implicit discourse relation classification, we want to predict the relation between adjacent sentences in the absence of any overt discourse connectives. This is challenging even for humans, leading to a shortage of annotated data, a fact that makes the task even more difficult for supervised machine learning approaches. In the current study, we perform implicit discourse relation classification without relying on any labeled implicit relations. We sidestep the lack of data through explicitation of implicit relations to reduce the task to two sub-problems: language modeling and explicit discourse relation classification, a much easier problem. Our experimental results show that this method can even marginally outperform the state-of-the-art, in spite of being much simpler than alternative models of comparable performance. Moreover, we show that the achieved performance is robust across domains, as suggested by the zero-shot experiments on a completely different domain. This indicates that recent advances in language modeling have made language models sufficiently good at capturing inter-sentence relations without the help of explicit discourse markers. (A sketch of the connective-prediction idea follows below.)
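The explicitation idea described above can be sketched with a masked language model: insert a mask between the two arguments, let the model propose a connective, and map the predicted connective to a relation sense. The model name, candidate connectives, and sense mapping below are illustrative assumptions, not the paper's actual configuration.

```python
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")

# Hypothetical connective-to-sense mapping; a real system would use a fuller inventory.
CONNECTIVE_TO_SENSE = {
    "because": "Contingency.Cause",
    "but": "Comparison.Contrast",
    "then": "Temporal.Asynchronous",
    "instead": "Expansion.Substitution",
}

def predict_sense(arg1: str, arg2: str) -> str:
    """Predict an implicit relation sense by letting the masked LM choose a connective."""
    text = f"{arg1} {tokenizer.mask_token} {arg2}"
    inputs = tokenizer(text, return_tensors="pt")
    mask_pos = int((inputs["input_ids"][0] == tokenizer.mask_token_id).nonzero()[0])
    with torch.no_grad():
        logits = model(**inputs).logits[0, mask_pos]
    candidates = list(CONNECTIVE_TO_SENSE)
    candidate_ids = tokenizer.convert_tokens_to_ids(candidates)
    best = candidates[int(logits[candidate_ids].argmax())]
    return CONNECTIVE_TO_SENSE[best]

print(predict_sense("It was raining heavily.", "The match was cancelled."))
```

Restricting the prediction to a fixed list of connectives keeps the mapping to senses deterministic, at the cost of ignoring connectives outside the list.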
13.
  • Kurfali, Murathan, et al. (author)
  • Noisy Parallel Corpus Filtering through Projected Word Embeddings
  • 2019
  • In: Proceedings of the Fourth Conference on Machine Translation (WMT). - : Association for Computational Linguistics. ; pp. 279-283
  • Conference paper (peer-reviewed), abstract:
    • We present a very simple method for parallel text cleaning of low-resource languages, based on projection of word embeddings trained on large monolingual corpora in high-resource languages. In spite of its simplicity, we approach the performance of the strong baseline system in the downstream machine translation evaluation. (A sketch of the embedding-based filtering follows below.)
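The filtering summarized above can be sketched as scoring each sentence pair by the cosine similarity of averaged word vectors in a shared cross-lingual space. The projection step (mapping the two monolingual embedding spaces together) is assumed to have been done already, and the threshold below is an illustrative value rather than the one used in the paper.

```python
import numpy as np

def sentence_vector(tokens, word_vectors, dim=300):
    """Average the (already cross-lingually projected) vectors of known tokens."""
    vecs = [word_vectors[t] for t in tokens if t in word_vectors]
    return np.mean(vecs, axis=0) if vecs else np.zeros(dim)

def keep_pair(src_tokens, tgt_tokens, src_vectors, tgt_vectors, threshold=0.4):
    """Keep a sentence pair only if its two sentence vectors are similar enough."""
    u = sentence_vector(src_tokens, src_vectors)
    v = sentence_vector(tgt_tokens, tgt_vectors)
    denom = np.linalg.norm(u) * np.linalg.norm(v)
    return denom > 0 and float(u @ v) / denom >= threshold

# Toy usage: identical toy vectors on both sides give similarity 1.0, so the pair is kept.
toy = {"hej": np.ones(300), "hello": np.ones(300)}
print(keep_pair(["hej"], ["hello"], toy, toy))
```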
14.
  • Kurfali, Murathan, 1990-, et al. (author)
  • Probing Multilingual Language Models for Discourse
  • 2021
  • Conference paper (peer-reviewed), abstract:
    • Pre-trained multilingual language models have become an important building block in multilingual natural language processing. In the present paper, we investigate a range of such models to find out how well they transfer discourse-level knowledge across languages. This is done with a systematic evaluation on a broader set of discourse-level tasks than has previously been assembled. We find that the XLM-RoBERTa family of models consistently shows the best performance, by simultaneously being good monolingual models and degrading relatively little in a zero-shot setting. Our results also indicate that model distillation may hurt the cross-lingual transfer of sentence representations, while language dissimilarity at most has a modest effect. We hope that our test suite, covering 5 tasks with a total of 22 languages in 10 distinct families, will serve as a useful evaluation platform for multilingual performance at and beyond the sentence level.
15.
  • Kurfali, Murathan, 1990-, et al. (author)
  • TED-MDB Lexicons : Tr-EnConnLex, Pt-EnConnLex
  • 2020
  • In: The First Workshop on Computational Approaches to Discourse. - Stroudsburg, PA, USA : Association for Computational Linguistics.
  • Conference paper (peer-reviewed), abstract:
    • In this work, we present two new bilingual discourse connective lexicons, namely, for Turkish-English and European Portuguese-English, created automatically using the existing discourse relation-aligned TED-MDB corpus. In their current form, the Pt-En lexicon includes 95 entries, whereas the Tr-En lexicon contains 133 entries. The lexicons constitute the first step of a larger project of developing a multilingual discourse connective lexicon.
18.
  • Kurfali, Murathan, 1990-, et al. (author)
  • Zero-shot transfer for implicit discourse relation classification
  • 2019
  • In: 20th Annual Meeting of the Special Interest Group on Discourse and Dialogue. - Stroudsburg, PA, USA : Association for Computational Linguistics. ; pp. 226-231
  • Conference paper (peer-reviewed), abstract:
    • Automatically classifying the relation between sentences in a discourse is a challenging task, in particular when there is no overt expression of the relation. It is made even more challenging by the fact that annotated training data exists only for a small number of languages, such as English and Chinese. We present a new system using zero-shot transfer learning for implicit discourse relation classification, where the only resource used for the target language is unannotated parallel text. This system is evaluated on the discourse-annotated TED-MDB parallel corpus, where it obtains good results for all seven languages using only English training data.
19.
  • Kutlu, Ferhat, et al. (author)
  • Toward a shallow discourse parser for Turkish
  • 2023
  • In: Natural Language Engineering. - 1351-3249 .- 1469-8110.
  • Journal article (peer-reviewed), abstract:
    • One of the most interesting aspects of natural language is how texts cohere, which involves the pragmatic or semantic relations that hold between clauses (addition, cause-effect, conditional, similarity), referred to as discourse relations. A focus on the identification and classification of discourse relations appears as an imperative challenge to be resolved to support tasks such as text summarization, dialogue systems, and machine translation that need information above the clause level. Despite the recent interest in discourse relations in well-known languages such as English, data and experiments are still needed for typologically different and less-resourced languages. We report the most comprehensive investigation of shallow discourse parsing in Turkish, focusing on two main sub-tasks: identification of discourse relation realization types and the sense classification of explicit and implicit relations. The work is based on the approach of fine-tuning a pre-trained language model (BERT) as an encoder and classifying the encoded data with neural network-based classifiers. We first identify the discourse relation realization type that holds in a given text, if there is any. Then, we move on to the sense classification of the identified explicit and implicit relations. In addition to in-domain experiments on a held-out test set from the Turkish Discourse Bank (TDB 1.2), we also report the out-of-domain performance of our models in order to evaluate their generalization abilities, using the Turkish part of the TED Multilingual Discourse Bank. Finally, we explore the effect of multilingual data aggregation on the classification of relation realization type through a cross-lingual experiment. The results suggest that our models perform relatively well despite the limited size of the TDB 1.2 and that there are language-specific aspects of detecting the types of discourse relation realization. We believe that the findings are important both in providing insights regarding the performance of modern language models in a typologically different language and in the low-resource scenario, given that the TDB 1.2 is 1/20th of the Penn Discourse TreeBank in terms of the total number of relations.
21.
  • Wirén, Mats, 1954-, et al. (author)
  • Annotating the Narrative: A Plot of Scenes, Events, Characters and Other Intriguing Elements
  • 2022
  • In: LIVE and LEARN. - Gothenburg : Department of Swedish, Multilingualism, Language Technology. - 9789187850837 ; pp. 161-164
  • Book chapter (other academic/artistic), abstract:
    • Analysis of narrative structure in prose fiction is a field which is gaining increased attention in NLP, and which potentially has many interesting and more far-reaching applications. This paper provides a summary and motivation of two different but interrelated strands of work that we have carried out in this field in recent years: on the one hand, principles and guidelines for annotation, and on the other, methods for automatic annotation.
22.
  • Zeyrek, Deniz, et al. (author)
  • TED Multilingual Discourse Bank (TED-MDB) : a parallel corpus annotated in the PDTB style
  • 2020
  • In: Language Resources and Evaluation. - : Springer Science and Business Media LLC. - 1574-020X .- 1574-0218. ; 54, pp. 587-613
  • Journal article (peer-reviewed), abstract:
    • TED Multilingual Discourse Bank, or TED-MDB, is a multilingual resource where TED talks are annotated at the discourse level in 6 languages (English, Polish, German, Russian, European Portuguese, and Turkish) following the aims and principles of PDTB. We explain the corpus design criteria, which have three main features: the linguistic characteristics of the languages involved, the interactive nature of TED talks, which led us to annotate Hypophora, and the decision to avoid projection. We report our annotation consistency and post-annotation alignment experiments, and provide a cross-lingual comparison based on corpus statistics.
23.
  • Östling, Robert, 1986-, et al. (author)
  • Language Embeddings Sometimes Contain Typological Generalizations
  • 2023
  • In: Computational Linguistics - Association for Computational Linguistics (Print). - 0891-2017 .- 1530-9312. ; 49:4, pp. 1003-1051
  • Journal article (peer-reviewed), abstract:
    • To what extent can neural network models learn generalizations about language structure, and how do we find out what they have learned? We explore these questions by training neural models for a range of natural language processing tasks on a massively multilingual dataset of Bible translations in 1,295 languages. The learned language representations are then compared to existing typological databases as well as to a novel set of quantitative syntactic and morphological features obtained through annotation projection. We conclude that some generalizations are surprisingly close to traditional features from linguistic typology, but that most of our models, as well as those of previous work, do not appear to have made linguistically meaningful generalizations. Careful attention to details in the evaluation turns out to be essential to avoid false positives. Furthermore, to encourage continued work in this field, we release several resources covering most or all of the languages in our data: (1) multiple sets of language representations, (2) multilingual word embeddings, (3) projected and predicted syntactic and morphological features, (4) software to provide linguistically sound evaluations of language representations.
24.
  • Özer, Sibel, et al. (author)
  • Linking discourse-level information and the induction of bilingual discourse connective lexicons
  • 2022
  • In: Semantic Web. - 1570-0844 .- 2210-4968. ; 13:6, pp. 1081-1102
  • Journal article (peer-reviewed), abstract:
    • The single biggest obstacle in performing comprehensive cross-lingual discourse analysis is the scarcity of multilingual resources. The existing resources are overwhelmingly monolingual, compelling researchers to infer the discourse-level information in the target languages through error-prone automatic means. The current paper aims to provide a more direct insight into the cross-lingual variations in discourse structures by linking the annotated relations of the TED-Multilingual Discourse Bank, which consists of six independently annotated TED talks in seven different languages. It is shown that the linguistic labels over the relations annotated in the texts of these languages can be automatically linked with English with high accuracy, as verified against the relations of three diverse languages semi-automatically linked with relations over English texts. The resulting corpus has great potential to reveal the divergences in local discourse relations, as well as leading to new resources, as exemplified by the induction of bilingual discourse connective lexicons.