SwePub
Search the SwePub database


Result list for search "WFRF:(Sahlgren Magnus) srt2:(2020-2024)"

Search: WFRF:(Sahlgren Magnus) > (2020-2024)

  • Results 1-10 of 16
2.
  • Gogoulou, Evangelia, et al. (author)
  • Predicting treatment outcome from patient texts : The case of internet-based cognitive behavioural therapy
  • 2021
  • In: EACL 2021 - 16th Conference of the European Chapter of the Association for Computational Linguistics, Proceedings of the Conference. - : Association for Computational Linguistics (ACL). - 9781954085022 ; pp. 575-580
  • Conference paper (peer-reviewed), abstract:
    • We investigate the feasibility of applying standard text categorisation methods to patient text in order to predict treatment outcome in Internet-based cognitive behavioural therapy. The data set is unique in its detail and size for regular care for depression, social anxiety, and panic disorder. Our results indicate that there is a signal in the depression data, albeit a weak one. We also perform terminological and sentiment analysis, which confirm those results. © 2021 Association for Computational Linguistics
3.
  • Alkathiri, Abdul Aziz, et al. (author)
  • Decentralized Word2Vec Using Gossip Learning
  • 2021
  • In: Proceedings of the 23rd Nordic Conference on Computational Linguistics (NoDaLiDa 2021).
  • Conference paper (peer-reviewed), abstract:
    • Advanced NLP models require huge amounts of data from various domains to produce high-quality representations. It is useful then for a few large public and private organizations to join their corpora during training. However, factors such as legislation and user emphasis on data privacy may prevent centralized orchestration and data sharing among these organizations. Therefore, for this specific scenario, we investigate how gossip learning, a massively-parallel, data-private, decentralized protocol, compares to a shared-dataset solution. We find that the application of Word2Vec in a gossip learning framework is viable. Without any tuning, the results are comparable to a traditional centralized setting, with a reduction in ground-truth similarity scores as low as 4.3%. Furthermore, the results are up to 54.8% better than independent local training.
4.
  • Berdicevskis, Aleksandrs, 1983, et al. (author)
  • Superlim: A Swedish Language Understanding Evaluation Benchmark
  • 2023
  • In: Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, December 6-10, 2023, Singapore / Houda Bouamor, Juan Pino, Kalika Bali (editors). - Stroudsburg, PA : Association for Computational Linguistics. - 9798891760608
  • Conference paper (peer-reviewed)
5.
  • Carlsson, Fredrik, et al. (author)
  • Fine-Grained Controllable Text Generation Using Non-Residual Prompting
  • 2022
  • In: Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics. - Stroudsburg, PA, USA : Association for Computational Linguistics. - 9781955917216 ; pp. 6837-6857
  • Conference paper (peer-reviewed), abstract:
    • The introduction of immensely large Causal Language Models (CLMs) has rejuvenated the interest in open-ended text generation. However, controlling the generative process for these Transformer-based models is at large an unsolved problem. Earlier work has explored either plug-and-play decoding strategies, or more powerful but blunt approaches such as prompting. There hence currently exists a trade-off between fine-grained control, and the capability for more expressive high-level instructions. To alleviate this trade-off, we propose an encoder-decoder architecture that enables intermediate text prompts at arbitrary time steps. We propose a resource-efficient method for converting a pre-trained CLM into this architecture, and demonstrate its potential on various experiments, including the novel task of contextualized word inclusion. Our method provides strong results on multiple experimental settings, proving itself to be both expressive and versatile.
6.
  • Carlsson, Fredrik, et al. (author)
  • Semantic Re-tuning with Contrastive Tension
  • 2021
  • Conference paper (peer-reviewed), abstract:
    • Extracting semantically useful natural language sentence representations from pre-trained deep neural networks such as Transformers remains a challenge. We first demonstrate that pre-training objectives impose a significant task bias onto the final layers of models, with a layer-wise survey of the Semantic Textual Similarity (STS) correlations for multiple common Transformer language models. We then propose a new self-supervised method called Contrastive Tension (CT) to counter such biases. CT frames the training objective as a noise-contrastive task between the final layer representations of two independent models, in turn making the final layer representations suitable for feature extraction. Results from multiple common unsupervised and supervised STS tasks indicate that CT outperforms previous State Of The Art (SOTA), and when combining CT with supervised data we improve upon previous SOTA results with large margins.
7.
  • Dahlberg, Stefan, et al. (author)
  • A Distributional Semantic Online Lexicon for Linguistic Explorations of Societies
  • 2023
  • In: Social Science Computer Review. - : SAGE Publications. - 0894-4393 .- 1552-8286. ; 41:2
  • Journal article (peer-reviewed), abstract:
    • Linguistic Explorations of Societies (LES) is an interdisciplinary research project with scholars from the fields of political science, computer science, and computational linguistics. The overarching ambition of LES has been to contribute to the survey-based comparative scholarship by compiling and analyzing online text data within and between languages and countries. To this end, the project has developed an online semantic lexicon, which allows researchers to explore meanings and usages of words in online media across a substantial number of geo-coded languages. The lexicon covers data from approximately 140 language-country combinations and is, to our knowledge, the most extensive free research resource of its kind. Such a resource makes it possible to critically examine survey translations and identify discrepancies in order to modify and improve existing survey methodology, and its unique features further enable Internet researchers to study public debate online from a comparative perspective. In this article, we discuss the social scientific rationale for using online text data as a complement to survey data, and present the natural language processing-based methodology behind the lexicon including its underpinning theory and practical modeling. Finally, we engage in a critical reflection about the challenges of using online text data to gauge public opinion and political behavior across the world.
8.
  • Dwibedi, Chinmay, 1987, et al. (author)
  • Effect of self-managed lifestyle treatment on glycemic control in patients with type 2 diabetes
  • 2022
  • In: npj Digital Medicine. - : Nature Research. - 2398-6352. ; 5:1
  • Journal article (peer-reviewed), abstract:
    • The lack of effective, scalable solutions for lifestyle treatment is a global clinical problem, causing severe morbidity and mortality. We developed a method for lifestyle treatment that promotes self-reflection and iterative behavioral change, provided as a digital tool, and evaluated its effect in 370 patients with type 2 diabetes (ClinicalTrials.gov identifier: NCT04691973). Users of the tool had reduced blood glucose, both compared with randomized and matched controls (involving 158 and 204 users, respectively), as well as improved systolic blood pressure, body weight and insulin resistance. The improvement was sustained during the entire follow-up (average 730 days). A pathophysiological subgroup of obese insulin-resistant individuals had a pronounced glycemic response, enabling identification of those who would benefit in particular from lifestyle treatment. Natural language processing showed that the metabolic improvement was coupled with the self-reflective element of the tool. The treatment is cost-saving because of improved risk factor control for cardiovascular complications. The findings open an avenue for self-managed lifestyle treatment with long-term metabolic efficacy that is cost-saving and can reach large numbers of people. © 2022, The Author(s).
9.
  • Ghoorchian, Kambiz, 1981-, et al. (author)
  • GDTM: Graph-based Dynamic Topic Models
  • 2020
  • In: Progress in Artificial Intelligence. - : Springer Nature. - 2192-6352 .- 2192-6360. ; 9, pp. 195-207
  • Journal article (peer-reviewed), abstract:
    • Dynamic Topic Modeling (DTM) is the ultimate solution for extracting topics from short texts generated in Online Social Networks (OSNs) like Twitter. A DTM solution is required to be scalable and to be able to account for sparsity in short texts and dynamicity of topics. Current solutions combine probabilistic mixture models like Dirichlet Multinomial or Pitman-Yor Process with approximate inference approaches like Gibbs Sampling and Stochastic Variational Inference to, respectively, account for dynamicity and scalability in DTM. However, these solutions rely on weak probabilistic language models, which do not account for sparsity in short texts. In addition, their inference is based on iterative optimization algorithms, which have scalability issues when it comes to DTM. We present GDTM, a single-pass graph-based DTM algorithm, to solve the problem. GDTM combines a context-rich and incremental feature representation model, called Random Indexing (RI), with a novel online graph partitioning algorithm to address scalability and dynamicity. In addition, GDTM uses a rich language modeling approach based on the Skip-gram technique to account for sparsity. We run multiple experiments over a large-scale Twitter dataset to analyze the accuracy and scalability of GDTM and compare the results with four state-of-the-art approaches. The results show that GDTM outperforms the best approach by 11% on accuracy and runs an order of magnitude faster while producing 4 times better topic quality over standard evaluation metrics.
10.
  • Gogoulou, Evangelia, et al. (author)
  • Cross-lingual Transfer of Monolingual Models
  • 2022
  • In: 2022 Language Resources and Evaluation Conference, LREC 2022. - : European Language Resources Association (ELRA). - 9791095546726 ; pp. 948-955
  • Conference paper (peer-reviewed), abstract:
    • Recent studies in cross-lingual learning using multilingual models have cast doubt on the previous hypothesis that shared vocabulary and joint pre-training are the keys to cross-lingual generalization. We introduce a method for transferring monolingual models to other languages through continuous pre-training and study the effects of such transfer from four different languages to English. Our experimental results on GLUE show that the transferred models outperform an English model trained from scratch, independently of the source language. After probing the model representations, we find that model knowledge from the source language enhances the learning of syntactic and semantic knowledge in English. Licensed under CC-BY-NC-4.0.
Type of publication
conference paper (8)
journal article (7)
other publication (1)
Type of content
peer-reviewed (15)
other academic/artistic (1)
Author/editor
Sahlgren, Magnus (16)
Gogoulou, Evangelia (5)
Carlsson, Fredrik (4)
Boman, Magnus (2)
Nivre, Joakim, 1962- (2)
Öhman, Joey (2)
Isbister, Tim (2)
Börjeson, Love (2)
Nilsson, K. (1)
Girdzijauskas, Sarun ... (1)
Franzén, Stefan, 196 ... (1)
Olsson, Fredrik (1)
Ben Abdesslem, Fehmi (1)
Tahmasebi, Nina, 198 ... (1)
Lenci, Alessandro (1)
Paradis, Carita (1)
Adesam, Yvonne, 1975 (1)
Borin, Lars, 1957 (1)
Bouma, Gerlof, 1979 (1)
Forsberg, Markus, 19 ... (1)
Dannélls, Dana, 1976 (1)
Berdicevskis, Aleksa ... (1)
Morger, Felix (1)
Rosengren, Anders H. (1)
Skeppstedt, Maria, 1 ... (1)
Dürlich, Luise (1)
Malmsten, Martin (1)
Kerren, Andreas, 197 ... (1)
Volodina, Elena, 197 ... (1)
Kaldo, Viktor, Profe ... (1)
Alkathiri, Abdul Azi ... (1)
Giaretta, Lodovico, ... (1)
Karlgren, Jussi (1)
Guillou, Liane (1)
Persson, Sofie (1)
Andersson Schwarz, J ... (1)
Axelsson, Annika (1)
Axelsson, Sofia, 198 ... (1)
Holmberg, Sören, 194 ... (1)
Kurtz, Robin (1)
Lindahl, Anna, 1988 (1)
Rekathati, Faton (1)
Hengchen, Simon, 198 ... (1)
Dahlberg, Stefan (1)
Carlsson, Katarina S ... (1)
Isacsson, Nils (1)
Ylipää, Erik (1)
Bäckman, Malin (1)
Liu, Fangyu (1)
Verlinden, Severine (1)
Higher education institution
RISE (12)
Kungliga Tekniska Högskolan (6)
Göteborgs universitet (3)
Lunds universitet (2)
Linnéuniversitetet (2)
Uppsala universitet (1)
Linköpings universitet (1)
Mittuniversitetet (1)
Södertörns högskola (1)
Language
English (16)
Research subject (UKÄ/SCB)
Natural sciences (13)
Humanities (3)
Social sciences (2)
Medicine and health sciences (1)

