SwePub
Search the SwePub database



Search: WFRF:(Xie Qianqian)

  • Result 1-4 of 4
1.
  • Li, Jianquan, et al. (author)
  • Can Language Models Make Fun? A Case Study in Chinese Comical Crosstalk
  • 2023
  • In: Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). - Stroudsburg, PA : Association for Computational Linguistics. - 9781959429722 ; , s. 7581-7596
  • Conference paper (peer-reviewed). Abstract:
    • Language is the principal tool for human communication, and humor is one of its most attractive aspects. Producing natural language as humans do, a.k.a. Natural Language Generation (NLG), has been widely used in dialogue systems, chatbots, and text summarization, as well as in AI-Generated Content (AIGC), e.g., idea generation and scriptwriting. However, the humor aspect of natural language is relatively under-investigated, especially in the age of pre-trained language models. In this work, we aim to preliminarily test whether NLG can generate humor as humans do. We build the largest dataset of Chinese Comical Crosstalk scripts (C3 for short), drawn from 'Xiangsheng' (相声), a popular Chinese performing art dating from the 1800s. We benchmark various generation approaches, including Seq2seq models trained from scratch, fine-tuned middle-scale PLMs, and large-scale PLMs with and without fine-tuning. We also conduct a human assessment, showing that 1) large-scale pretraining largely improves crosstalk generation quality; and 2) even the scripts generated by the best PLM fall far short of expectations. We conclude that humor generation can be greatly improved with large-scale PLMs but is still in its infancy. The data and benchmarking code are publicly available at https://github.com/anonNo2/crosstalk-generation. © 2023 Association for Computational Linguistics.
2.
  • Wang, Benyou, et al. (author)
  • Pre-trained Language Models in Biomedical Domain : A Systematic Survey
  • 2024
  • In: ACM Computing Surveys. - New York, NY : Association for Computing Machinery (ACM). - 0360-0300 .- 1557-7341. ; 56:3
  • Research review (peer-reviewed). Abstract:
    • Pre-trained language models (PLMs) have become the de facto paradigm for most natural language processing tasks. This also benefits the biomedical domain: researchers from the informatics, medicine, and computer science communities have proposed various PLMs trained on biomedical datasets, e.g., biomedical text, electronic health records, and protein and DNA sequences, for various biomedical tasks. However, the cross-disciplinary character of biomedical PLMs hinders their spread across communities; some existing works are isolated from each other, without comprehensive comparison and discussion. It is nontrivial to write a survey that not only systematically reviews recent advances in biomedical PLMs and their applications but also standardizes terminology and benchmarks. This article summarizes the recent progress of pre-trained language models in the biomedical domain and their applications in downstream biomedical tasks. In particular, we discuss the motivations for PLMs in the biomedical domain and introduce the key concepts of pre-trained language models. We then propose a taxonomy of existing biomedical PLMs that systematically categorizes them from various perspectives. Their applications in downstream biomedical tasks are also discussed exhaustively. Finally, we describe various limitations and future trends, aiming to inspire future research. © 2023 Copyright held by the owner/author(s). Publication rights licensed to ACM.
3.
  • Xie, Qianqian, et al. (author)
  • Knowledge-enhanced Graph Topic Transformer for Explainable Biomedical Text Summarization
  • 2024
  • In: IEEE journal of biomedical and health informatics. - Piscataway, NJ : IEEE. - 2168-2194 .- 2168-2208. ; 8:4, s. 1836-1847
  • Journal article (peer-reviewed). Abstract:
    • Given the overwhelming and rapidly increasing volume of published biomedical literature, automatic biomedical text summarization has long been a highly important task. Recently, great advances in the performance of biomedical text summarization have been driven by fine-tuned pre-trained language models (PLMs). However, existing PLM-based summarization methods do not capture domain-specific knowledge. This can result in generated summaries with low coherence, redundant sentences, or the omission of important domain knowledge conveyed in the full-text document. Furthermore, the black-box nature of transformers means that they lack explainability, i.e., it is not clear to users how and why a summary was generated. Domain-specific knowledge and explainability are crucial for the accuracy and transparency of biomedical text summarization methods. In this article, we address these issues by proposing a novel domain-knowledge-enhanced graph topic transformer (DORIS) for explainable biomedical text summarization. The model integrates a graph neural topic model and domain-specific knowledge from the Unified Medical Language System (UMLS) into a transformer-based PLM to improve explainability and accuracy. Experimental results on four biomedical literature datasets show that our model outperforms existing state-of-the-art (SOTA) PLM-based methods on biomedical extractive summarization. Furthermore, our use of graph neural topic modeling makes the model explainable, i.e., it is straightforward for users to understand how and why the model selects particular sentences for inclusion in the summary. The domain-specific knowledge also helps our model learn more coherent topics, which better explains its performance. © IEEE
4.
  • Yuan, Ye, et al. (author)
  • On the divergent effects of stress on the self-organizing nanostructure due to spinodal decomposition in duplex stainless steel
  • 2024
  • In: Materials Science & Engineering. - : Elsevier Ltd. - 0921-5093 .- 1873-4936. ; 898
  • Journal article (peer-reviewed). Abstract:
    • Duplex stainless steels suffer from serious embrittlement due to the self-organization of the nanostructure caused by phase separation (PS) in ferrite. As duplex stainless steel (DSS) components are often subjected to stress during service, the effect of elastic tensile stress (ETS) on PS in alloy 2507 has been investigated in this study. The alloy was aged at 400 and 450 °C for different times under an applied ETS. The nanostructure evolution and mechanical properties were analyzed using small-angle neutron scattering and analytical transmission electron microscopy, as well as Vickers-hardness and nanoindentation measurements. The results show that the applied ETS can suppress spinodal decomposition (SD) in the ferrite of DSS 2507, possibly because ETS increases the critical compositional fluctuation wavelength for SD and thus increases the diffusion distance and the barrier to the onset of SD. The suppressive effect is more pronounced at longer aging times for the same applied stress level. The suppressive effect of ETS also appears to vary non-monotonically with stress level, with a medium stress level likely having the largest suppressive effect. The suppression of PS by ETS may significantly delay embrittlement and extend the service life of DSS components within the miscibility gap. These results further the understanding of the effect of ETS on PS, and elastic stress should be considered in the configuration of computational simulations of PS and in the evaluation of the service life of DSS components.
