SwePub

  • Li, Jianquan, The Chinese University of Hong Kong, Shenzhen, China (author)

Can Language Models Make Fun? A Case Study in Chinese Comical Crosstalk

  • Article/chapter, English, 2023

Publisher, publication year, extent ...

  • Stroudsburg, PA: Association for Computational Linguistics, 2023
  • print (rdacarrier)

Identifiers

  • LIBRIS-ID: oai:DiVA.org:hh-52061
  • URI: https://urn.kb.se/resolve?urn=urn:nbn:se:hh:diva-52061

Supplementary language information

  • Language: English
  • Abstract in: English

Part of subdatabase

Classification

  • Subject category: ref (swepub-contenttype)
  • Subject category: kon (swepub-publicationtype)

Notes

  • Language is the principal tool for human communication, and humor is one of its most attractive aspects. Producing natural language the way humans do with computers, a.k.a. Natural Language Generation (NLG), is widely used for dialogue systems, chatbots, and text summarization, as well as for AI-Generated Content (AIGC) such as idea generation and scriptwriting. However, the humor aspect of natural language is relatively under-investigated, especially in the age of pre-trained language models. In this work, we aim to preliminarily test whether NLG can generate humor as humans do. We build the largest dataset of Chinese Comical Crosstalk scripts (called C3 for short), covering a popular Chinese performing art known as 'Xiangsheng' or '相声' that dates back to the 1800s. We benchmark various generation approaches, including Seq2seq models trained from scratch, fine-tuned middle-scale PLMs, and large-scale PLMs with and without fine-tuning. Moreover, we conduct a human assessment, showing that 1) large-scale pretraining largely improves crosstalk generation quality; and 2) even the scripts generated by the best PLM are far from what we expect. We conclude that humor generation could be largely improved with large-scale PLMs, but it is still in its infancy. The data and benchmarking code are publicly available at https://github.com/anonNo2/crosstalk-generation. © 2023 Association for Computational Linguistics.
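
The "fine-tuned middle-scale PLM" setting named in the abstract can be pictured with a minimal sketch: fine-tune a mid-sized Chinese causal language model on crosstalk scripts, then sample a continuation. This is not the authors' code (their data and benchmarking code live in the repository linked above); the checkpoint name, the file name c3_scripts.txt, the prompt, and the hyperparameters are illustrative assumptions using the Hugging Face transformers and datasets APIs.

    # Hypothetical sketch of causal-LM fine-tuning on crosstalk scripts;
    # not the paper's benchmark code.
    from datasets import load_dataset
    from transformers import (AutoModelForCausalLM, AutoTokenizer,
                              DataCollatorForLanguageModeling, Trainer,
                              TrainingArguments)

    model_name = "uer/gpt2-chinese-cluecorpussmall"  # assumed example checkpoint
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(model_name)

    # Assumption: one crosstalk script (or dialogue turn) per line of a text file.
    dataset = load_dataset("text", data_files={"train": "c3_scripts.txt"})
    tokenized = dataset["train"].map(
        lambda batch: tokenizer(batch["text"], truncation=True, max_length=512),
        batched=True, remove_columns=["text"])

    trainer = Trainer(
        model=model,
        args=TrainingArguments(output_dir="crosstalk-ft",
                               num_train_epochs=3,
                               per_device_train_batch_size=4),
        train_dataset=tokenized,
        # mlm=False gives the standard next-token prediction objective.
        data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
    )
    trainer.train()

    # Sample a continuation for a hypothetical opening line.
    prompt = "甲：各位观众朋友们，大家好！"
    inputs = tokenizer(prompt, return_tensors="pt")
    output = model.generate(inputs["input_ids"], max_new_tokens=100,
                            do_sample=True, top_p=0.9)
    print(tokenizer.decode(output[0], skip_special_tokens=True))

Samples produced this way are the kind of generated scripts the abstract says were judged by human assessors alongside the other benchmarked approaches.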

Subject headings and genre terms

Added entries (persons, institutions, conferences, titles ...)

  • Wu, Xiangbo, The Chinese University of Hong Kong, Shenzhen, China (author)
  • Liu, Xiaokang, The Chinese University of Hong Kong, Shenzhen, China (author)
  • Xie, Qianqian, University of Manchester, Manchester, United Kingdom (author)
  • Tiwari, Prayag, 1991-, Högskolan i Halmstad, Akademin för informationsteknologi (Swepub:hh) pratiw (author)
  • Wang, Benyou, The Chinese University of Hong Kong, Shenzhen, China; Shenzhen Research Institute of Big Data, Shenzhen, China (author)
  • The Chinese University of Hong Kong, Shenzhen, China; University of Manchester, Manchester, United Kingdom (creator_code:org_t)

Related titles

  • Part of: Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Stroudsburg, PA: Association for Computational Linguistics, pp. 7581-7596. ISBN 9781959429722

Internet link

