SwePub
Search the SwePub database

Result list for the search "WFRF:(Sayeed Asad 1980) srt2:(2023)"

Search: WFRF:(Sayeed Asad 1980) > (2023)

  • Results 1-5 of 5
1.
  • Boholm, Max, 1982, et al. (authors)
  • Political dogwhistles and community divergence in semantic change
  • 2023
  • In: Proceedings of the 4th Workshop on Computational Approaches to Historical Language Change. Association for Computational Linguistics.
  • Conference paper (peer-reviewed). Abstract:
    • We test whether the development of political dogwhistles can be observed using language change measures; specifically, does the development of a “hidden” message in a dogwhistle show up as differences in semantic change between communities over time? We take Swedish-language dogwhistles related to the on-going immigration debate and measure differences over time in their rate of semantic change between two Swedish-language community forums, Flashback and Familjeliv, the former representing an in-group for understanding the “hidden” meaning of the dogwhistles. We find that multiple measures are sensitive enough to detect differences over time, in that the meaning changes in Flashback over the relevant time period but not in Familjeliv. We also examine the sensitivity of multiple modeling approaches to semantic change in the matter of community divergence.
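
The abstract above describes detecting community-specific rates of semantic change for dogwhistle terms. Below is a minimal sketch of one standard measurement pipeline: per-community word2vec models for two time slices, aligned with orthogonal Procrustes, with change scored as cosine distance. The corpus variables and this particular measure are illustrative assumptions, not necessarily the paper's exact set of measures.

```python
# Sketch: per-community semantic change via aligned word embeddings.
# Assumes tokenized sentence lists per community and time slice, and that
# the term occurs in both slices' vocabularies.
import numpy as np
from scipy.linalg import orthogonal_procrustes
from gensim.models import Word2Vec

def align_to(source_wv, target_wv):
    """Rotation aligning source vectors to the target space (shared vocab)."""
    shared = [w for w in source_wv.index_to_key if w in target_wv.key_to_index]
    A = np.stack([source_wv[w] for w in shared])
    B = np.stack([target_wv[w] for w in shared])
    R, _ = orthogonal_procrustes(A, B)
    return R

def semantic_change(term, sents_t0, sents_t1):
    """Cosine distance of `term` between two time slices of one community."""
    m0 = Word2Vec(sents_t0, vector_size=100, min_count=5, workers=4)
    m1 = Word2Vec(sents_t1, vector_size=100, min_count=5, workers=4)
    v0 = m0.wv[term] @ align_to(m0.wv, m1.wv)
    v1 = m1.wv[term]
    return 1.0 - np.dot(v0, v1) / (np.linalg.norm(v0) * np.linalg.norm(v1))

# Hypothetical usage: a dogwhistle term should change more in the in-group
# forum (Flashback) than in the out-group forum (Familjeliv):
# semantic_change(term, flashback_early, flashback_late)
# semantic_change(term, familjeliv_early, familjeliv_late)
```
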
2.
  • Hong, Xudong, et al. (authors)
  • A surprisal oracle for active curriculum language modeling
  • 2023
  • In: Proceedings of the BabyLM Challenge at the 27th Conference on Computational Natural Language Learning, December 6-7, 2023, Singapore / Alex Warstadt, Aaron Mueller, Leshem Choshen, Ethan Wilcox, Chengxu Zhuang, Juan Ciro, Rafael Mosquera, Bhargavi Paranjabe, Adina Williams, Tal Linzen, Ryan Cotterell (Editors). Association for Computational Linguistics. ISBN 9781952148026.
  • Conference paper (peer-reviewed). Abstract:
    • We investigate the viability of surprisal in an active curriculum learning framework to train transformer-based language models in the context of the BabyLM Challenge. In our approach, the model itself selects the data to label (active learning) and schedules data samples based on a surprisal oracle (curriculum learning). We show that the models learn across all the tasks and datasets evaluated, making the technique a promising alternative approach to reducing the data requirements of language models. Our code is available at https://github.com/asayeed/ActiveBaby
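
The abstract above outlines the surprisal-oracle idea: the model scores candidate training data by its own surprisal and schedules it accordingly. The sketch below is a minimal illustration of that loop, assuming a HuggingFace causal LM ("gpt2" is a stand-in) and a low-to-high ordering; the authors' actual implementation is in the repository linked in the abstract.

```python
# Sketch: score samples by the model's own surprisal, then build a curriculum.
# The scoring model and the low->high ordering are illustrative assumptions;
# see https://github.com/asayeed/ActiveBaby for the real code.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
lm = AutoModelForCausalLM.from_pretrained("gpt2")

@torch.no_grad()
def surprisal(text: str) -> float:
    """Total negative log-likelihood of `text` under the current model."""
    ids = tok(text, return_tensors="pt").input_ids
    loss = lm(ids, labels=ids).loss          # mean NLL per predicted token
    return loss.item() * (ids.size(1) - 1)   # scale back to a total

def curriculum(pool: list[str]) -> list[str]:
    """Order the data pool by surprisal; re-rank between training rounds."""
    return sorted(pool, key=surprisal)
```
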
3.
  • Hong, X. D., et al. (authors)
  • Visual Writing Prompts: Character-Grounded Story Generation with Curated Image Sequences
  • 2023
  • In: Transactions of the Association for Computational Linguistics. ISSN 2307-387X. Vol. 11, pp. 565-581.
  • Journal article (peer-reviewed). Abstract:
    • Current work on image-based story generation suffers from the fact that the existing image sequence collections do not have coherent plots behind them. We improve visual story generation by producing a new image-grounded dataset, Visual Writing Prompts (VWP). VWP contains almost 2K selected sequences of movie shots, each including 5-10 images. The image sequences are aligned with a total of 12K stories which were collected via crowdsourcing given the image sequences and a set of grounded characters from the corresponding image sequence. Our new image sequence collection and filtering process has allowed us to obtain stories that are more coherent, diverse, and visually grounded compared to previous work. We also propose a character-based story generation model driven by coherence as a strong baseline. Evaluations show that our generated stories are more coherent, visually grounded, and diverse than stories generated with the current state-of-the-art model. Our code, image features, annotations and collected stories are available at .
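
To make the dataset description above concrete, here is a hypothetical sketch of what a VWP-style record and a character-grounded generation input might look like; every field name is invented for illustration and is not the dataset's actual schema.

```python
# Hypothetical shape of a VWP-style record; field names are invented for
# illustration and are not the dataset's actual schema.
from dataclasses import dataclass

@dataclass
class VWPItem:
    images: list[str]        # 5-10 movie-shot image paths, in order
    characters: list[str]    # grounded character names/descriptions
    story: str               # crowdsourced story aligned to the sequence

def generation_input(item: VWPItem) -> str:
    """Assemble a character-grounded prompt for a story generation model."""
    chars = ", ".join(item.characters)
    return f"Characters: {chars}\nImages: {len(item.images)} shots\nStory:"
```
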
4.
  • Hong, Xudong, et al. (authors)
  • Visual Coherence Loss for Coherent and Visually Grounded Story Generation
  • 2023
  • In: Proceedings of the Annual Meeting of the Association for Computational Linguistics. ISSN 0736-587X. ISBN 9781959429777.
  • Conference paper (peer-reviewed). Abstract:
    • Local coherence is essential for text generation models. We identify two important aspects of local coherence within the visual storytelling task: (1) the model needs to represent re-occurrences of characters within the image sequence in order to mention them correctly in the story; (2) character representations should enable us to find instances of the same characters and distinguish different characters. In this paper, we propose a loss function inspired by a linguistic theory of coherence for learning image sequence representations. We further propose combining features from an object detector and a face detector to construct stronger character features. To evaluate visual grounding that current reference-based metrics do not measure, we propose a character matching metric to check whether the models generate referring expressions correctly for characters in input image sequences. Experiments on a visual story generation dataset show that our proposed features and loss function are effective for generating more coherent and visually grounded stories. Our code is available at https://github.com/vwprompt/vcl.
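
The loss described above rewards character representations in which instances of the same character cluster and different characters separate. The sketch below expresses that property as a supervised-contrastive-style objective; it is one plausible formulation under those constraints, not the paper's actual visual coherence loss.

```python
# Sketch: a supervised-contrastive-style objective over character embeddings.
# Same-character instances across images attract, different characters repel.
# This is one plausible formulation of the stated property, not the paper's
# actual loss (see https://github.com/vwprompt/vcl).
import torch
import torch.nn.functional as F

def character_coherence_loss(emb: torch.Tensor, ids: torch.Tensor,
                             tau: float = 0.1) -> torch.Tensor:
    """emb: (N, D) character embeddings; ids: (N,) character identities."""
    z = F.normalize(emb, dim=-1)
    sim = (z @ z.t()) / tau
    self_mask = torch.eye(len(z), dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(self_mask, -1e9)             # exclude self-pairs
    pos = (ids.unsqueeze(0) == ids.unsqueeze(1)) & ~self_mask
    log_prob = sim - sim.logsumexp(dim=1, keepdim=True)
    has_pos = pos.any(dim=1)                           # anchors with a match
    loss = -(log_prob * pos).sum(1)[has_pos] / pos.sum(1)[has_pos]
    return loss.mean()
```
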
5.
  • Somashekarappa, Vidya, 1994, et al. (authors)
  • Neural Network Implementation of Gaze-Target Prediction for Human-Robot Interaction
  • 2023
  • In: IEEE International Workshop on Robot and Human Communication, RO-MAN. ISSN 1944-9445, eISSN 1944-9437. ISBN 9798350336702.
  • Conference paper (peer-reviewed). Abstract:
    • Gaze cues, which initiate an action or behaviour, are necessary for a responsive and intuitive interaction. Using gaze to signal intentions or request an action during conversation is conventional. We propose a new approach to estimate gaze using a neural network architecture, while considering the dynamic patterns of real-world gaze behaviour in natural interaction. The main goal is to provide a foundation for robots/avatars to communicate with humans using natural multimodal dialogue. Currently, robotic gaze systems are reactive in nature, but our Gaze-Estimation framework can perform unified gaze detection, gaze-object prediction and object-landmark heatmap generation in a single scene, which paves the way for a more proactive approach. We generated 2.4M gaze predictions of various types of gaze in a more natural setting (GHIGaze). The predicted and categorised gaze data can be used to automate contextualized robotic gaze-tracking behaviour in interaction. We evaluate the performance on a manually-annotated data set and a publicly available gaze-follow dataset. Compared to previously reported methods, our model performs better, with the closest angular error to that of a human annotator. As future work, we propose an implementable gaze architecture for a social robot from Furhat Robotics (https://furhatrobotics.com/).
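
As a purely architectural illustration of the gaze-target prediction task described above, the sketch below shows a two-branch gaze-follow-style skeleton: a scene encoder and a head-crop encoder fused into a gaze-target heatmap. The layer sizes and structure are assumptions, not the paper's actual network.

```python
# Sketch: two-branch gaze-target skeleton (scene + head crop -> heatmap).
# Layer sizes are illustrative; this is not the paper's actual architecture.
import torch
import torch.nn as nn

class GazeTargetNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.scene = nn.Sequential(                    # encodes the full scene
            nn.Conv2d(3, 32, 7, stride=2, padding=3), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.head = nn.Sequential(                     # encodes the head crop
            nn.Conv2d(3, 32, 7, stride=2, padding=3), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fuse = nn.Sequential(                     # per-pixel target logit
            nn.Conv2d(64 + 32, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 1, 1),
        )

    def forward(self, scene_img, head_crop):
        s = self.scene(scene_img)                      # (B, 64, H/4, W/4)
        h = self.head(head_crop)                       # (B, 32, 1, 1)
        h = h.expand(-1, -1, s.size(2), s.size(3))     # tile head features
        return self.fuse(torch.cat([s, h], dim=1))     # gaze-target heatmap

# heatmap = GazeTargetNet()(torch.rand(1, 3, 224, 224), torch.rand(1, 3, 64, 64))
```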