SwePub
Search the SwePub database

  Advanced search

Result list for the search "L773:2002 9764"

Search: L773:2002 9764

  • Results 1-14 of 14
1.
  • Bloom Ström, Eva-Marie, 1967, et al. (author)
  • Preparing a corpus of spoken Xhosa
  • 2023
  • In: Proceedings of the 2023 CLASP Conference on Learning with Small Data (LSD), Gothenburg and online 11–12 September 2023. - Gothenburg, Sweden : Association for Computational Linguistics. - 2002-9764. - 9798891760004
  • Conference paper (peer-reviewed)
2.
  • Cano Santín, José Miguel, 1990, et al. (author)
  • Fast visual grounding in interaction: bringing few-shot learning with neural networks to an interactive robot
  • 2020
  • In: Proceedings of Conference on Probability and Meaning (PaM-2020), Gothenburg, Sweden (online) / Christine Howes, Stergios Chatzikyriakidis, Adam Ek and Vidya Somashekarappa (eds.). - : Association for Computational Linguistics (ACL). - 2002-9764.
  • Conference paper (peer-reviewed), abstract:
    • The major shortcomings of using neural networks with situated agents are that in incremental interaction very few learning examples are available and that their visual sensory representations are quite different from image caption datasets. In this work we adapt and evaluate a few-shot learning approach, Matching Networks (Vinyals et al., 2016), to conversational strategies of a robot interacting with a human tutor in order to efficiently learn to categorise objects that are presented to it and also investigate to what degree transfer learning from pre-trained models on images from different contexts can improve its performance. We discuss the implications of such learning on the nature of semantic representations the system has learned.
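  A minimal Python sketch of the general Matching Networks idea (Vinyals et al., 2016) referred to in the entry above: a query embedding is classified by softmax-weighted similarity to a small labelled support set. The toy embeddings, function names and per-label scoring below are illustrative assumptions only; they are not the authors' implementation, which additionally handles robot perception and tutor interaction.

    import numpy as np

    def cosine(a, b):
        # Cosine similarity between two embedding vectors.
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

    def matching_classify(query, support_embs, support_labels):
        # Matching Networks-style prediction: softmax attention over the
        # support set weights the support labels; the heaviest label wins.
        sims = np.array([cosine(query, s) for s in support_embs])
        attn = np.exp(sims) / np.exp(sims).sum()
        scores = {}
        for weight, label in zip(attn, support_labels):
            scores[label] = scores.get(label, 0.0) + weight
        return max(scores, key=scores.get), scores

    # Toy few-shot episode: two classes with two support examples each.
    # Real embeddings would come from a trained vision encoder.
    support = [np.array([1.0, 0.1]), np.array([0.9, 0.0]),
               np.array([0.0, 1.0]), np.array([0.1, 0.9])]
    labels = ["mug", "mug", "book", "book"]
    query = np.array([0.8, 0.2])
    print(matching_classify(query, support, labels))  # -> ('mug', {...})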
3.
  • Cooper, Robin, 1947 (author)
  • Neural TTR and possibilities for learning
  • 2017
  • In: CLASP Papers in Computational Linguistics. Volume 1: Proceedings of the Conference on Logic and Machine Learning in Natural Language (LaML 2017), Gothenburg, 12–13 June 2017, edited by Simon Dobnik and Shalom Lappin. - Gothenburg : Department of Philosophy, Linguistics and Theory of Science. - 2002-9764.
  • Conference paper (peer-reviewed)
4.
  • Dobnik, Simon, 1977, et al. (author)
  • In Search of Meaning and Its Representations for Computational Linguistics
  • 2022
  • In: Proceedings of the 2022 CLASP Conference on (Dis)embodiment, Gothenburg and online 15–16 September 2022 / Simon Dobnik, Julian Grove and Asad Sayeed (eds.). - : Association for Computational Linguistics. - 2002-9764. - 9781955917674
  • Conference paper (peer-reviewed), abstract:
    • In this paper we examine different meaning representations that are commonly used in different natural language applications today and discuss their limits, both in terms of the aspects of the natural language meaning they are modelling and in terms of the aspects of the application for which they are used.
5.
  • Dobnik, Simon, 1977, et al. (author)
  • Modular mechanistic networks: On bridging mechanistic and phenomenological models with deep neural networks in natural language processing
  • 2017
  • In: CLASP Papers in Computational Linguistics: Proceedings of the Conference on Logic and Machine Learning in Natural Language (LaML 2017), Gothenburg, 12–13 June 2017 (ISSN application pending). - Gothenburg, Sweden : CLASP: Centre for Language and Studies in Probability, FLOV, University of Gothenburg. - 2002-9764.
  • Conference paper (peer-reviewed), abstract:
    • Natural language processing (NLP) can be done using either top-down (theory-driven) or bottom-up (data-driven) approaches, which we call mechanistic and phenomenological, respectively. The approaches are frequently considered to stand in opposition to each other. Examining some recent approaches in deep learning we argue that deep neural networks incorporate both perspectives and, furthermore, that leveraging this aspect of deep learning may help in solving complex problems within language technology, such as modelling language and perception in the domain of spatial cognition.
6.
  • Dobnik, Simon, 1977, et al. (author)
  • On the role of resources in the age of large language models
  • 2023
  • In: Proceedings of the 2023 CLASP Conference on Learning with Small Data (LSD), Gothenburg and online 11–12 September 2023 / Editors: Ellen Breitholtz, Shalom Lappin, Sharid Loáiciga, Nikolai Ilinykh, and Simon Dobnik. - Stroudsburg, PA : Association for Computational Linguistics. - 2002-9764. - 9798891760004
  • Conference paper (peer-reviewed), abstract:
    • We evaluate the role of expert-based domain knowledge and resources in relation to training large language models by referring to our work on training and evaluating neural models, also in under-resourced scenarios which we believe also informs training models for “well-resourced” languages and domains. We argue that our community needs both large-scale datasets and small but high-quality data based on expert knowledge and that both activities should work hand-in-hand.
7.
  • Gander, Anna Jia, et al. (author)
  • Micro-feedback as cues to understanding in communication
  • 2020
  • In: CLASP Papers in Computational Linguistics, Vol. 2. Dialogue and Perception - Extended papers from DaP2018 / edited by Christine Howes, Simon Dobnik and Ellen Breitholtz. - Gothenburg, Sweden : Centre for Linguistic Theory and Studies in Probability (CLASP). - 2002-9764.
  • Conference paper (peer-reviewed), abstract:
    • Understanding in communication is studied in eight video-recorded spontaneous face-to-face dyadic first-encounter conversations between Chinese and Swedish participants. Micro-feedback (unobtrusive expressions used in real-time conversation such as nods and yeah) related to sufficient understanding, misunderstanding, and non-understanding is investigated with regard to auditory and visual modalities, typical unimodal and multimodal expressions, and prosodic features. Results indicate that unimodal head movements exclusively show sufficient understanding. Misunderstanding and non-understanding are more related to multi-modal expressions than unimodal ones. For sufficient understanding, the most commonly used expressions are yeah, okay, m, nods, nod, smile, yeah + nods, chuckle, and yeah + nod (associated with a flat pitch contour). For misunderstanding, half of the multimodal expressions contain nods and yeah or a noun phrase associated with hesitation (and a falling pitch contour). For non-understanding, unimodal micro-feedback sorry, what do you mean, eyebrow raise, and gaze at and multimodal micro-feedback head forward or eyebrow raise combined with sorry, what, or huh are most frequently used, expressing uncertainty and eliciting further information (in association with a rising pitch contour).
8.
  •  
9.
  • Gregoromichelaki, Eleni, et al. (author)
  • Actionism in syntax and semantics.
  • 2020
  • In: CLASP Papers in Computational Linguistics: Dialogue and Perception. - Extended papers from DaP2018 / edited by Christine Howes, Simon Dobnik and Ellen Breitholtz. - Gothenburg : Centre for Linguistic Theory and Studies in Probability (CLASP). - 2002-9764.
  • Conference paper (peer-reviewed)
10.
  • Kelleher, John D., et al. (author)
  • Referring to the recently seen: reference and perceptual memory in situated dialogue
  • 2020
  • In: CLASP Papers in Computational Linguistics, Vol. 2. Dialogue and Perception - Extended papers from DaP2018 / edited by Christine Howes, Simon Dobnik and Ellen Breitholtz. - Gothenburg, Sweden : CLASP, Centre for Language and Studies in Probability. - 2002-9764.
  • Conference paper (peer-reviewed), abstract:
    • From theoretical linguistic and cognitive perspectives, situated dialogue systems are interesting as they provide ideal test-beds for investigating the interaction between language and perception. To date, however, much of the work on situated dialogue has focused on resolving anaphoric or exophoric references. This paper opens up the question of how perceptual memory and linguistic references interact, and the challenges that this poses to computational models of perceptually grounded dialogue.
11.
  • Kelleher, John D., et al. (author)
  • What is not where: the challenge of integrating spatial representations into deep learning architectures
  • 2017
  • In: CLASP Papers in Computational Linguistics: Proceedings of the Conference on Logic and Machine Learning in Natural Language (LaML 2017), Gothenburg, 12–13 June 2017 / edited by Simon Dobnik and Shalom Lappin. - Gothenburg, Sweden : CLASP: Centre for Language and Studies in Probability, FLOV, University of Gothenburg. - 2002-9764.
  • Conference paper (peer-reviewed), abstract:
    • This paper examines to what degree current deep learning architectures for image caption generation capture spatial language. On the basis of the evaluation of examples of generated captions from the literature we argue that systems capture what objects are in the image data but not where these objects are located: the captions generated by these systems are the output of a language model conditioned on the output of an object detector that cannot capture fine-grained location information. Although language models provide useful knowledge for image captions, we argue that deep learning image captioning architectures should also model geometric relations between objects.
12.
  • Mazzocconi, Chiara, et al. (author)
  • Laughables and laughter perception: Preliminary investigations
  • 2020
  • In: CLASP Papers in Computational Linguistics, Vol. 2. Dialogue and Perception - Extended papers from DaP2018 / edited by Christine Howes, Simon Dobnik and Ellen Breitholtz. - Gothenburg : Centre for Linguistic Theory and Studies in Probability (CLASP). - 2002-9764.
  • Conference paper (peer-reviewed)
13.
  • Ranta, Aarne, 1963 (author)
  • Explainable Machine Translation with Interlingual Trees as Certificates
  • 2017
  • In: CLASP Papers in Computational Linguistics. - 2002-9764.
  • Conference paper (peer-reviewed), abstract:
    • Explainable Machine Translation (XMT) is an instance of Explainable Artificial Intelligence (XAI). An XAI program returns not only an output but also an explanation of how the output was obtained. This helps the user to assess the reliability of the result, even if the AI program itself is a black box. As a promising candidate for explanations in MT, we consider interlingual meaning representations, namely abstract syntax trees in the sense of Grammatical Framework (GF). An abstract syntax tree encodes the translatable content in a way that enables accurate target language generation; the main problem is to find the right tree in parsing. This paper investigates a hybrid architecture where the tree is obtained by a black-box robust parser, such as a neural dependency parser. As long as the parser returns a tree from which the target language is generated in a trusted way, the tree serves as an explanation that enables the user to assess the reliability of the whole chain of translation.
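  A small hypothetical Python sketch of the "tree as certificate" pipeline described in the entry above: a parser produces an abstract syntax tree, the target language is generated from that tree, and the tree is returned alongside the translation so the user can inspect what was translated. The toy grammar, lexicons and function names are invented for illustration and are not the Grammatical Framework API.

    from dataclasses import dataclass

    @dataclass
    class Pred:
        # Toy abstract syntax: a subject-verb-object predication.
        subj: str
        verb: str
        obj: str

    # Toy concrete syntaxes (lexicons only; word-order rules are omitted
    # because both languages here happen to share SVO order).
    ENGLISH = {"cat_N": "the cat", "love_V": "loves", "fish_N": "the fish"}
    SWEDISH = {"cat_N": "katten", "love_V": "älskar", "fish_N": "fisken"}

    def linearize(tree, lexicon):
        return f"{lexicon[tree.subj]} {lexicon[tree.verb]} {lexicon[tree.obj]}"

    def translate(tree):
        # Return the target sentence together with the tree that licensed it,
        # so the tree can serve as a human-inspectable certificate.
        return {"target": linearize(tree, SWEDISH), "certificate": tree}

    # In the hybrid architecture, the tree would come from a robust
    # (for example neural) parser of the source sentence.
    tree = Pred(subj="cat_N", verb="love_V", obj="fish_N")
    print(linearize(tree, ENGLISH))  # the cat loves the fish
    print(translate(tree))           # {'target': 'katten älskar fisken', ...}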
14.
  • Sayeed, Asad, 1980 (author)
  • Towards an annotation framework for incremental scope specification update
  • 2017
  • In: CLASP Papers in Computational Linguistics, Proceedings of the Conference on Logic and Machine Learning in Natural Language (LaML 2017), Gothenburg, 12–13 June 2017 / Simon Dobnik and Shalom Lappin (eds.). - Gothenburg : University of Gothenburg. - 2002-9764.
  • Conference paper (peer-reviewed), abstract:
    • This position paper outlines a framework for accommodating formal models of incremental scope processing in probabilistic terms through corpus annotation. Theories of scope ambiguity resolution encompass a number of overlapping and potentially conflicting strategies, including underspecification, quantifier raising, and employment of world-knowledge. Considering the conflicting evidence for the specific roles that these strategies play and the relationships between them, it may be possible to mediate their relationships through a probabilistic model that includes weights for each factor at each time step. This, however, requires the development of corpus resources. Here we propose the representation of scope specification operations as “decision tags” in future scope corpus annotation efforts.
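  One hypothetical way the "decision tags" mentioned in the entry above could be represented as an annotation record, sketched in Python. The field names, operation labels and weights are invented for illustration; the position paper proposes the framework rather than this concrete scheme.

    from dataclasses import dataclass, field

    @dataclass
    class ScopeDecision:
        # One incremental scope-specification step: where in the sentence it
        # happens, which strategy licenses it, and a weight for that factor.
        position: int      # token index at which the decision is made
        operation: str     # e.g. "underspecify", "raise", "world-knowledge"
        quantifier: str    # surface form of the affected quantifier
        weight: float      # per-factor weight in a probabilistic model

    @dataclass
    class ScopeAnnotation:
        sentence: str
        decisions: list = field(default_factory=list)

    # Toy annotation of the classic ambiguity "every student read a book".
    ann = ScopeAnnotation("every student read a book")
    ann.decisions.append(ScopeDecision(0, "underspecify", "every", 0.6))
    ann.decisions.append(ScopeDecision(3, "raise", "a", 0.3))
    print(ann)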