SwePub
Search the SwePub database


Results for the search "WFRF:(Ghanimifard Mehdi 1984)"


  • Result 1-10 of 19
1.
  • Bizzoni, Yuri, 1989, et al. (author)
  • Bigrams and BiLSTMs: Two neural networks for sequential metaphor detection
  • 2018
  • In: Proceedings of the Workshop on Figurative Language Processing at NAACL HLT 2018, 6 June 2018, New Orleans, Louisiana. - New Orleans, Louisiana, USA : Association for Computational Linguistics (ACL). - 9781948087155
  • Conference paper (peer-reviewed). Abstract:
    • We present and compare two alternative deep neural architectures to perform word-level metaphor detection on text: a bi-LSTM model and a new structure based on recursive feedforward concatenation of the input. We discuss different versions of such models and the effect that input manipulation (specifically, reducing the length of sentences and introducing concreteness scores for words) has on their performance.
  •  
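As a rough illustration of the input manipulation described in the abstract above, the sketch below concatenates a per-word concreteness score onto each word's embedding before the sequence reaches a model. This is a toy numpy example with invented names, dimensions, and scores, not the authors' code.

```python
import numpy as np

# Hypothetical toy data: tiny 4-d embeddings and per-word concreteness
# scores in [0, 1] (all values invented for illustration).
rng = np.random.default_rng(0)
embeddings = {w: rng.normal(size=4) for w in ["the", "idea", "blossomed"]}
concreteness = {"the": 0.1, "idea": 0.2, "blossomed": 0.9}

def featurize(sentence):
    """Concatenate each word's embedding with its concreteness score."""
    return np.stack([
        np.concatenate([embeddings[w], [concreteness.get(w, 0.5)]])
        for w in sentence
    ])

X = featurize(["the", "idea", "blossomed"])
print(X.shape)  # (3, 5): 4 embedding dims plus 1 concreteness feature
```

A sequence model such as the bi-LSTM would then consume `X` row by row; the extra column is what lets the model see how concrete each word is.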
2.
  • Bizzoni, Yuri, 1989, et al. (author)
  • "Deep" Learning : Detecting Metaphoricity in Adjective-Noun Pairs
  • 2017
  • In: Proceedings of the Workshop on Stylistic Variation, EMNLP2017, Copenhagen, Denmark, September 7–11, 2017. - Copenhagen, Denmark : Association for Computational Linguistics (ACL). - 9781945626999
  • Conference paper (peer-reviewed). Abstract:
    • Metaphor is one of the most studied and widespread figures of speech and an essential element of individual style. In this paper we look at metaphor identification in Adjective-Noun pairs. We show that using a single neural network combined with pre-trained vector embeddings can outperform the state of the art in terms of accuracy. Specifically, the approach presented in this paper is based on two ideas: a) transfer learning via using pre-trained vectors representing adjective-noun pairs, and b) a neural network as a model of composition that predicts a metaphoricity score as output. We present several different architectures for our system and evaluate their performances. Variations on dataset size and on the kinds of embeddings are also investigated. We show considerable improvement over the previous approaches both in terms of accuracy and with respect to the size of annotated training data.
  •  
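The composition idea in this abstract (a network mapping an adjective-noun vector pair to a metaphoricity score) can be sketched as a single forward pass. The architecture below is a generic one-hidden-layer illustration with random toy weights, not the paper's actual model or trained parameters.

```python
import numpy as np

rng = np.random.default_rng(1)
DIM = 4  # toy embedding size; the paper uses pre-trained vectors

def metaphoricity_score(adj_vec, noun_vec, W1, b1, w2, b2):
    """Composition model sketch: concatenate the pair, apply one
    ReLU hidden layer, and squash to a score in (0, 1) with a sigmoid."""
    x = np.concatenate([adj_vec, noun_vec])
    h = np.maximum(0.0, W1 @ x + b1)              # hidden layer
    return 1.0 / (1.0 + np.exp(-(w2 @ h + b2)))   # metaphoricity score

# Random (untrained) toy parameters, purely for illustration.
W1 = rng.normal(size=(8, 2 * DIM)); b1 = np.zeros(8)
w2 = rng.normal(size=8); b2 = 0.0
score = metaphoricity_score(rng.normal(size=DIM), rng.normal(size=DIM),
                            W1, b1, w2, b2)
print(0.0 < score < 1.0)  # True: the sigmoid bounds the score
```

In training, the parameters would be fit so that metaphorical pairs (e.g. "deep thought") score higher than literal ones (e.g. "deep water").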
3.
  • Cano Santín, José Miguel, 1990, et al. (author)
  • Fast visual grounding in interaction: bringing few-shot learning with neural networks to an interactive robot
  • 2020
  • In: Proceedings of Conference on Probability and Meaning (PaM-2020), Gothenburg, Sweden (online) / Christine Howes, Stergios Chatzikyriakidis, Adam Ek and Vidya Somashekarappa (eds.). - : Association for Computational Linguistics (ACL). - 2002-9764.
  • Conference paper (peer-reviewed). Abstract:
    • The major shortcomings of using neural networks with situated agents are that in incremental interaction very few learning examples are available and that their visual sensory representations are quite different from image caption datasets. In this work we adapt and evaluate a few-shot learning approach, Matching Networks (Vinyals et al., 2016), to conversational strategies of a robot interacting with a human tutor in order to efficiently learn to categorise objects that are presented to it and also investigate to what degree transfer learning from pre-trained models on images from different contexts can improve its performance. We discuss the implications of such learning on the nature of semantic representations the system has learned.
  •  
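The core of the Matching Networks approach (Vinyals et al., 2016) used in this work is classification by attention over a small labelled support set rather than by updating network weights. The sketch below shows that mechanism with raw 2-d vectors and softmax over cosine similarities; the real system embeds camera images first, and the labels and vectors here are invented.

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def match(query, support):
    """Matching-Networks-style inference: softmax attention over the
    support set's similarities, then sum attention mass per label."""
    labels, vecs = zip(*support)
    sims = np.array([cosine(query, v) for v in vecs])
    att = np.exp(sims) / np.exp(sims).sum()
    scores = {}
    for lab, a in zip(labels, att):
        scores[lab] = scores.get(lab, 0.0) + a
    return max(scores, key=scores.get)

# One labelled example per category is enough to classify new queries.
support = [("mug", np.array([1.0, 0.1])), ("book", np.array([0.0, 1.0]))]
print(match(np.array([0.9, 0.2]), support))  # mug
```

This is what makes the approach attractive for tutoring dialogues: adding a new object category only requires appending an example to the support set.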
4.
  • Cano Santín, José Miguel, 1990, et al. (author)
  • Interactive visual grounding with neural networks
  • 2019
  • In: Proceedings of LondonLogue - Semdial 2019: The 23rd Workshop on the Semantics and Pragmatics of Dialogue, London, 4-6 September 2019. - London, UK : Queen Mary University of London. - 2308-2275.
  • Conference paper (peer-reviewed). Abstract:
    • Training strategies for neural networks are not suitable for real-time human-robot interaction. Few-shot learning approaches have been developed for low-resource scenarios but without the usual teacher/learner supervision. In this work we present a combination of both: a situated dialogue system to teach object names to a robot from its camera images using Matching Networks (Vinyals et al., 2016). We compare the performance of the system with transfer learning from pre-trained models and different conversational strategies with a human tutor.
  •  
5.
  • Dobnik, Simon, 1977, et al. (author)
  • Exploring the Functional and Geometric Bias of Spatial Relations Using Neural Language Models
  • 2018
  • In: Proceedings of SpLU 2018 at NAACL-HLT 2018, June 6, 2018, New Orleans, Louisiana / Parisa Kordjamshidi, Archna Bhatia, James Pustejovsky, Marie-Francine Moens (eds.). - New Orleans, Louisiana, USA : Association for Computational Linguistics (ACL). - 9781948087216
  • Conference paper (peer-reviewed). Abstract:
    • The challenge for computational models of spatial descriptions for situated dialogue systems is the integration of information from different modalities. The semantics of spatial descriptions are grounded in at least two sources of information: (i) a geometric representation of space and (ii) the functional interaction of the related objects. We train several neural language models on descriptions of scenes from a dataset of image captions and examine whether the functional or geometric bias of spatial descriptions reported in the literature is reflected in the estimated perplexity of these models. The results of these experiments have implications for the creation of models of spatial lexical semantics for human-robot dialogue systems. Furthermore, they also provide insight into the kinds of semantic knowledge captured by neural language models trained on spatial descriptions, which has implications for image captioning systems.
  •  
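The evaluation signal in this abstract is perplexity: a language model trained on scene descriptions assigns lower perplexity to descriptions that match its learned biases. The toy sketch below uses an add-one-smoothed bigram model on an invented three-sentence corpus purely to show how perplexity ranks descriptions; the paper itself uses neural language models on image-caption data.

```python
import math
from collections import Counter

# Invented toy corpus of spatial descriptions (illustration only).
corpus = [["the", "cup", "is", "on", "the", "table"],
          ["the", "book", "is", "on", "the", "shelf"],
          ["the", "cat", "is", "under", "the", "table"]]

bigrams = Counter((s[i], s[i + 1]) for s in corpus for i in range(len(s) - 1))
unigrams = Counter(w for s in corpus for w in s)
V = len(unigrams)

def perplexity(sentence):
    """Add-one-smoothed bigram perplexity; lower means a better fit."""
    logp = sum(math.log((bigrams[(a, b)] + 1) / (unigrams[a] + V))
               for a, b in zip(sentence, sentence[1:]))
    return math.exp(-logp / (len(sentence) - 1))

# A familiar configuration scores lower perplexity than a reversed one.
print(perplexity(["the", "cup", "is", "on", "the", "table"]) <
      perplexity(["the", "table", "is", "on", "the", "cup"]))  # True
```

The paper's question is whether such perplexity differences line up with the functional/geometric biases reported in the psycholinguistic literature.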
6.
  • Dobnik, Simon, 1977, et al. (author)
  • Language, Action and Perception (APL-ESSLLI): Lecture Notes of a Course in Language and Computation
  • 2019
  • In: ESSLLI 2019, 31st European Summer School on Logic, Language and Information, 5–6 August 2019, Riga, Latvia. - Riga, Latvia : University of Latvia.
  • Conference paper (other academic/artistic). Abstract:
    • The course gives a survey of theory and practical computational implementations of how natural language interacts with the physical world through action and perception. We will look at topics such as semantic theories and computational approaches to modelling natural language, action and perception (grounding), situated dialogue systems, integrated robotic systems, grounding of language in action and perception, generation and interpretation of scene descriptions from images and videos, spatial cognition, and others.
  •  
7.
  • Dobnik, Simon, 1977, et al. (author)
  • Spatial descriptions on a functional-geometric spectrum: the location of objects
  • 2020
  • In: Spatial Cognition XII: Proceedings of the 12th International Conference, Spatial Cognition 2020, Riga, Latvia, August 26–28, 2020. - Cham, Switzerland : Springer Nature Switzerland AG. - 0302-9743 .- 1611-3349. - 9783030579821
  • Conference paper (peer-reviewed). Abstract:
    • Experimental research on spatial descriptions shows that their semantics are dependent on several modalities, among others (i) a geometric representation of space ("where", geometric knowledge) and (ii) dynamic kinematic routines between objects that are related ("what", functional knowledge). In this paper we examine whether the geometric and functional bias of spatial relations is also reflected in large corpora of images and their corresponding descriptions. In particular, we examine whether the variation in object locations in the usage of a relation is a predictor of that relation's functional or geometric bias. Previous experimental psycholinguistic work has examined the bias of some spatial relations; however, our corpus-based computational analysis allows us to examine the bias of spatial relations and verbs beyond those that have been tested experimentally. Our findings also have implications for building computational image description systems, as we demonstrate what kind of representational knowledge is required to model the spatial relations contained in them.
  •  
8.
  • Ek, Adam, 1990, et al. (author)
  • Synthetic Propaganda Embeddings to Train a Linear Projection
  • 2019
  • In: Proceedings of The 2nd Workshop on NLP for Internet Freedom : Censorship, Disinformation, and Propaganda, November 4, 2019, Hong Kong / Anna Feldman, Giovanni Da San Martino, Alberto Barrón-Cedeño, Chris Brew, Chris Leberknight, Preslav Nakov (Editors). - Stroudsburg, PA : Association for Computational Linguistics. - 9781950737895
  • Conference paper (peer-reviewed). Abstract:
    • This paper presents a method of detecting fine-grained categories of propaganda in text. Given a sentence, our method aims to identify a span of words and predict the type of propaganda used. To detect propaganda, we explore a method for extracting features of propaganda from contextualized embeddings without fine-tuning the large parameters of the base model. We show that by generating synthetic embeddings we can train a linear function with ReLU activation to extract useful labeled embeddings from an embedding space generated by a general-purpose language model. We also introduce an inference technique to detect continuous spans in sequences of propaganda tokens in sentences. A result of the ensemble model was submitted to the first shared task in fine-grained propaganda detection at NLP4IF as Team Stalin. In this paper, we provide additional analysis regarding our method of detecting spans of propaganda with synthetically generated representations.
  •  
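One step the abstract mentions, turning per-token propaganda predictions into continuous spans, can be sketched as merging maximal runs of identically labelled tokens. This is a generic span-extraction sketch with invented label names, not the authors' actual inference technique.

```python
def extract_spans(labels, outside="O"):
    """Merge maximal contiguous runs of non-O token labels into
    (start, end_exclusive, label) spans."""
    spans, start, current = [], None, None
    for i, lab in enumerate(labels + [outside]):  # sentinel closes a final run
        if start is not None and lab != current:
            spans.append((start, i, current))
            start = current = None
        if lab != outside and start is None:
            start, current = i, lab
    return spans

# Hypothetical per-token predictions for a six-token sentence.
preds = ["O", "Slogan", "Slogan", "O", "Loaded", "Loaded"]
print(extract_spans(preds))  # [(1, 3, 'Slogan'), (4, 6, 'Loaded')]
```

A sentence with no flagged tokens simply yields an empty span list, which is the desired behaviour for non-propaganda text.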
9.
  • Ghanimifard, Mehdi, 1984, et al. (author)
  • Enriching Word-sense Embeddings with Translational Context
  • 2015
  • In: Proceedings of Recent Advances in Natural Language Processing / edited by Galia Angelova, Kalina Bontcheva, Ruslan Mitkov. International Conference, Hissar, Bulgaria, 7–9 September 2015. - 1313-8502. - pp. 208-215
  • Conference paper (peer-reviewed). Abstract:
    • Vector-space models derived from corpora are an effective way to learn a representation of word meaning directly from data, and these models have many uses in practical applications. A number of unsupervised approaches have been proposed to automatically learn representations of word senses directly from corpora, but since these methods use no information but the words themselves, they sometimes miss distinctions that could be possible to make if more information were available. In this paper, we present a general framework that we call context enrichment that incorporates external information during the training of multi-sense vector-space models. Our approach is agnostic as to which external signal is used to enrich the context, but in this work we consider the use of translations as the source of enrichment. We evaluated the models trained using the translation-enriched context using several similarity benchmarks and a word analogy test set. In all our evaluations, the enriched model outperformed the purely word-based baseline soundly.
  •  
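The "context enrichment" idea in this abstract, appending an external signal (here, translations) to a word's training context, can be sketched at the token level. The function and data below are illustrative assumptions: a real pipeline would derive translations from aligned parallel text and feed the enriched contexts to a multi-sense embedding trainer.

```python
def enrich_context(tokens, index, window, translations):
    """Build a window context for the target word and append its
    translation(s) as extra context items (the enrichment signal)."""
    lo, hi = max(0, index - window), index + window + 1
    context = tokens[lo:index] + tokens[index + 1:hi]
    return context + translations.get(tokens[index], [])

sent = ["the", "bank", "approved", "the", "loan"]
translations = {"bank": ["banque"]}  # hypothetical aligned translation
print(enrich_context(sent, 1, 2, translations))
# ['the', 'approved', 'the', 'banque']
```

The appended translation helps separate senses that look identical monolingually: the financial "bank" aligns with "banque", while a riverbank would align with a different word.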
10.
  •  