SwePub
Search the SwePub database


Result list for the search "WFRF:(Lappin Shalom)"

Search: WFRF:(Lappin Shalom)

  • Result 1-10 of 31
1.
  • Algebraic Structures in Natural Language
  • 2022
  • Editorial collection (other academic/artistic). Abstract:
    • The rapid progress of deep learning in artificial intelligence, particularly in natural language processing, has raised serious questions about the role of classical symbolic algebraic systems in the representation and acquisition of linguistic knowledge. This volume brings together chapters by leading researchers in computational linguistics, cognitive science, and natural language processing on this set of issues. It offers a variety of views, and it presents leading edge work on the main topic.
  •  
2.
  • Bernardy, Jean-Philippe, 1978, et al. (author)
  • A Compositional Bayesian Semantics for Natural Language
  • 2018
  • In: Proceedings of the First International Workshop on Language Cognition and Computational Models, COLING 2018, August 20, 2018 Santa Fe, New Mexico, USA. - : COLING. - 1525-2477. - 9781948087575
  • Conference paper (peer-reviewed). Abstract:
    • We propose a compositional Bayesian semantics that interprets declarative sentences in a natural language by assigning them probability conditions. These are conditional probabilities that estimate the likelihood that a competent speaker would endorse an assertion, given certain hypotheses. Our semantics is implemented in a functional programming language. It estimates the marginal probability of a sentence through Markov Chain Monte Carlo (MCMC) sampling of objects in vector space models satisfying specified hypotheses. We apply our semantics to examples with several predicates and generalised quantifiers, including higher-order quantifiers. It captures the vagueness of predication (both gradable and non-gradable), without positing a precise boundary for classifier application. We present a basic account of semantic learning based on our semantic system. We compare our proposal to other current theories of probabilistic semantics, and we show that it offers several important advantages over these accounts.
  •  
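The implementation described in the entry above is written in a functional programming language and uses MCMC sampling over vector-space models; none of that machinery is reproduced here. The fragment below is only a minimal Python sketch of the abstract's core idea, estimating the probability that a speaker would endorse a vague predication by plain Monte Carlo sampling, with invented priors standing in for the paper's models.

```python
# Illustrative sketch only, not the paper's implementation: estimate the
# probability that a competent speaker would endorse "x is tall" by sampling
# both a hypothetical prior over heights and a vague threshold for 'tall'.
import numpy as np

rng = np.random.default_rng(0)

def prob_endorse(n_samples=10_000):
    heights = rng.normal(170.0, 10.0, n_samples)    # assumed prior over heights (cm)
    thresholds = rng.normal(180.0, 5.0, n_samples)  # vague boundary: a distribution, not a cut-off
    return float(np.mean(heights > thresholds))     # Monte Carlo estimate of the probability condition

print(f"P(endorse 'x is tall') ≈ {prob_endorse():.3f}")
```

Because the boundary for 'tall' is itself sampled, no precise classifier boundary is posited, which is one way to read the abstract's claim about vagueness of predication.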
3.
  • Bernardy, Jean-Philippe, 1978, et al. (author)
  • A Neural Model for Compositional Word Embeddings and Sentence Processing
  • 2022
  • In: Proceedings of the Workshop on Cognitive Modeling and Computational Linguistics, May 26, 2022, Dublin, Ireland. - Stroudsburg, PA : Association for Computational Linguistics. - 9781955917292
  • Conference paper (peer-reviewed). Abstract:
    • We propose a new neural model for word embeddings, which uses Unitary Matrices as the primary device for encoding lexical information. It uses simple matrix multiplication to derive matrices for large units, yielding a sentence processing model that is strictly compositional, does not lose information over time steps, and is transparent, in the sense that word embeddings can be analysed regardless of context. This model does not employ activation functions, and so the network is fully accessible to analysis by the methods of linear algebra at each point in its operation on an input sequence. We test it in two NLP agreement tasks and obtain rule-like perfect accuracy, with greater stability than current state-of-the-art systems. Our proposed model goes some way towards offering a class of computationally powerful deep learning systems that can be fully understood and compared to human cognitive processes for natural language learning and representation.
  •  
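As a rough illustration of the model described in the entry above (the dimensions and vocabulary are made up, and the paper's training procedure is omitted), each word can be represented as a unitary matrix, here a real orthogonal one, and a phrase as the ordered matrix product of its words.

```python
# Minimal sketch: random orthogonal matrices stand in for trained unitary
# word embeddings; composition is plain matrix multiplication.
import numpy as np

rng = np.random.default_rng(1)
DIM = 8

def random_orthogonal(dim):
    q, _ = np.linalg.qr(rng.normal(size=(dim, dim)))  # QR factorisation yields an orthogonal matrix
    return q

vocab = {w: random_orthogonal(DIM) for w in ["the", "dogs", "bark"]}

def compose(words):
    """Strictly compositional representation: the ordered product of word matrices."""
    state = np.eye(DIM)
    for w in words:
        state = state @ vocab[w]
    return state

s = compose(["the", "dogs", "bark"])
print(np.allclose(s @ s.T, np.eye(DIM)))  # the product is still orthogonal
```

Because a product of unitary matrices is itself unitary, the composed state neither decays nor blows up, which is one reading of the abstract's claim that no information is lost over time steps.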
4.
  • Bernardy, Jean-Philippe, 1978, et al. (author)
  • Assessing the Unitary RNN as an End-to-End Compositional Model of Syntax
  • 2022
  • In: EPTCS 366. Proceedings End-to-End Compositional Models of Vector-Based Semantics, NUI Galway, 15-16 August, 2022, edited by: Michael Moortgat and Gijs Wijnholds. - Waterloo, Australia : Open Publishing Association. - 2075-2180.
  • Conference paper (peer-reviewed). Abstract:
    • We show that both an LSTM and a unitary-evolution recurrent neural network (URN) can achieve encouraging accuracy on two types of syntactic patterns: context-free long distance agreement, and mildly context-sensitive cross-serial dependencies. This work extends recent experiments on deeply nested context-free long distance dependencies, with similar results. URNs differ from LSTMs in that they avoid non-linear activation functions, and they apply matrix multiplication to word embeddings encoded as unitary matrices. This permits them to retain all information in the processing of an input string over arbitrary distances. It also causes them to satisfy strict compositionality. URNs constitute a significant advance in the search for explainable models in deep learning applied to NLP.
  •  
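The evaluation data used in the paper are not reproduced here. Purely as a hypothetical illustration of the second pattern type named in the abstract above, a cross-serial dependency can be rendered as a sequence task in which verb i must agree in number with noun i, in the same left-to-right order.

```python
# Toy generator for synthetic cross-serial agreement strings (illustrative only):
# n1 n2 ... nk v1 v2 ... vk, where verb i agrees in number with noun i.
import random

random.seed(0)
NOUNS = {"sg": ["N-sg1", "N-sg2"], "pl": ["N-pl1", "N-pl2"]}
VERBS = {"sg": ["V-sg1", "V-sg2"], "pl": ["V-pl1", "V-pl2"]}

def cross_serial(depth):
    numbers = [random.choice(["sg", "pl"]) for _ in range(depth)]
    nouns = [random.choice(NOUNS[n]) for n in numbers]
    verbs = [random.choice(VERBS[n]) for n in numbers]  # verbs follow the same order as the nouns
    return " ".join(nouns + verbs), numbers

print(*cross_serial(3))
```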
5.
  • Bernardy, Jean-Philippe, 1978, et al. (author)
  • Bayesian Inference Semantics: A Modelling System and A Test Suite
  • 2019
  • In: Proceedings of the Eighth Joint Conference on Lexical and Computational Semantics (*SEM), 6-7 June 2019, Minneapolis, Minnesota, USA / Rada Mihalcea, Ekaterina Shutova, Lun-Wei Ku, Kilian Evang, Soujanya Poria (Editors). - Stroudsburg, PA : Association for Computational Linguistics. - 9781948087933
  • Conference paper (peer-reviewed). Abstract:
    • We present BIS, a Bayesian Inference Semantics, for probabilistic reasoning in natural language. The current system is based on the framework of Bernardy et al. (2018), but departs from it in important respects. BIS makes use of Bayesian learning for inferring a hypothesis from premises. This involves estimating the probability of the hypothesis, given the data supplied by the premises of an argument. It uses a syntactic parser to generate typed syntactic structures that serve as input to a model generation system. Sentences are interpreted compositionally to probabilistic programs, and the corresponding truth values are estimated using sampling methods. BIS successfully deals with various probabilistic semantic phenomena, including frequency adverbs, generalised quantifiers, generics, and vague predicates. It performs well on a number of interesting probabilistic reasoning tasks. It also sustains most classically valid inferences (instantiation, de Morgan’s laws, etc.). To test BIS we have built an experimental test suite with examples of a range of probabilistic and classical inference patterns.
  •  
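BIS, as described in the entry above, parses sentences into typed structures and compiles them to probabilistic programs; that pipeline is not shown here. The fragment below is only a toy stand-in for the Bayesian step the abstract mentions, estimating the probability of a hypothesis given premises by conditioning sampled models on the premises; the "birds fly" example and the rejection-sampling scheme are invented for illustration.

```python
# Toy rejection-sampling stand-in (not the BIS system): the probability of a
# hypothesis is the proportion of sampled "models" consistent with the premises
# in which the hypothesis also holds.
import numpy as np

rng = np.random.default_rng(3)

def estimate(premise, hypothesis, n=50_000):
    rates = rng.uniform(0.0, 1.0, n)         # each sample: the rate at which birds fly in that model
    kept = rates[premise(rates)]             # condition on the premises
    return float(np.mean(hypothesis(kept)))  # probability of the hypothesis under the posterior

# Premise: "most birds fly" (rate above 0.8); hypothesis: "Tweety, a bird, flies".
p = estimate(lambda r: r > 0.8, lambda r: rng.uniform(0.0, 1.0, r.shape) < r)
print(f"P(hypothesis | premises) ≈ {p:.2f}")
```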
6.
  • Bernardy, Jean-Philippe, 1978, et al. (author)
  • Bayesian Inference Semantics for Natural Language
  • 2022
  • In: Probabilistic Approaches to Linguistic Theory / edited by Jean-Philippe Bernardy, Rasmus Blanck, Stergios Chatzikyriakidis, Shalom Lappin, Aleksandre Maskharashvili. - Stanford : CSLI Publications. - 9781684000791, pp. 161-228
  • Book chapter (peer-reviewed). Abstract:
    • We present a Bayesian Inference Semantics for natural language, which computes the probability conditions of sentences compositionally, through semantic functions matching with the types of their syntactic constituents. This system captures the vagueness of classifier terms and scalar modifiers. It also offers a straightforward treatment of the sorites paradox. Our framework expresses probabilistic inferences, which rely on lexically encoded priors, and it captures the effect of informational update on these inferences, through Bayesian modelling. The central device with which we represent probabilistic interpretation is the assignment of measurable spaces to objects and properties. We estimate the probability of a predication by measuring the density of relevant objects in the space of the property that the predicate denotes. We explore two alternative models for the priors. The first one is based on Gaussian distributions, but it exhibits computational intractability with some cases of Monte Carlo sampling. The second is based on uniform densities, and in a number of important instances, it allows us to avoid Monte Carlo sampling. We construct a test suite to illustrate the range of syntactic and semantic constructions, and the inference types, that our system covers.
  •  
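As a schematic illustration of the measurable-space idea summarised in the entry above (the regions, priors, and numbers are invented for the example): the probability of a predication is the mass of the individual's prior that falls in the region the predicate denotes. With a uniform prior this reduces to a ratio of lengths; with a Gaussian prior the sketch falls back on Monte Carlo sampling.

```python
# Sketch only: probability of "x is P" as the measure of x's prior inside P's region.
import numpy as np

rng = np.random.default_rng(2)
P_REGION = (180.0, 220.0)  # hypothetical interval denoted by the predicate

def prob_uniform(lo, hi):
    """Closed form for a uniform prior on [lo, hi]: overlap length / prior length."""
    a, b = P_REGION
    return max(0.0, min(hi, b) - max(lo, a)) / (hi - lo)

def prob_gaussian(mean, sd, n=100_000):
    """Monte Carlo estimate for a Gaussian prior over the same one-dimensional space."""
    xs = rng.normal(mean, sd, n)
    a, b = P_REGION
    return float(np.mean((xs >= a) & (xs <= b)))

print(prob_uniform(150.0, 200.0), prob_gaussian(175.0, 15.0))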
7.
  • Bernardy, Jean-Philippe, 1978, et al. (author)
  • Learning Syntactic Agreement with Deep Neural Networks
  • 2017
  • In: Israel Seminar on Computational Linguistics, September 25, 2017.
  • Conference paper (other academic/artistic). Abstract:
    • We consider the extent to which different deep neural network (DNN) configurations can learn syntactic relations, by taking up Linzen et al.’s (2016) work on subject-verb agreement with LSTM RNNs. We test their methods on a much larger corpus than they used (a ∼24 million example part of the WaCky corpus, instead of their ∼1.35 million example corpus, both drawn from Wikipedia). We experiment with several different DNN architectures (LSTM RNNs, GRUs, and CNNs), and alternative parameter settings for these systems (vocabulary size, training-to-test ratio, number of layers, memory size, dropout rate, and lexical embedding dimension size). We also try out our own unsupervised DNN language model. Our results are broadly compatible with those that Linzen et al. report. However, we discovered some interesting, and in some cases, surprising features of DNNs and language models in their performance of the agreement learning task. In particular, we found that DNNs require large vocabularies to form substantive lexical embeddings in order to learn structural patterns. This finding has significant consequences for our understanding of the way in which DNNs represent syntactic information.
  •  
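The paper summarised above experiments with LSTM RNNs, GRUs, and CNNs across a range of parameter settings; the sizes below are placeholders, not the configurations reported. A minimal PyTorch version of the agreement classifier looks roughly like this: read the words preceding the verb and predict whether the verb should be singular or plural.

```python
# Minimal sketch of a subject-verb agreement classifier (assumed sizes,
# not the paper's configuration).
import torch
import torch.nn as nn

class AgreementLSTM(nn.Module):
    def __init__(self, vocab_size=50_000, embed_dim=50, hidden_dim=50):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.classify = nn.Linear(hidden_dim, 2)  # singular vs plural

    def forward(self, token_ids):
        # token_ids: (batch, sequence_length) indices of the pre-verb context
        embedded = self.embed(token_ids)
        _, (hidden, _) = self.lstm(embedded)
        return self.classify(hidden[-1])          # logits over verb number

model = AgreementLSTM()
logits = model(torch.randint(0, 50_000, (4, 12)))  # a dummy batch
print(logits.shape)                                 # torch.Size([4, 2])
```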
8.
  • Bernardy, Jean-Philippe, 1978, et al. (author)
  • Predicates as Boxes in Bayesian Semantics for Natural Language
  • 2019
  • In: Proceedings of the 22nd Nordic Conference on Computational Linguistics (NoDaLiDa 2019), 30 September-2 October, 2019, Turku, Finland / Mareike Hartmann, Barbara Plank (Editors). - Linköping : Linköping University Electronic Press. - 1650-3686 .- 1650-3740. - 9789179299958
  • Conference paper (peer-reviewed). Abstract:
    • In this paper, we present a Bayesian approach to natural language semantics. Our main focus is on the inference task in an environment where judgments require probabilistic reasoning. We treat nouns, verbs, adjectives, etc. as unary predicates, and we model them as boxes in a bounded domain. We apply Bayesian learning to satisfy constraints expressed as premises. In this way we construct a model, by specifying boxes for the predicates. The probability of the hypothesis (the conclusion) is evaluated against the model that incorporates the premises as constraints.
  •  
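A much simplified sketch of the representation described in the entry above (the domain, boxes, and premises are invented, and the paper's Bayesian learning procedure is replaced by naive rejection sampling): predicates are axis-aligned boxes in a bounded domain, premises constrain which sampled models are kept, and the probability of the hypothesis is the proportion of kept models in which it holds.

```python
# Illustrative only: predicates as boxes in the unit square, premises as
# constraints on sampled models, hypothesis probability as a proportion.
import numpy as np

rng = np.random.default_rng(4)

def sample_box():
    lo = rng.uniform(0.0, 1.0, size=2)
    hi = rng.uniform(lo, 1.0)  # upper corner drawn above the lower corner
    return lo, hi

def inside(point, box):
    lo, hi = box
    return bool(np.all((point >= lo) & (point <= hi)))

tweety, sam = rng.uniform(size=2), rng.uniform(size=2)
accepted = hypothesis_true = 0
for _ in range(20_000):
    bird, fly = sample_box(), sample_box()
    # Premises as constraints: tweety and sam are birds, and sam flies.
    if inside(tweety, bird) and inside(sam, bird) and inside(sam, fly):
        accepted += 1
        hypothesis_true += inside(tweety, fly)  # hypothesis: tweety flies
print(hypothesis_true / max(accepted, 1))
```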
9.
  • Bernardy, Jean-Philippe, 1978, et al. (author)
  • The Influence of Context on Sentence Acceptability Judgements
  • 2018
  • In: Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Short Papers). Melbourne, Australia, July 15 - 20, 2018. - Stroudsburg, PA : Association for Computational Linguistics. - 9781948087346
  • Conference paper (peer-reviewed). Abstract:
    • We investigate the influence that document context exerts on human acceptability judgements for English sentences, via two sets of experiments. The first compares ratings for sentences presented on their own with ratings for the same set of sentences given in their document contexts. The second assesses the accuracy with which two types of neural models — one that incorporates context during training and one that does not — predict these judgements. Our results indicate that: (1) context improves acceptability ratings for ill-formed sentences, but also reduces them for well-formed sentences; and (2) context helps unsupervised systems to model acceptability.
  •  
10.
  • Bernardy, Jean-Philippe, 1978, et al. (author)
  • Unitary Recurrent Networks: Algebraic and Linear Structures for Syntax
  • 2022
  • In: Shalom Lappin and Jean-Philippe Bernardy (Eds), Algebraic Structures in Natural Language. - Milton, Boca Raton : CRC Press. - 9781032066547, pp. 243-277
  • Book chapter (peer-reviewed). Abstract:
    • The emergence of powerful deep learning systems has largely displaced classical symbolic algebraic models of linguistic representation in computational linguistics. While deep neural networks have achieved impressive results across a wide variety of AI and NLP tasks, they have become increasingly opaque and inaccessible to a clear understanding of how they acquire the generalisations that they extract from the data to which they apply. This is particularly true of BERT, and similar non-directional transformers. We study an alternative deep learning system, Unitary-Evolution Recurrent Neural Networks (URNs) (Arjovsky et al., 2016), which are strictly compositional in their combination of state matrices. As a result they are fully transparent. They can be understood entirely in terms of the linear algebraic operations that they apply to each input state matrix to obtain its output successor. We review these operations in some detail, clarifying the compositional nature of URNs. We then present experimental evidence from three NLP tasks to show that these models achieve an encouraging level of precision in handling long distance dependencies. The learning required to solve these tasks involves acquiring and representing complex hierarchical syntactic structures.
  •  
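A small sketch of the transparency point made in the chapter's abstract above (the dimensions and "words" are invented): because each word matrix is orthogonal, its effect on the state can be undone exactly with its transpose, so the contribution of each word is recoverable and nothing is lost over time.

```python
# Sketch only: unwinding a unitary (orthogonal) composition to recover a prefix state.
import numpy as np

rng = np.random.default_rng(5)
DIM = 6

def word_matrix():
    q, _ = np.linalg.qr(rng.normal(size=(DIM, DIM)))
    return q

w_the, w_cat, w_sleeps = word_matrix(), word_matrix(), word_matrix()

state_prefix = w_the @ w_cat          # state after "the cat"
state_full = state_prefix @ w_sleeps  # state after "the cat sleeps"

# For orthogonal matrices the transpose is the inverse, so the last word's
# effect can be removed exactly, up to floating-point rounding.
recovered = state_full @ w_sleeps.T
print(np.allclose(recovered, state_prefix))
```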
