SwePub
Search the SwePub database

  Advanced search

Result list for search "WFRF:(Manhaeve Robin)"

Search: WFRF:(Manhaeve Robin)

  • Results 1-9 of 9
1.
  • De Raedt, Luc, 1964-, et al. (authors)
  • From Statistical Relational to Neuro-Symbolic Artificial Intelligence
  • 2021
  • In: Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence. California: ijcai.org, pp. 4943-4950
  • Conference paper (peer-reviewed). Abstract:
    • Neural-symbolic and statistical relational artificial intelligence both integrate frameworks for learning with logical reasoning. This survey identifies several parallels across seven different dimensions between these two fields. These parallels can not only be used to characterize and position neural-symbolic artificial intelligence approaches but also to identify a number of directions for further research.
  •  
2.
  • De Smet, Lennert, et al. (authors)
  • Neural Probabilistic Logic Programming in Discrete-Continuous Domains
  • 2023
  • In: Proceedings of the Thirty-Ninth Conference on Uncertainty in Artificial Intelligence. JMLR, pp. 529-538
  • Conference paper (peer-reviewed). Abstract:
    • Neural-symbolic AI (NeSy) allows neural networks to exploit symbolic background knowledge in the form of logic. It has been shown to aid learning in the limited data regime and to facilitate inference on out-of-distribution data. Probabilistic NeSy focuses on integrating neural networks with both logic and probability theory, which additionally allows learning under uncertainty. A major limitation of current probabilistic NeSy systems, such as DeepProbLog, is their restriction to finite probability distributions, i.e., discrete random variables. In contrast, deep probabilistic programming (DPP) excels in modelling and optimising continuous probability distributions. Hence, we introduce DeepSeaProbLog, a neural probabilistic logic programming language that incorporates DPP techniques into NeSy. Doing so results in the support of inference and learning of both discrete and continuous probability distributions under logical constraints. Our main contributions are 1) the semantics of DeepSeaProbLog and its corresponding inference algorithm, 2) a proven asymptotically unbiased learning algorithm, and 3) a series of experiments that illustrate the versatility of our approach.
  •  
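The sampling-based, asymptotically unbiased estimation the abstract describes can be sketched in plain Python. This is a hypothetical illustration of the idea only (the distribution, the constraint "X > 1.0", and the sample size are invented for the example and are not DeepSeaProbLog's API):

```python
import random

def estimate_prob(condition, sampler, n=100_000):
    """Monte Carlo estimate of P(condition(X)) for a continuous random
    variable X drawn from `sampler`.  The estimator is asymptotically
    unbiased: it converges to the true probability as n grows."""
    hits = sum(condition(sampler()) for _ in range(n))
    return hits / n

random.seed(0)
# Hypothetical query: a continuous variable X ~ Normal(0, 1) satisfies
# the logical constraint "X > 1.0" (true probability is about 0.159).
p = estimate_prob(lambda x: x > 1.0, lambda: random.gauss(0.0, 1.0))
```

Exact inference over such constraints has no closed form in general, which is why sampling-style DPP techniques are the natural fit here.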
3.
  • Manhaeve, Robin, et al. (authors)
  • Approximate Inference for Neural Probabilistic Logic Programming
  • 2021
  • In: Proceedings of the 18th International Conference on Principles of Knowledge Representation and Reasoning. California: International Joint Conferences on Artificial Intelligence Organization. ISBN 9781956792997. pp. 475-486
  • Conference paper (peer-reviewed). Abstract:
    • DeepProbLog is a neural-symbolic framework that integrates probabilistic logic programming and neural networks. It is realized by providing an interface between the probabilistic logic and the neural networks. Inference in probabilistic neural-symbolic methods is hard, since it combines logical theorem proving with probabilistic inference and neural network evaluation. In this work, we make inference more efficient by extending an approximate inference algorithm from the field of statistical relational AI. Instead of considering all possible proofs for a certain query, the system searches for the best proof. However, training a DeepProbLog model using approximate inference introduces additional challenges, as the best proof is unknown at the start of training, which can lead to convergence towards a local optimum. To be able to apply DeepProbLog to larger tasks, we propose: 1) a method for approximate inference using an A*-like search, called DPLA*, 2) an exploration strategy for proving in a neural-symbolic setting, and 3) a parametric heuristic to guide the proof search. We empirically evaluate the performance and scalability of the new approach, and compare it to other neural-symbolic systems. The experiments show that DPLA* achieves a speed-up of up to 2-3 orders of magnitude in some cases.
  •  
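The best-proof idea in the abstract can be illustrated with a toy best-first search: instead of summing over all proofs of a query, search for the single most probable one by minimising negative log-probability. This is a hand-rolled sketch with a zero heuristic (i.e. Dijkstra, a special case of A*) and made-up rule probabilities, not the DPLA* implementation:

```python
import heapq, math

# Toy proof space: each node is a (sub)goal; edges are rule applications
# labelled with probabilities.  Names and numbers are illustrative only.
rules = {
    "query":  [(0.6, "goal_a"), (0.4, "goal_b")],
    "goal_a": [(0.5, "fact"), (0.3, "fact")],
    "goal_b": [(0.9, "fact")],
    "fact":   [],  # proved
}

def best_proof_prob(start):
    """Best-first search for the single most probable proof; cost is the
    negative log of the probability accumulated so far, so the cheapest
    path is the most probable proof."""
    frontier = [(0.0, start)]
    settled = {}
    while frontier:
        cost, goal = heapq.heappop(frontier)
        if goal == "fact":
            return math.exp(-cost)  # probability of the best proof
        if settled.get(goal, float("inf")) < cost:
            continue
        settled[goal] = cost
        for p, sub in rules[goal]:
            heapq.heappush(frontier, (cost - math.log(p), sub))
    return 0.0
```

Here the best proof goes through goal_b (0.4 x 0.9 = 0.36), beating the higher-probability first step through goal_a (0.6 x 0.5 = 0.30) — which is exactly why a guided search, rather than greedy expansion, is needed.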
4.
  • Manhaeve, Robin, et al. (authors)
  • DeepProbLog: Neural Probabilistic Logic Programming
  • 2018
  • In: Advances in Neural Information Processing Systems 31 (NIPS 2018). Neural Information Processing Systems Foundation Inc., pp. 3753-3760
  • Conference paper (peer-reviewed). Abstract:
    • We introduce DeepProbLog, a probabilistic logic programming language that incorporates deep learning by means of neural predicates. We show how existing inference and learning techniques can be adapted for the new language. Our experiments demonstrate that DeepProbLog supports (i) both symbolic and subsymbolic representations and inference, (ii) program induction, (iii) probabilistic (logic) programming, and (iv) (deep) learning from examples. To the best of our knowledge, this work is the first to propose a framework where general-purpose neural networks and expressive probabilistic-logical modeling and reasoning are integrated in a way that exploits the full expressiveness and strengths of both worlds and can be trained end-to-end based on examples.
  •  
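The neural-predicate idea can be sketched in plain Python on the paper's canonical task of recognising the sum of two handwritten digits: a network outputs a distribution over digits for each image, and the probability of the query addition(X, Y, Z) aggregates all digit pairs that logically prove it. The softmax stand-in and the logits below are illustrative assumptions; DeepProbLog itself uses real neural networks and ProbLog inference:

```python
import math

def softmax(logits):
    exps = [math.exp(v) for v in logits]
    total = sum(exps)
    return [v / total for v in exps]

# Stand-in for a neural predicate digit(Image, D): a "network" that
# maps an image (here, raw logits) to a distribution over digits 0..9.
def digit_distribution(image_logits):
    return softmax(image_logits)

def prob_addition(logits_x, logits_y, z):
    """P(addition(X, Y, Z)) = sum over all digit pairs (d1, d2) with
    d1 + d2 = z of P(digit(X) = d1) * P(digit(Y) = d2), i.e. the total
    probability mass of all logical proofs of the query."""
    px = digit_distribution(logits_x)
    py = digit_distribution(logits_y)
    return sum(px[d1] * py[d2]
               for d1 in range(10) for d2 in range(10)
               if d1 + d2 == z)

# Illustrative logits, as if a classifier saw a "3" and a "5".
lx = [2.0 if d == 3 else 0.0 for d in range(10)]
ly = [2.0 if d == 5 else 0.0 for d in range(10)]
p8 = prob_addition(lx, ly, 8)
```

Because the query probability is a differentiable function of the network outputs, gradients flow from the logical loss back into the "network" — the end-to-end training the abstract refers to.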
5.
  • Manhaeve, Robin, et al. (authors)
  • Neural probabilistic logic programming in DeepProbLog
  • 2021
  • In: Artificial Intelligence. Elsevier. ISSN 0004-3702, E-ISSN 1872-7921. Vol. 298
  • Journal article (peer-reviewed). Abstract:
    • We introduce DeepProbLog, a neural probabilistic logic programming language that incorporates deep learning by means of neural predicates. We show how existing inference and learning techniques of the underlying probabilistic logic programming language ProbLog can be adapted for the new language. We theoretically and experimentally demonstrate that DeepProbLog supports (i) both symbolic and subsymbolic representations and inference, (ii) program induction, (iii) probabilistic (logic) programming, and (iv) (deep) learning from examples. To the best of our knowledge, this work is the first to propose a framework where general-purpose neural networks and expressive probabilistic-logical modeling and reasoning are integrated in a way that exploits the full expressiveness and strengths of both worlds and can be trained end-to-end based on examples.
  •  
6.
  •  
7.
  • Marra, Giuseppe, et al. (authors)
  • From statistical relational to neurosymbolic artificial intelligence: A survey
  • 2024
  • In: Artificial Intelligence. Elsevier. ISSN 0004-3702, E-ISSN 1872-7921. Vol. 328
  • Journal article (peer-reviewed). Abstract:
    • This survey explores the integration of learning and reasoning in two different fields of artificial intelligence: neurosymbolic and statistical relational artificial intelligence. Neurosymbolic artificial intelligence (NeSy) studies the integration of symbolic reasoning and neural networks, while statistical relational artificial intelligence (StarAI) focuses on integrating logic with probabilistic graphical models. This survey identifies seven shared dimensions between these two subfields of AI. These dimensions can be used to characterize different NeSy and StarAI systems. They are concerned with (1) the approach to logical inference, whether model- or proof-based; (2) the syntax of the used logical theories; (3) the logical semantics of the systems and their extensions to facilitate learning; (4) the scope of learning, encompassing either parameter or structure learning; (5) the presence of symbolic and subsymbolic representations; (6) the degree to which systems capture the original logic, probabilistic, and neural paradigms; and (7) the classes of learning tasks the systems are applied to. By positioning various NeSy and StarAI systems along these dimensions and pointing out similarities and differences between them, this survey contributes fundamental concepts for understanding the integration of learning and reasoning.
  •  
8.
  • Winters, Thomas, et al. (authors)
  • DeepStochLog: Neural Stochastic Logic Programming
  • 2022
  • In: Proceedings of the 36th AAAI Conference on Artificial Intelligence. AAAI Press, pp. 10090-10100
  • Conference paper (peer-reviewed). Abstract:
    • Recent advances in neural-symbolic learning, such as DeepProbLog, extend probabilistic logic programs with neural predicates. Like graphical models, these probabilistic logic programs define a probability distribution over possible worlds, for which inference is computationally hard. We propose DeepStochLog, an alternative neural-symbolic framework based on stochastic definite clause grammars, a kind of stochastic logic program. More specifically, we introduce neural grammar rules into stochastic definite clause grammars to create a framework that can be trained end-to-end. We show that inference and learning in neural stochastic logic programming scale much better than for neural probabilistic logic programs. Furthermore, the experimental evaluation shows that DeepStochLog achieves state-of-the-art results on challenging neural-symbolic learning tasks. 
  •  
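The grammar-based semantics the abstract contrasts with possible-world semantics can be illustrated with a tiny stochastic grammar: each nonterminal has rules whose probabilities sum to 1, and a derivation's probability is the product of the probabilities of the rules it uses. In DeepStochLog these rule probabilities come from neural grammar rules; in this sketch they are fixed numbers invented for illustration:

```python
# A tiny stochastic grammar in the spirit of stochastic definite clause
# grammars: the nonterminal "S" derives "a"-prefixed strings ending in
# "b", with rule probabilities summing to 1 per nonterminal.
grammar = {
    "S": [(0.7, ["a", "S"]), (0.3, ["b"])],
}

def string_prob(symbols, target):
    """Probability that the symbol sequence `symbols` derives `target`,
    summed over all leftmost derivations of the grammar."""
    if not symbols:
        return 1.0 if not target else 0.0
    head, rest = symbols[0], symbols[1:]
    if head not in grammar:  # terminal: must match the next input token
        if target and target[0] == head:
            return string_prob(rest, target[1:])
        return 0.0
    return sum(p * string_prob(body + rest, target)
               for p, body in grammar[head])
```

For example, "aab" has the single derivation S -> aS -> aaS -> aab with probability 0.7 x 0.7 x 0.3 = 0.147; because rule probabilities are locally normalised, probabilities over all derivable strings sum to 1 without the costly global normalisation that possible-world semantics would require.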
9.
  •  
