SwePub
Search the SwePub database


Result list for the search "L773:9781956792034"

Search: L773:9781956792034

  • Results 1-7 of 7
1.
  • Di Stefano, Federica, et al. (author)
  • Description logics with pointwise circumscription
  • 2023
  • In: Proceedings of the thirty-second international joint conference on artificial intelligence. - International Joint Conferences on Artificial Intelligence. - 9781956792034, pp. 3167-3175
  • Conference paper (peer-reviewed), abstract:
    • Circumscription is one of the most powerful ways to extend Description Logics (DLs) with non-monotonic reasoning features, albeit with huge computational costs and undecidability in many cases. In this paper, we introduce pointwise circumscription for DLs, which is not only intuitive in terms of knowledge representation, but also provides a sound approximation of classic circumscription and has reduced computational complexity. Our main idea is to replace the second-order quantification step of classic circumscription with a series of (pointwise) local checks on all domain elements and their immediate neighbourhood. Our main positive results are for ontologies in the DLs ALCIO and ALCI: we prove that for TBoxes of modal depth 1 (i.e., without nesting of existential or universal quantifiers), standard reasoning problems under pointwise circumscription are (co)NEXPTIME-complete and EXPTIME-complete, respectively. The restriction on modal depth still yields a large class of ontologies useful in practice, and it is further justified by a strong undecidability result for pointwise circumscription with general TBoxes in ALCIO.
2.
  • Eiter, Thomas, et al. (author)
  • A Logic-based Approach to Contrastive Explainability for Neurosymbolic Visual Question Answering
  • 2023
  • In: IJCAI International Joint Conference on Artificial Intelligence. - International Joint Conferences on Artificial Intelligence. - 9781956792034, pp. 3668-3676
  • Conference paper (peer-reviewed), abstract:
    • Visual Question Answering (VQA) is a well-known problem for which deep learning is key. This poses a challenge for explaining answers to questions, all the more so if advanced notions such as contrastive explanations (CEs) should be provided. The latter explain why an answer was reached in contrast to a different one and are attractive because they focus on the reasons necessary to flip a query answer. We present a CE framework for VQA that uses a neurosymbolic VQA architecture which disentangles perception from reasoning. Once the reasoning part is provided as a logical theory, we use answer-set programming, in which CE generation can be framed as an abduction problem. We validate our approach on the CLEVR dataset, which we extend with more sophisticated questions to further demonstrate the robustness of the modular architecture. While we achieve top performance compared to related approaches, we can also produce CEs for explanation, model debugging, and validation tasks, showing the versatility of the declarative approach to reasoning.
3.
  • Gocht, Stephan, et al. (author)
  • Certified CNF Translations for Pseudo-Boolean Solving (Extended Abstract)
  • 2023
  • In: Proceedings of the 32nd International Joint Conference on Artificial Intelligence, IJCAI 2023. - 9781956792034, pp. 6436-6441
  • Conference paper (peer-reviewed), abstract:
    • The dramatic improvements in Boolean satisfiability (SAT) solving since the turn of the millennium have made it possible to leverage conflict-driven clause learning (CDCL) solvers for many combinatorial problems in academia and industry, and the use of proof logging has played a crucial role in increasing the confidence that the results these solvers produce are correct. However, the fact that SAT proof logging is performed in conjunctive normal form (CNF) clausal format means that it has not been possible to extend guarantees of correctness to the use of SAT solvers for more expressive combinatorial paradigms, where the first step is an unverified translation of the input to CNF. In this work, we show how cutting-planes-based reasoning can provide proof logging for solvers that translate pseudo-Boolean (a.k.a. 0-1 integer linear) decision problems to CNF and then run CDCL. We are hopeful that this is just a first step towards providing a unified proof logging approach that will extend to maximum satisfiability (MaxSAT) solving and pseudo-Boolean optimization in general.
4.
  • Javed, Rana Tallal, et al. (author)
  • Get out of the BAG! Silos in AI ethics education : unsupervised topic modeling analysis of global AI curricula (extended abstract)
  • 2023
  • In: Proceedings of the thirty-second international joint conference on artificial intelligence. - International Joint Conferences on Artificial Intelligence. - 9781956792034, pp. 6905-6909
  • Conference paper (peer-reviewed), abstract:
    • This study explores the topics and trends of teaching AI ethics in higher education, using Latent Dirichlet Allocation as the analysis tool. The analyses included 166 courses from 105 universities around the world. Building on the uncovered patterns, we distil a model of current pedagogical practice, the BAG model (Build, Assess, and Govern), that combines cognitive levels, course content, and disciplines. The study critically assesses the implications of this teaching paradigm and challenges practitioners to reflect on their practices and move beyond stereotypes and biases.
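The following is a minimal illustrative sketch, not the study's actual pipeline: it shows how Latent Dirichlet Allocation can be applied to course descriptions using scikit-learn. The example texts and the choice of three topics are hypothetical stand-ins for the 166 analysed curricula.

```python
# Illustrative only: generic LDA topic modeling of course descriptions,
# not the authors' actual analysis pipeline or data.
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

# Hypothetical course descriptions standing in for the analysed curricula.
courses = [
    "students build and train machine learning models in projects",
    "assessing fairness, bias and societal impact of deployed AI systems",
    "law, policy and governance frameworks for artificial intelligence",
]

# Bag-of-words representation with English stop words removed.
vectorizer = CountVectorizer(stop_words="english")
doc_term = vectorizer.fit_transform(courses)

# Fit LDA; the number of topics (3) is an arbitrary choice for illustration.
lda = LatentDirichletAllocation(n_components=3, random_state=0)
lda.fit(doc_term)

# Inspect the top words per topic, roughly how recurring themes
# (e.g. building, assessing, governing) could be surfaced.
terms = vectorizer.get_feature_names_out()
for idx, topic in enumerate(lda.components_):
    top_words = [terms[i] for i in topic.argsort()[-5:][::-1]]
    print(f"Topic {idx}: {', '.join(top_words)}")
```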
5.
  • Lamanna, Leonardo, et al. (author)
  • Learning to Act for Perceiving in Partially Unknown Environments
  • 2023
  • In: Proceedings of the Thirty-Second International Joint Conference on Artificial Intelligence (IJCAI 2023). - International Joint Conferences on Artificial Intelligence. - 9781956792034, pp. 5485-5493
  • Conference paper (peer-reviewed), abstract:
    • Autonomous agents embedded in a physical environment need the ability to correctly perceive the state of the environment from sensory data. In partially observable environments, certain properties can be perceived only in specific situations and from certain viewpoints that can be reached by the agent by planning and executing actions. For instance, to understand whether a cup is full of coffee, an agent, equipped with a camera, needs to turn on the light and look at the cup from the top. When the proper situations to perceive the desired properties are unknown, an agent needs to learn them and plan to get in such situations. In this paper, we devise a general method to solve this problem by evaluating the confidence of a neural network online and by using symbolic planning. We experimentally evaluate the proposed approach on several synthetic datasets, and show the feasibility of our approach in a real-world scenario that involves noisy perceptions and noisy actions on a real robot.
6.
  • Vanhée, Loïs, Dr., et al. (author)
  • Ethical by designer : how to grow ethical designers of artificial intelligence (extended abstract)
  • 2023
  • In: Proceedings of the thirty-second international joint conference on artificial intelligence. - International Joint Conferences on Artificial Intelligence. - 9781956792034, pp. 6979-6984
  • Conference paper (peer-reviewed), abstract:
    • Ethical concerns regarding Artificial Intelligence technology have fueled discussions around the ethics training received by its designers. Training designers for ethical behaviour, understood as habitual application of ethical principles in any situation, can make a significant difference in the practice of research, development, and application of AI systems. Building on interdisciplinary knowledge and practical experience from computer science, moral psychology, and pedagogy, we propose a functional way to provide this training.
7.
  • Yang, Wen-Chi, et al. (author)
  • Safe Reinforcement Learning via Probabilistic Logic Shields
  • 2023
  • In: Proceedings of the Thirty-Second International Joint Conference on Artificial Intelligence (IJCAI 2023). - International Joint Conferences on Artificial Intelligence. - 9781956792034, pp. 5739-5749
  • Conference paper (peer-reviewed), abstract:
    • Safe Reinforcement Learning (Safe RL) aims at learning optimal policies while staying safe. A popular solution to Safe RL is shielding, which uses a logical safety specification to prevent an RL agent from taking unsafe actions. However, traditional shielding techniques are difficult to integrate with continuous, end-to-end deep RL methods. To address this, we introduce Probabilistic Logic Policy Gradient (PLPG). PLPG is a model-based Safe RL technique that uses probabilistic logic programming to model logical safety constraints as differentiable functions. Therefore, PLPG can be seamlessly applied to any policy gradient algorithm while still providing the same convergence guarantees. In our experiments, we show that PLPG learns safer and more rewarding policies compared to other state-of-the-art shielding techniques.
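As a loose illustration of the shielding idea described in this abstract (not the paper's PLPG algorithm), the sketch below reweights a policy's action probabilities by an assumed per-action probability of being safe; the function name and numbers are hypothetical.

```python
# Illustrative only: probabilistic shielding as reweighting of a policy's
# action distribution by per-action safety probabilities. This is a generic
# sketch, not the PLPG method from the paper.
import numpy as np

def shield(policy_probs: np.ndarray, safety_probs: np.ndarray) -> np.ndarray:
    """Reweight pi(a|s) by an estimated P(safe | s, a) and renormalise."""
    weighted = policy_probs * safety_probs
    total = weighted.sum()
    if total == 0.0:
        # If every action looks unsafe, fall back to the unshielded policy.
        return policy_probs
    return weighted / total

# Hypothetical numbers: three actions, the second judged likely unsafe.
pi = np.array([0.2, 0.5, 0.3])
p_safe = np.array([0.95, 0.10, 0.90])
print(shield(pi, p_safe))  # probability mass shifts away from the unsafe action
```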