SwePub

Hit list for the search "WFRF:(Persson Andreas) ;pers:(Persson Andreas 1980)"

Search: WFRF:(Persson Andreas) > Persson Andreas 1980

  • Results 1-10 of 15
1.
  • Beeson, Patrick, et al. (author)
  • An Ontology-Based Symbol Grounding System for Human-Robot Interaction
  • 2014
  • In: Artificial Intelligence for Human-Robot Interaction. AAAI Press, pp. 48-50
  • Conference paper (peer-reviewed), abstract:
    • This paper presents an ongoing collaboration to develop a perceptual anchoring framework that creates and maintains symbol-percept links for household objects. The paper presents an approach that enriches the symbol system with ontologies and enables HRI through queries about objects' properties, their affordances, and their perceptual characteristics as viewed from the robot (e.g., last seen). This position paper briefly describes the objective of creating a long-term perceptual anchoring framework for HRI and outlines the preliminary work done thus far.
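As a reading aid for the abstract above, here is a minimal sketch of the symbol-percept link ("anchor") such a framework maintains. The class and field names are invented for illustration; the paper's ontology-backed system is far richer.

```python
from dataclasses import dataclass, field
import time

@dataclass
class Anchor:
    """One symbol-percept link per physical object (hypothetical fields)."""
    symbol: str                 # e.g. "mug-1", tied to an ontology concept
    percept: list               # latest perceptual features of the object
    properties: dict = field(default_factory=dict)   # color, affordances, ...
    last_seen: float = field(default_factory=time.time)

    def update(self, percept, **properties):
        """Re-acquire the object: refresh percept, properties, timestamp."""
        self.percept = percept
        self.properties.update(properties)
        self.last_seen = time.time()

# Answering a "last seen" style query, as mentioned in the abstract:
mug = Anchor("mug-1", percept=[0.1, 0.7], properties={"color": "red"})
print(f"{mug.symbol} last seen at {mug.last_seen:.0f}")
```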
2.
  • Can, Ozan Arkan, et al. (author)
  • Learning from Implicit Information in Natural Language Instructions for Robotic Manipulations
  • 2019
  • In: Proceedings of the Combined Workshop on Spatial Language Understanding (SpLU) and Grounded Communication for Robotics (RoboNLP). Association for Computational Linguistics, pp. 29-39
  • Conference paper (peer-reviewed), abstract:
    • Human-robot interaction often occurs in the form of instructions given from a human to a robot. For a robot to successfully follow instructions, a common representation of the world and the objects in it should be shared between humans and the robot so that the instructions can be grounded. Achieving this representation can be done via learning, where both the world representation and the language grounding are learned simultaneously. However, in robotics this can be a difficult task due to the cost and scarcity of data. In this paper, we tackle the problem by separately learning the world representation of the robot and the language grounding. While this approach can address the challenges in getting sufficient data, it may give rise to inconsistencies between the two learned components. Therefore, we further propose Bayesian learning to resolve such inconsistencies between the natural language grounding and a robot's world representation by exploiting spatio-relational information that is implicitly present in instructions given by a human. Moreover, we demonstrate the feasibility of our approach on a scenario involving a robotic arm in the physical world.
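The Bayesian resolution step can be illustrated with a toy posterior update over candidate referents. Every object name, prior, and likelihood below is made up, so this only shows the general mechanism, not the paper's model.

```python
# Resolving "the cup next to the box" by combining the language
# grounding's prior over referents with a spatial likelihood computed
# from the robot's world representation. All numbers are hypothetical.
priors = {"cup_a": 0.5, "cup_b": 0.3, "bowl": 0.2}       # P(obj | "the cup")
likelihood = {"cup_a": 0.9, "cup_b": 0.1, "bowl": 0.4}   # P("next to box" | obj)

unnormalized = {o: priors[o] * likelihood[o] for o in priors}
evidence = sum(unnormalized.values())
posterior = {o: round(p / evidence, 3) for o, p in unnormalized.items()}

print(posterior)   # cup_a dominates once the implicit spatial cue is used
```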
3.
  • Längkvist, Martin, 1983-, et al. (author)
  • Learning Generative Image Manipulations from Language Instructions
  • 2020
  • Conference paper (peer-reviewed), abstract:
    • This paper studies whether a perceptual visual system can simulate human-like cognitive capabilities by training a computational model to predict the output of an action from a language instruction. The aim is to ground action words such that an AI is able to generate an output image that shows the effect of a certain action on a given object. The output of the model is a synthetically generated image that demonstrates the effect the action has on the scene. This work combines an image encoder, a language encoder, a relational network, and an image generator to ground action words and then visualize the effect an action would have on a simulated scene. The focus of this work is to learn meaningful shared image and text representations for relational learning and object manipulation.
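A rough PyTorch sketch of the four components the abstract names (image encoder, language encoder, relational network, image generator). The layer sizes and the fusion scheme are guesses, not the paper's architecture.

```python
import torch
import torch.nn as nn

class ManipulationPredictor(nn.Module):
    """Image encoder + language encoder + relational fusion + image
    generator, as listed in the abstract; every size is a placeholder."""

    def __init__(self, vocab=1000, dim=128):
        super().__init__()
        self.img_enc = nn.Sequential(            # image -> feature map
            nn.Conv2d(3, dim, 4, 2, 1), nn.ReLU(),
            nn.Conv2d(dim, dim, 4, 2, 1), nn.ReLU())
        self.embed = nn.Embedding(vocab, dim)    # instruction tokens -> vectors
        self.lstm = nn.LSTM(dim, dim, batch_first=True)
        self.relational = nn.Sequential(         # fuse text with each cell
            nn.Linear(2 * dim, dim), nn.ReLU())
        self.generator = nn.Sequential(          # fused features -> image
            nn.ConvTranspose2d(dim, dim, 4, 2, 1), nn.ReLU(),
            nn.ConvTranspose2d(dim, 3, 4, 2, 1), nn.Sigmoid())

    def forward(self, image, tokens):
        feats = self.img_enc(image)              # (B, dim, H/4, W/4)
        _, (h, _) = self.lstm(self.embed(tokens))
        txt = h[-1]                              # (B, dim) instruction vector
        b, d, hh, ww = feats.shape
        txt_map = txt[:, :, None, None].expand(b, d, hh, ww)
        fused = torch.cat([feats, txt_map], 1).permute(0, 2, 3, 1)
        fused = self.relational(fused).permute(0, 3, 1, 2)
        return self.generator(fused)             # predicted "after" image

model = ManipulationPredictor()
after = model(torch.rand(1, 3, 64, 64), torch.randint(0, 1000, (1, 6)))
print(after.shape)                               # torch.Size([1, 3, 64, 64])
```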
4.
5.
  • Persson, Andreas, 1980-, et al. (author)
  • A Hash Table Approach for Large Scale Perceptual Anchoring
  • 2013
  • In: 2013 IEEE International Conference on Systems, Man, and Cybernetics (SMC 2013). ISBN 9781479906529, pp. 3060-3066
  • Conference paper (peer-reviewed), abstract:
    • Perceptual anchoring deals with the problem of creating and maintaining the connection between percepts and symbols that refer to the same physical object. For long-term use of an anchoring framework that must cope with large sets of data, it is challenging to anchor objects both efficiently and accurately. One approach to this problem is through visual perception and computationally efficient binary visual features. In this paper, we present a novel hash table algorithm derived from summarized binary visual features. This algorithm is then contextualized in an anchoring framework. Advantages of the internal structure of the proposed hash tables are presented, as well as improvements through the use of hierarchies structured by semantic knowledge. Through evaluation on a larger set of data, we show that our approach is suitable for efficient bottom-up anchoring and is performance-wise comparable to a recently presented search-tree algorithm.
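A minimal sketch of the general idea, assuming (from the abstract alone) that descriptors are bucketed by a coarse summary of their bits and then compared by Hamming distance within a bucket. The key function and threshold are invented, not the paper's algorithm.

```python
from collections import defaultdict
import numpy as np

def summarize(desc, segments=4):
    """Coarse hash key: quantized popcount of each descriptor segment."""
    parts = np.array_split(desc, segments)
    return tuple(int(p.sum() * 3 // (len(p) + 1)) for p in parts)

table = defaultdict(list)                 # key -> [(object id, descriptor)]

def insert(desc, obj_id):
    table[summarize(desc)].append((obj_id, desc))

def query(desc, max_dist=40):
    """Closest stored descriptor within the query's bucket, if any."""
    best = None
    for obj_id, d in table[summarize(desc)]:
        dist = int(np.count_nonzero(d != desc))       # Hamming distance
        if best is None or dist < best[1]:
            best = (obj_id, dist)
    return best if best and best[1] <= max_dist else None

desc = np.zeros(256, dtype=np.uint8)      # stand-in for a binary descriptor
desc[::2] = 1
insert(desc, "cup-3")
noisy = desc.copy()
noisy[:10] ^= 1                           # a slightly different view
print(query(noisy))                       # ('cup-3', 10)
```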
6.
  • Persson, Andreas, 1980-, et al. (author)
  • Embodied Affordance Grounding using Semantic Simulations and Neural-Symbolic Reasoning : An Overview of the PlayGround Project
  • 2022
  • In: AIC 2022. Technical University of Aachen.
  • Conference paper (peer-reviewed), abstract:
    • In this paper, we present a synopsis of the PlayGround project. Through neural-symbolic learning and reasoning, the PlayGround project assumes that high-level concepts and reasoning processes can be used to advance both symbol grounding and object affordance inference. A prerequisite for reasoning about objects and their affordances, however, is integrated object representations that concurrently maintain symbolic values (e.g., high-level concepts) and sub-symbolic features (e.g., spatial aspects of objects). Such integrated representations should preferably be based upon neural-symbolic computation, so that neural-symbolic models can subsequently be used for high-level reasoning processes. Nevertheless, reasoning processes for symbol grounding and affordance inference often require multiple inference steps. Taking inspiration from the cognitive prospects of simulation semantics, the PlayGround project further presumes that these reasoning processes can be simulated by neural rendering, complementary to high-level reasoning processes.
7.
  • Persson, Andreas, 1980-, et al. (author)
  • Fast Matching of Binary Descriptors for Large-scale Applications in Robot Vision
  • 2016
  • In: International Journal of Advanced Robotic Systems. Rijeka, Croatia: InTech. ISSN 1729-8806, e-ISSN 1729-8814; vol. 13
  • Journal article (peer-reviewed), abstract:
    • The introduction of computationally efficient binary feature descriptors has raised new opportunities for real-world robot vision applications. However, brute-force feature matching of binary descriptors is only practical for smaller datasets. In the literature, there has therefore been increasing interest in representing and matching binary descriptors more efficiently. In this article, we follow this trend and present a method for efficiently and dynamically quantizing binary descriptors through a summarized frequency count into compact representations (called fsum) for improved matching of binary point features. Motivated by the fact that real-world robot applications must adapt to a changing environment, we further present an overview of algorithms in the field that match binary descriptors efficiently and can incorporate changes over time, such as clustered search trees and bag-of-features improved by vocabulary adaptation. The focus of this article is on evaluation, particularly large-scale evaluation against existing alternatives in the field. This evaluation shows that the fsum approach is efficient in terms of both computational cost and memory requirements while retaining adequate retrieval accuracy. It is further shown that the algorithm is equally suited to binary descriptors of arbitrary type and is therefore a valid option for several types of vision applications.
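Judging only from the abstract, fsum summarizes sets of binary descriptors through frequency counts. The sketch below illustrates that flavor of matching with invented details and should not be read as the paper's exact algorithm.

```python
import numpy as np

class FrequencyModel:
    """Per-bit set-frequency summary of all descriptors seen for one object."""

    def __init__(self, bits=256):
        self.counts = np.zeros(bits)
        self.n = 0

    def add(self, desc):
        self.counts += desc
        self.n += 1

    def score(self, desc):
        """Mean per-bit agreement between desc and the frequency profile."""
        p = self.counts / max(self.n, 1)
        return float(np.mean(desc * p + (1 - desc) * (1 - p)))

rng = np.random.default_rng(1)
models = {"mug": FrequencyModel(), "spoon": FrequencyModel()}

mug_proto = rng.integers(0, 2, 256, dtype=np.uint8)
for _ in range(5):                        # five noisy views of the mug
    view = mug_proto.copy()
    view[rng.choice(256, 20, replace=False)] ^= 1
    models["mug"].add(view)
models["spoon"].add(rng.integers(0, 2, 256, dtype=np.uint8))

probe = mug_proto                         # a fresh view of the mug
print(max(models, key=lambda k: models[k].score(probe)))   # mug
```

Matching against one compact profile per object, rather than against every stored descriptor, is what makes this kind of summarization attractive at large scale.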
8.
  • Persson, Andreas, 1980-, et al. (author)
  • Fluent human–robot dialogues about grounded objects in home environments
  • 2014
  • In: Cognitive Computation. Springer. ISSN 1866-9956, e-ISSN 1866-9964; 6:4, pp. 914-927
  • Journal article (peer-reviewed), abstract:
    • To provide spoken interaction between robots and human users, an internal representation of the robot's sensory information must be available at a semantic level and accessible to a dialogue system in order to be used in a human-like and intuitive manner. In this paper, we integrate perceptual anchoring (which creates and maintains the symbol-percept correspondence of objects) in robotics with multimodal dialogues in order to achieve fluent interaction between humans and robots when talking about objects. These everyday objects are located in a so-called symbiotic system where humans, robots, and sensors co-operate in a home environment. To orchestrate the dialogue system, the IrisTK dialogue platform is used. The IrisTK system is based on modelling the interaction of events between different modules, e.g., speech recognizer, face tracker, etc. This system runs on a mobile robot device that is part of a distributed sensor network. A perceptual anchoring framework recognizes objects placed in the home and maintains a consistent identity for each object, consisting of its symbolic and perceptual data. Particular effort is placed on creating flexible dialogues where requests about objects can be made in a variety of ways. Experimental validation consists of evaluating the system when many objects are possible candidates for satisfying these requests.
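IrisTK itself is a Java-based platform, so the toy below only mimics the event-driven module wiring the abstract describes (speech recognizer to anchoring to spoken answer); the event names and handlers are made up and are not IrisTK's API.

```python
from collections import defaultdict

class EventBus:
    """Toy publish/subscribe hub standing in for event-based orchestration."""

    def __init__(self):
        self.handlers = defaultdict(list)

    def subscribe(self, event_type, handler):
        self.handlers[event_type].append(handler)

    def publish(self, event_type, **payload):
        for handler in self.handlers[event_type]:
            handler(payload)

bus = EventBus()

# Made-up event names: recognized speech triggers an anchor lookup,
# whose result is spoken back by the dialogue module.
bus.subscribe("sense.speech",
              lambda e: bus.publish("anchor.query", phrase=e["text"]))
bus.subscribe("anchor.query",
              lambda e: bus.publish("dialog.say",
                                    text=f"The {e['phrase']} is on the table."))
bus.subscribe("dialog.say", lambda e: print("ROBOT:", e["text"]))

bus.publish("sense.speech", text="coffee mug")
# ROBOT: The coffee mug is on the table.
```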
9.
  • Persson, Andreas, 1980-, et al. (author)
  • I would like some food : anchoring objects to semantic web information in human-robot dialogue interactions
  • 2013
  • In: Social Robotics. Cham: Springer. ISBN 9783319026749, 9783319026756; pp. 361-370
  • Conference paper (peer-reviewed), abstract:
    • Ubiquitous robotic systems present a number of interesting application areas for socially assistive robots that aim to improve quality of life. In particular, the combination of smart home environments and relatively inexpensive robots can be a viable technological solution for assisting the elderly and persons with disabilities in their own homes. Such services require an easy interface, like spoken dialogue, and the ability to refer to physical objects using semantic terms. This paper presents an implemented system combining a robot and a sensor network deployed in a test apartment in an elderly residential area. The paper focuses on the creation and maintenance (anchoring) of the connection between the semantic information present in the dialogue and perceived physical objects in the home. Semantic knowledge about concepts and their correlations is retrieved from on-line resources and ontologies, e.g., WordNet, and sensor information is provided by cameras distributed in the apartment.
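Retrieving concept correlations from WordNet, as mentioned above, could look roughly like this using NLTK's interface; the paper's actual pipeline and similarity measure may differ.

```python
from nltk.corpus import wordnet as wn   # requires nltk.download('wordnet')

def similarity(word_a, word_b):
    """Best WordNet path similarity over the two nouns' synsets."""
    scores = [s1.path_similarity(s2) or 0.0
              for s1 in wn.synsets(word_a, pos=wn.NOUN)
              for s2 in wn.synsets(word_b, pos=wn.NOUN)]
    return max(scores, default=0.0)

# Rank perceived objects against the dialogue request "some food":
objects = ["apple", "banana", "remote_control"]
for obj in sorted(objects, key=lambda o: -similarity("food", o)):
    print(obj, round(similarity("food", obj), 2))
```

The food items score higher than the remote control because they sit close to "food" in WordNet's hypernym hierarchy, which is what lets a vague request be matched against concrete perceived objects.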
10.
  • Persson, Andreas, 1980-, et al. (author)
  • Learning Actions to Improve the Perceptual Anchoring of Objects
  • 2017
  • In: Frontiers in Robotics and AI. Lausanne: Frontiers Media S.A. ISSN 2296-9144; 3:76
  • Journal article (peer-reviewed), abstract:
    • In this paper, we examine how to ground symbols referring to objects in perceptual data from a robot system by examining object entities and their changes over time. In particular, we approach the challenge by 1) tracking and maintaining object entities over time, and 2) utilizing an artificial neural network to learn the coupling between words referring to actions and the movement patterns of tracked object entities. For this purpose, we propose a framework that relies on the notations presented in perceptual anchoring. We further present a practical extension of the notation such that our framework can track and maintain the history of detected object entities. Our approach is evaluated using everyday objects typically found in a home environment. Our object classification module can detect and classify several hundred object categories. We demonstrate how the framework creates and maintains, both in space and time, representations of objects such as 'spoon' and 'coffee mug'. These representations are later used to train different sequential learning algorithms to learn movement actions such as 'pour' and 'stir'. We finally exemplify how learned movement actions, combined with common-sense knowledge, can further be used to improve the anchoring process itself.
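One plausible reading of the learning component described above: a small recurrent network that maps a tracked object's movement pattern to an action word. Everything below (input format, layer sizes, labels) is illustrative, not the paper's setup.

```python
import torch
import torch.nn as nn

ACTIONS = ["pour", "stir"]               # illustrative action vocabulary

class ActionClassifier(nn.Module):
    """Maps a tracked object's movement pattern to an action word."""

    def __init__(self, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(input_size=3, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, len(ACTIONS))

    def forward(self, trajectory):       # (batch, timesteps, xyz)
        _, (h, _) = self.lstm(trajectory)
        return self.head(h[-1])          # logits over action words

model = ActionClassifier()
trajectory = torch.rand(1, 50, 3)        # 50 tracked (x, y, z) positions
logits = model(trajectory)
print(ACTIONS[int(logits.argmax())])     # untrained, so the label is arbitrary
```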