SwePub
Search the SwePub database


Result list for the search "WFRF:(Bohg Jeannette)"

Search: WFRF:(Bohg Jeannette)

  • Results 1-10 of 22
1.
  • Bergström, Niklas, 1978-, et al. (author)
  • Active Scene Analysis
  • 2010
  • Conference paper (peer-reviewed)
2.
  • Bergström, Niklas, et al. (author)
  • Integration of Visual Cues for Robotic Grasping
  • 2009
  • In: Computer Vision Systems, Proceedings. Berlin: Springer-Verlag. ISBN 9783642046667, pp. 245-254
  • Conference paper (peer-reviewed), abstract:
    • In this paper, we propose a method that generates grasping actions for novel objects based on visual input from a stereo camera. We are integrating two methods that are advantageous either in predicting how to grasp an object or where to apply a grasp. The first one reconstructs a wire frame object model through curve matching. Elementary grasping actions can be associated to parts of this model. The second method predicts grasping points in a 2D contour image of an object. By integrating the information from the two approaches, we can generate a sparse set of full grasp configurations that are of good quality. We demonstrate our approach integrated in a vision system for complex shaped objects as well as in cluttered scenes.
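The record above describes combining two grasp cues: elementary grasps attached to a reconstructed wire-frame model, and grasping points predicted in a 2D contour image. The sketch below only illustrates that integration idea and is not the authors' implementation; the camera intrinsics, the pixel-distance threshold and all function names are assumptions.

# Keep full 3D grasp candidates (e.g. from wire-frame parts) only if their
# contact point projects close to a 2D grasping point predicted in the image.
# Everything here is an illustrative assumption, not the published method.
import numpy as np

def project(point_3d, K):
    """Project a 3D point (camera frame) to pixel coordinates with intrinsics K."""
    u, v, w = K @ point_3d
    return np.array([u / w, v / w])

def integrate_grasp_cues(candidates_3d, grasp_points_2d, K, max_px_dist=15.0):
    """Return the sparse set of 3D grasp candidates supported by 2D predictions.

    candidates_3d  : list of dicts with a 3D 'contact' point and a 'pose'
    grasp_points_2d: (N, 2) array of predicted grasping points in the image
    """
    selected = []
    for cand in candidates_3d:
        px = project(cand["contact"], K)
        dists = np.linalg.norm(grasp_points_2d - px, axis=1)
        if dists.min() < max_px_dist:          # 2D cue supports this candidate
            selected.append(dict(cand, support=float(dists.min())))
    # rank by how strongly the 2D cue supports each candidate
    return sorted(selected, key=lambda c: c["support"])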
3.
4.
  • Bohg, Jeannette, et al. (author)
  • Data-Driven Grasp Synthesis - A Survey
  • 2014
  • In: IEEE Transactions on Robotics. ISSN 1552-3098, 1941-0468. 30:2, pp. 289-309
  • Journal article (peer-reviewed), abstract:
    • We review the work on data-driven grasp synthesis and the methodologies for sampling and ranking candidate grasps. We divide the approaches into three groups based on whether they synthesize grasps for known, familiar, or unknown objects. This structure allows us to identify common object representations and perceptual processes that facilitate the employed data-driven grasp synthesis technique. In the case of known objects, we concentrate on the approaches that are based on object recognition and pose estimation. In the case of familiar objects, the techniques use some form of a similarity matching to a set of previously encountered objects. Finally, for the approaches dealing with unknown objects, the core part is the extraction of specific features that are indicative of good grasps. Our survey provides an overview of the different methodologies and discusses open problems in the area of robot grasping. We also draw a parallel to the classical approaches that rely on analytic formulations.
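The survey above organizes data-driven grasp synthesis by whether the target object is known, familiar, or unknown, each with its own way of sampling and ranking candidate grasps. The sketch below only illustrates that taxonomy as a dispatch over caller-supplied strategies; every name and the 'score' field are placeholders, not anything defined in the paper.

# A minimal, illustrative dispatch over the known / familiar / unknown taxonomy.
# The strategy callables are supplied by the caller; nothing here reproduces a
# specific method from the survey.
from enum import Enum, auto
from typing import Any, Callable, Dict, List

class ObjectKnowledge(Enum):
    KNOWN = auto()      # object model available: recognition + pose estimation
    FAMILIAR = auto()   # resembles previously seen objects: similarity matching
    UNKNOWN = auto()    # no prior model: features indicative of good grasps

def synthesize_grasps(
    observation: Any,
    knowledge: ObjectKnowledge,
    strategies: Dict[ObjectKnowledge, Callable[[Any], List[dict]]],
) -> List[dict]:
    """Sample and rank candidate grasps with the strategy matching the object class."""
    candidates = strategies[knowledge](observation)
    # Ranking is common to all three families; we assume each candidate carries
    # a precomputed 'score' field (an assumption made for this sketch).
    return sorted(candidates, key=lambda g: g["score"], reverse=True)

# Example usage with a trivial stand-in strategy:
stub = lambda obs: [{"grasp": "side", "score": 0.7}, {"grasp": "top", "score": 0.9}]
ranked = synthesize_grasps("rgbd_frame", ObjectKnowledge.UNKNOWN,
                           {k: stub for k in ObjectKnowledge})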
5.
  • Bohg, Jeannette, et al. (author)
  • Grasping Familiar Objects using Shape Context
  • 2009
  • In: ICAR. IEEE. ISBN 9781424448555, pp. 50-55
  • Conference paper (peer-reviewed), abstract:
    • We present work on vision-based robotic grasping. The proposed method relies on extracting and representing the global contour of an object in a monocular image. A suitable grasp is then generated using a learning framework where prototypical grasping points are learned from several examples and then used on novel objects. For representation purposes, we apply the concept of shape context and for learning we use a supervised learning approach in which the classifier is trained with labeled synthetic images. Our results show that a combination of a descriptor based on shape context with a non-linear classification algorithm leads to a stable detection of grasping points for a variety of objects. Furthermore, we will show how our representation supports the inference of a full grasp configuration.
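The abstract above outlines a pipeline: a shape-context descriptor computed on an object's global contour, and a supervised, non-linear classifier trained on labelled synthetic images to detect grasping points. The sketch below is a rough, assumed reconstruction of such a pipeline (bin counts, the RBF kernel and the data layout are guesses), not the method as published.

# Shape-context style log-polar histogram around each contour point, fed to a
# non-linear classifier trained on labelled examples. Illustrative only.
import numpy as np
from sklearn.svm import SVC

def shape_context(contour, index, n_r=5, n_theta=12):
    """Log-polar histogram of all contour points relative to contour[index]."""
    rel = np.delete(contour, index, axis=0) - contour[index]
    r = np.linalg.norm(rel, axis=1)
    theta = np.arctan2(rel[:, 1], rel[:, 0])
    r_edges = np.logspace(np.log10(r.min() + 1e-6), np.log10(r.max() + 1e-6), n_r + 1)
    t_edges = np.linspace(-np.pi, np.pi, n_theta + 1)
    hist, _, _ = np.histogram2d(r, theta, bins=[r_edges, t_edges])
    return (hist / hist.sum()).ravel()       # normalised descriptor

def train_grasp_point_classifier(contours, labels):
    """contours: list of (N_i, 2) arrays; labels: matching lists of 0/1 per point."""
    X = [shape_context(c, i) for c in contours for i in range(len(c))]
    y = [l for lab in labels for l in lab]
    clf = SVC(kernel="rbf")                  # non-linear classifier, as in the abstract
    return clf.fit(np.asarray(X), np.asarray(y))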
6.
  • Bohg, Jeannette, et al. (author)
  • Interactive Perception: Leveraging Action in Perception and Perception in Action
  • 2017
  • In: IEEE Transactions on Robotics. IEEE. ISSN 1552-3098, 1941-0468. 33:6, pp. 1273-1291
  • Journal article (peer-reviewed), abstract:
    • Recent approaches in robot perception follow the insight that perception is facilitated by interaction with the environment. These approaches are subsumed under the term Interactive Perception (IP). This view of perception provides the following benefits. First, interaction with the environment creates a rich sensory signal that would otherwise not be present. Second, knowledge of the regularity in the combined space of sensory data and action parameters facilitates the prediction and interpretation of the sensory signal. In this survey, we postulate this as a principle for robot perception and collect evidence in its support by analyzing and categorizing existing work in this area. We also provide an overview of the most important applications of IP. We close this survey by discussing remaining open questions. With this survey, we hope to help define the field of Interactive Perception and to provide a valuable resource for future research.
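The survey above argues that interaction both creates richer sensory signals and, via regularities in the joint space of sensations and actions, makes those signals easier to predict and interpret. The toy loop below merely illustrates that idea; the push action, the forward model and the belief structure are invented placeholders, not anything specified in the paper.

# A toy interactive-perception loop: act, predict the sensory consequence from a
# model over (sensation, action), observe, and interpret the deviation.
from dataclasses import dataclass
import random

@dataclass
class Belief:
    segments: int        # current guess at how many objects are in the scene

def predict(belief: Belief, action: str) -> Belief:
    """Forward model over the joint sensory/action space (placeholder)."""
    return belief  # a real model would predict how the observation should change

def observe_after(action: str) -> Belief:
    """Simulated sensing after interacting with the environment (placeholder)."""
    return Belief(segments=random.randint(1, 3))

belief = Belief(segments=1)
for step in range(3):
    action = "push_left"                  # interaction generates new evidence
    expected = predict(belief, action)    # prediction from the joint model
    observed = observe_after(action)
    # interpretation: deviation from the prediction is what carries information
    if observed.segments != expected.segments:
        belief = observed                 # revise the scene hypothesis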
7.
  • Bohg, Jeannette, et al. (author)
  • Learning Action-Perception Cycles in Robotics: A Question of Representations and Embodiment
  • 2015
  • In: Pragmatic Turn. MIT Press. ISBN 9780262034326, pp. 309-320
  • Conference paper (peer-reviewed), abstract:
    • Since the 1950s, robotics research has sought to build a general-purpose agent capable of autonomous, open-ended interaction with realistic, unconstrained environments. Cognition is perceived to be at the core of this process, yet understanding has been challenged because cognition is referred to differently within and across research areas, and is not clearly defined. The classic robotics approach is decomposition into functional modules which perform planning, reasoning, and problem solving or provide input to these mechanisms. Although advancements have been made and numerous success stories reported in specific niches, this systems-engineering approach has not succeeded in building such a cognitive agent. The emergence of an action-oriented paradigm offers a new approach: action and perception are no longer separable into functional modules but must be considered in a complete loop. This chapter reviews work on different mechanisms for action-perception learning and discusses the role of embodiment in the design of the underlying representations and learning. It discusses the evaluation of agents and suggests the development of a new embodied Turing test. Appropriate scenarios need to be devised in addition to current competitions, so that abilities can be tested over long time periods.
8.
  • Bohg, Jeannette, et al. (author)
  • Learning grasping points with shape context
  • 2010
  • In: Robotics and Autonomous Systems. Elsevier BV. ISSN 0921-8890, 1872-793X. 58:4, pp. 362-377
  • Journal article (peer-reviewed), abstract:
    • This paper presents work on vision-based robotic grasping. The proposed method adopts a learning framework where prototypical grasping points are learnt from several examples and then used on novel objects. For representation purposes, we apply the concept of shape context and for learning we use a supervised learning approach in which the classifier is trained with labelled synthetic images. We evaluate and compare the performance of linear and non-linear classifiers. Our results show that a combination of a descriptor based on shape context with a non-linear classification algorithm leads to a stable detection of grasping points for a variety of objects.
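This article additionally compares linear and non-linear classifiers on the shape-context descriptors. A minimal sketch of such a comparison is given below, assuming descriptors are already available as feature vectors; the choice of scikit-learn estimators, the cross-validation setup and the demo data are illustrative assumptions.

# Cross-validated accuracy of a linear vs. a non-linear (RBF) classifier on
# grasping-point descriptors. Feature extraction is assumed to exist elsewhere.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC, LinearSVC

def compare_classifiers(X: np.ndarray, y: np.ndarray) -> dict:
    """Return mean 5-fold accuracy for a linear and a non-linear classifier."""
    linear = LinearSVC()
    nonlinear = SVC(kernel="rbf")
    return {
        "linear": cross_val_score(linear, X, y, cv=5).mean(),
        "non-linear": cross_val_score(nonlinear, X, y, cv=5).mean(),
    }

# Example with random stand-in data (real descriptors would come from contours):
rng = np.random.default_rng(0)
X_demo = rng.random((200, 60))
y_demo = rng.integers(0, 2, 200)
print(compare_classifiers(X_demo, y_demo))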
9.
  • Bohg, Jeannette, 1981-, et al. (author)
  • Mind the Gap - Robotic Grasping under Incomplete Observation
  • 2011
  • In: 2011 IEEE International Conference on Robotics and Automation, Shanghai, China, May 9-13, 2011. New York: IEEE. ISBN 9781612843865, pp. 686-693
  • Conference paper (peer-reviewed), abstract:
    • We consider the problem of grasp and manipulation planning when the state of the world is only partially observable. Specifically, we address the task of picking up unknown objects from a table top. The proposed approach to object shape prediction aims at closing the knowledge gaps in the robot's understanding of the world. A completed state estimate of the environment can then be provided to a simulator in which stable grasps and collision-free movements are planned. The proposed approach is based on the observation that many objects commonly in use in a service robotic scenario possess symmetries. We search for the optimal parameters of these symmetries given visibility constraints. Once found, the point cloud is completed and a surface mesh reconstructed. Quantitative experiments show that the predictions are valid approximations of the real object shape. By demonstrating the approach on two very different robotic platforms its generality is emphasized.
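The abstract above describes predicting the unobserved part of an object by exploiting symmetry: search for symmetry parameters consistent with the observation, then complete the point cloud. The sketch below is a much-simplified version of that idea (reflect about candidate planes and keep the best-scoring one); the plane parameterisation and the scoring are assumptions, and the published method additionally accounts for visibility constraints.

# Mirror a partial point cloud about a candidate symmetry plane, score how well
# the mirrored points overlap the observed ones, and keep the best plane.
import numpy as np
from scipy.spatial import cKDTree

def mirror(points: np.ndarray, n: np.ndarray, d: float) -> np.ndarray:
    """Reflect points about the plane n.x + d = 0 (n assumed to be unit length)."""
    dist = points @ n + d
    return points - 2.0 * dist[:, None] * n[None, :]

def completion_score(points: np.ndarray, n: np.ndarray, d: float) -> float:
    """Lower is better: mean distance from mirrored points to the observed cloud."""
    mirrored = mirror(points, n, d)
    dists, _ = cKDTree(points).query(mirrored)
    return float(dists.mean())

def complete_by_symmetry(points: np.ndarray, candidate_planes) -> np.ndarray:
    """Pick the best candidate plane and append the mirrored (predicted) points."""
    n, d = min(candidate_planes, key=lambda p: completion_score(points, *p))
    return np.vstack([points, mirror(points, n, d)])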
10.
  • Bohg, Jeannette, 1981- (author)
  • Multi-Modal Scene Understanding for Robotic Grasping
  • 2011
  • Doctoral thesis (other academic/artistic), abstract:
    • Current robotics research is largely driven by the vision of creating an intelligent being that can perform dangerous, difficult or unpopular tasks. These can for example be exploring the surface of planet Mars or the bottom of the ocean, maintaining a furnace or assembling a car. They can also be more mundane such as cleaning an apartment or fetching groceries. This vision has been pursued since the 1960s when the first robots were built. Some of the tasks mentioned above, especially those in industrial manufacturing, are already frequently performed by robots. Others are still completely out of reach. Especially, household robots are far away from being deployable as general purpose devices. Although advancements have been made in this research area, robots are not yet able to perform household chores robustly in unstructured and open-ended environments given unexpected events and uncertainty in perception and execution. In this thesis, we are analyzing which perceptual and motor capabilities are necessary for the robot to perform common tasks in a household scenario. In that context, an essential capability is to understand the scene that the robot has to interact with. This involves separating objects from the background but also from each other. Once this is achieved, many other tasks become much easier. Configuration of objects can be determined; they can be identified or categorized; their pose can be estimated; free and occupied space in the environment can be outlined. This kind of scene model can then inform grasp planning algorithms to finally pick up objects. However, scene understanding is not a trivial problem and even state-of-the-art methods may fail. Given an incomplete, noisy and potentially erroneously segmented scene model, the questions remain how suitable grasps can be planned and how they can be executed robustly. In this thesis, we propose to equip the robot with a set of prediction mechanisms that allow it to hypothesize about parts of the scene it has not yet observed. Additionally, the robot can also quantify how uncertain it is about this prediction, allowing it to plan actions for exploring the scene at specifically uncertain places. We consider multiple modalities including monocular and stereo vision, haptic sensing and information obtained through a human-robot dialog system. We also study several scene representations of different complexity and their applicability to a grasping scenario. Given an improved scene model from this multi-modal exploration, grasps can be inferred for each object hypothesis. Dependent on whether the objects are known, familiar or unknown, different methodologies for grasp inference apply. In this thesis, we propose novel methods for each of these cases. Furthermore, we demonstrate the execution of these grasps both in a closed and open-loop manner, showing the effectiveness of the proposed methods in real-world scenarios.