SwePub
Search the SwePub database

Results for search "WFRF:(Bohg Jeannette) srt2:(2009)"

  • Results 1-3 of 3
1.
  • Bergström, Niklas, et al. (authors)
  • Integration of Visual Cues for Robotic Grasping
  • 2009
  • In: Computer Vision Systems, Proceedings. Berlin: Springer-Verlag. ISBN 9783642046667, pp. 245-254
  • Conference paper (peer-reviewed), abstract:
    • In this paper, we propose a method that generates grasping actions for novel objects based on visual input from a stereo camera. We integrate two methods that are advantageous in predicting either how to grasp an object or where to apply a grasp. The first reconstructs a wire-frame object model through curve matching; elementary grasping actions can be associated with parts of this model. The second predicts grasping points in a 2D contour image of an object. By integrating the information from the two approaches, we can generate a sparse set of full grasp configurations that are of good quality. We demonstrate our approach integrated in a vision system for complex-shaped objects as well as in cluttered scenes.
2.
  • Bohg, Jeannette, et al. (authors)
  • Grasping Familiar Objects using Shape Context
  • 2009
  • In: ICAR. IEEE. ISBN 9781424448555, pp. 50-55
  • Conference paper (peer-reviewed), abstract:
    • We present work on vision-based robotic grasping. The proposed method relies on extracting and representing the global contour of an object in a monocular image. A suitable grasp is then generated using a learning framework in which prototypical grasping points are learned from several examples and then used on novel objects. For representation purposes, we apply the concept of shape context, and for learning we use a supervised approach in which the classifier is trained with labeled synthetic images. Our results show that a combination of a descriptor based on shape context with a non-linear classification algorithm leads to stable detection of grasping points for a variety of objects. Furthermore, we show how our representation supports the inference of a full grasp configuration.
3.
  • Bohg, Jeannette, 1981-, et al. (authors)
  • Towards Grasp-Oriented Visual Perception for Humanoid Robots
  • 2009
  • In: International Journal of Humanoid Robotics. World Scientific Pub Co Pte Lt. ISSN 0219-8436, E-ISSN 1793-6942. 6:3, pp. 387-434
  • Journal article (peer-reviewed), abstract:
    • A distinct property of robot vision systems is that they are embodied: visual information is extracted for the purpose of moving in and interacting with the environment. Thus, different types of perception-action cycles need to be implemented and evaluated. In this paper, we study the problem of designing a vision system for the purpose of object grasping in everyday environments. This vision system is targeted firstly at interaction with the world through recognition and grasping of objects, and secondly at being an interface between the reasoning and planning module and the real world. The latter provides the vision system with a certain task that drives it and defines a specific context, i.e., search for or identify a certain object and analyze it for potential later manipulation. We deal with cases of: (i) known objects, (ii) objects similar to already known objects, and (iii) unknown objects. The perception-action cycle is connected to the reasoning system based on the idea of affordances. All three cases are also related to the state of the art and the terminology used in neuroscience.