SwePub
Search the SwePub database

  Extended search

Hit list for search "WFRF:(Bohg Jeannette) srt2:(2010-2014)"

Search: WFRF:(Bohg Jeannette) > (2010-2014)

  • Result 1-14 of 14
1.
2.
3.
  • Bohg, Jeannette, et al. (author)
  • Data-Driven Grasp Synthesis - A Survey
  • 2014
  • In: IEEE Transactions on Robotics. ISSN 1552-3098, 1941-0468. 30:2, pp. 289-309
  • Journal article (peer-reviewed). Abstract:
    • We review the work on data-driven grasp synthesis and the methodologies for sampling and ranking candidate grasps. We divide the approaches into three groups based on whether they synthesize grasps for known, familiar, or unknown objects. This structure allows us to identify common object representations and perceptual processes that facilitate the employed data-driven grasp synthesis technique. In the case of known objects, we concentrate on the approaches that are based on object recognition and pose estimation. In the case of familiar objects, the techniques use some form of similarity matching to a set of previously encountered objects. Finally, for the approaches dealing with unknown objects, the core part is the extraction of specific features that are indicative of good grasps. Our survey provides an overview of the different methodologies and discusses open problems in the area of robot grasping. We also draw a parallel to the classical approaches that rely on analytic formulations.
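
The survey above is organized around methodologies for sampling and ranking candidate grasps. A minimal sketch of that generate-and-rank loop in Python, with a made-up scoring function standing in for the learned or analytic quality measures the survey actually covers:

    import numpy as np

    def sample_candidates(rng, n=100):
        # Each row: a hypothetical grasp as gripper position (x, y, z) plus approach angle.
        return rng.uniform(-1.0, 1.0, size=(n, 4))

    def score(candidates):
        # Placeholder quality measure; real systems rank by learned or analytic metrics.
        return -np.linalg.norm(candidates[:, :3], axis=1)

    rng = np.random.default_rng(0)
    candidates = sample_candidates(rng)
    print("best candidate:", candidates[np.argmax(score(candidates))])
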
4.
  • Bohg, Jeannette, et al. (author)
  • Learning grasping points with shape context
  • 2010
  • In: Robotics and Autonomous Systems. Elsevier BV. ISSN 0921-8890, 1872-793X. 58:4, pp. 362-377
  • Journal article (peer-reviewed). Abstract:
    • This paper presents work on vision-based robotic grasping. The proposed method adopts a learning framework where prototypical grasping points are learnt from several examples and then used on novel objects. For representation purposes, we apply the concept of shape context and for learning we use a supervised learning approach in which the classifier is trained with labelled synthetic images. We evaluate and compare the performance of linear and non-linear classifiers. Our results show that a combination of a descriptor based on shape context with a non-linear classification algorithm leads to a stable detection of grasping points for a variety of objects.
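
The comparison described above, descriptors fed to linear and non-linear classifiers, can be sketched with scikit-learn. Random vectors stand in for shape context histograms and the labels are synthetic, so only the linear vs. non-linear set-up itself is illustrated, not the paper's data:

    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    X = rng.random((400, 60))  # stand-ins for 60-D shape context descriptors
    y = (X[:, :30].sum(axis=1) > X[:, 30:].sum(axis=1)).astype(int)  # synthetic grasp-point labels
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    for name, clf in [("linear", SVC(kernel="linear")), ("non-linear (RBF)", SVC(kernel="rbf"))]:
        clf.fit(X_train, y_train)
        print(name, "accuracy:", clf.score(X_test, y_test))
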
5.
  • Bohg, Jeannette, 1981-, et al. (author)
  • Mind the Gap - Robotic Grasping under Incomplete Observation
  • 2011
  • In: 2011 IEEE International Conference on Robotics and Automation, Shanghai, China, May 9-13, 2011. New York: IEEE. ISBN 9781612843865. pp. 686-693
  • Conference paper (peer-reviewed). Abstract:
    • We consider the problem of grasp and manipulation planning when the state of the world is only partially observable. Specifically, we address the task of picking up unknown objects from a table top. The proposed approach to object shape prediction aims at closing the knowledge gaps in the robot's understanding of the world. A completed state estimate of the environment can then be provided to a simulator in which stable grasps and collision-free movements are planned. The proposed approach is based on the observation that many objects commonly in use in a service robotic scenario possess symmetries. We search for the optimal parameters of these symmetries given visibility constraints. Once found, the point cloud is completed and a surface mesh reconstructed. Quantitative experiments show that the predictions are valid approximations of the real object shape. Demonstrating the approach on two very different robotic platforms emphasizes its generality.
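
The core completion step, mirroring observed points across a hypothesized symmetry plane, might look as follows; the paper's search over symmetry parameters under visibility constraints and the mesh reconstruction are omitted:

    import numpy as np

    def mirror(points, plane_point, normal):
        # Reflect each point across the plane given by plane_point and its normal.
        n = normal / np.linalg.norm(normal)
        dist = (points - plane_point) @ n
        return points - 2.0 * dist[:, None] * n

    # Toy cloud: one visible half of a box; mirroring across x = 0 completes it.
    rng = np.random.default_rng(0)
    half = rng.uniform([0.0, -1.0, 0.0], [1.0, 1.0, 1.0], size=(200, 3))
    completed = np.vstack([half, mirror(half, np.zeros(3), np.array([1.0, 0.0, 0.0]))])
    print(completed.shape)  # (400, 3)
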
6.
  • Bohg, Jeannette, 1981- (author)
  • Multi-Modal Scene Understanding for Robotic Grasping
  • 2011
  • Doctoral thesis (other academic/artistic). Abstract:
    • Current robotics research is largely driven by the vision of creating an intelligent being that can perform dangerous, difficult or unpopular tasks. These can for example be exploring the surface of planet Mars or the bottom of the ocean, maintaining a furnace or assembling a car. They can also be more mundane such as cleaning an apartment or fetching groceries. This vision has been pursued since the 1960s when the first robots were built. Some of the tasks mentioned above, especially those in industrial manufacturing, are already frequently performed by robots. Others are still completely out of reach. Especially, household robots are far away from being deployable as general purpose devices. Although advancements have been made in this research area, robots are not yet able to perform household chores robustly in unstructured and open-ended environments given unexpected events and uncertainty in perception and execution. In this thesis, we are analyzing which perceptual and motor capabilities are necessary for the robot to perform common tasks in a household scenario. In that context, an essential capability is to understand the scene that the robot has to interact with. This involves separating objects from the background but also from each other. Once this is achieved, many other tasks become much easier. Configuration of objects can be determined; they can be identified or categorized; their pose can be estimated; free and occupied space in the environment can be outlined. This kind of scene model can then inform grasp planning algorithms to finally pick up objects. However, scene understanding is not a trivial problem and even state-of-the-art methods may fail. Given an incomplete, noisy and potentially erroneously segmented scene model, the questions remain how suitable grasps can be planned and how they can be executed robustly. In this thesis, we propose to equip the robot with a set of prediction mechanisms that allow it to hypothesize about parts of the scene it has not yet observed. Additionally, the robot can also quantify how uncertain it is about this prediction, allowing it to plan actions for exploring the scene at specifically uncertain places. We consider multiple modalities including monocular and stereo vision, haptic sensing and information obtained through a human-robot dialog system. We also study several scene representations of different complexity and their applicability to a grasping scenario. Given an improved scene model from this multi-modal exploration, grasps can be inferred for each object hypothesis. Dependent on whether the objects are known, familiar or unknown, different methodologies for grasp inference apply. In this thesis, we propose novel methods for each of these cases. Furthermore, we demonstrate the execution of these grasps both in a closed and open-loop manner, showing the effectiveness of the proposed methods in real-world scenarios.
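
The perceive-predict-plan loop the thesis outlines can be caricatured as a three-stage pipeline. Every stage below is a trivial stand-in for the thesis's actual models, purely to show the data flow:

    from dataclasses import dataclass

    @dataclass
    class Hypothesis:
        points: list
        uncertainty: float = 1.0

    def segment(scene):
        # Separate objects from the background and from each other.
        return [Hypothesis(points=obj) for obj in scene]

    def predict_unobserved(h):
        # Hypothesize unseen geometry and (here, trivially) update the uncertainty.
        h.uncertainty *= 0.5
        return h

    def plan_grasp(h):
        # Grasp inference on the completed object hypothesis.
        return ("grasp", len(h.points), h.uncertainty)

    scene = [[(0, 0, 0), (0, 0, 1)], [(1, 0, 0)]]
    print([plan_grasp(predict_unobserved(h)) for h in segment(scene)])
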
7.
  • Bohg, Jeannette, et al. (author)
  • Strategies for Multi-Modal Scene Exploration
  • 2010
  • In: IEEE/RSJ 2010 International Conference on Intelligent Robots and Systems (IROS 2010). ISBN 9781424466757. pp. 4509-4515
  • Conference paper (peer-reviewed). Abstract:
    • We propose a method for multi-modal scene exploration where initial object hypotheses formed by active visual segmentation are confirmed and augmented through haptic exploration with a robotic arm. We update the current belief about the state of the map with the detection results and predict yet unknown parts of the map with a Gaussian Process. We show that through the integration of different sensor modalities, we achieve a more complete scene model. We also show that the prediction of the scene structure leads to a valid scene representation even if the map is not fully traversed. Furthermore, we propose different exploration strategies and evaluate them both in simulation and on our robotic platform.
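
The map prediction step can be sketched with a Gaussian Process regressor: fit it to a few probed surface points, then direct the arm wherever the predictive variance, i.e. the uncertainty about the map, is largest. All numbers are invented:

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF

    X = np.array([[0.1], [0.4], [0.9]])  # positions already probed on the table
    y = np.array([0.00, 0.05, 0.00])     # measured surface heights there
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.2)).fit(X, y)

    grid = np.linspace(0.0, 1.0, 50)[:, None]
    mean, std = gp.predict(grid, return_std=True)
    print("explore next at x =", float(grid[np.argmax(std), 0]))
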
8.
  • Bohg, Jeannette, et al. (author)
  • Task-based Grasp Adaptation on a Humanoid Robot
  • 2012
  • In: Proceedings of the 10th IFAC Symposium on Robot Control. pp. 779-786
  • Conference paper (peer-reviewed). Abstract:
    • In this paper, we present an approach towards autonomous grasping of objects according to their category and a given task. Recent advances in the field of object segmentation and categorization as well as task-based grasp inference have been leveraged by integrating them into one pipeline. This allows us to transfer task-specific grasp experience between objects of the same category. The effectiveness of the approach is demonstrated on the humanoid robot ARMAR-IIIa.
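
One way to picture the category- and task-indexed transfer described above is a grasp memory keyed by (category, task); the entries below are invented, not taken from the paper:

    # Hypothetical grasp experience gathered on earlier objects of each category.
    grasp_memory = {
        ("mug", "pouring"): "wrap grasp around the handle",
        ("mug", "hand_over"): "pinch grasp on the rim",
    }

    def infer_grasp(category, task):
        # Transfer task-specific grasp experience within the object category.
        return grasp_memory.get((category, task), "fall back to task-agnostic grasp")

    print(infer_grasp("mug", "pouring"))
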
9.
10.
  • Gratal, Xavi, et al. (author)
  • Visual servoing on unknown objects
  • 2012
  • In: Mechatronics (Oxford). Elsevier BV. ISSN 0957-4158, 1873-4006. 22:4, pp. 423-435
  • Journal article (peer-reviewed). Abstract:
    • We study visual servoing in a framework of detection and grasping of unknown objects. Classically, visual servoing has been used for applications where the object to be servoed on is known to the robot prior to the task execution. In addition, most of the methods concentrate on aligning the robot hand with the object without grasping it. In our work, visual servoing techniques are used as building blocks in a system capable of detecting and grasping unknown objects in natural scenes. We show how different visual servoing techniques facilitate a complete grasping cycle.
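
The building block the paper leverages is classical visual servoing. The standard image-based control law, v = -gain * pinv(L) @ e, which drives the image-feature error e to zero through the pseudo-inverse of the interaction matrix L, can be written as follows, with a toy interaction matrix rather than one derived from real image features:

    import numpy as np

    def ibvs_velocity(L, error, gain=0.5):
        # Classical image-based visual servoing: v = -gain * pinv(L) @ error.
        return -gain * np.linalg.pinv(L) @ error

    L = np.eye(6)  # toy interaction matrix; in practice it depends on the features
    error = np.array([0.10, -0.20, 0.00, 0.00, 0.05, 0.00])  # current minus desired features
    print(ibvs_velocity(L, error))
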
11.
  • Johnson-Roberson, Matthew, et al. (author)
  • Attention-based Active 3D Point Cloud Segmentation
  • 2010
  • In: IEEE/RSJ 2010 International Conference on Intelligent Robots and Systems (IROS 2010). ISBN 9781424466757. pp. 1165-1170
  • Conference paper (peer-reviewed). Abstract:
    • In this paper we present a framework for the segmentation of multiple objects from a 3D point cloud. We extend traditional image segmentation techniques into a full 3D representation. The proposed technique relies on a state-of-the-art min-cut framework to perform a fully 3D global multi-class labeling in a principled manner. Thereby, we extend our previous work in which a single object was actively segmented from the background. We also examine several seeding methods to bootstrap the graphical model-based energy minimization, and compare these methods over challenging scenes. All results are generated on real-world data gathered with an active vision robotic head. We present quantitative results over aggregate sets as well as visual results on specific examples.
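
A binary foreground/background cut on a three-node toy graph shows the min-cut machinery in miniature. The paper performs a multi-class labeling over full 3D point clouds, which this sketch (with invented costs) does not reproduce:

    import networkx as nx

    G = nx.DiGraph()
    # Unary terms from seeds: cutting S->v labels v background, v->T labels it foreground.
    for node, (fg_cost, bg_cost) in {"a": (0, 9), "b": (2, 4), "c": (9, 0)}.items():
        G.add_edge("S", node, capacity=bg_cost)
        G.add_edge(node, "T", capacity=fg_cost)
    # Pairwise smoothness between neighbouring points, in both directions.
    for u, v in [("a", "b"), ("b", "c")]:
        G.add_edge(u, v, capacity=3)
        G.add_edge(v, u, capacity=3)

    cut_value, (source_side, sink_side) = nx.minimum_cut(G, "S", "T")
    print("foreground:", sorted(source_side - {"S"}), "background:", sorted(sink_side - {"T"}))
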
12.
  • Johnson-Roberson, Matthew, et al. (author)
  • Enhanced Visual Scene Understanding through Human-Robot Dialog
  • 2011
  • In: 2011 IEEE/RSJ International Conference on Intelligent Robots and Systems. IEEE. ISBN 9781612844541. pp. 3342-3348
  • Conference paper (peer-reviewed). Abstract:
    • We propose a novel human-robot interaction framework for robust visual scene understanding. Without any a priori knowledge about the objects, the task of the robot is to correctly enumerate how many of them are in the scene and segment them from the background. Our approach builds on top of state-of-the-art computer vision methods, generating object hypotheses through segmentation. This process is combined with a natural dialog system, thus including a ‘human in the loop’ where, by exploiting the natural conversation of an advanced dialog system, the robot gains knowledge about ambiguous situations. We present an entropy-based system allowing the robot to detect the poorest object hypotheses and query the user for arbitration. Based on the information obtained from the human-robot dialog, the scene segmentation can be re-seeded and thereby improved. We present experimental results on real data that show an improved segmentation performance compared to segmentation without interaction.
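
The entropy criterion for deciding which hypothesis to ask the user about reduces to a few lines over per-hypothesis label distributions; the probabilities below are invented:

    import math

    def entropy(dist):
        # Shannon entropy (bits) of a discrete distribution over interpretations.
        return -sum(p * math.log2(p) for p in dist if p > 0)

    # P(correct segment) vs. P(under/over-segmented) for each object hypothesis.
    hypotheses = {"h1": [0.95, 0.05], "h2": [0.55, 0.45], "h3": [0.80, 0.20]}
    most_ambiguous = max(hypotheses, key=lambda h: entropy(hypotheses[h]))
    print("query the user about", most_ambiguous)
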
13.
14.
  • Leon, Beatriz, et al. (author)
  • OpenGRASP: A Toolkit for Robot Grasping Simulation
  • 2010
  • In: Simulation, Modeling, and Programming for Autonomous Robots, Second International Conference, SIMPAR 2010, Darmstadt, Germany, November 15-18, 2010. Berlin/Heidelberg: Springer. ISBN 9783642173189. pp. 109-120
  • Conference paper (peer-reviewed). Abstract:
    • Simulation is essential for different robotic research fields such as mobile robotics, motion planning and grasp planning. For grasping in particular, there are no software simulation packages that provide a holistic environment which can deal with the variety of aspects associated with this problem. These aspects include the development and testing of new algorithms and the modeling of environments and robots, including the modeling of actuators, sensors and contacts. In this paper, we present a new simulation toolkit for grasping and dexterous manipulation called OpenGRASP, addressing those aspects in addition to extensibility, interoperability and public availability. OpenGRASP is based on a modular architecture that supports the creation and addition of new functionality and the integration of existing and widely-used technologies and standards. In addition, a designated editor has been created for the generation and migration of such models. We demonstrate the current state of OpenGRASP’s development and its application in a grasp evaluation environment.
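
The modular, plugin-style extensibility the abstract emphasizes can be suggested with a small registry pattern; this is a generic sketch, not OpenGRASP's actual API:

    # Hypothetical plugin registry in the spirit of an extensible simulator.
    registry = {}

    def plugin(name):
        def register(cls):
            registry[name] = cls
            return cls
        return register

    @plugin("parallel_gripper")
    class ParallelGripper:
        def actuate(self, width):
            return f"closing gripper to {width:.2f} m"

    print(registry["parallel_gripper"]().actuate(0.05))
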