SwePub
Search the SwePub database


Result list for the search "WFRF:(Bohg Jeannette)"

Search: WFRF:(Bohg Jeannette)

  • Results 1-22 of 22
1.
  • Bergström, Niklas, 1978-, et al. (author)
  • Active Scene Analysis
  • 2010
  • Conference paper (peer-reviewed)
  •  
2.
  • Bergström, Niklas, et al. (author)
  • Integration of Visual Cues for Robotic Grasping
  • 2009
  • In: COMPUTER VISION SYSTEMS, PROCEEDINGS. - Berlin : Springer-Verlag Berlin. - 9783642046667 ; pp. 245-254
  • Conference paper (peer-reviewed), abstract
    • In this paper, we propose a method that generates grasping actions for novel objects based on visual input from a stereo camera. We integrate two methods that are advantageous either in predicting how to grasp an object or in predicting where to apply a grasp. The first one reconstructs a wire-frame object model through curve matching; elementary grasping actions can be associated with parts of this model. The second method predicts grasping points in a 2D contour image of an object. By integrating the information from the two approaches, we can generate a sparse set of full grasp configurations that are of good quality. We demonstrate our approach integrated in a vision system for complex-shaped objects as well as in cluttered scenes.
  •  
3.
  •  
4.
  • Bohg, Jeannette, et al. (author)
  • Data-Driven Grasp Synthesis-A Survey
  • 2014
  • In: IEEE Transactions on Robotics. - 1552-3098 .- 1941-0468. ; 30:2, pp. 289-309
  • Journal article (peer-reviewed), abstract
    • We review the work on data-driven grasp synthesis and the methodologies for sampling and ranking candidate grasps. We divide the approaches into three groups based on whether they synthesize grasps for known, familiar, or unknown objects. This structure allows us to identify common object representations and perceptual processes that facilitate the employed data-driven grasp synthesis technique. In the case of known objects, we concentrate on the approaches that are based on object recognition and pose estimation. In the case of familiar objects, the techniques use some form of a similarity matching to a set of previously encountered objects. Finally, for the approaches dealing with unknown objects, the core part is the extraction of specific features that are indicative of good grasps. Our survey provides an overview of the different methodologies and discusses open problems in the area of robot grasping. We also draw a parallel to the classical approaches that rely on analytic formulations.
  •  
5.
  • Bohg, Jeannette, et al. (author)
  • Grasping Familiar Objects using Shape Context
  • 2009
  • In: ICAR. - : IEEE. - 9781424448555 ; pp. 50-55
  • Conference paper (peer-reviewed), abstract
    • We present work on vision based robotic grasping. The proposed method relies on extracting and representing the global contour of an object in a monocular image. A suitable grasp is then generated using a learning framework where prototypical grasping points are learned from several examples and then used on novel objects. For representation purposes, we apply the concept of shape context and for learning we use a supervised learning approach in which the classifier is trained with labeled synthetic images. Our results show that a combination of a descriptor based on shape context with a non-linear classification algorithm leads to a stable detection of grasping points for a variety of objects. Furthermore, we will show how our representation supports the inference of a full grasp configuration.
  •  
6.
  • Bohg, Jeannette, et al. (author)
  • Interactive Perception : Leveraging Action in Perception and Perception in Action
  • 2017
  • In: IEEE Transactions on Robotics. - : IEEE. - 1552-3098 .- 1941-0468. ; 33:6, pp. 1273-1291
  • Journal article (peer-reviewed), abstract
    • Recent approaches in robot perception follow the insight that perception is facilitated by interaction with the environment. These approaches are subsumed under the term Interactive Perception (IP). This view of perception provides the following benefits. First, interaction with the environment creates a rich sensory signal that would otherwise not be present. Second, knowledge of the regularity in the combined space of sensory data and action parameters facilitates the prediction and interpretation of the sensory signal. In this survey, we postulate this as a principle for robot perception and collect evidence in its support by analyzing and categorizing existing work in this area. We also provide an overview of the most important applications of IP. We close this survey by discussing remaining open questions. With this survey, we hope to help define the field of Interactive Perception and to provide a valuable resource for future research.
  •  
7.
  • Bohg, Jeannette, et al. (author)
  • Learning Action-Perception Cycles in Robotics: A Question of Representations and Embodiment
  • 2015
  • In: PRAGMATIC TURN. - : MIT PRESS. - 9780262034326 ; pp. 309-320
  • Conference paper (peer-reviewed), abstract
    • Since the 1950s, robotics research has sought to build a general-purpose agent capable of autonomous, open-ended interaction with realistic, unconstrained environments. Cognition is perceived to be at the core of this process, yet understanding has been challenged because cognition is referred to differently within and across research areas, and is not clearly defined. The classic robotics approach is decomposition into functional modules which perform planning, reasoning, and problem solving or provide input to these mechanisms. Although advancements have been made and numerous success stories reported in specific niches, this systems-engineering approach has not succeeded in building such a cognitive agent. The emergence of an action-oriented paradigm offers a new approach: action and perception are no longer separable into functional modules but must be considered in a complete loop. This chapter reviews work on different mechanisms for action-perception learning and discusses the role of embodiment in the design of the underlying representations and learning. It discusses the evaluation of agents and suggests the development of a new embodied Turing test. Appropriate scenarios need to be devised in addition to current competitions, so that abilities can be tested over long time periods.
  •  
8.
  • Bohg, Jeannette, et al. (author)
  • Learning grasping points with shape context
  • 2010
  • In: Robotics and Autonomous Systems. - : Elsevier BV. - 0921-8890 .- 1872-793X. ; 58:4, pp. 362-377
  • Journal article (peer-reviewed), abstract
    • This paper presents work on vision based robotic grasping. The proposed method adopts a learning framework where prototypical grasping points are learnt from several examples and then used on novel objects. For representation purposes, we apply the concept of shape context and for learning we use a supervised learning approach in which the classifier is trained with labelled synthetic images. We evaluate and compare the performance of linear and non-linear classifiers. Our results show that a combination of a descriptor based on shape context with a non-linear classification algorithm leads to a stable detection of grasping points for a variety of objects.
  •  
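Entries 5 and 8 above describe detecting grasping points by combining a shape context descriptor of an object contour with a non-linear classifier. The sketch below is purely illustrative and not the authors' implementation; the bin counts, radii, RBF-kernel SVM and training interface are assumptions chosen for brevity.

    # Illustrative sketch: a basic 2D shape context descriptor fed to a non-linear classifier.
    import numpy as np
    from sklearn.svm import SVC

    def shape_context(points, index, n_r=5, n_theta=12, r_min=0.125, r_max=2.0):
        """Log-polar histogram of all contour points relative to points[index]."""
        diff = np.delete(points, index, axis=0) - points[index]
        dist = np.linalg.norm(diff, axis=1)
        dist = dist / (np.mean(dist) + 1e-9)            # scale invariance
        theta = np.arctan2(diff[:, 1], diff[:, 0])       # angles in [-pi, pi]
        r_edges = np.logspace(np.log10(r_min), np.log10(r_max), n_r + 1)
        r_bin = np.clip(np.digitize(dist, r_edges) - 1, 0, n_r - 1)
        t_bin = np.clip(((theta + np.pi) / (2 * np.pi) * n_theta).astype(int), 0, n_theta - 1)
        hist = np.zeros((n_r, n_theta))
        np.add.at(hist, (r_bin, t_bin), 1.0)
        return hist.ravel() / (hist.sum() + 1e-9)

    def train_grasp_point_classifier(contours, labels):
        """contours: list of (N, 2) arrays; labels: per-point 1 = grasping point, 0 = not."""
        X = [shape_context(c, i) for c, ls in zip(contours, labels) for i in range(len(c))]
        y = [l for ls in labels for l in ls]
        clf = SVC(kernel="rbf", C=10.0, gamma="scale", probability=True)
        clf.fit(np.array(X), np.array(y))
        return clf

The non-linear (RBF) kernel stands in for the non-linear classification step that the abstracts report as more stable than a linear classifier; swapping in a linear SVC reproduces the baseline they compare against.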
9.
  • Bohg, Jeannette, 1981-, et al. (author)
  • Mind the Gap - Robotic Grasping under Incomplete Observation
  • 2011
  • In: 2011 IEEE International Conference on Robotics and Automation, Shanghai, China, May 9-13, 2011. - New York : IEEE. - 9781612843865 ; pp. 686-693
  • Conference paper (peer-reviewed), abstract
    • We consider the problem of grasp and manipulation planning when the state of the world is only partially observable. Specifically, we address the task of picking up unknown objects from a table top. The proposed approach to object shape prediction aims at closing the knowledge gaps in the robot's understanding of the world. A completed state estimate of the environment can then be provided to a simulator in which stable grasps and collision-free movements are planned. The proposed approach is based on the observation that many objects commonly in use in a service robotic scenario possess symmetries. We search for the optimal parameters of these symmetries given visibility constraints. Once found, the point cloud is completed and a surface mesh reconstructed. Quantitative experiments show that the predictions are valid approximations of the real object shape. By demonstrating the approach on two very different robotic platforms its generality is emphasized.
  •  
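Entry 9 above predicts the unseen part of an object by exploiting reflective symmetry before planning grasps on the completed shape. Below is a minimal illustrative sketch of only the mirroring step, assuming a symmetry plane has already been estimated; the paper's search over symmetry parameters, visibility constraints and mesh reconstruction are omitted.

    # Illustrative sketch: completing a partial point cloud by mirroring it across an
    # assumed, already-estimated plane of reflective symmetry (not the authors' code).
    import numpy as np

    def reflect_points(points, plane_point, plane_normal):
        """Mirror an (N, 3) cloud across the plane through plane_point with normal plane_normal."""
        n = plane_normal / np.linalg.norm(plane_normal)
        d = (points - plane_point) @ n                  # signed distance of each point to the plane
        return points - 2.0 * d[:, None] * n

    def complete_by_symmetry(points, plane_point, plane_normal):
        """Return the observed cloud augmented with its mirrored copy."""
        mirrored = reflect_points(points, plane_point, plane_normal)
        return np.vstack([points, mirrored])

    # Example: a one-sided scan completed about the y-z plane through the origin.
    # completed = complete_by_symmetry(partial_cloud, np.zeros(3), np.array([1.0, 0.0, 0.0]))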
10.
  • Bohg, Jeannette, 1981- (author)
  • Multi-Modal Scene Understanding for Robotic Grasping
  • 2011
  • Doctoral thesis (other academic/artistic), abstract
    • Current robotics research is largely driven by the vision of creating an intelligent being that can perform dangerous, difficult or unpopular tasks. These can for example be exploring the surface of planet Mars or the bottom of the ocean, maintaining a furnace or assembling a car. They can also be more mundane, such as cleaning an apartment or fetching groceries. This vision has been pursued since the 1960s, when the first robots were built. Some of the tasks mentioned above, especially those in industrial manufacturing, are already frequently performed by robots. Others are still completely out of reach. Especially, household robots are far away from being deployable as general-purpose devices. Although advancements have been made in this research area, robots are not yet able to perform household chores robustly in unstructured and open-ended environments given unexpected events and uncertainty in perception and execution. In this thesis, we analyze which perceptual and motor capabilities are necessary for the robot to perform common tasks in a household scenario. In that context, an essential capability is to understand the scene that the robot has to interact with. This involves separating objects from the background but also from each other. Once this is achieved, many other tasks become much easier: the configuration of objects can be determined; they can be identified or categorized; their pose can be estimated; free and occupied space in the environment can be outlined. This kind of scene model can then inform grasp planning algorithms to finally pick up objects. However, scene understanding is not a trivial problem and even state-of-the-art methods may fail. Given an incomplete, noisy and potentially erroneously segmented scene model, the questions remain how suitable grasps can be planned and how they can be executed robustly. In this thesis, we propose to equip the robot with a set of prediction mechanisms that allow it to hypothesize about parts of the scene it has not yet observed. Additionally, the robot can also quantify how uncertain it is about this prediction, allowing it to plan actions for exploring the scene at specifically uncertain places. We consider multiple modalities including monocular and stereo vision, haptic sensing and information obtained through a human-robot dialog system. We also study several scene representations of different complexity and their applicability to a grasping scenario. Given an improved scene model from this multi-modal exploration, grasps can be inferred for each object hypothesis. Depending on whether the objects are known, familiar or unknown, different methodologies for grasp inference apply. In this thesis, we propose novel methods for each of these cases. Furthermore, we demonstrate the execution of these grasps both in a closed- and open-loop manner, showing the effectiveness of the proposed methods in real-world scenarios.
  •  
11.
  • Bohg, Jeannette, et al. (author)
  • Strategies for Multi-Modal Scene Exploration
  • 2010
  • In: IEEE/RSJ 2010 INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS (IROS 2010). - 9781424466757 ; pp. 4509-4515
  • Conference paper (peer-reviewed), abstract
    • We propose a method for multi-modal scene exploration where initial object hypotheses formed by active visual segmentation are confirmed and augmented through haptic exploration with a robotic arm. We update the current belief about the state of the map with the detection results and predict yet unknown parts of the map with a Gaussian Process. We show that through the integration of different sensor modalities, we achieve a more complete scene model. We also show that the prediction of the scene structure leads to a valid scene representation even if the map is not fully traversed. Furthermore, we propose different exploration strategies and evaluate them both in simulation and on our robotic platform.
  •  
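Entry 11 above predicts the yet-unknown parts of the scene map with a Gaussian Process and uses the prediction to guide exploration. A minimal sketch of this idea follows, assuming a 2D height-map representation and an off-the-shelf GP; the kernel, noise level and greedy uncertainty-driven strategy are illustrative assumptions, not the authors' choices.

    # Illustrative sketch: GP regression over observed map cells, with predictive
    # uncertainty used to pick the next cell to explore.
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel

    def predict_map(observed_xy, observed_height, query_xy):
        """Fit a GP to observed (x, y) -> height samples and predict mean and std on a query grid."""
        gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.05) + WhiteKernel(1e-4),
                                      normalize_y=True)
        gp.fit(observed_xy, observed_height)
        mean, std = gp.predict(query_xy, return_std=True)
        return mean, std

    def next_exploration_target(query_xy, std):
        """Greedy strategy: probe the cell the GP is most uncertain about."""
        return query_xy[np.argmax(std)]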
12.
  • Bohg, Jeannette, et al. (author)
  • Task-based Grasp Adaptation on a Humanoid Robot
  • 2012
  • In: Proceedings of the 10th IFAC Symposium on Robot Control. ; pp. 779-786
  • Conference paper (peer-reviewed), abstract
    • In this paper, we present an approach towards autonomous grasping of objects according to their category and a given task. Recent advances in the field of object segmentation and categorization as well as task-based grasp inference have been leveraged by integrating them into one pipeline. This allows us to transfer task-specific grasp experience between objects of the same category. The effectiveness of the approach is demonstrated on the humanoid robot ARMAR-IIIa.
  •  
13.
  • Bohg, Jeannette, 1981-, et al. (author)
  • Towards Grasp-Oriented Visual Perception for Humanoid Robots
  • 2009
  • In: International Journal of Humanoid Robotics. - : World Scientific Pub Co Pte Lt. - 0219-8436 .- 1793-6942. ; 6:3, pp. 387-434
  • Journal article (peer-reviewed), abstract
    • A distinct property of robot vision systems is that they are embodied. Visual information is extracted for the purpose of moving in and interacting with the environment. Thus, different types of perception-action cycles need to be implemented and evaluated. In this paper, we study the problem of designing a vision system for the purpose of object grasping in everyday environments. This vision system is firstly targeted at the interaction with the world through recognition and grasping of objects and secondly at being an interface for the reasoning and planning module to the real world. The latter provides the vision system with a certain task that drives it and defines a specific context, i.e. search for or identify a certain object and analyze it for potential later manipulation. We deal with cases of: (i) known objects, (ii) objects similar to already known objects, and (iii) unknown objects. The perception-action cycle is connected to the reasoning system based on the idea of affordances. All three cases are also related to the state of the art and the terminology in the neuroscientific area.
  •  
14.
  •  
15.
  • Gratal, Xavi, et al. (author)
  • Visual servoing on unknown objects
  • 2012
  • In: Mechatronics (Oxford). - : Elsevier BV. - 0957-4158 .- 1873-4006. ; 22:4, pp. 423-435
  • Journal article (peer-reviewed), abstract
    • We study visual servoing in a framework of detection and grasping of unknown objects. Classically, visual servoing has been used for applications where the object to be servoed on is known to the robot prior to the task execution. In addition, most of the methods concentrate on aligning the robot hand with the object without grasping it. In our work, visual servoing techniques are used as building blocks in a system capable of detecting and grasping unknown objects in natural scenes. We show how different visual servoing techniques facilitate a complete grasping cycle.
  •  
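Entry 15 above uses visual servoing techniques as building blocks for detecting and grasping unknown objects. As background only, the classical image-based visual servoing law v = -lambda * pinv(L) * e for point features can be sketched as follows; the feature depths, gain and point-feature parameterization are assumptions, and this is not the authors' system.

    # Illustrative sketch: classical image-based visual servoing for normalized point features.
    import numpy as np

    def interaction_matrix(x, y, Z):
        """Interaction (image Jacobian) matrix of a normalized image point (x, y) at depth Z."""
        return np.array([[-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x * x), y],
                         [0.0, -1.0 / Z, y / Z, 1.0 + y * y, -x * y, -x]])

    def ibvs_velocity(features, desired, depths, gain=0.5):
        """Camera twist (vx, vy, vz, wx, wy, wz) that drives current features toward desired ones."""
        L = np.vstack([interaction_matrix(x, y, Z) for (x, y), Z in zip(features, depths)])
        error = (np.asarray(features) - np.asarray(desired)).ravel()
        return -gain * np.linalg.pinv(L) @ error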
16.
  • Johnson-Roberson, Matthew, et al. (author)
  • Attention-based Active 3D Point Cloud Segmentation
  • 2010
  • In: IEEE/RSJ 2010 INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS (IROS 2010). - 9781424466757 ; pp. 1165-1170
  • Conference paper (peer-reviewed), abstract
    • In this paper we present a framework for the segmentation of multiple objects from a 3D point cloud. We extend traditional image segmentation techniques into a full 3D representation. The proposed technique relies on a state-of-the-art min-cut framework to perform a fully 3D global multi-class labeling in a principled manner. Thereby, we extend our previous work in which a single object was actively segmented from the background. We also examine several seeding methods to bootstrap the graphical-model-based energy minimization, and these methods are compared over challenging scenes. All results are generated on real-world data gathered with an active vision robotic head. We present quantitative results over aggregate sets as well as visual results on specific examples.
  •  
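Entry 16 above performs multi-object 3D segmentation with a min-cut framework. The sketch below shows the underlying graph-cut machinery for the simpler binary foreground/background case only; the unary and pairwise weights and the use of networkx are assumptions for illustration, not the paper's multi-class implementation.

    # Illustrative sketch: binary segmentation of point/voxel nodes via s-t minimum cut.
    import networkx as nx

    def segment_min_cut(nodes, cost_bg, cost_fg, edges):
        """nodes: node ids; cost_bg/cost_fg: dicts node -> penalty for labeling it background/foreground;
        edges: (i, j, weight) pairwise smoothness terms between neighbouring nodes."""
        G = nx.DiGraph()
        for n in nodes:
            G.add_edge("FG", n, capacity=cost_bg[n])   # paid if n ends up on the background side
            G.add_edge(n, "BG", capacity=cost_fg[n])   # paid if n ends up on the foreground side
        for i, j, w in edges:                          # smoothness: cutting neighbours apart costs w
            G.add_edge(i, j, capacity=w)
            G.add_edge(j, i, capacity=w)
        _, (fg_side, bg_side) = nx.minimum_cut(G, "FG", "BG")
        return fg_side - {"FG"}, bg_side - {"BG"}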
17.
  • Johnson-Roberson, Matthew, et al. (author)
  • Enhanced Visual Scene Understanding through Human-Robot Dialog
  • 2011
  • In: 2011 IEEE/RSJ International Conference on Intelligent Robots and Systems. - : IEEE. - 9781612844541 ; pp. 3342-3348
  • Conference paper (peer-reviewed), abstract
    • We propose a novel human-robot interaction framework for robust visual scene understanding. Without any a priori knowledge about the objects, the task of the robot is to correctly enumerate how many of them are in the scene and segment them from the background. Our approach builds on top of state-of-the-art computer vision methods, generating object hypotheses through segmentation. This process is combined with a natural dialog system, thus including a ‘human in the loop’ where, by exploiting the natural conversation of an advanced dialog system, the robot gains knowledge about ambiguous situations. We present an entropy-based system allowing the robot to detect the poorest object hypotheses and query the user for arbitration. Based on the information obtained from the human-robot dialog, the scene segmentation can be re-seeded and thereby improved. We present experimental results on real data that show an improved segmentation performance compared to segmentation without interaction.
  •  
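Entry 17 above ranks object hypotheses by entropy and asks the user about the most ambiguous one. A minimal illustrative sketch of that arbitration step follows; the probability inputs are assumed to come from the segmentation stage and are not taken from the paper.

    # Illustrative sketch: pick the object hypothesis with the highest Shannon entropy
    # over its candidate explanations and query the user about it.
    import numpy as np

    def entropy(p, eps=1e-12):
        p = np.asarray(p, dtype=float)
        p = p / (p.sum() + eps)                       # normalize to a probability distribution
        return float(-np.sum(p * np.log(p + eps)))

    def most_ambiguous_hypothesis(hypothesis_probs):
        """hypothesis_probs: dict hypothesis_id -> probability vector over candidate explanations."""
        scores = {h: entropy(p) for h, p in hypothesis_probs.items()}
        return max(scores, key=scores.get)            # the hypothesis to query the user about

    # e.g. most_ambiguous_hypothesis({"obj1": [0.9, 0.1], "obj2": [0.5, 0.3, 0.2]}) -> "obj2"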
18.
  •  
19.
  • Kokic, Mia, 1992-, et al. (author)
  • Learning Task-Oriented Grasping From Human Activity Datasets
  • 2020
  • In: IEEE Robotics and Automation Letters. - : IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC. - 2377-3766. ; 5:2, pp. 3352-3359
  • Journal article (peer-reviewed), abstract
    • We propose to leverage a real-world, human activity RGB dataset to teach a robot Task-Oriented Grasping (TOG). We develop a model that takes as input an RGB image and outputs a hand pose and configuration as well as an object pose and shape. We follow the insight that jointly estimating hand and object poses increases accuracy compared to estimating these quantities independently of each other. Given the trained model, we process an RGB dataset to automatically obtain the data to train a TOG model. This model takes as input an object point cloud and outputs a suitable region for task-specific grasping. Our ablation study shows that training an object pose predictor with the hand pose information (and vice versa) is better than training without this information. Furthermore, our results on a real-world dataset show the applicability and competitiveness of our method over the state-of-the-art. Experiments with a robot demonstrate that our method can allow a robot to perform TOG on novel objects.
  •  
20.
  • Kokic, Mia, 1992-, et al. (author)
  • Learning to Estimate Pose and Shape of Hand-Held Objects from RGB Images
  • 2019
  • In: IEEE International Conference on Intelligent Robots and Systems. - : Institute of Electrical and Electronics Engineers (IEEE). ; pp. 3980-3987
  • Conference paper (peer-reviewed), abstract
    • We develop a system for modeling hand-object interactions in 3D from RGB images that show a hand which is holding a novel object from a known category. We design a Convolutional Neural Network (CNN) for Hand-held Object Pose and Shape estimation called HOPS-Net and utilize prior work to estimate the hand pose and configuration. We leverage the insight that information about the hand facilitates object pose and shape estimation by incorporating the hand into both training and inference of the object pose and shape as well as the refinement of the estimated pose. The network is trained on a large synthetic dataset of objects in interaction with a human hand. To bridge the gap between real and synthetic images, we employ an image-to-image translation model (Augmented CycleGAN) that generates realistically textured objects given a synthetic rendering. This provides a scalable way of generating annotated data for training HOPS-Net. Our quantitative experiments show that even noisy hand parameters significantly help object pose and shape estimation. The qualitative experiments show results of pose and shape estimation of objects held by a hand 'in the wild'.
  •  
21.
  • Leon, Beatriz, et al. (author)
  • OpenGRASP : A Toolkit for Robot Grasping Simulation
  • 2010
  • In: Simulation, Modeling, and Programming for Autonomous Robots, Second International Conference, SIMPAR 2010, Darmstadt, Germany, November 15-18, 2010. - Berlin / Heidelberg : Springer. - 9783642173189 ; pp. 109-120
  • Conference paper (peer-reviewed), abstract
    • Simulation is essential for different robotic research fields such as mobile robotics, motion planning and grasp planning. For grasping in particular, there are no software simulation packages which provide a holistic environment that can deal with the variety of aspects associated with this problem. These aspects include the development and testing of new algorithms and the modeling of environments and robots, including the modeling of actuators, sensors and contacts. In this paper, we present a new simulation toolkit for grasping and dexterous manipulation called OpenGRASP, addressing those aspects in addition to extensibility, interoperability and public availability. OpenGRASP is based on a modular architecture that supports the creation and addition of new functionality and the integration of existing and widely used technologies and standards. In addition, a designated editor has been created for the generation and migration of such models. We demonstrate the current state of OpenGRASP's development and its application in a grasp evaluation environment.
  •  
22.
  • Newbury, Rhys, et al. (author)
  • Deep Learning Approaches to Grasp Synthesis: A Review
  • 2023
  • In: IEEE Transactions on Robotics. - : Institute of Electrical and Electronics Engineers (IEEE). - 1552-3098 .- 1941-0468. ; 39:5, pp. 3994-4015
  • Journal article (peer-reviewed), abstract
    • Grasping is the process of picking up an object by applying forces and torques at a set of contacts. Recent advances in deep learning methods have allowed rapid progress in robotic object grasping. In this systematic review, we surveyed the publications over the last decade, with a particular interest in grasping an object using all six degrees of freedom of the end-effector pose. Our review found four common methodologies for robotic grasping: sampling-based approaches, direct regression, reinforcement learning, and exemplar approaches. In addition, we found two 'supporting methods' around grasping that use deep learning to support the grasping process: shape approximation and affordances. We have distilled the publications found in this systematic review (85 papers) into ten key takeaways we consider crucial for future robotic grasping and manipulation research.
  •  
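Entry 22 above names direct regression as one of four common grasp-synthesis methodologies. Purely as an illustration of that category (not code from the review), a minimal PointNet-style regressor mapping an object point cloud straight to a 6-DoF grasp pose could look like the sketch below; the layer sizes and the translation-plus-quaternion output parameterization are assumptions.

    # Illustrative sketch: direct regression of a grasp pose from a point cloud.
    import torch
    import torch.nn as nn

    class DirectGraspRegressor(nn.Module):
        def __init__(self):
            super().__init__()
            self.point_mlp = nn.Sequential(nn.Linear(3, 64), nn.ReLU(),
                                           nn.Linear(64, 128), nn.ReLU())
            self.head = nn.Sequential(nn.Linear(128, 128), nn.ReLU(),
                                      nn.Linear(128, 7))      # 3 translation + 4 quaternion

        def forward(self, points):                             # points: (B, N, 3)
            features = self.point_mlp(points)                  # per-point features (B, N, 128)
            global_feat = features.max(dim=1).values           # order-invariant pooling (B, 128)
            out = self.head(global_feat)
            translation = out[:, :3]
            quaternion = nn.functional.normalize(out[:, 3:], dim=1)
            return translation, quaternion

    # e.g. t, q = DirectGraspRegressor()(torch.randn(2, 1024, 3))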
