SwePub
Search the SwePub database

Hit list for the search "WFRF:(Sariel Sanem)"

  • Results 1-4 of 4
1.
  • Ak, Abdullah Cihan, et al. (author)
  • Learning Failure Prevention Skills for Safe Robot Manipulation
  • 2023
  • In: IEEE Robotics and Automation Letters. - Piscataway, NJ : IEEE. - 2377-3766. ; 8:12, pp. 7994-8001
  • Journal article (peer-reviewed), abstract:
    • Robots are more capable than before of achieving manipulation tasks for everyday activities. However, the safety of the manipulation skills that robots employ is still an open problem. Considering all possible failures during skill learning increases the complexity of the process and hinders learning an optimal policy. Nonetheless, safety-focused modularity in the acquisition of skills has not been adequately addressed in previous works. For that purpose, we reformulate skills as base and failure prevention skills, where base skills aim at completing tasks and failure prevention skills aim at reducing the risk of failures occurring. We then propose a modular and hierarchical method for safe robot manipulation that augments base skills by learning failure prevention skills with reinforcement learning and forms a skill library to address different safety risks. Furthermore, a skill selection policy that considers estimated risks is used by the robot to select the best control policy for safe manipulation. Our experiments show that the proposed method achieves the given goal while ensuring safety by preventing failures. We also show that with the proposed method, skill learning is feasible and our safe manipulation tools can be transferred to the real environment. © 2023 IEEE
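The skill-selection idea in the abstract above, keeping a library of base and failure prevention skills and choosing a control policy based on estimated risk, can be pictured with a minimal sketch. This is an illustration only, not the paper's method: the `Skill` fields, the skill names, and the linear return-minus-risk scoring rule are all assumptions standing in for the paper's learned, risk-aware selection policy.

```python
from dataclasses import dataclass

@dataclass
class Skill:
    """A manipulation skill with an estimated failure risk (illustrative fields)."""
    name: str
    expected_return: float  # task-completion value of the skill's policy
    estimated_risk: float   # estimated probability of failure, in [0, 1]

def select_skill(skills, risk_weight=1.0):
    """Pick the skill with the best return-minus-risk score
    (a hypothetical trade-off rule, not the paper's learned policy)."""
    return max(skills, key=lambda s: s.expected_return - risk_weight * s.estimated_risk)

# A two-entry skill library: a base skill and a failure prevention variant.
library = [
    Skill("base_pick", expected_return=1.0, estimated_risk=0.4),
    Skill("pick_with_collision_avoidance", expected_return=0.9, estimated_risk=0.1),
]
best = select_skill(library)  # prefers the lower-risk variant here
```

With `risk_weight=0.0` the same call falls back to pure return and picks the base skill, which is the trade-off the risk weight controls.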
2.
  • Akyol, Gamze, et al. (author)
  • A Variational Graph Autoencoder for Manipulation Action Recognition and Prediction
  • 2021
  • Conference paper (peer-reviewed), abstract:
    • Despite decades of research, understanding human manipulation activities is, and has always been, one of the most attractive and challenging research topics in computer vision and robotics. Recognition and prediction of observed human manipulation actions have their roots in applications related to, for instance, human-robot interaction and robot learning from demonstration. The current research trend relies heavily on advanced convolutional neural networks to process structured Euclidean data, such as RGB camera images. These networks, however, come with immense computational complexity in order to process high-dimensional raw data. Different from related works, we here introduce a deep graph autoencoder that jointly learns recognition and prediction of manipulation tasks from symbolic scene graphs, instead of relying on structured Euclidean data. Our network has a variational autoencoder structure with two branches: one for identifying the input graph type and one for predicting the future graphs. The input to the proposed network is a set of semantic graphs that store the spatial relations between subjects and objects in the scene. The network output is a label set representing the detected and predicted class types. We benchmark our new model against different state-of-the-art methods on two different datasets, MANIAC and MSRC-9, and show that our proposed model can achieve better performance. We also release our source code at https://github.com/gamzeakyol/GNet.
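The symbolic scene graphs described in the abstract above, semantic graphs storing spatial relations between subjects and objects, can be sketched as a small data structure. The node names and relation labels here are made up, and the adjacency helper is just one plausible way to turn such a graph into tensor-like input for a graph network; it is not the paper's encoding.

```python
# A symbolic scene graph: nodes are entities in the scene, edges are
# spatial relations between subjects and objects (names are illustrative).
scene = {
    "nodes": ["hand", "cup", "table"],
    "edges": [("hand", "touching", "cup"), ("cup", "on", "table")],
}

def adjacency(graph):
    """Undirected binary adjacency matrix over the node list.
    Relation labels are dropped in this simplified encoding."""
    index = {name: i for i, name in enumerate(graph["nodes"])}
    n = len(index)
    matrix = [[0] * n for _ in range(n)]
    for subject, _relation, obj in graph["edges"]:
        matrix[index[subject]][index[obj]] = 1
        matrix[index[obj]][index[subject]] = 1
    return matrix
```

A sequence of such graphs, one per video frame, is the kind of symbolic input the paper contrasts with raw RGB images.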
3.
  • Inceoglu, Arda, et al. (author)
  • FINO-Net : A Deep Multimodal Sensor Fusion Framework for Manipulation Failure Detection
  • 2021
  • In: 2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). - : IEEE. - 9781665417143 - 9781665417150 ; , pp. 6841-6847
  • Conference paper (peer-reviewed), abstract:
    • To ensure safety, robots need to be more aware of the unintended outcomes of their actions. This can be achieved with an onboard failure detection system that monitors and detects such cases. Onboard failure detection is challenging with a limited onboard sensor setup due to the limitations of each sensor's sensing capabilities. To alleviate these challenges, we propose FINO-Net, a novel multimodal sensor fusion based deep neural network to detect and identify manipulation failures. We also introduce FAILURE, a multimodal dataset containing 229 real-world manipulation recordings captured with a Baxter robot. Our network combines RGB, depth and audio readings to effectively detect failures. Results indicate that fusing RGB with depth and audio modalities significantly improves the performance. FINO-Net achieves 98.60% detection accuracy on our novel dataset. Code and data are publicly available at https://github.com/ardai/fino-net.
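One way to picture the multimodal fusion described in the abstract above is late fusion: per-modality feature vectors are combined and passed to a classifier. This is a toy sketch under stated assumptions; the feature values, weights, and bias below are made up, and the actual FINO-Net learns both the features and the fusion inside a deep network rather than using a hand-set linear detector.

```python
def fuse(rgb_feat, depth_feat, audio_feat):
    """Late fusion by concatenating per-modality feature vectors
    (one common fusion scheme, not FINO-Net's learned fusion)."""
    return rgb_feat + depth_feat + audio_feat  # list concatenation

def detect_failure(features, weights, bias=0.0):
    """Toy linear failure detector over the fused feature vector."""
    score = sum(w * f for w, f in zip(weights, features)) + bias
    return score > 0.0

# Hypothetical one-dimensional features per modality.
fused = fuse([0.2], [0.5], [0.9])
```

The abstract's finding that adding depth and audio to RGB improves detection corresponds, in this simplified picture, to the extra fused features shifting the detector's score.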
4.
  • Inceoglu, Arda, et al. (author)
  • Multimodal Detection and Classification of Robot Manipulation Failures
  • 2024
  • In: IEEE Robotics and Automation Letters. - Piscataway, NJ : IEEE. - 2377-3766. ; 9:2, pp. 1396-1403
  • Journal article (peer-reviewed), abstract:
    • An autonomous service robot should be able to interact with its environment safely and robustly without requiring human assistance. Unstructured environments are challenging for robots since the exact prediction of outcomes is not always possible. Even when the robot behaviors are well-designed, the unpredictable nature of the physical robot-object interaction may lead to failures in object manipulation. In this letter, we focus on detecting and classifying both manipulation and post-manipulation phase failures using the same exteroception setup. We cover a diverse set of failure types for primary tabletop manipulation actions. In order to detect these failures, we propose FINO-Net (Inceoglu et al., 2021), a deep multimodal sensor fusion-based classifier network architecture. FINO-Net accurately detects and classifies failures from raw sensory data without any additional information on task description and scene state. In this work, we use our extended FAILURE dataset (Inceoglu et al., 2021) with 99 new multimodal manipulation recordings and annotate them with their corresponding failure types. FINO-Net achieves 0.87 failure detection and 0.80 failure classification F1 scores. Experimental results show that FINO-Net is also appropriate for real-time use. © 2016 IEEE.
Type of publication
conference paper (2)
journal article (2)
Type of content
peer-reviewed (4)
Author/editor
Sariel, Sanem (4)
Aksoy, Eren, 1982- (2)
Ak, Abdullah Cihan (2)
Aksoy, Eren Erdal, 1 ... (2)
Inceoglu, Arda (2)
Akyol, Gamze (1)
Higher education institution
Högskolan i Halmstad (4)
Language
English (4)
Research subject (UKÄ/SCB)
Engineering and Technology (4)
Natural Sciences (1)
