SwePub
Search the SwePub database


Results list for the search "WFRF:(Erdem Aykut)"

  • Results 1-6 of 6
1.
  • Felsberg, Michael, 1974-, et al. (author)
  • The Thermal Infrared Visual Object Tracking VOT-TIR2016 Challenge Results
  • 2016
  • In: Computer Vision – ECCV 2016 Workshops (ECCV 2016). - Cham: Springer International Publishing. - ISBN 9783319488813, 9783319488806; pp. 824-849
  • Conference paper (peer-reviewed), abstract:
    • The Thermal Infrared Visual Object Tracking challenge 2016, VOT-TIR2016, aims at comparing short-term single-object visual trackers that work on thermal infrared (TIR) sequences and do not apply pre-learned models of object appearance. VOT-TIR2016 is the second benchmark on short-term tracking in TIR sequences. Results of 24 trackers are presented. For each participating tracker, a short description is provided in the appendix. The VOT-TIR2016 challenge is similar to the 2015 challenge; the main difference is the introduction of new, more difficult sequences into the dataset. Furthermore, the VOT-TIR2016 evaluation adopted the improvements regarding overlap calculation in VOT2016. Compared to VOT-TIR2015, a significant general improvement of results has been observed, which partly compensates for the more difficult sequences. The dataset, the evaluation kit, as well as the results are publicly available at the challenge website.
2.
  • Kristan, Matej, et al. (author)
  • The Visual Object Tracking VOT2016 Challenge Results
  • 2016
  • In: Computer Vision – ECCV 2016 Workshops, Pt. II. - Cham: Springer International Publishing. - ISBN 9783319488813, 9783319488806; pp. 777-823
  • Conference paper (peer-reviewed), abstract:
    • The Visual Object Tracking challenge VOT2016 aims at comparing short-term single-object visual trackers that do not apply pre-learned models of object appearance. Results of 70 trackers are presented, with a large number of trackers having been published at major computer vision conferences and journals in recent years. The number of tested state-of-the-art trackers makes VOT2016 the largest and most challenging benchmark on short-term tracking to date. For each participating tracker, a short description is provided in the Appendix. VOT2016 goes beyond its predecessors by (i) introducing a new semi-automatic ground-truth bounding-box annotation methodology and (ii) extending the evaluation system with the no-reset experiment.
3.
  • Tekden, Ahmet Ercan, 1994, et al. (author)
  • Object and relation centric representations for push effect prediction
  • 2024
  • In: Robotics and Autonomous Systems. - ISSN 0921-8890; vol. 174
  • Journal article (peer-reviewed), abstract:
    • Pushing is an essential non-prehensile manipulation skill used for tasks ranging from pre-grasp manipulation to scene rearrangement and reasoning about object relations in the scene, and pushing actions have therefore been widely studied in robotics. The effective use of pushing actions often requires an understanding of the dynamics of the manipulated objects and adaptation to the discrepancies between prediction and reality. For this reason, effect prediction and parameter estimation with pushing actions have been heavily investigated in the literature. However, current approaches are limited because they either model systems with a fixed number of objects or use image-based representations whose outputs are not very interpretable and quickly accumulate errors. In this paper, we propose a graph neural network based framework for effect prediction and parameter estimation of pushing actions by modeling object relations based on contacts or articulations. Our framework is validated in both real and simulated environments containing multi-part objects of different shapes connected via different types of joints and objects with different masses, and it outperforms image-based representations on physics prediction. Our approach enables the robot to predict and adapt the effect of a pushing action as it observes the scene. It can also be used for tool manipulation with never-seen tools. Further, we demonstrate 6D effect prediction in the lever-up action in the context of robot-based hard-disk disassembly.
4.
  • Barreiro, Anabela, et al. (author)
  • Multi3Generation : Multitask, Multilingual, Multimodal Language Generation
  • 2022
  • In: Proceedings of the 23rd Annual Conference of the European Association for Machine Translation. - European Association for Machine Translation; pp. 345-346
  • Conference paper (peer-reviewed), abstract:
    • This paper presents the Multitask, Multilingual, Multimodal Language Generation COST Action – Multi3Generation (CA18231), an interdisciplinary network of research groups working on different aspects of language generation. This "meta-paper" will serve as a reference for citation of the Action in future publications. It presents the objectives, challenges, and links to the achieved outcomes.
5.
  • Lloret, Elena, et al. (author)
  • Multi3Generation : Multitask, Multilingual, and Multimodal Language Generation
  • 2023
  • In: Open Research Europe. - European Commission. - ISSN 2732-5121; vol. 3
  • Journal article (peer-reviewed), abstract:
    • The purpose of this article is to highlight the critical importance of language generation today. In particular, language generation is explored from three aspects: multimodality, multilinguality, and multitasking, all of which play a crucial role for the Natural Language Generation (NLG) community. We present the activities conducted within the Multi3Generation COST Action (CA18231), as well as current trends and future perspectives for multitask, multilingual, and multimodal language generation.
6.