SwePub
Search the SwePub database

Hit list for the query "WFRF:(Leonardis Ales)"

  • Results 1-14 of 14
1.
  • Felsberg, Michael, et al. (authors)
  • The Thermal Infrared Visual Object Tracking VOT-TIR2015 Challenge Results
  • 2015
  • In: Proceedings of the IEEE International Conference on Computer Vision. - : Institute of Electrical and Electronics Engineers (IEEE). - 9781467383905 ; pp. 639-651
  • Conference paper (peer-reviewed), abstract:
    • The Thermal Infrared Visual Object Tracking challenge 2015, VOT-TIR2015, aims at comparing short-term single-object visual trackers that work on thermal infrared (TIR) sequences and do not apply pre-learned models of object appearance. VOT-TIR2015 is the first benchmark on short-term tracking in TIR sequences. Results of 24 trackers are presented. For each participating tracker, a short description is provided in the appendix. The VOT-TIR2015 challenge is based on the VOT2013 challenge, but introduces the following novelties: (i) the newly collected LTIR (Linköping TIR) dataset is used, (ii) the VOT2013 attributes are adapted to TIR data, and (iii) the evaluation is performed using insights gained during VOT2013 and VOT2014 and is similar to VOT2015.
2.
  • Felsberg, Michael, 1974-, et al. (authors)
  • The Thermal Infrared Visual Object Tracking VOT-TIR2016 Challenge Results
  • 2016
  • In: Computer Vision – ECCV 2016 Workshops. - Cham : Springer International Publishing. - 9783319488813 - 9783319488806 ; pp. 824-849
  • Conference paper (peer-reviewed), abstract:
    • The Thermal Infrared Visual Object Tracking challenge 2016, VOT-TIR2016, aims at comparing short-term single-object visual trackers that work on thermal infrared (TIR) sequences and do not apply pre-learned models of object appearance. VOT-TIR2016 is the second benchmark on short-term tracking in TIR sequences. Results of 24 trackers are presented. For each participating tracker, a short description is provided in the appendix. The VOT-TIR2016 challenge is similar to the 2015 challenge; the main difference is the introduction of new, more difficult sequences into the dataset. Furthermore, the VOT-TIR2016 evaluation adopted the improvements regarding overlap calculation introduced in VOT2016. Compared to VOT-TIR2015, a significant general improvement of results has been observed, which partly compensates for the more difficult sequences. The dataset, the evaluation kit, as well as the results are publicly available at the challenge website.
3.
  • Kristan, Matej, et al. (authors)
  • The Ninth Visual Object Tracking VOT2021 Challenge Results
  • 2021
  • In: 2021 IEEE/CVF International Conference on Computer Vision Workshops (ICCVW 2021). - : IEEE Computer Society. - 9781665401913 ; pp. 2711-2738
  • Conference paper (peer-reviewed), abstract:
    • The Visual Object Tracking challenge VOT2021 is the ninth annual tracker benchmarking activity organized by the VOT initiative. Results of 71 trackers are presented; many are state-of-the-art trackers published at major computer vision conferences or in journals in recent years. The VOT2021 challenge was composed of four sub-challenges focusing on different tracking domains: (i) the VOT-ST2021 challenge focused on short-term tracking in RGB, (ii) the VOT-RT2021 challenge focused on "real-time" short-term tracking in RGB, (iii) VOT-LT2021 focused on long-term tracking, namely coping with target disappearance and reappearance, and (iv) the VOT-RGBD2021 challenge focused on long-term tracking in RGB and depth imagery. The VOT-ST2021 dataset was refreshed, while VOT-RGBD2021 introduced a training dataset and a sequestered dataset for winner identification. The source code for most of the trackers, the datasets, the evaluation kit and the results are publicly available at the challenge website.
4.
  • Kristan, Matej, et al. (authors)
  • The Sixth Visual Object Tracking VOT2018 Challenge Results
  • 2019
  • In: Computer Vision – ECCV 2018 Workshops. - Cham : Springer Publishing Company. - 9783030110086 - 9783030110093 ; pp. 3-53
  • Conference paper (peer-reviewed), abstract:
    • The Visual Object Tracking challenge VOT2018 is the sixth annual tracker benchmarking activity organized by the VOT initiative. Results of over eighty trackers are presented; many are state-of-the-art trackers published at major computer vision conferences or in journals in recent years. The evaluation included the standard VOT and other popular methodologies for short-term tracking analysis and a “real-time” experiment simulating a situation where a tracker processes images as if provided by a continuously running sensor. A long-term tracking sub-challenge has been introduced to the set of standard VOT sub-challenges. The new sub-challenge focuses on long-term tracking properties, namely coping with target disappearance and reappearance. A new dataset has been compiled, and a performance evaluation methodology that focuses on long-term tracking capabilities has been adopted. The VOT toolkit has been updated to support both the standard short-term and the new long-term tracking sub-challenges. Performance of the tested trackers typically far exceeds standard baselines. The source code for most of the trackers is publicly available from the VOT page. The dataset, the evaluation kit and the results are publicly available at the challenge website (http://votchallenge.net).
5.
  • Kristan, Matej, et al. (authors)
  • The tenth visual object tracking VOT2022 challenge results
  • 2023
  • In: Computer Vision – ECCV 2022 Workshops. - Cham : Springer. - 9783031250842 - 9783031250859 ; pp. 431-460
  • Conference paper (peer-reviewed), abstract:
    • The Visual Object Tracking challenge VOT2022 is the tenth annual tracker benchmarking activity organized by the VOT initiative. Results of 93 entries are presented; many are state-of-the-art trackers published at major computer vision conferences or in journals in recent years. The VOT2022 challenge was composed of seven sub-challenges focusing on different tracking domains: (i) the VOT-STs2022 challenge focused on short-term tracking in RGB by segmentation, (ii) the VOT-STb2022 challenge focused on short-term tracking in RGB by bounding boxes, (iii) the VOT-RTs2022 challenge focused on “real-time” short-term tracking in RGB by segmentation, (iv) the VOT-RTb2022 challenge focused on “real-time” short-term tracking in RGB by bounding boxes, (v) VOT-LT2022 focused on long-term tracking, namely coping with target disappearance and reappearance, (vi) the VOT-RGBD2022 challenge focused on short-term tracking in RGB and depth imagery, and (vii) the VOT-D2022 challenge focused on short-term tracking in depth-only imagery. New datasets were introduced in VOT-LT2022 and VOT-RGBD2022, the VOT-ST2022 dataset was refreshed, and a training dataset was introduced for VOT-LT2022. The source code for most of the trackers, the datasets, the evaluation kit and the results are publicly available at the challenge website (http://votchallenge.net).
6.
  • Kristan, Matej, et al. (authors)
  • The Visual Object Tracking VOT2013 challenge results
  • 2013
  • In: 2013 IEEE International Conference on Computer Vision Workshops (ICCVW). - : IEEE. - 9781479930227 ; pp. 98-111
  • Conference paper (peer-reviewed), abstract:
    • Visual tracking has attracted significant attention in the last few decades. The recent surge in the number of publications on tracking-related problems has made it almost impossible to follow the developments in the field. One of the reasons is that there is a lack of commonly accepted annotated datasets and standardized evaluation protocols that would allow objective comparison of different tracking methods. To address this issue, the Visual Object Tracking (VOT) workshop was organized in conjunction with ICCV2013. Researchers from academia as well as industry were invited to participate in the first VOT2013 challenge, which aimed at single-object visual trackers that do not apply pre-learned models of object appearance (model-free). Presented here is the VOT2013 benchmark dataset for evaluation of single-object visual trackers as well as the results obtained by the trackers competing in the challenge. In contrast to related attempts in tracker benchmarking, the dataset is labeled per-frame by visual attributes that indicate occlusion, illumination change, motion change, size change and camera motion, offering a more systematic comparison of the trackers. Furthermore, we have designed an automated system for performing and evaluating the experiments. We present the evaluation protocol of the VOT2013 challenge and the results of a comparison of 27 trackers on the benchmark dataset. The dataset, the evaluation tools and the tracker rankings are publicly available from the challenge website.
7.
  • Kristan, Matej, et al. (authors)
  • The Visual Object Tracking VOT2014 Challenge Results
  • 2015
  • In: Computer Vision – ECCV 2014 Workshops, Part II. - Cham : Springer. - 9783319161808 - 9783319161815 ; pp. 191-217
  • Conference paper (peer-reviewed), abstract:
    • The Visual Object Tracking challenge 2014, VOT2014, aims at comparing short-term single-object visual trackers that do not apply pre-learned models of object appearance. Results of 38 trackers are presented. The number of tested trackers makes VOT2014 the largest benchmark on short-term tracking to date. For each participating tracker, a short description is provided in the appendix. Features of the VOT2014 challenge that go beyond its VOT2013 predecessor are introduced: (i) a new VOT2014 dataset with full annotation of targets by rotated bounding boxes and per-frame attributes, (ii) extensions of the VOT2013 evaluation methodology, (iii) a new unit for tracking speed assessment less dependent on the hardware, and (iv) the VOT2014 evaluation toolkit that significantly speeds up execution of experiments. The dataset, the evaluation kit as well as the results are publicly available at the challenge website (http://votchallenge.net).
8.
  • Kristan, Matej, et al. (authors)
  • The Visual Object Tracking VOT2015 challenge results
  • 2015
  • In: Proceedings 2015 IEEE International Conference on Computer Vision Workshops ICCVW 2015. - : IEEE. - 9780769557205 ; pp. 564-586
  • Conference paper (peer-reviewed), abstract:
    • The Visual Object Tracking challenge 2015, VOT2015, aims at comparing short-term single-object visual trackers that do not apply pre-learned models of object appearance. Results of 62 trackers are presented. The number of tested trackers makes VOT2015 the largest benchmark on short-term tracking to date. For each participating tracker, a short description is provided in the appendix. Features of the VOT2015 challenge that go beyond its VOT2014 predecessor are: (i) a new VOT2015 dataset, twice as large as in VOT2014, with full annotation of targets by rotated bounding boxes and per-frame attributes, and (ii) extensions of the VOT2014 evaluation methodology by introduction of a new performance measure. The dataset, the evaluation kit as well as the results are publicly available at the challenge website.
9.
  • Kristan, Matej, et al. (authors)
  • The Visual Object Tracking VOT2016 Challenge Results
  • 2016
  • In: Computer Vision – ECCV 2016 Workshops, Part II. - Cham : Springer International Publishing. - 9783319488813 - 9783319488806 ; pp. 777-823
  • Conference paper (peer-reviewed), abstract:
    • The Visual Object Tracking challenge VOT2016 aims at comparing short-term single-object visual trackers that do not apply pre-learned models of object appearance. Results of 70 trackers are presented, with a large number of trackers having been published at major computer vision conferences and journals in recent years. The number of tested state-of-the-art trackers makes VOT2016 the largest and most challenging benchmark on short-term tracking to date. For each participating tracker, a short description is provided in the appendix. VOT2016 goes beyond its predecessors by (i) introducing a new semi-automatic ground truth bounding box annotation methodology and (ii) extending the evaluation system with the no-reset experiment.
10.
  • Kristan, Matej, et al. (authors)
  • The Visual Object Tracking VOT2017 challenge results
  • 2017
  • In: 2017 IEEE International Conference on Computer Vision Workshops (ICCVW 2017). - : IEEE. - 9781538610343 ; pp. 1949-1972
  • Conference paper (peer-reviewed), abstract:
    • The Visual Object Tracking challenge VOT2017 is the fifth annual tracker benchmarking activity organized by the VOT initiative. Results of 51 trackers are presented; many are state-of-the-art trackers published at major computer vision conferences or in journals in recent years. The evaluation included the standard VOT and other popular methodologies and a new "real-time" experiment simulating a situation where a tracker processes images as if provided by a continuously running sensor. Performance of the tested trackers typically far exceeds standard baselines. The source code for most of the trackers is publicly available from the VOT page. VOT2017 goes beyond its predecessors by (i) improving the VOT public dataset and introducing a separate VOT2017 sequestered dataset, (ii) introducing a real-time tracking experiment, and (iii) releasing a redesigned toolkit that supports complex experiments. The dataset, the evaluation kit and the results are publicly available at the challenge website.
11.
  • Kristan, Matej, et al. (authors)
  • The Seventh Visual Object Tracking VOT2019 Challenge Results
  • 2019
  • In: 2019 IEEE/CVF International Conference on Computer Vision Workshops (ICCVW). - : IEEE Computer Society. - 9781728150239 ; pp. 2206-2241
  • Conference paper (peer-reviewed), abstract:
    • The Visual Object Tracking challenge VOT2019 is the seventh annual tracker benchmarking activity organized by the VOT initiative. Results of 81 trackers are presented; many are state-of-the-art trackers published at major computer vision conferences or in journals in recent years. The evaluation included the standard VOT and other popular methodologies for short-term tracking analysis as well as the standard VOT methodology for long-term tracking analysis. The VOT2019 challenge was composed of five challenges focusing on different tracking domains: (i) the VOT-ST2019 challenge focused on short-term tracking in RGB, (ii) the VOT-RT2019 challenge focused on "real-time" short-term tracking in RGB, and (iii) VOT-LT2019 focused on long-term tracking, namely coping with target disappearance and reappearance. Two new challenges have been introduced: (iv) the VOT-RGBT2019 challenge focused on short-term tracking in RGB and thermal imagery, and (v) the VOT-RGBD2019 challenge focused on long-term tracking in RGB and depth imagery. The VOT-ST2019, VOT-RT2019 and VOT-LT2019 datasets were refreshed, while new datasets were introduced for VOT-RGBT2019 and VOT-RGBD2019. The VOT toolkit has been updated to support standard short-term tracking, long-term tracking, and tracking with multi-channel imagery. Performance of the tested trackers typically far exceeds standard baselines. The source code for most of the trackers is publicly available from the VOT page. The dataset, the evaluation kit and the results are publicly available at the challenge website.
12.
  • Larsson, Fredrik, 1980- (author)
  • Shape Based Recognition – Cognitive Vision Systems in Traffic Safety Applications
  • 2011
  • Doctoral thesis (other academic/artistic), abstract:
    • Traffic accidents are globally the number one cause of death for people 15-29 years old and are among the top three causes for all age groups 5-44 years. Much of the work within this thesis has been carried out in projects aiming for (cognitive) driver assistance systems and hopefully represents a step towards improving traffic safety.
      The main contributions are within the area of computer vision, and more specifically within the areas of shape matching, Bayesian tracking, and visual servoing, with the main focus being on shape matching and applications thereof. The different methods have been demonstrated in traffic safety applications, such as bicycle tracking, car tracking, and traffic sign recognition, as well as for pose estimation and robot control.
      One of the core contributions is a new method for recognizing closed contours, based on complex correlation of Fourier descriptors. It is shown that keeping the phase of Fourier descriptors is important. Neglecting the phase can result in perfect matches between intrinsically different shapes. Another benefit of keeping the phase is that rotation-covariant or rotation-invariant matching is achieved in the same way. The only difference is to consider either the magnitude, for rotation-invariant matching, or just the real value, for rotation-covariant matching, of the complex-valued correlation.
      The shape matching method has further been used in combination with an implicit star-shaped object model for traffic sign recognition. The presented method works fully automatically on query images with no need for regions of interest. It is shown that the presented method performs well for traffic signs that contain multiple distinct contours, while some improvement is still needed for signs defined by a single contour. The presented methodology is general enough to be used for arbitrary objects, as long as they can be defined by a number of regions.
      Another contribution has been the extension of a framework for learning-based Bayesian tracking called channel-based tracking. Compared to earlier work, the multi-dimensional case has been reformulated in a sound probabilistic way and the learning algorithm itself has been extended. The framework is evaluated in car tracking scenarios and is shown to give competitive tracking performance compared to standard approaches, but with the advantage of being fully learnable.
      The last contribution has been in the field of (cognitive) robot control. The presented method achieves sufficient accuracy for simple assembly tasks by combining autonomous recognition with visual servoing, based on a learned mapping between percepts and actions. The method demonstrates that limitations of inexpensive hardware, such as web cameras and low-cost robotic arms, can be overcome using powerful algorithms.
      All in all, the methods developed and presented in this thesis can be used for different components in a system guided by visual information, and hopefully represent a step towards improving traffic safety.
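The phase-preserving correlation of Fourier descriptors described in the abstract can be sketched as follows. This is a minimal illustration of the general idea, not the thesis implementation: the normalisation choices and function names are assumptions, and alignment of the contour starting point is ignored.

```python
import numpy as np

def fourier_descriptors(contour, n_coeffs=16):
    """Fourier descriptors of a closed contour given as an (N, 2) array.

    Points are treated as complex numbers x + iy; the DC coefficient is
    dropped for translation invariance and the spectrum is normalised
    for scale invariance (illustrative normalisation choices)."""
    z = contour[:, 0] + 1j * contour[:, 1]
    F = np.fft.fft(z)
    F[0] = 0.0                      # translation invariance
    F /= np.linalg.norm(F)          # scale invariance
    # keep low-frequency coefficients (positive and negative frequencies)
    return np.concatenate([F[1:n_coeffs + 1], F[-n_coeffs:]])

def match_score(fd_a, fd_b, rotation_invariant=True):
    """Complex correlation of two descriptor vectors, phase preserved.

    Rotating a shape by an angle t multiplies every descriptor by e^{it},
    so the magnitude of the correlation is rotation invariant while its
    real part is rotation covariant -- the distinction made in the
    abstract."""
    c = np.vdot(fd_a, fd_b)         # sum of conj(a_k) * b_k
    return np.abs(c) if rotation_invariant else np.real(c)
```

For example, rotating an ellipse leaves the invariant score unchanged while the covariant score falls off as the cosine of the rotation angle; comparing only the magnitudes |F| (i.e. discarding the phase) would lose this distinction entirely.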
13.
  •  
14.
  • Rasolzadeh, Babak (author)
  • Visual Attention in Active Vision Systems : Attending, Classifying and Manipulating Objects
  • 2011
  • Doctoral thesis (other academic/artistic), abstract:
    • This thesis has presented a computational model for the combination of bottom-up and top-down attentional mechanisms. Furthermore, the use of this model has been demonstrated in a variety of applications of machine and robotic vision. We have observed that an attentional mechanism is imperative in any active vision system, machine as well as biological, since it not only reduces the amount of information that needs to be further processed (say, for recognition or action), but also, by processing only the attended image regions, makes such tasks more robust to large amounts of clutter and noise in the visual field.
      Using various feature channels, such as color, orientation, texture, depth and symmetry, as input, the presented model is able, with a pre-trained artificial neural network, to modulate a saliency map for a particular top-down goal, e.g. visual search for a target object. More specifically, it dynamically combines the unmodulated bottom-up saliency with the modulated top-down saliency by means of a biologically and psychophysically motivated temporal differential equation. This way the system is, for instance, able to detect important bottom-up cues even while in visual search mode (top-down) for a particular object.
      All the computational steps for yielding the final attentional map, which ranks regions in images according to their importance for the system, are shown to be biologically plausible. It has also been demonstrated that the presented attentional model facilitates tasks other than visual search. For instance, using the covert attentional peaks that the model returns, we can improve scene understanding and segmentation through clustering or scattering of the 2D/3D components of the scene, depending on the configuration of these attentional peaks and their relations to other attributes of the scene. More specifically, this is performed by means of entropy optimization of the scene under varying cluster configurations, i.e. different groupings of the various components of the scene.
      Qualitative experiments demonstrated the use of this attentional model on a robotic humanoid platform, controlling the overt attention of the robot in real time by specifying the saccadic movements of the robot head. These experiments also exposed another highly important aspect of the model: its temporal variability, as opposed to many other attentional (saliency) models that exclusively deal with static images. Here the dynamic aspects of the attentional mechanism proved to allow for a temporally varying trade-off between top-down and bottom-up influences depending on changes in the environment of the robot.
      The thesis has also laid out systematic and quantitative large-scale experiments on the actual benefits and uses of this kind of attentional model. To this end, a simulated 2D environment was implemented, where the system could not “see” the entire environment and needed to perform overt shifts of attention (a simulated saccade) in order to perform a visual search task for a pre-defined sought object. This allowed for a simple and rapid substitution of the core attentional model of the system with comparative computational models designed by other researchers. Nine such contending models were tested and compared with the presented model in a quantitative manner. Given certain assumptions, these experiments showed that the attentional model presented in this work outperforms the other models in simple visual search tasks.
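The kind of dynamic bottom-up/top-down trade-off described above can be illustrated with a hypothetical leaky-integrator sketch. This is a stand-in, not the differential equation from the thesis; the weighting scheme, constants and function name are all assumptions.

```python
import numpy as np

def combine_saliency(s_bu, s_td, w0=0.5, gain=0.3, dt=0.1, steps=50):
    """Blend a bottom-up (s_bu) and a top-down (s_td) saliency map.

    A scalar trade-off weight w follows a simple leaky integrator,
    dw/dt = gain * (drive - w), where `drive` measures the relative
    strength of the bottom-up map. A strong bottom-up onset therefore
    pulls the blend toward bottom-up cues even during top-down visual
    search. (Hypothetical sketch, not the thesis's actual equation.)"""
    w = w0
    for _ in range(steps):
        drive = s_bu.max() / (s_bu.max() + s_td.max() + 1e-9)
        w += dt * gain * (drive - w)    # Euler step of the integrator
    return w * s_bu + (1.0 - w) * s_td
```

With a strong bottom-up peak in one corner of the map and a weaker top-down target elsewhere, the blended map keeps the bottom-up peak on top, mirroring the "detect important bottom-up cues even in search mode" behaviour described in the abstract.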