SwePub
Search: WFRF:(Smeulders Arnold)

  • Result 1-4 of 4
1.
  • Curry, Edward, et al. (author)
  • Partnership on AI, Data, and Robotics
  • 2022
  • In: Communications of the ACM. - : ASSOC COMPUTING MACHINERY. - 0001-0782 .- 1557-7317. ; 65:4, s. 54-55
  • Journal article (other academic/artistic)
    • Abstract: n/a
2.
  • Kristan, Matej, et al. (author)
  • The Sixth Visual Object Tracking VOT2018 Challenge Results
  • 2019
  • In: Computer Vision – ECCV 2018 Workshops. - Cham : Springer Publishing Company. - 9783030110086 - 9783030110093 ; , s. 3-53
  • Conference paper (peer-reviewed)
    • The Visual Object Tracking challenge VOT2018 is the sixth annual tracker benchmarking activity organized by the VOT initiative. Results of over eighty trackers are presented; many are state-of-the-art trackers published at major computer vision conferences or in journals in recent years. The evaluation included the standard VOT and other popular methodologies for short-term tracking analysis, and a “real-time” experiment simulating a situation where a tracker processes images as if provided by a continuously running sensor. A long-term tracking subchallenge has been introduced to the set of standard VOT subchallenges. The new subchallenge focuses on long-term tracking properties, namely coping with target disappearance and reappearance. A new dataset has been compiled, and a performance evaluation methodology that focuses on long-term tracking capabilities has been adopted. The VOT toolkit has been updated to support both the standard short-term and the new long-term tracking subchallenges. Performance of the tested trackers typically far exceeds standard baselines. The source code for most of the trackers is publicly available from the VOT page. The dataset, the evaluation kit and the results are publicly available at the challenge website (http://votchallenge.net).
3.
  • Kristan, Matej, et al. (author)
  • The Seventh Visual Object Tracking VOT2019 Challenge Results
  • 2019
  • In: 2019 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION WORKSHOPS (ICCVW). - : IEEE COMPUTER SOC. - 9781728150239 ; , s. 2206-2241
  • Conference paper (peer-reviewed)
    • The Visual Object Tracking challenge VOT2019 is the seventh annual tracker benchmarking activity organized by the VOT initiative. Results of 81 trackers are presented; many are state-of-the-art trackers published at major computer vision conferences or in journals in recent years. The evaluation included the standard VOT and other popular methodologies for short-term tracking analysis, as well as the standard VOT methodology for long-term tracking analysis. The VOT2019 challenge was composed of five challenges focusing on different tracking domains: (i) the VOT-ST2019 challenge focused on short-term tracking in RGB, (ii) the VOT-RT2019 challenge focused on "real-time" short-term tracking in RGB, and (iii) VOT-LT2019 focused on long-term tracking, namely coping with target disappearance and reappearance. Two new challenges were introduced: (iv) the VOT-RGBT2019 challenge focused on short-term tracking in RGB and thermal imagery, and (v) the VOT-RGBD2019 challenge focused on long-term tracking in RGB and depth imagery. The VOT-ST2019, VOT-RT2019 and VOT-LT2019 datasets were refreshed, while new datasets were introduced for VOT-RGBT2019 and VOT-RGBD2019. The VOT toolkit has been updated to support standard short-term tracking, long-term tracking, and tracking with multi-channel imagery. Performance of the tested trackers typically far exceeds standard baselines. The source code for most of the trackers is publicly available from the VOT page. The dataset, the evaluation kit and the results are publicly available at the challenge website.
4.
  • Tavakoli Targhi, Alireza, 1977- (author)
  • The Texture-Transform : An Operator for Texture Detection and Discrimination
  • 2009
  • Doctoral thesis (other academic/artistic)
    • In this thesis we present contributions related to texture detection and discrimination for analyzing real-world images. Many computer vision applications can benefit from a fast and low-dimensional texture descriptor. Several texture descriptors have been introduced and used for texture image classification and texture segmentation on images with a single texture or a mixture of textures. For evaluation of these descriptors, a number of texture image databases (e.g. CUReT, Photex, KTH-TIPS2, ALOT) have been introduced, containing images of different types of natural and virtual texture samples. Classification and segmentation experiments have often been performed on such databases. In real-world images we have a variety of textured and non-textured objects with different backgrounds. Many of the existing texture descriptors (e.g. filter banks, textons), due to their nature, fire on brightness edges. Therefore, they are not always applicable for texture detection and discrimination in such real-world images, especially indoor images, which in general contain non-textured structures mixed with textured objects. In the thesis we introduce a texture descriptor, the Texture-transform, with the following properties that are desirable for bottom-up processing in real-world applications: (i) It captures small-scale structure in terms of roughness or smoothness of the image patch. (ii) It provides a low-dimensional output (usually just a single dimension) which is easy to store and perform calculations on. (iii) It generally does not fire on brightness edges. This is in contrast to, for instance, filters, which tend to identify a strip around a brightness edge as a separate region. (iv) It has few parameters which need tuning. The most significant parameter that unavoidably appears is scale; it is here simply provided by the size of the local image patch. (v) It can be computed fast, used in real-time systems, and easily incorporated in multiple-cue vision systems.
Last but not least, it is extremely easy to implement, for example in just a few lines of Matlab. The Texture-transform is derived in a manner different from the other descriptors reviewed in this thesis, but is related to other frequency-based methods. The key idea is to investigate the variability of a window of an image by considering the singular values or eigenvalues of matrices formed directly from the grey values of local patches. We show that these properties satisfy the requirements of many applications by extensive experiments in two main tests, one of detection and another of discrimination, as in [Kruizinga and Petkov, 1999]. We also demonstrate that the Texture-transform allows us to identify and segment out natural textures in images, without yielding too many spurious regions from brightness edges. In these experiments we perform comparisons with other descriptors of a similar low-dimensional type. Due to its nature, our descriptor of course lacks invariance. Hence, it cannot by itself be used for classification, since the results do not carry over from one image to another. However, as a proof of concept we show experimentally that the detected textured regions can be used in a subsequent classification task. Invariance is not needed in all tasks of detection and discrimination, at least with regard to orientation and contrast, as we discuss and demonstrate in the thesis. As examples of real-world applications, we show the function of the Texture-transform on detection of street name plates, visual attention, and vegetation segmentation. Moreover, we study the application of texture features to animal detection, and also address learning the visual appearance of textured surfaces from very few training samples, using a photometric stereo technique to artificially generate new samples.
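The key idea stated in the abstract — summarising the variability of a local window via the singular values of a matrix formed directly from its grey values — can be sketched in a few lines. The following is a hedged illustration, not the thesis's exact operator: the patch size, stride, and in particular the reduction of the singular-value spectrum to a single number (here the mean of the smaller singular values) are illustrative assumptions.

```python
import numpy as np

def texture_transform(image, patch=16, step=4):
    """Sketch of a Texture-transform-style descriptor.

    For each local window, the grey values themselves form a matrix
    whose singular-value spectrum summarises its variability: a smooth
    patch is close to rank one, while a rough/textured patch spreads
    energy over many singular values.  The scalar reduction used here
    (mean of the singular values beyond the first) is an assumption
    for illustration, not the thesis's exact definition.
    """
    h, w = image.shape
    rows = (h - patch) // step + 1
    cols = (w - patch) // step + 1
    out = np.zeros((rows, cols))
    for i, y in enumerate(range(0, h - patch + 1, step)):
        for j, x in enumerate(range(0, w - patch + 1, step)):
            block = image[y:y + patch, x:x + patch].astype(float)
            s = np.linalg.svd(block, compute_uv=False)
            # Smooth patches concentrate energy in s[0]; texture
            # shows up in the remaining singular values.
            out[i, j] = s[1:].mean()
    return out
```

Consistent with property (iii) above, a constant (or slowly varying) region yields near-zero responses regardless of its brightness, whereas a noisy or textured region yields large ones, so the map responds to roughness rather than to brightness edges alone.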
