SwePub

LIBRIS Formathandbok (information about MARC21)
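Each row of the record below is one MARC21 field: a three-digit tag, any indicators, and then the field's subfields, each prefixed here with ‡ and a one-character subfield code. As a minimal sketch of how such a display row decomposes (parse_marc_row is a hypothetical helper for this page's plain-text rendering, not a LIBRIS or pymarc API):

def parse_marc_row(row: str) -> tuple[str, str, list[tuple[str, str]]]:
    """Split a displayed MARC21 row into (tag, indicators, subfields)."""
    head, *parts = row.split('‡')
    tag, _, indicators = head.strip().partition(' ')
    # Each part begins with a one-character subfield code ('a', 'x', '2', ...).
    subfields = [(p[0], p[1:].strip()) for p in parts if p]
    return tag, indicators.strip(), subfields

# Example against a row from the record below:
tag, ind, subs = parse_marc_row('024 ‡a https://doi.org/10.1109/CVPR.2016.159 ‡2 DOI')
assert tag == '024' and dict(subs)['2'] == 'DOI'

In practice one would parse the underlying MARC binary or MARCXML with a library such as pymarc rather than scraping this display text.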
Field name, indicators and metadata (‡ marks a subfield code):
000 03449naa a2200337 4500
001 oai:DiVA.org:liu-137882
003 SwePub
008 170601s2016 | |||||||||||000 ||eng|
024 ‡a https://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-137882 ‡2 URI
024 ‡a https://doi.org/10.1109/CVPR.2016.159 ‡2 DOI
040 ‡a (SwePub)liu
041 ‡a eng ‡b eng
042 ‡9 SwePub
072 7 ‡a ref ‡2 swepub-contenttype
072 7 ‡a kon ‡2 swepub-publicationtype
100 ‡a Danelljan, Martin, ‡d 1989- ‡u Linköpings universitet, Datorseende, Tekniska fakulteten ‡4 aut ‡0 (Swepub:liu)marda26
245 1 0 ‡a Adaptive Decontamination of the Training Set: A Unified Formulation for Discriminative Visual Tracking
264 1 ‡b Institute of Electrical and Electronics Engineers (IEEE), ‡c 2016
338 ‡a electronic ‡2 rdacarrier
500 ‡a Funding Agencies|SSF (CUAS); VR (EMC2); VR (ELLIIT); Wallenberg Autonomous Systems Program; NSC; Nvidia
520 ‡a Tracking-by-detection methods have demonstrated competitive performance in recent years. In these approaches, the tracking model heavily relies on the quality of the training set. Due to the limited amount of labeled training data, additional samples need to be extracted and labeled by the tracker itself. This often leads to the inclusion of corrupted training samples, due to occlusions, misalignments and other perturbations. Existing tracking-by-detection methods either ignore this problem, or employ a separate component for managing the training set. We propose a novel generic approach for alleviating the problem of corrupted training samples in tracking-by-detection frameworks. Our approach dynamically manages the training set by estimating the quality of the samples. Contrary to existing approaches, we propose a unified formulation by minimizing a single loss over both the target appearance model and the sample quality weights. The joint formulation enables corrupted samples to be down-weighted while increasing the impact of correct ones. Experiments are performed on three benchmarks: OTB-2015 with 100 videos, VOT-2015 with 60 videos, and Temple-Color with 128 videos. On the OTB-2015, our unified formulation significantly improves the baseline, with a gain of 3.8% in mean overlap precision. Finally, our method achieves state-of-the-art results on all three datasets.
650 7 ‡a NATURVETENSKAP ‡x Data- och informationsvetenskap ‡x Datorseende och robotik ‡0 (SwePub)10207 ‡2 hsv//swe
650 7 ‡a NATURAL SCIENCES ‡x Computer and Information Sciences ‡x Computer Vision and Robotics ‡0 (SwePub)10207 ‡2 hsv//eng
700 ‡a Häger, Gustav, ‡d 1988- ‡u Linköpings universitet, Datorseende, Tekniska fakulteten ‡4 aut ‡0 (Swepub:liu)gusha40
700 ‡a Khan, Fahad Shahbaz, ‡d 1983- ‡u Linköpings universitet, Datorseende, Tekniska fakulteten ‡4 aut ‡0 (Swepub:liu)fahkh30
700 ‡a Felsberg, Michael, ‡d 1974- ‡u Linköpings universitet, Datorseende, Tekniska fakulteten ‡4 aut ‡0 (Swepub:liu)micfe03
710 ‡a Linköpings universitet ‡b Datorseende ‡4 org
773 ‡t 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR) ‡d : Institute of Electrical and Electronics Engineers (IEEE) ‡g , s. 1430-1438 ‡q <1430-1438 ‡z 9781467388511 ‡z 9781467388528
856 ‡u https://liu.diva-portal.org/smash/get/diva2:1104732/FULLTEXT02.pdf ‡x primary ‡x Raw object ‡y fulltext:postprint
856 4 8 ‡u https://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-137882
856 4 8 ‡u https://doi.org/10.1109/CVPR.2016.159
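Field 520 above describes the paper's unified formulation: a single loss minimized jointly over the target appearance model and per-sample quality weights, so that corrupted samples are down-weighted. The sketch below is an illustration under assumptions, not the paper's exact objective: it pairs a weighted ridge-regression appearance model with a quadratic penalty on weights constrained to the probability simplex (fit_model, update_weights, mu, and lam are all hypothetical names and parameters), and alternates the two convex subproblems as a generic instance of such a joint formulation.

import numpy as np

def fit_model(X, y, alpha, lam):
    # Weighted ridge regression: minimize sum_k alpha_k (x_k . theta - y_k)^2 + lam ||theta||^2.
    d = X.shape[1]
    XtW = X.T * alpha                        # scale each column of X.T by its sample weight
    return np.linalg.solve(XtW @ X + lam * np.eye(d), XtW @ y)

def project_simplex(v):
    # Euclidean projection onto {w : w >= 0, sum w = 1} (Duchi et al., 2008).
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    j = np.arange(1, len(v) + 1)
    rho = np.nonzero(u * j > css - 1)[0][-1]
    tau = (css[rho] - 1) / (rho + 1.0)
    return np.maximum(v - tau, 0.0)

def update_weights(residuals, mu):
    # argmin_alpha sum_k alpha_k r_k + (1/mu) sum_k alpha_k^2 on the simplex,
    # i.e. the projection of -mu*r/2: large residual -> small quality weight.
    return project_simplex(-0.5 * mu * residuals)

def decontaminate(X, y, lam=0.1, mu=5.0, iters=10):
    n = len(y)
    alpha = np.full(n, 1.0 / n)              # start from uniform sample weights
    for _ in range(iters):
        theta = fit_model(X, y, alpha, lam)       # fix weights, fit the model
        residuals = (X @ theta - y) ** 2          # per-sample quality estimate
        alpha = update_weights(residuals, mu)     # fix model, re-weight samples
    return theta, alpha

# Demo: a few deliberately corrupted samples end up with near-zero weight.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 5))
y = X @ rng.normal(size=5) + 0.01 * rng.normal(size=50)
y[:5] += 5.0                                 # corrupt five "training samples"
theta, alpha = decontaminate(X, y)
print(alpha[:5])

Because each alternation exactly minimizes a convex subproblem of one shared objective, the joint loss is non-increasing across iterations; high-residual samples receive near-zero weight while the weight of well-fit samples grows, mirroring the behaviour the abstract describes.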
