
  • Gupta, Akshita, Incept Inst Artificial Intelligence, U Arab Emirates (author)

OW-DETR: Open-world Detection Transformer

  • Article/chapter, English, 2022

Publisher, year of publication, extent ...

  • IEEE COMPUTER SOC, 2022
  • Carrier type: print (rdacarrier)

Identifiers

  • LIBRIS ID: oai:DiVA.org:liu-190654
  • URI: https://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-190654
  • DOI: https://doi.org/10.1109/CVPR52688.2022.00902

Supplementary language information

  • Language: English
  • Abstract in: English

Part of subdatabase

Classification

  • Subject category: ref (swepub-contenttype)
  • Subject category: kon (swepub-publicationtype)

Notes

  • Funding agencies: VR starting grant [2016-05543]
  • Open-world object detection (OWOD) is a challenging computer vision problem, where the task is to detect a known set of object categories while simultaneously identifying unknown objects. Additionally, the model must incrementally learn new classes that become known in the next training episodes. Distinct from standard object detection, the OWOD setting poses significant challenges for generating quality candidate proposals on potentially unknown objects, separating the unknown objects from the background and detecting diverse unknown objects. Here, we introduce a novel end-to-end transformer-based framework, OW-DETR, for open-world object detection. The proposed OW-DETR comprises three dedicated components namely, attention-driven pseudo-labeling, novelty classification and objectness scoring to explicitly address the aforementioned OWOD challenges. Our OW-DETR explicitly encodes multi-scale contextual information, possesses less inductive bias, enables knowledge transfer from known classes to the unknown class and can better discriminate between unknown objects and background. Comprehensive experiments are performed on two benchmarks: MS-COCO and PASCAL VOC. The extensive ablations reveal the merits of our proposed contributions. Further, our model outperforms the recently introduced OWOD approach, ORE, with absolute gains ranging from 1.8% to 3.3% in terms of unknown recall on MS-COCO. In the case of incremental object detection, OW-DETR outperforms the state-of-the-art for all settings on PASCAL VOC. Our code is available at https://github.com/akshitac8/OW-DETR.
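The following is a minimal sketch of the attention-driven pseudo-labeling idea described in the abstract, assuming a PyTorch setting: unmatched query proposals are scored by the mean backbone-feature activation inside their boxes, and the highest-scoring ones are pseudo-labeled as unknown objects. The function name, tensor layouts, and the top-k value are illustrative assumptions, not code taken from the linked repository (https://github.com/akshitac8/OW-DETR).

```python
# Illustrative sketch only: not the authors' implementation.
import torch


def attention_driven_pseudo_labels(feature_map, proposal_boxes, matched_mask, top_k=5):
    """Score unmatched proposals by mean feature activation and return the
    indices of the top-scoring ones, to be pseudo-labeled as 'unknown'.

    feature_map:    (C, H, W) backbone features for one image (assumed layout)
    proposal_boxes: (N, 4) query boxes in normalized (cx, cy, w, h) format
    matched_mask:   (N,) bool, True where a query matched a known ground-truth box
    """
    C, H, W = feature_map.shape
    attn = feature_map.mean(dim=0)                     # (H, W) activation map

    scores = torch.full((proposal_boxes.size(0),), float("-inf"))
    for i, (cx, cy, w, h) in enumerate(proposal_boxes):
        if matched_mask[i]:
            continue                                   # keep known-class matches as-is
        # Convert the normalized box to integer pixel bounds on the feature map.
        x0 = int((cx - w / 2).clamp(0, 1) * W)
        y0 = int((cy - h / 2).clamp(0, 1) * H)
        x1 = max(x0 + 1, int((cx + w / 2).clamp(0, 1) * W))
        y1 = max(y0 + 1, int((cy + h / 2).clamp(0, 1) * H))
        # Objectness proxy: mean activation inside the box.
        scores[i] = attn[y0:y1, x0:x1].mean()

    # Only unmatched proposals can be selected; cap k accordingly.
    k = min(top_k, int((~matched_mask).sum()))
    return scores.topk(k).indices


if __name__ == "__main__":
    feats = torch.randn(256, 32, 32)                   # dummy backbone features
    boxes = torch.rand(10, 4) * 0.5 + 0.25             # 10 dummy query boxes
    matched = torch.zeros(10, dtype=torch.bool)
    matched[:3] = True                                 # pretend 3 queries matched known GT
    print(attention_driven_pseudo_labels(feats, boxes, matched))
```

Per the abstract, the full OW-DETR pipeline additionally feeds such queries through novelty-classification and objectness-scoring branches; this sketch covers only the selection step.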

Subject headings and genre terms

Additional entries (persons, institutions, conferences, titles ...)

  • Narayan, Sanath, Incept Inst Artificial Intelligence, U Arab Emirates (author)
  • Joseph, K. J., IIT Hyderabad, India; Mohamed Bin Zayed Univ Artificial Intelligence, U Arab Emirates (author)
  • Khan, Salman, Australian Natl Univ, Australia; Mohamed Bin Zayed Univ Artificial Intelligence, U Arab Emirates (author)
  • Khan, Fahad, Linköpings universitet, Datorseende, Tekniska fakulteten; Mohamed Bin Zayed Univ Artificial Intelligence, U Arab Emirates (Swepub:liu fahkh30) (author)
  • Shah, Mubarak, Univ Cent Florida, FL 32816 USA (author)
  • Incept Inst Artificial Intelligence, U Arab Emirates; IIT Hyderabad, India; Mohamed Bin Zayed Univ Artificial Intelligence, U Arab Emirates (creator_code:org_t)

Related titles

  • Part of: 2022 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), IEEE COMPUTER SOC, pp. 9225-9234. ISBN 9781665469463, 9781665469470
