SwePub

  • Mokayed, Hamam, Luleå tekniska universitet, EISLAB (author)

A New DCT-PCM Method for License Plate Number Detection in Drone Images

  • Article/chapter, English, 2021

Publisher, publication year, extent ...

  • Elsevier, 2021
  • print (rdacarrier)

Numbers

  • LIBRIS-ID: oai:DiVA.org:ltu-84640
  • URI: https://urn.kb.se/resolve?urn=urn:nbn:se:ltu:diva-84640
  • DOI: https://doi.org/10.1016/j.patrec.2021.05.002

Supplementary language notes

  • Language: English
  • Summary in: English

Classification

  • Subject category: ref (swepub-contenttype)
  • Subject category: art (swepub-publicationtype)

Notes

  • Validated; 2021; Level 2; 2021-06-14 (beamah); Research funders: Ministry of Higher Education, Malaysia (FP104-2020); Natural Science Foundation of China (61672273, 61832008); Science Foundation for Distinguished Young Scholars of Jiangsu (BK20160021)
  • License plate number detection in drone images is a complex problem because the images are generally captured at oblique angles and therefore suffer from perspective distortion, non-uniform illumination, degradation, blur, occlusion, loss of visibility, etc. Unlike most existing methods, which focus on images captured head-on (from an orthogonal direction), the proposed work focuses on drone text images. Inspired by the Phase Congruency Model (PCM), which is invariant to non-uniform illumination, contrast variations, geometric transformations and, to some extent, distortion, we explore the combination of DCT and PCM (DCT-PCM) for detecting license plate number text in drone images. Motivated by the strong discriminative power of deep learning models, the proposed method exploits fully connected neural networks to eliminate false positives and achieve better detection results. Furthermore, the proposed work constructs a working model suited to real environments. To evaluate the proposed method, we use our own dataset captured by drones and the benchmark license plate dataset Medialab. We also demonstrate the effectiveness of the proposed method on benchmark natural scene text detection datasets, namely SVT, MSRA-TD-500, ICDAR 2017 MLT and Total-Text. (An illustrative sketch of this kind of DCT-plus-contrast scoring follows below.)
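
The following is a minimal sketch, not the authors' implementation, of how a block-wise DCT energy cue could be fused with a rough phase-congruency-style contrast cue to score text-like regions. It assumes numpy and scipy; the function names, block size, threshold, and the gradient-based stand-in for true phase congruency are all illustrative assumptions, and the paper's fully connected network for false-positive elimination is not reproduced here.

```python
import numpy as np
from scipy.fft import dctn


def dct_highfreq_energy(img, block=8):
    """Per-block energy of non-DC DCT coefficients (text regions tend to be rich in high frequencies)."""
    h, w = img.shape
    h, w = h - h % block, w - w % block
    out = np.zeros((h // block, w // block))
    for i in range(0, h, block):
        for j in range(0, w, block):
            c = dctn(img[i:i + block, j:j + block].astype(float), norm="ortho")
            c[0, 0] = 0.0  # discard the DC (average intensity) term
            out[i // block, j // block] = np.sum(np.abs(c))
    return out


def contrast_cue(img, block=8):
    """Rough stand-in for phase congruency: block-averaged gradient magnitude."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    h, w = img.shape
    h, w = h - h % block, w - w % block
    return mag[:h, :w].reshape(h // block, block, w // block, block).mean(axis=(1, 3))


def text_candidate_map(img, block=8, thresh=0.5):
    """Fuse the two cues into a per-block score in [0, 1] and threshold it; surviving
    blocks would then be verified by a classifier (the paper uses fully connected networks)."""
    d = dct_highfreq_energy(img, block)
    p = contrast_cue(img, block)
    score = (d / (d.max() + 1e-9)) * (p / (p.max() + 1e-9))
    return score > thresh
```

Calling text_candidate_map(gray) on a 2-D grayscale numpy array yields a boolean per-block map of likely text regions; in the published method the DCT and PCM responses are combined more carefully and false positives are pruned by a learned classifier.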

Added entries (persons, corporate bodies, meetings, titles ...)

  • Shivakumara, Palaiahnakote, Faculty of Computer Science and Information Technology, University of Malaya, Kuala Lumpur, Malaysia (author)
  • Hock Woon, Hon, Advanced Informatics Lab, MIMOS Berhad, Kuala Lumpur, Malaysia (author)
  • Kankanhalli, Mohan, School of Computing, National University of Singapore, Singapore (author)
  • Lu, Tong, National Key Lab for Novel Software Technology, Nanjing University, Nanjing, China (author)
  • Pal, Umapada, Computer Vision and Pattern Recognition Unit, Indian Statistical Institute, Kolkata, India (author)
  • Luleå tekniska universitet, EISLAB (creator_code: org_t)

Related titles

  • In: Pattern Recognition Letters. Elsevier, vol. 148, pp. 45-53. ISSN 0167-8655, E-ISSN 1872-7344
