SwePub
Search the SwePub database


Result list for search "WFRF:(Ng Chun Chet)"


  • Results 1-3 of 3
1.
  • Hsu, Pohao, et al. (author)
  • Extremely Low-light Image Enhancement with Scene Text Restoration
  • 2022
  • In: Proceedings - International Conference on Pattern Recognition. - 1051-4651. - 9781665490627 ; 2022-August, pp. 317-323
  • Conference paper (peer-reviewed), abstract:
    • Deep learning based methods have made impressive progress in enhancing extremely low-light images: the quality of the reconstructed images has generally improved. However, we found that most of these methods cannot sufficiently recover image details, for instance the text in the scene. In this paper, a novel image enhancement framework is proposed to simultaneously restore the scene text and the overall image quality under extremely low-light conditions. In particular, we employ a self-regularised attention map, an edge map, and a novel text detection loss. Quantitative and qualitative experimental results show that the proposed model outperforms state-of-the-art methods in terms of image restoration, text detection, and text spotting on the See In the Dark and ICDAR15 datasets.
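The objective described in the abstract above combines an image reconstruction term with edge-map and text-detection terms. A minimal sketch of such a composite loss is below; the function name, the weighted-sum form, and the weight values are illustrative assumptions, not the paper's actual formulation.

```python
def enhancement_loss(recon_loss, edge_loss, text_det_loss,
                     w_edge=0.1, w_text=0.1):
    # Hypothetical composite objective: image reconstruction plus
    # weighted edge-map and text-detection terms, as the abstract
    # describes.  The weights are illustrative, not the paper's values.
    return recon_loss + w_edge * edge_loss + w_text * text_det_loss
```

The text-detection term is what pushes the enhancer to keep scene text legible rather than only optimising global image quality.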
2.
  • Nah, Wan Jun, et al. (author)
  • Rethinking Long-Tailed Visual Recognition with Dynamic Probability Smoothing and Frequency Weighted Focusing
  • 2023
  • In: Proceedings - International Conference on Image Processing, ICIP. - 1522-4880. ; pp. 435-439
  • Conference paper (peer-reviewed), abstract:
    • Deep learning models trained on long-tailed (LT) datasets often exhibit bias towards head classes with high frequency. This paper highlights the limitations of existing solutions that combine class- and instance-level re-weighting loss in a naive manner. Specifically, we demonstrate that such solutions result in overfitting the training set, significantly impacting the rare classes. To address this issue, we propose a novel loss function that dynamically reduces the influence of outliers and assigns class-dependent focusing parameters. We also introduce a new long-tailed dataset, ICText-LT, featuring various image qualities and greater realism than artificially sampled datasets. Our method has proven effective, outperforming existing methods through superior quantitative results on CIFAR-LT, Tiny ImageNet-LT, and our new ICText-LT datasets. The source code and new dataset are available at https://github.com/nwjun/FFDS-Loss.
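The "class-dependent focusing parameters" mentioned in the abstract above can be sketched as a focal-style loss whose focusing exponent depends on class frequency. Everything below (the function name, the linear gamma schedule, the direction of the weighting) is a hypothetical reading, not the exact FFDS formulation, which lives in the linked repository.

```python
import math

def class_focal_loss(p_true, class_freq, gamma_max=2.0):
    # Illustrative sketch of a class-dependent focusing parameter:
    # frequent (head) classes get a larger gamma, so their easy,
    # well-classified samples are down-weighted, while rare (tail)
    # classes stay close to plain cross-entropy.  This schedule is
    # a hypothetical choice, not the paper's exact FFDS loss.
    gamma = gamma_max * class_freq          # in [0, gamma_max]
    return -((1.0 - p_true) ** gamma) * math.log(p_true)
```

At the same predicted probability, a tail-class sample then contributes a larger loss than a head-class sample, which is the intended rebalancing effect.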
3.
  • Ng, Chun Chet, et al. (author)
  • When IC meets text: Towards a rich annotated integrated circuit text dataset
  • 2024
  • In: Pattern Recognition. - 0031-3203. ; 147
  • Journal article (peer-reviewed), abstract:
    • Automated Optical Inspection (AOI) is a process that uses cameras to autonomously scan printed circuit boards for quality control. Text is often printed on chip components, and it is crucial that this text is correctly recognized during AOI, as it contains valuable information. In this paper, we introduce ICText, the largest dataset for text detection and recognition on integrated circuits. Uniquely, it includes labels for character quality attributes such as low contrast, blurry, and broken. While loss-reweighting and Curriculum Learning (CL) have been proposed to improve object detector performance by balancing positive and negative samples and gradually training the model from easy to hard samples, these methods have had limited success with one-stage object detectors commonly used in industry. To address this, we propose Attribute-Guided Curriculum Learning (AGCL), which leverages the labeled character quality attributes in ICText. Our extensive experiments demonstrate that AGCL can be applied to different detectors in a plug-and-play fashion to achieve higher Average Precision (AP), significantly outperforming existing methods on ICText without any additional computational overhead during inference. Furthermore, we show that AGCL is also effective on the generic object detection dataset Pascal VOC. Our code and dataset will be publicly available at https://github.com/chunchet-ng/ICText-AGCL.
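The easy-to-hard scheduling that the abstract above attributes to AGCL can be sketched by ordering samples on their labeled quality attributes: a character instance with no adverse attributes is "easy", one flagged as low contrast, blurry, and broken is "hard". The data layout and the count-based difficulty score below are assumptions for illustration; the actual AGCL schedule is in the linked ICText-AGCL repository.

```python
def curriculum_order(samples):
    # Attribute-guided ordering sketch: each sample carries binary
    # quality attributes (e.g. low_contrast, blurry, broken), and
    # difficulty is taken as the number of adverse attributes set.
    # Hypothetical reading of AGCL, not the paper's exact schedule.
    return sorted(samples, key=lambda s: sum(s["attributes"].values()))
```

Feeding the detector batches in this order approximates curriculum training: clean characters first, degraded ones later.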