SwePub
Search the SwePub database


Result list for the search "WFRF:(Kew Jie Long)"

Search: WFRF:(Kew Jie Long)

  • Results 1-5 of 5
1.
  • Beh, Jing Chong, et al. (authors)
  • CyEDA: Cycle-Object Edge Consistency Domain Adaptation
  • 2022
  • In: Proceedings - International Conference on Image Processing, ICIP. - 1522-4880. ; pp. 2986-2990
  • Conference paper (peer-reviewed), abstract:
    • Despite the advent of domain adaptation methods, most of them still struggle to preserve instance-level details of images when performing global-level translation. While there are instance-level translation methods that can retain instance-level details well, most of them require either a pre-trained object detection/segmentation network or annotation labels. In this work, we propose a novel method, CyEDA, that performs global-level domain adaptation while taking care of image content, without integrating any pre-trained networks or using annotation labels. That is, we introduce masking and a cycle-object edge consistency loss that exploit the preservation of image objects. We show that our approach is able to outperform other SOTAs in terms of image quality and FID score on both the BDD100K and GTA datasets.
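As a rough sketch of the cycle-object edge consistency idea summarized in the abstract above: one way to realize such a term is an L1 penalty between edge maps of an input image and its cycle-reconstructed counterpart, optionally restricted to object regions by a mask. The Sobel operator, the optional mask argument, and the L1 form are illustrative assumptions, not the paper's stated formulation.

import torch
import torch.nn.functional as F

def sobel_edges(img):
    # Per-channel Sobel gradient magnitude for a batch of images (B, C, H, W).
    kx = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]], device=img.device)
    ky = kx.t()
    c = img.shape[1]
    kx = kx.view(1, 1, 3, 3).repeat(c, 1, 1, 1)
    ky = ky.view(1, 1, 3, 3).repeat(c, 1, 1, 1)
    gx = F.conv2d(img, kx, padding=1, groups=c)
    gy = F.conv2d(img, ky, padding=1, groups=c)
    return torch.sqrt(gx ** 2 + gy ** 2 + 1e-6)

def cycle_edge_consistency_loss(real, cycled, mask=None):
    # L1 distance between edge maps of the input and its cycle reconstruction;
    # an optional mask (B, 1, H, W) limits the penalty to object regions.
    diff = torch.abs(sobel_edges(real) - sobel_edges(cycled))
    if mask is not None:
        diff = diff * mask
    return diff.mean()

A term of this shape would typically be added to the usual adversarial and cycle-consistency objectives with a small weight.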
2.
  • Hsu, Pohao, et al. (authors)
  • Extremely Low-light Image Enhancement with Scene Text Restoration
  • 2022
  • In: Proceedings - International Conference on Pattern Recognition. - 1051-4651. - 9781665490627 ; 2022-August, pp. 317-323
  • Conference paper (peer-reviewed), abstract:
    • Deep learning based methods have made impressive progress in enhancing extremely low-light images - the image quality of the reconstructed images has generally improved. However, we found that most of these methods could not sufficiently recover the image details, for instance the text in the scene. In this paper, a novel image enhancement framework is proposed to restore the scene text, as well as the overall quality of the image, simultaneously under extremely low-light conditions. In particular, we employ a self-regularised attention map, an edge map, and a novel text detection loss. The quantitative and qualitative experimental results show that the proposed model outperforms state-of-the-art methods in terms of image restoration, text detection, and text spotting on the See In the Dark and ICDAR15 datasets.
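A minimal sketch of how the reconstruction, edge, and text-detection terms mentioned in this abstract could be combined is given below. The loss weights, the L1 choices, and the text_det_loss input (assumed to be computed by a text detector on the enhanced image) are placeholders; the paper's actual formulation, including the self-regularised attention map, is not modelled here.

import torch.nn.functional as F

def enhancement_objective(pred, target, pred_edges, target_edges, text_det_loss,
                          w_rec=1.0, w_edge=0.1, w_text=0.1):
    # Weighted sum of pixel reconstruction, edge preservation, and a
    # text-detection loss evaluated on the enhanced image.
    rec = F.l1_loss(pred, target)
    edge = F.l1_loss(pred_edges, target_edges)
    return w_rec * rec + w_edge * edge + w_text * text_det_loss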
3.
  • Lin, Che-Tsung, 1979, et al. (authors)
  • Cycle-Object Consistency for Image-to-Image Domain Adaptation
  • 2023
  • In: Pattern Recognition. - Elsevier BV. - 0031-3203. ; 138
  • Journal article (peer-reviewed), abstract:
    • Recent advances in generative adversarial networks (GANs) have proven effective in performing domain adaptation for object detectors through data augmentation. While GANs are exceptionally successful, the methods that preserve objects well in the image-to-image translation task usually require an auxiliary task, such as semantic segmentation, to prevent the image content from being distorted too much. However, pixel-level annotations are difficult to obtain in practice. Alternatively, instance-aware image translation models treat object instances and the background separately. Yet, they require object detectors at test time, assuming that off-the-shelf detectors work well in both domains. In this work, we present AugGAN-Det, which introduces a Cycle-object Consistency (CoCo) loss to generate instance-aware translated images across complex domains. The object detector of the target domain is directly leveraged in generator training and guides the preserved objects in the translated images to carry target-domain appearances. Compared to previous models, which, for example, require pixel-level semantic segmentation to force the latent distribution to be object-preserving, this work only needs bounding box annotations, which are significantly easier to acquire. Next, compared to instance-aware GAN models, our model, AugGAN-Det, internalizes global and object style transfer without explicitly aligning the instance features. Most importantly, a detector is not required at test time. Experimental results demonstrate that our model outperforms recent object-preserving and instance-level models and achieves state-of-the-art detection accuracy and visual perceptual quality.
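The key mechanism described above, a frozen target-domain detector guiding the generator on translated images using only the source bounding boxes, can be sketched as follows. The helper name, the torchvision-style detector interface (returning a dict of losses when given images and targets), and the weighting are assumptions for illustration, not the paper's exact objective.

def coco_generator_loss(generator, target_detector, src_images, src_boxes, src_labels,
                        adv_loss, cycle_loss, w_det=1.0):
    # Translate source images to the target domain, then score the translation
    # with the frozen target-domain detector against the source annotations.
    fake_tgt = generator(src_images)
    targets = [{"boxes": b, "labels": l} for b, l in zip(src_boxes, src_labels)]
    det_losses = target_detector(list(fake_tgt), targets)  # dict of detection losses
    det_loss = sum(det_losses.values())
    return adv_loss + cycle_loss + w_det * det_loss

Note that only bounding boxes enter this term, which matches the abstract's point that pixel-level segmentation is not required.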
4.
  • Nah, Wan Jun, et al. (authors)
  • Rethinking Long-Tailed Visual Recognition with Dynamic Probability Smoothing and Frequency Weighted Focusing
  • 2023
  • In: Proceedings - International Conference on Image Processing, ICIP. - 1522-4880. ; pp. 435-439
  • Conference paper (peer-reviewed), abstract:
    • Deep learning models trained on long-tailed (LT) datasets often exhibit bias towards head classes with high frequency. This paper highlights the limitations of existing solutions that combine class- and instance-level re-weighting losses in a naive manner. Specifically, we demonstrate that such solutions result in overfitting the training set, significantly impacting the rare classes. To address this issue, we propose a novel loss function that dynamically reduces the influence of outliers and assigns class-dependent focusing parameters. We also introduce a new long-tailed dataset, ICText-LT, featuring various image qualities and greater realism than artificially sampled datasets. Our method has proven effective, outperforming existing methods through superior quantitative results on the CIFAR-LT, Tiny ImageNet-LT, and our new ICText-LT datasets. The source code and new dataset are available at https://github.com/nwjun/FFDS-Loss.
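The frequency-weighted focusing idea, a focal-style loss whose focusing parameter depends on class frequency, can be sketched as below. The linear mapping from frequency to gamma and the gamma range are illustrative assumptions, and the paper's dynamic probability smoothing component is not modelled here.

import torch
import torch.nn.functional as F

def frequency_weighted_focal_loss(logits, targets, class_counts,
                                  gamma_min=1.0, gamma_max=3.0):
    # class_counts: 1-D tensor with the number of training samples per class.
    freqs = class_counts.float() / class_counts.sum()
    # Rarer classes receive a larger focusing parameter.
    gamma = gamma_min + (gamma_max - gamma_min) * (1.0 - freqs / freqs.max())
    log_p = F.log_softmax(logits, dim=1)
    p_t = log_p.gather(1, targets.unsqueeze(1)).squeeze(1).exp()
    gamma_t = gamma.to(logits.device)[targets]
    # Standard focal form, with a per-class rather than a global gamma.
    loss = -((1.0 - p_t) ** gamma_t) * torch.log(p_t + 1e-12)
    return loss.mean()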
5.
  • Ng, Chun Chet, et al. (authors)
  • When IC meets text: Towards a rich annotated integrated circuit text dataset
  • 2024
  • In: Pattern Recognition. - 0031-3203. ; 147
  • Journal article (peer-reviewed), abstract:
    • Automated Optical Inspection (AOI) is a process that uses cameras to autonomously scan printed circuit boards for quality control. Text is often printed on chip components, and it is crucial that this text is correctly recognized during AOI, as it contains valuable information. In this paper, we introduce ICText, the largest dataset for text detection and recognition on integrated circuits. Uniquely, it includes labels for character quality attributes such as low contrast, blurry, and broken. While loss re-weighting and Curriculum Learning (CL) have been proposed to improve object detector performance by balancing positive and negative samples and gradually training the model from easy to hard samples, these methods have had limited success with the one-stage object detectors commonly used in industry. To address this, we propose Attribute-Guided Curriculum Learning (AGCL), which leverages the labeled character quality attributes in ICText. Our extensive experiments demonstrate that AGCL can be applied to different detectors in a plug-and-play fashion to achieve higher Average Precision (AP), significantly outperforming existing methods on ICText without any additional computational overhead during inference. Furthermore, we show that AGCL is also effective on the generic object detection dataset Pascal VOC. Our code and dataset will be publicly available at https://github.com/chunchet-ng/ICText-AGCL.
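Attribute-guided curriculum learning, as summarized above, orders training from easy to hard samples using the labelled character-quality attributes. A minimal sketch of such an ordering, ranking samples by how many degraded-quality attributes (low contrast, blurry, broken) they carry, is shown below; the actual AGCL schedule and its integration with the detector's training loop are not described in the abstract.

from typing import Iterator, List, Sequence
from torch.utils.data import Sampler

class AttributeCurriculumSampler(Sampler[int]):
    # Yields dataset indices sorted from 'easy' samples (few degraded-quality
    # attributes) to 'hard' ones (many), forming a simple one-epoch curriculum.

    def __init__(self, attribute_counts: Sequence[int]):
        self.order: List[int] = sorted(range(len(attribute_counts)),
                                       key=lambda i: attribute_counts[i])

    def __iter__(self) -> Iterator[int]:
        return iter(self.order)

    def __len__(self) -> int:
        return len(self.order)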