SwePub
Search the SwePub database


Result list for the search "WFRF:(Chan Chee Seng)"

Search: WFRF:(Chan Chee Seng)

  • Results 1-5 of 5
1.
  • Hsu, Pohao, et al. (author)
  • Extremely Low-light Image Enhancement with Scene Text Restoration
  • 2022
  • In: Proceedings - International Conference on Pattern Recognition. - 1051-4651. - 9781665490627 ; 2022, pp. 317-323
  • Conference paper (peer-reviewed). Abstract:
    • Deep learning-based methods have made impressive progress in enhancing extremely low-light images: the quality of the reconstructed images has generally improved. However, we found that most of these methods do not sufficiently recover image details, for instance the text in the scene. In this paper, a novel image enhancement framework is proposed to restore the scene text and the overall quality of the image simultaneously under extremely low-light conditions. In particular, we employ a self-regularised attention map, an edge map, and a novel text detection loss. Quantitative and qualitative experimental results show that the proposed model outperforms state-of-the-art methods in terms of image restoration, text detection, and text spotting on the See In the Dark and ICDAR15 datasets. (See the loss sketch after this entry.)
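As a rough illustration of the three training signals named in this abstract, here is a minimal PyTorch-style sketch in which a self-regularised attention map weights an L1 reconstruction term, with edge-map and text-detection terms added on top. The function signature, the weights, and the exact loss forms are assumptions for illustration, not the authors' released implementation.

    import torch.nn.functional as F

    def composite_loss(enhanced, target, attention_map, edge_pred, edge_gt,
                       text_logits, text_gt, w_edge=0.1, w_text=0.05):
        # Reconstruction, modulated by the attention map so poorly lit
        # regions contribute more strongly to the loss.
        recon = (attention_map * (enhanced - target).abs()).mean()
        # Edge term encourages sharp structures such as character strokes.
        edge = F.l1_loss(edge_pred, edge_gt)
        # Text-detection term: BCE against a text/non-text heat map.
        text = F.binary_cross_entropy_with_logits(text_logits, text_gt)
        return recon + w_edge * edge + w_text * text
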
2.
  • Kristan, Matej, et al. (author)
  • The Visual Object Tracking VOT2013 challenge results
  • 2013
  • In: 2013 IEEE International Conference on Computer Vision Workshops (ICCVW). - IEEE. - 9781479930227 ; pp. 98-111
  • Conference paper (peer-reviewed). Abstract:
    • Visual tracking has attracted significant attention in the last few decades. The recent surge in the number of publications on tracking-related problems has made it almost impossible to follow developments in the field. One of the reasons is the lack of commonly accepted annotated datasets and standardized evaluation protocols that would allow objective comparison of different tracking methods. To address this issue, the Visual Object Tracking (VOT) workshop was organized in conjunction with ICCV2013. Researchers from academia as well as industry were invited to participate in the first VOT2013 challenge, which was aimed at single-object visual trackers that do not apply pre-learned models of object appearance (model-free). Presented here is the VOT2013 benchmark dataset for evaluation of single-object visual trackers, as well as the results obtained by the trackers competing in the challenge. In contrast to related attempts in tracker benchmarking, the dataset is labeled per frame with visual attributes that indicate occlusion, illumination change, motion change, size change, and camera motion, offering a more systematic comparison of the trackers. Furthermore, we have designed an automated system for performing and evaluating the experiments. We present the evaluation protocol of the VOT2013 challenge and the results of a comparison of 27 trackers on the benchmark dataset. The dataset, the evaluation tools, and the tracker rankings are publicly available from the challenge website. (A sketch of attribute-wise scoring follows this entry.)
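To make the attribute-based comparison concrete, here is a minimal sketch of breaking a tracker's mean overlap down by the per-frame attribute labels the abstract describes. The data layout and attribute names are assumptions; the actual VOT toolkit defines its own protocol and file formats.

    from collections import defaultdict

    def per_attribute_overlap(frame_overlaps, frame_attributes):
        # frame_overlaps: per-frame IoU between prediction and ground truth.
        # frame_attributes: per-frame set of tags, e.g. {"occlusion"}.
        sums, counts = defaultdict(float), defaultdict(int)
        for iou, attrs in zip(frame_overlaps, frame_attributes):
            for attr in (attrs or {"neutral"}):
                sums[attr] += iou
                counts[attr] += 1
        return {attr: sums[attr] / counts[attr] for attr in sums}
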
3.
  • Lin, Che-Tsung, 1979, et al. (author)
  • Cycle-Object Consistency for Image-to-Image Domain Adaptation
  • 2023
  • In: Pattern Recognition. - Elsevier BV. - 0031-3203. ; 138
  • Journal article (peer-reviewed). Abstract:
    • Recent advances in generative adversarial networks (GANs) have proven effective in performing domain adaptation for object detectors through data augmentation. While GANs are exceptionally successful, the methods that preserve objects well in the image-to-image translation task usually require an auxiliary task, such as semantic segmentation, to prevent the image content from being too distorted. However, pixel-level annotations are difficult to obtain in practice. Alternatively, instance-aware image-translation models treat object instances and the background separately, yet they require object detectors at test time, assuming that off-the-shelf detectors work well in both domains. In this work, we present AugGAN-Det, which introduces a Cycle-object Consistency (CoCo) loss to generate instance-aware translated images across complex domains. The object detector of the target domain is directly leveraged in generator training and guides the preserved objects in the translated images to carry target-domain appearances. Compared to previous models, which, for example, require pixel-level semantic segmentation to force the latent distribution to be object-preserving, this work only needs bounding-box annotations, which are significantly easier to acquire. Moreover, unlike instance-aware GAN models, AugGAN-Det internalizes global and object-level style transfer without explicitly aligning instance features. Most importantly, a detector is not required at test time. Experimental results demonstrate that our model outperforms recent object-preserving and instance-level models and achieves state-of-the-art detection accuracy and visual perceptual quality. (A sketch of the generator objective follows this entry.)
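A hedged sketch of the generator objective described in the abstract: a CycleGAN-style adversarial term plus cycle consistency, extended with a detection loss computed by a frozen target-domain detector on the translated image so that objects survive translation. All names, weights, and callable interfaces are assumptions, not the AugGAN-Det code.

    def generator_step(G, F_inv, D_tgt, det_loss_fn, real_src, boxes_src,
                       w_cyc=10.0, w_obj=1.0):
        fake_tgt = G(real_src)                   # source -> target style
        rec_src = F_inv(fake_tgt)                # cycle back to source
        adv = -D_tgt(fake_tgt).mean()            # fool the target critic
        cyc = (rec_src - real_src).abs().mean()  # cycle-consistency (L1)
        # Cycle-object consistency: the frozen target-domain detector
        # should still locate the source boxes in the translated image.
        obj = det_loss_fn(fake_tgt, boxes_src)
        return adv + w_cyc * cyc + w_obj * obj
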
4.
  • Nah, Wan Jun, et al. (author)
  • Rethinking Long-Tailed Visual Recognition with Dynamic Probability Smoothing and Frequency Weighted Focusing
  • 2023
  • In: Proceedings - International Conference on Image Processing, ICIP. - 1522-4880. ; pp. 435-439
  • Conference paper (peer-reviewed). Abstract:
    • Deep learning models trained on long-tailed (LT) datasets often exhibit bias towards high-frequency head classes. This paper highlights the limitations of existing solutions that combine class- and instance-level re-weighting losses in a naive manner. Specifically, we demonstrate that such solutions result in overfitting to the training set, significantly impacting the rare classes. To address this issue, we propose a novel loss function that dynamically reduces the influence of outliers and assigns class-dependent focusing parameters. We also introduce a new long-tailed dataset, ICText-LT, featuring varied image quality and greater realism than artificially sampled datasets. Our method has proven effective, outperforming existing methods through superior quantitative results on the CIFAR-LT, Tiny ImageNet-LT, and our new ICText-LT datasets. The source code and the new dataset are available at https://github.com/nwjun/FFDS-Loss. (A loss sketch follows this entry.)
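One plausible reading of "frequency weighted focusing" is a focal-style loss whose focusing parameter grows as a class becomes rarer. The sketch below implements only that reading; the paper's exact formulation, including its dynamic probability smoothing term, may differ, and every constant here is an assumption.

    import torch.nn.functional as F

    def frequency_focal_loss(logits, targets, class_counts,
                             gamma_min=0.5, gamma_max=3.0):
        # class_counts: per-class training frequencies, e.g. from
        # torch.bincount(train_labels, minlength=num_classes).
        freqs = class_counts.float() / class_counts.sum()
        # Rarer classes receive a larger focusing parameter.
        gamma = gamma_min + (gamma_max - gamma_min) * (1.0 - freqs / freqs.max())
        log_pt = F.log_softmax(logits, dim=1).gather(
            1, targets.unsqueeze(1)).squeeze(1)
        pt = log_pt.exp()
        return (-(1.0 - pt).pow(gamma[targets]) * log_pt).mean()
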
5.
  • Ng, Chun Chet, et al. (author)
  • When IC meets text: Towards a rich annotated integrated circuit text dataset
  • 2024
  • In: Pattern Recognition. - 0031-3203. ; 147
  • Journal article (peer-reviewed). Abstract:
    • Automated Optical Inspection (AOI) is a process that uses cameras to autonomously scan printed circuit boards for quality control. Text is often printed on chip components, and it is crucial that this text is correctly recognized during AOI, as it contains valuable information. In this paper, we introduce ICText, the largest dataset for text detection and recognition on integrated circuits. Uniquely, it includes labels for character quality attributes such as low contrast, blurry, and broken. While loss re-weighting and Curriculum Learning (CL) have been proposed to improve object-detector performance by balancing positive and negative samples and gradually training the model from easy to hard samples, these methods have had limited success with the one-stage object detectors commonly used in industry. To address this, we propose Attribute-Guided Curriculum Learning (AGCL), which leverages the labeled character quality attributes in ICText. Our extensive experiments demonstrate that AGCL can be applied to different detectors in a plug-and-play fashion to achieve higher Average Precision (AP), significantly outperforming existing methods on ICText without any additional computational overhead during inference. Furthermore, we show that AGCL is also effective on the generic object detection dataset Pascal VOC. Our code and dataset will be publicly available at https://github.com/chunchet-ng/ICText-AGCL. (A curriculum-scheduling sketch follows this entry.)
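A minimal sketch of an attribute-guided curriculum, assuming each training sample carries a count of quality-defect attributes (low contrast, blurry, broken) and that harder samples are admitted gradually over training. The linear schedule is an illustrative assumption, not the released AGCL implementation.

    def curriculum_subset(samples, epoch, total_epochs):
        # samples: list of (image, boxes, num_defect_attrs) tuples.
        # Early epochs train on clean samples only; samples with more
        # defect attributes are mixed in as training progresses.
        max_defects = 3  # low contrast, blurry, broken
        allowed = round(max_defects * epoch / max(total_epochs - 1, 1))
        return [s for s in samples if s[2] <= allowed]
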
