SwePub

Hit list for the search "WFRF:(Hum Yan Chai)"

  • Results 1-10 of 10
1.
  • Hum, Yan Chai, et al. (author)
  • A contrast enhancement framework under uncontrolled environments based on just noticeable difference
  • 2022
  • In: Signal Processing: Image Communication. - Elsevier. - ISSN 0923-5965, 1879-2677; vol. 103
  • Journal article (peer-reviewed); abstract:
    • Image contrast enhancement refers to an operation of remapping the pixel values of an image to emphasize desired information in the image. In this work, we propose a novel pixel-based (local) contrast enhancement algorithm based on human visual perception. First, we observe that pixels with lower regional contrast should be amplified to enhance the contrast, while pixels with higher regional contrast should be suppressed to avoid undesired over-enhancement. To determine the quality of the regional contrast in the image (either lower or higher), a reference image is created using a proposed global contrast enhancement method (termed Mean Brightness Bidirectional Histogram Equalization in the paper) for fast computation. To quantify the abovementioned regional contrast, we propose a method based on human visual perception that takes the Just Noticeable Difference (JND) into account. In short, our proposed algorithm is able to limit the enhancement of well-contrasted regions and enhance the poorly contrasted regions in an image. Both objective and subjective quality experimental results suggest that the proposed algorithm enhances images consistently across images with different dynamic ranges. We conclude that the proposed algorithm exhibits excellent consistency in producing satisfactory results for different types of images. It is important to note that the algorithm can be applied directly in color spaces and is not limited to grayscale. The proposed algorithm can be obtained from the following GitHub link: https://github.com/UTARSL1/CHE.
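The amplify/suppress rule this abstract describes can be illustrated with a minimal grayscale sketch in pure Python over nested lists. The 3x3 neighbourhood contrast, the Weber-style JND threshold, and both gain values below are illustrative assumptions, not the paper's actual MBBHE-based formulation:

```python
def weber_jnd(background, k=0.05, floor=2.0):
    # Weber-law-style just noticeable difference: the threshold grows
    # with the local background intensity (k and floor are assumed).
    return max(floor, k * background)

def enhance(img, low_gain=1.8, high_gain=1.0):
    # Amplify pixels whose regional contrast falls below the JND
    # (poorly contrasted); leave pixels above it nearly untouched.
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(h):
        for x in range(w):
            neigh = [img[j][i]
                     for j in range(max(0, y - 1), min(h, y + 2))
                     for i in range(max(0, x - 1), min(w, x + 2))]
            mean = sum(neigh) / len(neigh)
            dev = img[y][x] - mean          # regional contrast of this pixel
            gain = low_gain if abs(dev) < weber_jnd(mean) else high_gain
            out[y][x] = min(255, max(0, round(mean + gain * dev)))
    return out
```

On this toy model, a pixel 3 levels above a flat 100-level background sits below the JND and is amplified, while a pixel 40 levels above it is already well contrasted and passes through unchanged.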
2.
3.
  • Mokayed, Hamam, et al. (author)
  • Fractional B-Spline Wavelets and U-Net Architecture for Robust and Reliable Vehicle Detection in Snowy Conditions
  • 2024
  • In: Sensors. - Multidisciplinary Digital Publishing Institute (MDPI). - ISSN 1424-8220; vol. 24, no. 12
  • Journal article (peer-reviewed); abstract:
    • This paper addresses the critical need for advanced real-time vehicle detection methodologies in Vehicle Intelligence Systems (VIS), especially in the context of using Unmanned Aerial Vehicles (UAVs) for data acquisition in severe weather conditions, such as heavy snowfall typical of the Nordic region. Traditional vehicle detection techniques, which often rely on custom-engineered features and deterministic algorithms, fall short in adapting to diverse environmental challenges, leading to a demand for more precise and sophisticated methods. The limitations of current architectures, particularly when deployed in real-time on edge devices with restricted computational capabilities, are highlighted as significant hurdles in the development of efficient vehicle detection systems. To bridge this gap, our research focuses on the formulation of an innovative approach that combines the fractional B-spline wavelet transform with a tailored U-Net architecture, operational on a Raspberry Pi 4. This method aims to enhance vehicle detection and localization by leveraging the unique attributes of the NVD dataset, which comprises drone-captured imagery under the harsh winter conditions of northern Sweden. The dataset, featuring 8450 annotated frames with 26,313 vehicles, serves as the foundation for evaluating the proposed technique. The comparative analysis of the proposed method against state-of-the-art detectors, such as YOLO and Faster R-CNN, in both accuracy and efficiency on constrained devices, emphasizes the capability of our method to balance the trade-off between speed and accuracy, thereby broadening its utility across various domains.
4.
  • Mokayed, Hamam, et al. (author)
  • On Restricted Computational Systems, Real-time Multi-tracking and Object Recognition Tasks are Possible
  • 2022
  • In: 2022 IEEE International Conference on Industrial Engineering and Engineering Management (IEEM). - IEEE. - ISBN 9781665486873; pp. 1523-1528
  • Conference paper (peer-reviewed); abstract:
    • Intelligent surveillance systems are inherently computationally intensive, and with their ever-expanding utilization in both small-scale home security applications and on the national scale, the need for efficient computer vision processing is critical. To this end, we propose a framework that utilizes modern hardware by incorporating multi-threading and concurrency to facilitate the complex processes associated with object detection, tracking, and identification, enabling lower-powered systems to support such intelligent surveillance systems effectively. The proposed architecture provides an adaptable and robust processing pipeline, leveraging the thread pool design pattern. The developed method achieves respectable throughput rates on low-powered or constrained compute platforms.
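The thread-pool pipeline this abstract describes can be sketched in a few lines with Python's concurrent.futures. The three stage functions below are stand-ins for real detection, tracking, and identification models, and the pool size is an assumed parameter:

```python
from concurrent.futures import ThreadPoolExecutor

def detect(frame):
    # Stand-in detector: a real system would run an object detector here.
    return {"frame": frame, "boxes": [(0, 0, 10, 10)]}

def track(result):
    # Stand-in tracker: assigns an ID to each detected box.
    result["track_ids"] = list(range(len(result["boxes"])))
    return result

def identify(result):
    # Stand-in identifier: labels each tracked box.
    result["labels"] = ["object"] * len(result["boxes"])
    return result

def process_frame(frame):
    # Each worker thread runs the full per-frame pipeline.
    return identify(track(detect(frame)))

def run_pipeline(frames, workers=4):
    # A bounded thread pool overlaps per-frame work so a constrained
    # device stays busy without unbounded thread creation; map()
    # preserves the input order of frames.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(process_frame, frames))
```

The thread-pool design pattern keeps a fixed set of reusable worker threads, which avoids per-frame thread creation overhead on low-powered hardware.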
5.
  • Mudhalwadkar, Nikhil Prashant, et al. (author)
  • Anime Sketch Colourization Using Enhanced Pix2pix GAN
  • 2023
  • In: Pattern Recognition: 7th Asian Conference, ACPR 2023, Kitakyushu, Japan, November 5–8, 2023, Proceedings, Part I. - Springer Nature. - pp. 148-164
  • Conference paper (peer-reviewed)
6.
  • Saleh, Yahya Sherif Solayman Mohamed, et al. (author)
  • How GANs assist in Covid-19 pandemic era: a review
  • 2024
  • In: Multimedia Tools and Applications. - Springer Nature. - ISSN 1380-7501, 1573-7721; vol. 83, no. 10, pp. 29915-29944
  • Research review (peer-reviewed)
7.
  • Shirkhani, Shaghayegh, et al. (author)
  • Study of AI-Driven Fashion Recommender Systems
  • 2023
  • In: SN Computer Science. - Springer. - ISSN 2662-995X, 2661-8907; vol. 4, no. 5
  • Journal article (peer-reviewed); abstract:
    • The rising diversity, volume, and pace of fashion manufacturing pose a considerable challenge in the fashion industry, making it difficult for customers to decide which product to purchase. In addition, fashion is an inherently subjective, cultural notion, an ensemble of clothing items that maintains a coherent style. In most of the domains in which recommender systems are developed (e.g., movies, e-commerce), similarity evaluation is the basis for recommendation. In the fashion domain, by contrast, compatibility is a critical factor. In addition, the raw visual features of product representations that drive most algorithms' performance in the fashion domain are distinguishable from the metadata of products in other domains. This literature review summarizes various Artificial Intelligence (AI) techniques that have lately been used in recommender systems for the fashion industry. AI enables higher-quality recommendations than earlier approaches. This has ushered in a new age for recommender systems, allowing for deeper insights into user-item relationships and representations and the discovery of patterns in demographic, textual, virtual, and contextual data. This work seeks to give a deeper understanding of the fashion recommender system domain by performing a comprehensive literature study of research on this topic in the past 10 years, focusing on image-based fashion recommender systems and taking AI improvements into account. The nuanced conceptions of this domain and their relevance have been developed to justify fashion domain-specific characteristics.
8.
  • Voon, Wingates, et al. (author)
  • Evaluating the effectiveness of stain normalization techniques in automated grading of invasive ductal carcinoma histopathological images
  • 2023
  • In: Scientific Reports. - Springer Nature. - ISSN 2045-2322; vol. 13, no. 1
  • Journal article (peer-reviewed); abstract:
    • Debates persist regarding the impact of Stain Normalization (SN) on recent breast cancer histopathological studies. While some studies propose no influence on classification outcomes, others argue for improvement. This study aims to assess the efficacy of SN in breast cancer histopathological classification, specifically focusing on Invasive Ductal Carcinoma (IDC) grading using Convolutional Neural Networks (CNNs). The null hypothesis asserts that SN has no effect on the accuracy of CNN-based IDC grading, while the alternative hypothesis suggests the contrary. We evaluated six SN techniques, with five templates selected as target images for the conventional SN techniques. We also utilized seven ImageNet pre-trained CNNs for IDC grading. The performance of models trained with and without SN was compared to discern the influence of SN on classification outcomes. The analysis unveiled a p-value of 0.11, indicating no statistically significant difference in Balanced Accuracy Scores between models trained with StainGAN-normalized images, achieving a score of 0.9196 (the best-performing SN technique), and models trained with non-normalized images, which scored 0.9308. As a result, we did not reject the null hypothesis, indicating that we found no evidence to support a significant discrepancy in effectiveness between stain-normalized and non-normalized datasets for IDC grading tasks. This study demonstrates that SN has a limited impact on IDC grading, challenging the assumption of performance enhancement through SN.
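The fail-to-reject conclusion in the abstract above rests on comparing two sets of balanced-accuracy scores. The exact statistical test is not stated in the abstract; as a generic illustration only, a stdlib-only permutation test on the difference of means might look like this (the score lists and iteration count in the example are hypothetical):

```python
import random

def permutation_test(a, b, n_iter=10000, seed=0):
    # Two-sided permutation test: how often does a random relabelling
    # of the pooled scores produce a mean difference at least as large
    # as the observed one?
    rng = random.Random(seed)
    observed = abs(sum(a) / len(a) - sum(b) / len(b))
    pooled = list(a) + list(b)
    extreme = 0
    for _ in range(n_iter):
        rng.shuffle(pooled)
        pa, pb = pooled[:len(a)], pooled[len(a):]
        if abs(sum(pa) / len(pa) - sum(pb) / len(pb)) >= observed:
            extreme += 1
    return extreme / n_iter
```

A p-value above the chosen significance level (0.05 in such studies) means the null hypothesis of no effect is not rejected, mirroring the reported p = 0.11.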
9.
10.
  • Voon, Wingates, et al. (author)
  • Performance analysis of seven Convolutional Neural Networks (CNNs) with transfer learning for Invasive Ductal Carcinoma (IDC) grading in breast histopathological images
  • 2022
  • In: Scientific Reports. - Springer Nature. - ISSN 2045-2322; vol. 12
  • Journal article (peer-reviewed); abstract:
    • Computer-aided Invasive Ductal Carcinoma (IDC) grading classification systems based on deep learning have shown that deep learning may achieve reliable accuracy in IDC grade classification using histopathology images. However, there is a dearth of comprehensive performance comparisons of Convolutional Neural Network (CNN) designs on IDC in the literature. As such, we conducted a comparative analysis of the performance of seven selected CNN models: EfficientNetB0, EfficientNetV2B0, EfficientNetV2B0-21k, ResNetV1-50, ResNetV2-50, MobileNetV1, and MobileNetV2, with transfer learning. To implement each pre-trained CNN architecture, we deployed the corresponding feature vector available from TensorFlow Hub, integrating it with dropout and dense layers to form a complete CNN model. Our findings indicate that EfficientNetV2B0-21k (0.72B floating-point operations and 7.1M parameters) outperformed the other CNN models in the IDC grading task. Nevertheless, we discovered that practically all the selected CNN models perform well in the IDC grading task, with an average balanced accuracy of 0.936 ± 0.0189 on the cross-validation set and 0.9308 ± 0.0211 on the test set.
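Both Voon et al. entries report balanced accuracy, which is simply the mean of per-class recall; a short stdlib-only sketch makes the metric concrete (the toy labels in the example are made up for illustration):

```python
from collections import defaultdict

def balanced_accuracy(y_true, y_pred):
    # Mean of per-class recall: each class counts equally, so the
    # score is not inflated by a dominant majority class.
    correct = defaultdict(int)
    total = defaultdict(int)
    for truth, pred in zip(y_true, y_pred):
        total[truth] += 1
        if truth == pred:
            correct[truth] += 1
    recalls = [correct[c] / total[c] for c in total]
    return sum(recalls) / len(recalls)
```

On an imbalanced toy set, plain accuracy can look high while balanced accuracy exposes a weak minority class: five of six predictions correct gives about 0.83 accuracy but only 0.75 balanced accuracy when half the minority class is missed.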