SwePub
Search the SwePub database

  Extended search

Result list for search "WFRF:(Rahaman G. M. Atiqur 1981 ) "


  • Result 1-10 of 16
1.
  • Ahnaf, S.M. Azoad, et al. (author)
  • Understanding CNN's Decision Making on OCT-based AMD Detection
  • 2021
  • In: 2021 International Conference on Electronics, Communications and Information Technology (ICECIT), 14-16 Sept. 2021. - : IEEE. - 9781665423632 - 9781665423649 ; , s. 1-4
  • Conference paper (peer-reviewed), abstract:
    • Age-related macular degeneration (AMD) is the third leading cause of incurable acute central vision loss. Optical coherence tomography (OCT) is a diagnostic process used for both AMD and diabetic macular edema (DME) detection. Spectral-domain OCT (SD-OCT), an improvement over traditional OCT, has revolutionized the assessment of AMD through its high acquisition rate, efficiency, and resolution. Many techniques have been adopted to detect AMD from normal OCT scans, and automatic detection of AMD has recently become popular, helped vastly by deep Convolutional Neural Networks (CNNs). Despite achieving better performance, CNN models are often criticized for not giving any justification for their decision-making. In this paper, we aim to visualize and critically analyze the decisions of CNNs in the context of OCT-based AMD detection. Multiple experiments were conducted on the Duke OCT dataset, utilizing transfer learning with ResNet50 and VGG16 models. After training the models for AMD detection, Gradient-weighted Class Activation Mapping (Grad-CAM) is used for feature visualization. Each layer mask was then compared with the feature-mapped image. We found that the region from the outer nuclear layer to the inner segment myeloid (ONL-ISM) predominates in decision making, with about 17.13% overlap for normal scans and 6.64% for AMD.
  •  
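The per-layer percentages above suggest measuring how much of a Grad-CAM heatmap's activation falls inside each retinal-layer mask. A minimal sketch of one such overlap measure; the function name and the energy-fraction definition are assumptions for illustration, not necessarily the paper's exact metric:

```python
import numpy as np

def layer_overlap_percent(heatmap, layer_mask):
    """Percentage of the Grad-CAM activation energy that falls
    inside a given retinal-layer mask."""
    heatmap = np.asarray(heatmap, dtype=float)
    mask = np.asarray(layer_mask, dtype=bool)
    total = heatmap.sum()
    if total == 0:
        return 0.0
    return 100.0 * heatmap[mask].sum() / total

# Toy example: all activation lies in the top half of a 4x4 heatmap
heatmap = np.zeros((4, 4))
heatmap[:2, :] = 1.0
layer_mask = np.zeros((4, 4), dtype=bool)
layer_mask[:2, :] = True          # mask covering the same top half
print(layer_overlap_percent(heatmap, layer_mask))  # 100.0
```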
2.
  • Islam, Sarder Tazul, et al. (author)
  • An Efficient Binary Descriptor to Describe Retinal Bifurcation Point for Image Registration
  • 2019
  • In: Pattern Recognition and Image Analysis. - Cham : Springer. - 9783030313319 - 9783030313326 ; , s. 543-552
  • Conference paper (peer-reviewed), abstract:
    • Bifurcation points are typically considered landmark points for retinal image registration. Robust detection, description and accurate matching of landmark points between images are crucial for successful registration of image pairs. This paper introduces a novel descriptor named Binary Descriptor for Retinal Bifurcation Point (BDRBP), so that bifurcation points can be described and matched more accurately. BDRBP uses four patterns that are reminiscent of the Haar basis functions. It relies on pixel intensity differences among groups of pixels within a patch centered on the bifurcation point to form a binary string. This binary string is the descriptor. Experiments are conducted on the publicly available retinal image registration dataset FIRE. The proposed descriptor has been compared with the state-of-the-art method of Li Chen et al. for bifurcation point description. Experiments show that bifurcation points can be described and matched with an accuracy of 86–90% with BDRBP, whereas for Li Chen et al.’s method the accuracy is 43–78%.
  •  
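As a rough illustration of an intensity-comparison binary descriptor of this kind, the sketch below builds a bit string from mean-intensity comparisons of small box pairs inside a patch and matches descriptors by Hamming distance. The random box sampling is a placeholder assumption; the actual BDRBP uses four fixed Haar-like patterns:

```python
import numpy as np

def binary_patch_descriptor(patch, n_bits=32, box=3, seed=0):
    """Toy intensity-comparison descriptor: for each bit, compare the
    mean intensities of two small boxes sampled inside the patch."""
    rng = np.random.default_rng(seed)
    h, w = patch.shape
    bits = np.empty(n_bits, dtype=np.uint8)
    for i in range(n_bits):
        # top-left corners of the two boxes (same sampling for every call)
        y1, x1, y2, x2 = rng.integers(0, [h - box, w - box] * 2)
        m1 = patch[y1:y1 + box, x1:x1 + box].mean()
        m2 = patch[y2:y2 + box, x2:x2 + box].mean()
        bits[i] = m1 > m2
    return bits

def hamming(d1, d2):
    """Number of differing bits between two binary descriptors."""
    return int(np.count_nonzero(d1 != d2))

patch = np.random.default_rng(1).random((16, 16))
# The same patch described twice yields identical bit strings
print(hamming(binary_patch_descriptor(patch), binary_patch_descriptor(patch)))  # 0
```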
3.
  • Jamil, Md Shafayat, et al. (author)
  • Advanced GradCAM++ : Improved Visual Explanations of CNN Decisions in Diabetic Retinopathy
  • 2023
  • In: Computer Vision and Image Analysis for Industry 4.0. - New York : Taylor & Francis Group. - 9781003256106 - 9781032164168 - 9781032187624 ; , s. 64-75
  • Book chapter (peer-reviewed), abstract:
    • Convolutional neural network (CNN)-based methods have achieved state-of-the-art performance in solving several complex computer vision problems, including assessment of diabetic retinopathy (DR). Despite this, CNN-based methods are often criticized as “black box” methods for providing limited to no understanding of their internal functioning. In recent years there has been increased interest in developing explainable deep learning models, and this paper is an effort in that direction in the context of DR. Based on one of the best-performing methods, Grad-CAM++, we propose Advanced Grad-CAM++ to further improve the visual explanations of CNN model predictions (when compared to Grad-CAM++), in terms of better localization of DR pathology as well as explaining occurrences of multiple DR pathology types in a fundus image. Keeping all the layers and operations as is, the proposed method adds an additional non-learnable bilateral convolutional layer between the input image and the very first learnable convolutional layer of Grad-CAM++. Experiments were conducted on fundus images collected from the publicly available sources EyePACS and DIARETDB1. The Intersection over Union (IoU) score between the ground truth and the heatmap produced by each method was used to quantitatively compare performance. The overall IoU score for Advanced Grad-CAM++ is 0.179, whereas for Grad-CAM++ it is 0.161. Thus an 11.18% improvement in agreement with the ground truths by the proposed method is inferable.
  •  
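The reported numbers can be checked directly: IoU is the ratio of intersection to union of two binary masks, and 0.179 versus 0.161 is indeed an 11.18% relative improvement. A minimal sketch:

```python
import numpy as np

def iou(mask_a, mask_b):
    """Intersection over Union of two binary masks."""
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    union = np.logical_or(a, b).sum()
    return float(np.logical_and(a, b).sum() / union) if union else 0.0

# Two toy masks sharing half of their union
print(iou([[1, 1], [0, 0]], [[1, 0], [0, 0]]))            # 0.5

# Relative improvement of the reported overall IoU scores
advanced, baseline = 0.179, 0.161
print(round(100 * (advanced - baseline) / baseline, 2))   # 11.18
```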
4.
  • Pal, Shuvro, Dr., et al. (author)
  • Image Forgery Detection Using CNN and Local Binary Pattern-Based Patch Descriptor
  • 2022
  • In: Innovations in Computational Intelligence and Computer Vision. - Singapore : Springer. - 9789811904745 - 9789811904752 ; , s. 429-439
  • Book chapter (peer-reviewed), abstract:
    • This paper proposes a novel method to detect multiple types of image forgery. The method uses Local Binary Patterns (LBP) as a descriptive feature of the image patches. A uniquely designed convolutional neural network (LBPNet) is proposed, in which four VGG-style blocks are followed by a support vector machine (SVM) classifier. It uses the ‘Swish’ activation function, the ‘Adam’ optimizer, and a combination of ‘Binary Cross-Entropy’ and ‘Squared Hinge’ as the loss function. The proposed method is trained and tested on 111,350 image patches generated from phase I of the IEEE IFS-TC Image Forensics Challenge dataset. Once trained, the results reveal that training such a network with computed LBP patches of real and forged images can produce 98.96% validation and 98.84% testing accuracy with an area under the curve (AUC) score of 0.988. The experimental results prove the efficacy of the proposed method with respect to state-of-the-art techniques.
  •  
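A minimal sketch of the classic 3x3 LBP computation that such a patch descriptor is based on (the surrounding pipeline, including LBPNet itself, is not reproduced here):

```python
import numpy as np

def lbp_8neighbor(img):
    """Classic 3x3 local binary pattern: threshold each pixel's 8
    neighbours against the centre and pack the results into one byte."""
    img = np.asarray(img, dtype=float)
    centre = img[1:-1, 1:-1]
    # neighbour offsets, clockwise from the top-left corner
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros(centre.shape, dtype=np.uint8)
    h, w = img.shape
    for bit, (dy, dx) in enumerate(offsets):
        neighbour = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        code |= (neighbour >= centre).astype(np.uint8) << bit
    return code

# On a flat patch every neighbour equals the centre, so all 8 bits are set
print(lbp_8neighbor(np.full((3, 3), 7.0)))   # [[255]]
```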
5.
  • Protik, Pranta, et al. (author)
  • Automated Detection of Diabetic Foot Ulcer Using Convolutional Neural Network
  • 2023
  • In: The Fourth Industrial Revolution and Beyond. - Singapore : Springer Nature. - 9789811980312 - 9789811980343 - 9789811980329 ; , s. 565-576
  • Book chapter (peer-reviewed), abstract:
    • Diabetic foot ulcers (DFU) are one of the major health complications for people with diabetes. They may cause limb amputation or lead to life-threatening situations if not detected and treated properly at an early stage. A diabetic patient has a 15–25% chance of developing DFU at a later stage in his or her life if proper foot care is not taken. Because of these high-risk factors, patients with diabetes need regular checkups and medications, which cause a huge financial burden on both the patients and their families. Hence, the need for a cost-effective, remote, and fitting DFU diagnosis technique is imminent. This paper presents a convolutional neural network (CNN)-based approach for the automated detection of diabetic foot ulcers from pictures of a patient’s feet. ResNet50 is used as the backbone of the Faster R-CNN, which performed better than the original Faster R-CNN that uses VGG16. A total of 2000 images from the Diabetic Foot Ulcer Grand Challenge 2020 (DFUC2020) dataset have been used for the experiment. The proposed method obtained precision, recall, F1-score, and mean average precision of 77.3%, 89.0%, 82.7%, and 71.3%, respectively, in DFU detection, which is better than the results obtained by the original Faster R-CNN.
  •  
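The reported F1-score is consistent with the reported precision and recall, since F1 is their harmonic mean:

```python
def f1_score(precision, recall):
    """F1 is the harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# Reported DFU detection scores: precision 77.3%, recall 89.0%
print(round(100 * f1_score(0.773, 0.890), 1))   # 82.7, matching the reported F1-score
```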
6.
  • Rahaman, G. M. Atiqur, 1981-, et al. (author)
  • A Novel Approach to Using Spectral Imaging to Classify Dyes in Colored Fibers
  • 2020
  • In: Sensors. - : MDPI. - 1424-8220. ; 20:16
  • Journal article (peer-reviewed), abstract:
    • In the field of cultural heritage, dyes applied to textiles are studied to explore their great artistic and historic value. Dye analysis is essential to plan correct restoration, preservation and display strategies in museums and art galleries. However, most existing diagnostic technologies are destructive to the historical objects. In contrast, spectral reflectance imaging has potential as a non-destructive and spatially resolved technique. There have been hardly any studies on the classification of dyes in textile fibers using spectral imaging. In this study, we show that spectral imaging combined with machine learning is capable of preliminary screening of dyes into the natural or synthetic class. At first, a sparse logistic regression algorithm is applied to reflectance data of dyed fibers to determine discriminating bands. Then a support vector machine (SVM) is applied for classification, considering the reflectance of the selected spectral bands. The results show that nine selected bands in the short-wave infrared region (SWIR, 1000–2500 nm) classify dyes with 97.4% accuracy (kappa 0.94). Interestingly, the results show that fairly accurate dye classification can be achieved using only the bands at 1480 nm, 1640 nm, and 2330 nm. This indicates the possibility of building an inexpensive handheld screening device for field studies.
  •  
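The Cohen's kappa reported alongside the accuracy can be reproduced approximately. The sketch below computes kappa from 2x2 confusion-matrix counts; the matrix itself is hypothetical (the abstract does not give class counts), so with a balanced split the result only approximates the reported 0.94:

```python
def cohens_kappa(tp, fn, fp, tn):
    """Cohen's kappa from 2x2 confusion-matrix counts: observed
    agreement corrected for the agreement expected by chance."""
    n = tp + fn + fp + tn
    p_observed = (tp + tn) / n
    p_chance = ((tp + fn) * (tp + fp) + (fp + tn) * (fn + tn)) / n ** 2
    return (p_observed - p_chance) / (1 - p_chance)

# Hypothetical balanced split of 1000 fibers with 97.4% accuracy
print(round(cohens_kappa(tp=487, fn=13, fp=13, tn=487), 2))   # 0.95
```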
7.
  • Rahaman, G M Atiqur, Dr. 1981-, et al. (author)
  • Deep Learning based Aerial Image Segmentation for Computing Green Area Factor
  • 2022
  • In: 2022 10th European Workshop on Visual Information Processing (EUVIP). - : IEEE. - 9781665466233 - 9781665466240
  • Conference paper (peer-reviewed), abstract:
    • The Green Area Factor (GYF) is an aggregate norm used as an index to quantify how much eco-efficient surface exists in a given area. Although the GYF is a single number, it expresses several different contributions of natural objects to the ecosystem. It is used as a planning tool to create and manage attractive urban environments, ensuring the existence of required green/blue elements. Currently, the GYF model is rapidly gaining attention from different communities. However, calculating the GYF value is challenging, as a significant amount of manual effort is needed. In this study, we present a novel approach for automatic extraction of the GYF value from aerial imagery using semantic segmentation results. For model training and validation, a set of RGB images captured by a drone imaging system is used. Each image is annotated into trees, grass, soil/open surface, building, and road. A modified U-Net deep learning architecture is used for the segmentation of the various objects by classifying each pixel into one of the semantic classes. From the segmented image we calculate the class-wise fractional area coverages, which are used as input to the simplified GYF model called Sundbyberg for calculating the GYF value. Experimental results show that the deep learning method provides about 92% mean IoU on test image segmentation, and the corresponding GYF value is 0.34.
  •  
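A minimal sketch of the final step described above: turning a segmentation label map into class-wise fractional coverages and combining them into a single GYF-style index. The per-class weights here are made-up placeholders; the actual Sundbyberg model's factors are not given in the abstract:

```python
import numpy as np

# Hypothetical per-class eco-efficiency weights (placeholders only)
WEIGHTS = {"tree": 1.0, "grass": 0.6, "soil": 0.4, "building": 0.0, "road": 0.0}
CLASSES = list(WEIGHTS)          # class index = position in this list

def gyf_from_labels(label_map):
    """Class-wise fractional area coverages of a segmentation map,
    combined into a single GYF-style index as a weighted sum."""
    labels = np.asarray(label_map)
    fractions = {c: float(np.mean(labels == i)) for i, c in enumerate(CLASSES)}
    gyf = sum(WEIGHTS[c] * fractions[c] for c in CLASSES)
    return gyf, fractions

# 2x2 toy map: one tree pixel (0), one grass pixel (1), two road pixels (4)
gyf, fractions = gyf_from_labels([[0, 1], [4, 4]])
print(round(gyf, 2))             # 0.4  (1.0 * 0.25 + 0.6 * 0.25)
```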
8.
  • Rahaman, G. M. Atiqur, 1981-, et al. (author)
  • Enhanced color visualization by spectral imaging : An application in cultural heritage
  • 2017
  • In: 2017 IEEE International Conference on Imaging, Vision & Pattern Recognition (icIVPR), 13-14 Feb. 2017. - : Institute of Electrical and Electronics Engineers (IEEE). - 9781509060047 - 9781509060054
  • Conference paper (peer-reviewed), abstract:
    • Color is an effective communication medium in objects of art and historical (A&H) significance. However, as they age, the objects become prone to color change through weather conditions, handling, display or preservation tasks. Therefore, to monitor overall color change or to detect discolored areas, it is important to precisely visualize the colored surface. This paper shows that RGB values computed using surface reflectance in the 400–1000 nm wavelength range are capable of automatically highlighting any subtle color defect. Classical carpets are chosen to exemplify the outputs in this study. The visualization method based on extended CIE color matching functions is most effective at rendering each multivariate data point as a single color. The defective areas of the surface appear prominent in the resulting images and can be detected readily, whereas conventional RGB colors mostly fail to reveal these color defects. Since spectral imaging is non-destructive and wide-area resolved, the presented technique offers a comprehensive understanding of the color condition of A&H objects. The visualization method should thus help conservators make informed decisions about different conservation and restoration strategies.
  •  
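A minimal sketch of the general mechanism described: projecting a reflectance spectrum onto three color channels by weighting it with an illuminant and color matching functions (CMFs). The five-band "extended CMF" values below are illustrative numbers only, not the CIE or extended functions used in the paper:

```python
import numpy as np

def reflectance_to_tristimulus(reflectance, illuminant, cmfs):
    """Project a reflectance spectrum onto three colour channels by
    weighting it with an illuminant and colour-matching functions."""
    leaving = illuminant * reflectance           # light leaving the surface
    raw = cmfs @ leaving                         # three weighted sums over bands
    k = cmfs[1] @ illuminant                     # normalise: perfect white -> Y = 1
    return raw / k

# Toy 5-band "extended CMFs" spanning 400-1000 nm (illustrative values only)
cmfs = np.array([[0.1, 0.3, 0.9, 0.4, 0.1],
                 [0.0, 1.0, 0.4, 0.1, 0.0],
                 [0.9, 0.2, 0.0, 0.0, 0.0]])
illuminant = np.ones(5)                          # flat illuminant
white = np.ones(5)                               # perfect reflector
print(reflectance_to_tristimulus(white, illuminant, cmfs)[1])   # 1.0
```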
9.
  • Rahaman, G M Atiqur, 1981-, et al. (author)
  • Extension of Murray-Davies tone reproduction model by adding edge effect of halftone dots
  • 2014
  • In: Proceedings of SPIE - The International Society for Optical Engineering. - San Francisco, California, United States : SPIE - International Society for Optical Engineering. - 9780819499356 ; , s. Art. no. 90180F-
  • Conference paper (peer-reviewed), abstract:
    • We propose expanding the Murray-Davies formula by adding the effect of the edges of solid inks in a halftoned image. The expanded formula takes into account the spectral reflectance of paper white, full-tone ink and mixed area, scaled by their fractional area coverages. Here, mixed area mainly refers to the edge of an inked dot, where the density is very low and lateral exchange of photons can occur. Also, in such areas the paper's micro components may have higher scattering power than the ink, especially in uncoated paper. Our methodology uses cyan, magenta and yellow separation ramps printed on different papers by impact- and non-impact-based printing technologies. The samples include both frequency- and amplitude-modulation halftoning methods at various print resolutions. Based on pixel values, the captured microscale halftone image is divided into three categories: solid ink, mixed area, and unprinted paper between the dots. The segmented images are then used to measure the fractional area coverages that the model receives as parameters. We have derived the characteristic reflectance spectrum of the mixed area by rearranging the expanded formula and replacing the predicted term with the measured value at half of the maximum colorant coverage. Performance has clearly improved over the Murray-Davies model with and without dot gain compensation, while, importantly, preserving the linear additivity of reflectance of the classical physics-based model.
  •  
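The extension described can be written as a three-term linear mix of solid-ink, mixed-area and paper spectra, weighted by fractional coverages. A sketch with toy two-band spectra (all reflectance values are hypothetical); the classic Murray-Davies model is the special case with no mixed-area term:

```python
import numpy as np

def murray_davies(a_ink, r_ink, r_paper):
    """Classic Murray-Davies: linear mix of full-tone ink and bare paper."""
    return a_ink * r_ink + (1 - a_ink) * r_paper

def extended_murray_davies(a_ink, a_mixed, r_ink, r_mixed, r_paper):
    """Three-term extension: solid ink, dot-edge 'mixed' area and paper,
    each scaled by its fractional coverage (coverages sum to 1)."""
    a_paper = 1 - a_ink - a_mixed
    return a_ink * r_ink + a_mixed * r_mixed + a_paper * r_paper

# Toy two-band reflectance spectra (hypothetical values)
r_paper = np.array([0.9, 0.9])
r_ink   = np.array([0.1, 0.3])
r_mixed = np.array([0.5, 0.6])   # characteristic edge-zone spectrum

print(extended_murray_davies(0.4, 0.2, r_ink, r_mixed, r_paper))  # [0.5 0.6]
```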
10.
  • Rahaman, G M Atiqur, 1981-, et al. (author)
  • Retinal Spectral Image Analysis Methods using Spectral Reflectance Pattern Recognition
  • 2013
  • In: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics). - Berlin Heidelberg : Springer. - 9783642366994 ; , s. 224-238
  • Conference paper (peer-reviewed), abstract:
    • Conventional 3-channel color images carry limited information, and their quality depends on parametric conditions. Hence, spectral imaging and reproduction are desired in many color applications to record and reproduce the reflectance of objects. Likewise, RGB images lack sufficient information to successfully analyze diabetic retinopathy. In this case, spectral imaging may be the alternative solution. In this article, we propose a new supervised technique to detect and classify the abnormal lesions in retinal spectral reflectance images affected by diabetes. The technique employs both stochastic and deterministic spectral similarity measures to match the desired reflectance pattern. At first, it classifies a pixel as normal or abnormal depending on the probabilistic behavior of the training spectra. The final decision is made by evaluating the geometric similarity. We assessed several multispectral object detection methods developed for other applications; they did not prove to be the solution. The results were interpreted using receiver operating characteristic (ROC) curve analysis.
  •  
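The "geometric similarity" step could, for instance, be a spectral angle measure. The sketch below shows the standard Spectral Angle Mapper (SAM), used here purely as an illustration of a deterministic spectral similarity measure; the abstract does not specify the paper's exact measure:

```python
import numpy as np

def spectral_angle(s1, s2):
    """Spectral Angle Mapper: the angle (radians) between two spectra;
    0 means identical spectral shape regardless of brightness."""
    s1 = np.asarray(s1, dtype=float)
    s2 = np.asarray(s2, dtype=float)
    cos = np.dot(s1, s2) / (np.linalg.norm(s1) * np.linalg.norm(s2))
    return float(np.arccos(np.clip(cos, -1.0, 1.0)))

reference = [0.2, 0.4, 0.6]
# A uniformly scaled copy has the same shape, so the angle is ~0
print(spectral_angle(reference, [0.1, 0.2, 0.3]))   # ~0.0 (up to rounding)
```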