SwePub
Hit list for the search "L773:0899 9457"

  • Results 1-10 of 15
1.
  • Ahlberg, Jörgen, et al. (author)
  • Face tracking for model-based coding and face animation
  • 2003
  • In: International journal of imaging systems and technology (Print). - : John Wiley & Sons. - 0899-9457 .- 1098-1098. ; 13:1, pp. 8-22
  • Journal article (peer reviewed) abstract
    • We present a face and facial feature tracking system able to extract animation parameters describing the motion and articulation of a human face in real time on consumer hardware. The system is based on a statistical model of face appearance and a search algorithm for adapting the model to an image. Speed and robustness are discussed, and the system is evaluated in terms of accuracy.
2.
  • Björkman, Mårten, et al. (author)
  • Vision in the real world : Finding, attending and recognizing objects
  • 2006
  • In: International journal of imaging systems and technology (Print). - : Wiley. - 0899-9457 .- 1098-1098. ; 16:5, pp. 189-208
  • Journal article (peer reviewed) abstract
    • In this paper we discuss the notion of a seeing system that uses vision to interact with its environment. The requirements on such a system depend on the tasks it is involved in and should be evaluated with these in mind. Here we consider the task of finding and recognizing objects in the real world. After a discussion of the required functionalities and design issues, we present an integrated real-time vision system capable of finding, attending to, and recognizing objects in real settings. The system is based on a dual set of cameras: a wide-field set for attention and a foveal one for recognition. The continuously running attentional process uses top-down object characteristics in terms of hue and 3D size. Recognition is performed with objects of interest foveated and segmented from their background. We describe the system structure as well as the different components in detail and present experimental evaluations of its overall performance.
3.
4.
5.
  • Carmona, Pedro Latorre, et al. (author)
  • Performance evaluation of dimensionality reduction techniques for multispectral images
  • 2007
  • In: International journal of imaging systems and technology (Print). - : Institutionen för teknik och naturvetenskap. - 0899-9457 .- 1098-1098. ; 17:3, pp. 202-217
  • Journal article (peer reviewed) abstract
    • We consider several collections of multispectral color signals and describe how linear and non-linear methods can be used to investigate their internal structure. We use databases consisting of blackbody radiators, approximated and measured daylight spectra, multispectral images of indoor and outdoor scenes under different illumination conditions, and numerically computed color signals. We apply Principal Component Analysis, group-theoretical methods, and three manifold learning methods: Laplacian Eigenmaps, ISOMAP, and Conformal Component Analysis. Identification of low-dimensional structures in these databases is important for analysis, model building, and compression, and we compare the results obtained by applying the algorithms to the different databases.
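The abstract above names Principal Component Analysis among the linear methods applied to spectral databases. As a minimal illustrative sketch (not the paper's code; the synthetic "spectra", dimensions, and noise level below are invented), PCA reveals that data lying near a low-dimensional subspace concentrates its variance in the first few components:

```python
# Illustrative sketch: PCA on synthetic multispectral-like signals.
# All data here is invented; the paper used measured/computed spectra.
import numpy as np

rng = np.random.default_rng(0)

# Fake database: 200 "spectra" sampled at 31 wavelengths,
# lying near a 3-dimensional linear subspace plus small noise.
basis = rng.normal(size=(3, 31))
coeffs = rng.normal(size=(200, 3))
spectra = coeffs @ basis + 0.01 * rng.normal(size=(200, 31))

# PCA via SVD of the mean-centered data matrix.
centered = spectra - spectra.mean(axis=0)
_, s, _ = np.linalg.svd(centered, full_matrices=False)
explained = (s ** 2) / (s ** 2).sum()  # variance fraction per component

print(f"variance captured by first 3 components: {explained[:3].sum():.3f}")
```

With clean low-rank data the first three components capture essentially all of the variance; for real spectral databases the interesting question, as the abstract notes, is how quickly this spectrum of explained variance decays.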
6.
  • Danafar, Somayeh, et al. (author)
  • A method for eye detection based on SVD transforms
  • 2006
  • In: International journal of imaging systems and technology (Print). - : Wiley. - 0899-9457 .- 1098-1098. ; 16:5, pp. 222-229
  • Journal article (peer reviewed) abstract
    • A set of transforms (SVD transforms) was introduced in (Shahshahani and Tavakoli Targhi) for understanding images. These transforms have been applied to several problems in computer vision, including segmentation, detection of objects in a texture environment, classification of textures, and detection of cracks or other imperfections. Here this technique is shown to be applicable to determining the location of eyes in a facial image. The method makes no use of color cues, prior geometric knowledge, or other assumptions, and does not require training. It is also insensitive to local perturbations in lighting, change of orientation and pose, scaling, and complexity of the background, including indoor and outdoor environments. The method can be used for eye tracking and has applications to face recognition. It has also been used for animal eye detection and differentiation.
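The core of the SVD-transform idea referenced above is to map each local window of an image to (a function of) its singular values, so that locally smooth, low-rank regions respond differently from textured surroundings. The sketch below is an assumption-laden illustration, not the authors' implementation: the test image, window size, and choice of singular-value tail are all invented.

```python
# Sketch of the SVD-transform idea: windowed singular-value tails as a
# local descriptor. Not the authors' implementation; parameters invented.
import numpy as np

rng = np.random.default_rng(1)
image = rng.random((32, 32))                       # rough "texture" background
# Embed an exactly rank-1 (locally smooth) 8x8 patch, standing in for a
# structured region such as an eye against skin texture.
image[10:18, 10:18] = np.outer(np.linspace(0, 1, 8), np.linspace(0, 1, 8))

def svd_transform(img, w=8, k=1):
    """Map each w x w window to the sum of singular values after the k-th.
    Low values flag locally low-rank (smooth/structured) windows."""
    h, ww = img.shape
    out = np.zeros((h - w + 1, ww - w + 1))
    for i in range(h - w + 1):
        for j in range(ww - w + 1):
            s = np.linalg.svd(img[i:i + w, j:j + w], compute_uv=False)
            out[i, j] = s[k:].sum()
    return out

resp = svd_transform(image)
print("smooth-patch response:", resp[10, 10], " texture response:", resp[0, 0])
```

The rank-1 patch yields a near-zero tail while random texture does not, which is the contrast such a transform exploits; the paper's contribution is applying this kind of response map to localize eyes without color cues or training.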
7.
  • Elghamrawy, Sally M., et al. (author)
  • Genetic-based adaptive momentum estimation for predicting mortality risk factors for COVID-19 patients using deep learning
  • 2022
  • In: International journal of imaging systems and technology (Print). - : John Wiley & Sons. - 0899-9457 .- 1098-1098. ; 32:2, pp. 614-628
  • Journal article (peer reviewed) abstract
    • Mortality risk factors for coronavirus disease (COVID-19) must be predicted early, especially for severe cases, so that intensive care can be provided before patients become critically ill. This paper aims to develop an optimized convolutional neural network (CNN) for predicting mortality risk factors for COVID-19 patients. The proposed model supports two types of input data: clinical variables and computed tomography (CT) scans. Features are extracted in the optimized CNN phase and then passed to the classification phase. The CNN model's hyperparameters were optimized using a proposed genetic-based adaptive momentum estimation (GB-ADAM) algorithm. The GB-ADAM algorithm employs a genetic algorithm (GA) to optimize the Adam optimizer's configuration parameters, consequently improving classification accuracy. The model is validated using three recent cohorts from New York, Mexico, and Wuhan, consisting of 3055, 7497, and 504 patients, respectively. The results indicated that the most significant mortality risk factors are: CD8+ T lymphocyte count, D-dimer greater than 1 μg/ml, high values of lactate dehydrogenase (LDH), C-reactive protein (CRP), hypertension, and diabetes. Early identification of these factors would help clinicians provide immediate care. The results also show that the most frequent COVID-19 signs in CT scans included ground-glass opacity (GGO), followed by crazy-paving pattern, consolidations, and the number of lobes. Moreover, the experimental results show encouraging performance for the proposed model compared with different predictive models.
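The GB-ADAM idea described above, a genetic algorithm searching over the Adam optimizer's configuration parameters, can be caricatured in a few lines. The sketch below is not the paper's algorithm: the fitness function is an invented stand-in for the CNN's validation accuracy, and the parameter ranges, population size, and mutation scales are all assumptions.

```python
# Hypothetical sketch of a genetic search over Adam's hyperparameters
# (learning rate, beta1, beta2). The fitness function is a stand-in,
# NOT the paper's CNN validation accuracy.
import math
import random

random.seed(42)

def fitness(lr, b1, b2):
    # Invented objective: peaks at lr=1e-3, b1=0.9, b2=0.999.
    return -((math.log10(lr) + 3) ** 2 + (b1 - 0.9) ** 2 + (b2 - 0.999) ** 2)

def random_config():
    return (10 ** random.uniform(-5, -1),      # lr, log-uniform
            random.uniform(0.5, 0.99),         # beta1
            random.uniform(0.9, 0.9999))       # beta2

def mutate(cfg):
    lr, b1, b2 = cfg
    return (lr * 10 ** random.gauss(0, 0.2),
            min(0.99, max(0.5, b1 + random.gauss(0, 0.02))),
            min(0.9999, max(0.9, b2 + random.gauss(0, 0.002))))

pop = [random_config() for _ in range(20)]
for _ in range(30):                            # generations
    pop.sort(key=lambda c: fitness(*c), reverse=True)
    parents = pop[:5]                          # elitist selection
    pop = parents + [mutate(random.choice(parents)) for _ in range(15)]

best = max(pop, key=lambda c: fitness(*c))
print("best Adam config (lr, beta1, beta2):", best)
```

Because the parents are carried over unchanged (elitism), the best fitness never decreases between generations; in the paper this outer loop wraps full CNN training runs, which is what makes such hyperparameter searches expensive in practice.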
8.
  • Gour, Mahesh, et al. (author)
  • Robust nuclei segmentation with encoder-decoder network from the histopathological images
  • 2024
  • In: International journal of imaging systems and technology (Print). - : Wiley. - 0899-9457 .- 1098-1098. ; 34:4
  • Journal article (peer reviewed) abstract
    • Nuclei segmentation is a prerequisite and an essential step in cancer detection and prognosis. Automatic nuclei segmentation from histopathological images is challenging due to nuclear overlap, disease types, chromatic stain variability, and differences in cytoplasmic morphology. Furthermore, it is challenging to develop a single accurate method for segmenting the nuclei of different organs because of the diversity in nuclei size, shape, and appearance across the various organs. To address these challenges, we developed a robust encoder-decoder network for nuclei segmentation from multi-organ histopathological images. In this approach, we utilize a pre-trained EfficientNet-B4 as the encoder subnetwork and design a new decoder subnetwork architecture. Additionally, we have applied morphological operation-based post-processing to improve the segmentation results. The performance of our approach has been evaluated on three public datasets, namely the Kumar, TNBC, and CPM-17 datasets, which contain histopathological images of seven organs, one organ, and four organs, respectively. The proposed method achieved an aggregated Jaccard index of 0.636, 0.611, and 0.706 on the Kumar, TNBC, and CPM-17 datasets, respectively. Our proposed approach also shows superiority over existing methods.
9.
  • Gupta, Anindya, et al. (author)
  • Detection of pulmonary micronodules in computed tomography images and false positive reduction using 3D convolutional neural networks
  • 2020
  • In: International journal of imaging systems and technology (Print). - : Wiley. - 0899-9457 .- 1098-1098. ; 30:2, pp. 327-339
  • Journal article (peer reviewed) abstract
    • Manual detection of small uncalcified pulmonary nodules (diameter <4 mm) in thoracic computed tomography (CT) scans is a tedious and error-prone task. Automatic detection of dispersed micronodules is thus highly desirable for improved characterization of fatal and incurable occupational pulmonary diseases. Here, we present a novel computer-assisted detection (CAD) scheme specifically dedicated to detecting micronodules. The proposed scheme consists of a candidate-screening module and a false positive (FP) reduction module. The candidate-screening module is initiated by a lung segmentation algorithm and is followed by a combination of 2D/3D feature-based thresholding parameters to identify plausible micronodules. The FP reduction module employs a 3D convolutional neural network (CNN) to classify each identified candidate. It automatically encodes discriminative representations by exploiting the volumetric information of each candidate. A set of 872 micronodules in 598 CT scans marked by at least two radiologists was extracted from the Lung Image Database Consortium and Image Database Resource Initiative to test our CAD scheme. The CAD scheme achieves a detection sensitivity of 86.7% (756/872) with only 8 FPs/scan and an AUC of 0.98. Our proposed CAD scheme efficiently identifies micronodules in thoracic scans with only a small number of FPs. Our experimental results provide evidence that the features automatically generated by the 3D CNN are highly discriminant, making it a well-suited FP reduction module for a CAD scheme.
10.
