SwePub
Search the SwePub database


Hit list for the search "L773:1057 7149 OR L773:1941 0042 srt2:(2000-2004)"


  • Result 1-8 of 8
1.
  • Almansa, A., et al. (author)
  • Fingerprint enhancement by shape adaptation of scale-space operators with automatic scale selection
  • 2000
  • In: IEEE Transactions on Image Processing, IEEE Signal Processing Society. ISSN 1057-7149, E-ISSN 1941-0042. Vol. 9, no. 12, pp. 2027-2042
  • Journal article (peer-reviewed). Abstract:
    • This work presents two mechanisms for processing fingerprint images: shape-adapted smoothing based on second-moment descriptors, and automatic scale selection based on normalized derivatives. The shape adaptation procedure adapts the smoothing operation to the local ridge structures, which allows interrupted ridges to be joined without destroying essential singularities such as branching points, and enforces continuity of their directional fields. The scale selection procedure estimates local ridge width and adapts the amount of smoothing to the local amount of noise. In addition, a ridgeness measure is defined, which reflects how well the local image structure agrees with a qualitative ridge model, and is used for spreading the results of shape adaptation into noisy areas. The combined approach makes it possible to resolve fine-scale structures in clear areas while reducing the risk of enhancing noise in blurred or fragmented areas. The result is a reliable and adaptively detailed estimate of the ridge orientation field and ridge width, as well as a smoothed grey-level version of the input image. We propose that these general techniques should be of interest to developers of automatic fingerprint identification systems as well as in other applications of processing related types of imagery. (A minimal sketch of the second-moment descriptor follows this record.)
  •  
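The second-moment (structure tensor) descriptor that drives this kind of shape-adapted smoothing can be sketched in a few lines of numpy/scipy. This is not the authors' implementation; the function name, the Sobel/Gaussian choices, and the fixed scales are illustrative assumptions.

```python
# Minimal sketch: local second-moment matrix (structure tensor) and ridge
# orientation, the descriptor used to steer shape-adapted smoothing.
# Not the paper's implementation; the scales are illustrative choices.
import numpy as np
from scipy.ndimage import gaussian_filter, sobel

def structure_tensor_orientation(image, grad_sigma=1.0, window_sigma=4.0):
    """Return local ridge orientation (radians) and a coherence measure."""
    img = gaussian_filter(image.astype(float), grad_sigma)  # pre-smoothing
    gx = sobel(img, axis=1)
    gy = sobel(img, axis=0)
    # Windowed second-moment descriptors.
    jxx = gaussian_filter(gx * gx, window_sigma)
    jxy = gaussian_filter(gx * gy, window_sigma)
    jyy = gaussian_filter(gy * gy, window_sigma)
    # Dominant orientation is perpendicular to the average gradient direction.
    orientation = 0.5 * np.arctan2(2.0 * jxy, jxx - jyy) + np.pi / 2.0
    # Coherence ~ how strongly oriented the local structure is (0..1).
    lam_diff = np.sqrt((jxx - jyy) ** 2 + 4.0 * jxy ** 2)
    coherence = lam_diff / (jxx + jyy + 1e-12)
    return orientation, coherence

# Usage: orientation, coherence = structure_tensor_orientation(fingerprint_img)
```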
2.
  • Averbuch, A. Z., et al. (author)
  • Low bit-rate efficient compression for seismic data
  • 2001
  • In: IEEE Transactions on Image Processing, Institute of Electrical and Electronics Engineers (IEEE). ISSN 1057-7149, E-ISSN 1941-0042. Vol. 10, no. 12, pp. 1801-1814
  • Journal article (peer-reviewed). Abstract:
    • Compression is a relatively recently introduced technique for seismic data operations. The main driver behind the use of compression for seismic data is the very large size of the acquired data. Some of the most recently acquired marine seismic data sets exceed 10 Tbytes, and there are currently seismic surveys planned with a volume of around 120 Tbytes. Thus, the need to compress these very large seismic data files is imperative. Nevertheless, seismic data are quite different from the typical images used in image processing and multimedia applications. Some of the major differences are a dynamic range that in theory exceeds 100 dB, an often extensively oscillatory nature, x and y directions that carry different physical meanings, and a significant amount of coherent noise that is often present in seismic data. Up to now, some of the algorithms used for seismic data compression have been based on some form of wavelet or local cosine transform, combined with a uniform or quasi-uniform quantization scheme and, finally, a Huffman coding scheme. With this family of compression algorithms, results acceptable to geophysicists are achieved only at low to moderate compression ratios. For higher compression ratios or higher decibel quality, significant compression artifacts are introduced in the reconstructed images, even with high-dimensional transforms. The objective of this paper is to achieve a higher compression ratio than the wavelet/uniform quantization/Huffman coding family of compression schemes, with a comparable level of residual noise. The goal is to achieve above 40 dB in the decompressed seismic data sets. Several established compression algorithms are reviewed, and some new compression algorithms are introduced. All of these compression techniques are applied to a representative collection of seismic data sets, and their results are documented in this paper. One of the conclusions is that the adaptive multiscale local cosine transform with different window sizes performs well on all the seismic data sets and outperforms the other methods from an SNR point of view. The described methods cover a wide range of different data sets, and each data set will have its own best-performing method chosen from this collection. The experiments were performed on four different seismic data sets. Special emphasis was placed on achieving faster processing speed, which is another critical issue examined in the paper. Some of these algorithms are also suitable for multimedia-type compression. (A toy transform-coding sketch follows this record.)
  •  
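For orientation, a toy version of the generic transform/quantize/entropy-code pipeline the abstract discusses could look as follows. This is a plain block cosine transform with a uniform quantizer and a first-order entropy estimate standing in for the Huffman stage; it is not the paper's adaptive multiscale local cosine codec, and the block size and quantization step are arbitrary assumptions.

```python
# Toy transform-coding sketch: block cosine transform, uniform quantization,
# and an entropy estimate of the quantized coefficients. Edge remainders that
# do not fill a whole block are ignored in this toy.
import numpy as np
from scipy.fft import dctn, idctn

def compress_block_dct(section, block=32, step=0.05):
    """Quantize block-DCT coefficients of a 2-D seismic section."""
    h, w = section.shape
    coeffs = np.zeros_like(section, dtype=float)
    for i in range(0, h - h % block, block):
        for j in range(0, w - w % block, block):
            tile = section[i:i+block, j:j+block]
            coeffs[i:i+block, j:j+block] = dctn(tile, norm='ortho')
    q = np.round(coeffs / step).astype(np.int32)   # uniform quantizer
    # First-order entropy in bits/sample, a proxy for the Huffman stage.
    _, counts = np.unique(q, return_counts=True)
    p = counts / counts.sum()
    bits_per_sample = float(-(p * np.log2(p)).sum())
    return q, bits_per_sample

def reconstruct(q, block=32, step=0.05):
    """Dequantize and invert the block transform."""
    rec = np.zeros(q.shape, dtype=float)
    h, w = q.shape
    for i in range(0, h - h % block, block):
        for j in range(0, w - w % block, block):
            rec[i:i+block, j:j+block] = idctn(q[i:i+block, j:j+block] * step,
                                              norm='ortho')
    return rec
```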
3.
  • Lenz, Reiner (author)
  • Estimation of illumination characteristics
  • 2001
  • In: IEEE Transactions on Image Processing, Institute of Electrical and Electronics Engineers (IEEE). ISSN 1057-7149, E-ISSN 1941-0042. Vol. 10, no. 7, pp. 1031-1038
  • Journal article (peer-reviewed). Abstract:
    • The description of the relation between the one-parameter subgroups of a group and the differential operators in the Lie algebra of the group is one of the major topics in Lie theory. In this paper, we use this framework to derive a partial differential equation which describes the relation between the time-change of the spectral characteristics of the illumination source and the change of the color pixels in an image. In the first part of the paper, we introduce and justify the use of conical coordinate systems in color space. In the second part we derive the differential equation describing the illumination change, and in the last part we illustrate the algorithm with some simulation examples. (The generic Lie-theory relation behind this construction is sketched after this record.)
  •  
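The abstract's opening sentence refers to a standard Lie-theory relation, sketched below; the specific color-space PDE derived in the paper is not reproduced here.

```latex
% Generator X of a one-parameter subgroup acting as a first-order
% differential operator on functions of whatever the group acts on:
g(t) = \exp(tX), \qquad
\left.\frac{d}{dt}\, f\bigl(g(t)\cdot p\bigr)\right|_{t=0} = (Xf)(p).
```

Roughly, the paper takes g(t) to model the time-change of the illumination spectrum and works out the corresponding first-order relation for the color pixel values expressed in conical coordinates.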
4.
  • Lenz, Reiner (author)
  • Two stage principal component analysis of color
  • 2002
  • In: IEEE Transactions on Image Processing. ISSN 1057-7149, E-ISSN 1941-0042. Vol. 11, no. 6, pp. 630-635
  • Journal article (peer-reviewed). Abstract:
    • We introduce a two-stage analysis of color spectra. In the first processing stage, correlation with the first eigenvector of a spectral database is used to measure the intensity of a color spectrum. In the second step, a perspective projection is used to map the color spectrum to the hyperspace of spectra with first eigenvector coefficient equal to unity. The location in this hyperspace describes the chromaticity of the color spectrum. In this new projection space, a second basis of eigenvectors is computed and the projected spectrum is described by its expansion in this chromaticity basis. This description is possible since the space of color spectra is conical. We compare this two-stage process with traditional principal component analysis and find that the results of the new structure are closer to the structure of traditional chromaticity descriptors than those of traditional principal component analysis. (A numpy sketch of the two stages follows this record.)
  •  
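A minimal numpy sketch of the two stages as described in the abstract, assuming a spectral database `spectra` with one spectrum per row and strictly positive first-eigenvector coefficients; the centering in the second stage is an illustrative choice, not taken from the paper.

```python
# Sketch of the two-stage analysis described in the abstract. Not the
# authors' code; `spectra` has shape (n_samples, n_wavelengths).
import numpy as np

def two_stage_pca(spectra, n_chroma=2):
    # Stage 1: first eigenvector of the (uncentered) spectral database
    # measures intensity.
    _, _, vt = np.linalg.svd(spectra, full_matrices=False)
    b0 = vt[0]
    b0 = b0 * np.sign(b0.sum())          # fix sign so intensities are positive
    intensity = spectra @ b0             # first-coefficient "intensity"
    # Perspective projection onto the set of spectra whose first
    # coefficient equals one (possible because the spectra form a cone).
    projected = spectra / intensity[:, None]
    # Stage 2: a second eigenvector basis describes chromaticity.
    resid = projected - projected.mean(axis=0)   # centering: assumption
    _, _, vt2 = np.linalg.svd(resid, full_matrices=False)
    chroma = resid @ vt2[:n_chroma].T            # chromaticity coordinates
    return intensity, chroma
```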
5.
  • Lundmark, A., et al. (author)
  • Hierarchical subsampling giving fractal regions
  • 2001
  • In: IEEE Transactions on Image Processing, Institute of Electrical and Electronics Engineers (IEEE). ISSN 1057-7149, E-ISSN 1941-0042. Vol. 10, no. 1, pp. 167-173
  • Journal article (peer-reviewed). Abstract:
    • Recursive image subsampling which yields support areas approaching fractals is described and analyzed using iterated function systems. The subsampling scheme is suitable for, e.g., hierarchical image processing and image coding schemes. For hexagonally sampled images, a hierarchical subsampling structure is given which yields hexagon-like regions with fractal borders. (A chaos-game rendering of such a region follows this record.)
  •  
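A hexagon-like region with a fractal border can be rendered with a seven-map IFS via the chaos game. The particular contraction below (the standard Gosper-island construction for 7-to-1 hexagonal aggregation) is an assumption used for illustration, not the paper's parameterization.

```python
# Chaos-game rendering of a seven-map IFS whose attractor is a hexagon-like
# region with a fractal boundary (Gosper-island-style construction).
import numpy as np

def render_fractal_region(n_points=200_000, seed=0):
    rng = np.random.default_rng(seed)
    theta = np.arctan(np.sqrt(3.0) / 5.0)        # super-lattice rotation
    s = 1.0 / np.sqrt(7.0)                       # contraction ratio
    A = s * np.array([[np.cos(theta), -np.sin(theta)],
                      [np.sin(theta),  np.cos(theta)]])
    # Translations: the centre cell plus its six hexagonal neighbours.
    angles = np.pi / 3.0 * np.arange(6)
    centers = np.vstack([[0.0, 0.0],
                         np.column_stack([np.cos(angles), np.sin(angles)])])
    pts = np.empty((n_points, 2))
    x = np.zeros(2)
    for k in range(n_points):
        x = A @ x + centers[rng.integers(7)]     # apply a random map
        pts[k] = x
    return pts   # scatter-plot these points to see the fractal support region
```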
6.
  • Marklund, Olov (author)
  • An anisotropic evolution formulation applied in 2-D unwrapping of discontinuous phase surfaces
  • 2001
  • In: IEEE Transactions on Image Processing, Institute of Electrical and Electronics Engineers (IEEE). ISSN 1057-7149, E-ISSN 1941-0042. Vol. 10, no. 11, pp. 1700-1711
  • Journal article (peer-reviewed). Abstract:
    • In this paper, a new method to reconstruct piecewise continuous phase estimates using in-phase and quadrature components acquired from interferometry measurements is derived and discussed. The method, based on the concept of anisotropic evolution formulations, is shown to be far less noise sensitive than similar methods operating on modulo-mapped data (i.e., traditional phase unwrapping methods). The method is able to produce reliable phase estimates from data containing complex sheared structures in combination with high noise content without relying on user-defined weights. (A brief sketch of the modulo-mapped quantities that traditional methods operate on follows this record.)
  •  
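The anisotropic evolution scheme itself is beyond a short sketch. For context, the snippet below only forms the modulo-mapped phase and its wrapped differences from in-phase and quadrature components, i.e., the quantities that the traditional unwrapping methods mentioned in the abstract operate on; the array names are assumptions.

```python
# Standard starting point for interferometric phase estimation: the
# modulo-mapped (wrapped) phase and its wrapped differences, computed from
# in-phase (I) and quadrature (Q) components.
import numpy as np

def wrapped_phase(i_comp, q_comp):
    """Modulo-mapped phase in (-pi, pi]."""
    return np.arctan2(q_comp, i_comp)

def wrapped_differences(phase, axis=0):
    """Phase differences re-wrapped into (-pi, pi], the usual unwrapping input."""
    d = np.diff(phase, axis=axis)
    return (d + np.pi) % (2.0 * np.pi) - np.pi
```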
7.
  • Meyer, F. G., et al. (author)
  • Fast adaptive wavelet packet image compression
  • 2000
  • In: IEEE Transactions on Image Processing, Institute of Electrical and Electronics Engineers (IEEE). ISSN 1057-7149, E-ISSN 1941-0042. Vol. 9, no. 5, pp. 792-800
  • Journal article (peer-reviewed). Abstract:
    • Wavelets are ill-suited to represent oscillatory patterns: rapid variations of intensity can only be described by the small-scale wavelet coefficients, which are often quantized to zero, even at high bit rates. Our goal in this paper is to provide a fast numerical implementation of the best wavelet packet algorithm [1] in order to demonstrate that an advantage can be gained by constructing a basis adapted to a target image. Emphasis in this paper has been placed on developing algorithms that are computationally efficient. We developed a new fast two-dimensional (2-D) convolution-decimation algorithm with factorized nonseparable 2-D filters. The algorithm is four times faster than a standard convolution-decimation. An extensive evaluation of the algorithm was performed on a large class of textured images. Because of its ability to reproduce textures so well, the wavelet packet coder significantly outperforms one of the best wavelet coders [2] on images such as Barbara and fingerprints, both visually and in terms of PSNR. (A toy convolution-decimation step with a best-basis cost follows this record.)
  •  
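A plain separable convolution-decimation step together with an additive entropy cost gives the basic split-or-keep decision of a best-basis wavelet packet search. The sketch below uses Haar filters and the Coifman-Wickerhauser cost as illustrative choices; it does not reproduce the paper's factorized nonseparable 2-D filters.

```python
# One separable convolution-decimation step (Haar filters) and an additive
# entropy cost used to decide whether splitting a wavelet-packet node pays off.
import numpy as np

LO = np.array([1.0, 1.0]) / np.sqrt(2.0)   # Haar analysis filters
HI = np.array([1.0, -1.0]) / np.sqrt(2.0)

def _filter_decimate(x, h, axis):
    """Convolve along `axis` with filter `h`, then keep every other sample."""
    return np.apply_along_axis(
        lambda v: np.convolve(v, h, mode='full')[1::2], axis, x)

def split_2d(band):
    """Split a 2-D band into its LL, LH, HL, HH children."""
    lo_rows = _filter_decimate(band, LO, 1)
    hi_rows = _filter_decimate(band, HI, 1)
    return (_filter_decimate(lo_rows, LO, 0), _filter_decimate(lo_rows, HI, 0),
            _filter_decimate(hi_rows, LO, 0), _filter_decimate(hi_rows, HI, 0))

def entropy_cost(band, eps=1e-12):
    """Coifman-Wickerhauser additive entropy: -sum x^2 log x^2."""
    e = band.ravel() ** 2
    return float(-(e * np.log(e + eps)).sum())

def better_to_split(band):
    """Best-basis rule: split if the children are cheaper than the parent."""
    return sum(entropy_cost(c) for c in split_2d(band)) < entropy_cost(band)
```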
8.
  • Wadstromer, N. (author)
  • An automatization of Barnsley's algorithm for the inverse problem of iterated function systems
  • 2003
  • In: IEEE Transactions on Image Processing. ISSN 1057-7149, E-ISSN 1941-0042. Vol. 12, no. 11, pp. 1388-1397
  • Journal article (peer-reviewed). Abstract:
    • We present an automatization of Barnsley's manual algorithm for the solution of the inverse problem of iterated function systems (IFSs). The problem is to retrieve the number of mappings and the parameters of an IFS from a digital binary image approximating the attractor induced by the IFS. Barnsley et al. described a way to manually solve the inverse problem by identifying the fragments of which the collage is composed and then computing the parameters of the mappings. The automatic algorithm searches through a finite set of points in the parameter space determining a set of affine mappings. The algorithm uses the collage theorem and the Hausdorff metric. The inverse problem of IFSs is related to image coding of binary images: if the number of mappings and the parameters of an IFS with not too many mappings could be obtained from a binary image, this would give an efficient representation of the image. It is shown that the inverse problem solved by the automatic algorithm has a solution, and some experiments show that the automatic algorithm is able to retrieve an IFS, including the number of mappings, from a digital binary image approximating the attractor induced by the IFS. (A toy chaos-game/Hausdorff-metric sketch follows this record.)
  •  
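Two generic ingredients named in the abstract, rendered in toy form: chaos-game rendering of the attractor of a candidate set of affine maps, and a Hausdorff-metric score against a target point set. The grid search over the parameter space is omitted, and the Sierpinski-triangle candidate is purely illustrative.

```python
# Toy ingredients of an inverse-IFS search: render the attractor of a
# candidate set of affine contractions and score it with the Hausdorff metric.
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def attractor(maps, n_points=50_000, seed=0):
    """maps: list of (A, t) pairs, each an affine contraction x -> A @ x + t."""
    rng = np.random.default_rng(seed)
    pts = np.empty((n_points, 2))
    x = np.zeros(2)
    for k in range(n_points):
        A, t = maps[rng.integers(len(maps))]
        x = A @ x + t
        pts[k] = x
    return pts[1000:]          # drop the initial transient points

def hausdorff(points_a, points_b):
    """Symmetric Hausdorff distance between two point sets."""
    return max(directed_hausdorff(points_a, points_b)[0],
               directed_hausdorff(points_b, points_a)[0])

# Example candidate: the Sierpinski-triangle IFS (three half-scale maps).
half = 0.5 * np.eye(2)
candidate = [(half, np.array([0.0, 0.0])),
             (half, np.array([0.5, 0.0])),
             (half, np.array([0.25, 0.5]))]
# score = hausdorff(attractor(candidate), target_points)  # target assumed given
```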
