SwePub
Search the SwePub database


Result list for search "WFRF:(Kumar Pravin) srt2:(2010-2014)"

Search: WFRF:(Kumar Pravin) > (2010-2014)

  • Results 1-11 of 11
1.
  • Rana, Pravin Kumar, et al. (author)
  • Prediction of Sea Ice Edge in the Antarctic Using GVF Snake Model
  • 2011
  • In: Journal of the Geological Society of India. - : Springer Science and Business Media LLC. - 0016-7622 .- 0974-6889. ; 78:2, pp. 99-108
  • Journal article (peer-reviewed), abstract:
    • Antarctic sea ice cover plays an important role in shaping the earth's climate, primarily by insulating the ocean from the atmosphere and increasing the surface albedo. The convective processes accompanying sea ice formation result in bottom water formation. The cold and dense bottom water moves towards the equator along the ocean basins and takes part in the global thermohaline circulation. The sea ice edge is a potential indicator of climate change. Additionally, fishing and commercial shipping activities as well as military submarine operations in the polar seas need reliable ice edge information. However, as the sea ice edge is unstable in time, the temporal validity of the estimated ice edge is often shorter than the time required to transfer the information to the operational user. Hence, accurate sea ice edge prediction as well as determination is crucial for fine-scale geophysical modeling and for near-real-time operations. In this study, active contour modelling (the Snake model) and non-rigid motion estimation techniques have been used for predicting the sea ice edge (SIE) in the Antarctic. For this purpose, the SIE has been detected from sea ice concentration derived from special sensor microwave imager (SSM/I) observations. Pixels at 15% sea ice concentration are taken as the edge pixels between ice and water. The external force, gradient vector flow (GVF), of the SIE for the total Antarctic region is parameterised for daily as well as weekly data sets. The SIE is predicted at certain points using a statistical technique, and these predicted points are then used to constitute an SIE using the GVF snake. The predicted edge has been validated against that derived from SSM/I. It is found that all the major curvatures are captured by the predicted edge, which is in good agreement with the SSM/I observation. (See the code sketch after this record.)
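The abstract above detects the sea ice edge as the 15% concentration contour in SSM/I-derived sea ice concentration fields before a GVF snake and a statistical predictor are applied. Below is a minimal sketch of that detection step only, assuming a 2-D NumPy concentration grid in percent and using scikit-image's contour tracer (a library choice assumed here, not named by the paper); the GVF snake and the prediction stage are not reproduced.

```python
import numpy as np
from skimage import measure

def extract_sea_ice_edge(concentration, level=15.0):
    """Return sub-pixel contours where sea ice concentration crosses `level` (%).

    `concentration` is a 2-D array of sea ice concentration in percent,
    e.g. derived from SSM/I brightness temperatures. Each returned contour
    is an (N, 2) array of (row, col) points that could seed a snake model.
    """
    # find_contours traces iso-lines of the scalar field at the given level;
    # the 15% level is the conventional ice/water boundary used in the paper.
    return measure.find_contours(concentration, level)

# Hypothetical example with a synthetic concentration field.
if __name__ == "__main__":
    yy, xx = np.mgrid[0:200, 0:200]
    synthetic = 100.0 * np.exp(-((xx - 100) ** 2 + (yy - 100) ** 2) / 5000.0)
    edges = extract_sea_ice_edge(synthetic)
    print(f"found {len(edges)} edge contour(s), first has {len(edges[0])} points")
```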
2.
  •  
3.
  •  
4.
  • Ma, Zhanyu, et al. (author)
  • Bayesian estimation of Dirichlet mixture model with variational inference
  • 2014
  • In: Pattern Recognition. - : Elsevier BV. - 0031-3203 .- 1873-5142. ; 47:9, pp. 3143-3157
  • Journal article (peer-reviewed), abstract:
    • In statistical modeling, parameter estimation is an essential and challenging task. Estimation of the parameters in the Dirichlet mixture model (DMM) is analytically intractable, due to the integral expressions of the gamma function and its corresponding derivatives. We introduce a Bayesian estimation strategy to estimate the posterior distribution of the parameters in the DMM. By assuming a gamma distribution as the prior of each parameter, we approximate both the prior and the posterior distribution of the parameters with a product of several mutually independent gamma distributions. The extended factorized approximation method is applied to introduce a single lower bound to the variational objective function, and an analytically tractable estimation solution is derived. Moreover, only one function is maximized during the iterations and, therefore, the convergence of the proposed algorithm is theoretically guaranteed. With synthesized data, the proposed method shows advantages over the EM-based method and a previously proposed Bayesian estimation method. With two important multimedia signal processing applications, the good performance of the proposed Bayesian estimation method is demonstrated. (See the code sketch after this record.)
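The abstract above concerns Bayesian estimation of a Dirichlet mixture model (DMM), whose parameters have no closed-form posterior. The sketch below shows only the building block that any such estimator iterates over, the per-sample component responsibilities under fixed Dirichlet parameters, using SciPy; it is not the paper's variational update scheme, and the mixture parameters in the example are hypothetical.

```python
import numpy as np
from scipy.stats import dirichlet

def dmm_responsibilities(X, weights, alphas):
    """Posterior component probabilities p(z=k | x) for a Dirichlet mixture.

    X:       (N, D) points on the probability simplex (rows sum to 1).
    weights: (K,) mixing weights.
    alphas:  list of K concentration vectors, each of length D.
    """
    # Log-likelihood of every sample under every mixture component.
    log_lik = np.column_stack([dirichlet.logpdf(X.T, a) for a in alphas])  # (N, K)
    log_post = np.log(weights) + log_lik
    # Normalize in log space for numerical stability.
    log_post -= log_post.max(axis=1, keepdims=True)
    post = np.exp(log_post)
    return post / post.sum(axis=1, keepdims=True)

# Hypothetical two-component mixture on the 3-dimensional simplex.
X = np.array([[0.7, 0.2, 0.1], [0.1, 0.1, 0.8]])
r = dmm_responsibilities(X, weights=np.array([0.5, 0.5]),
                         alphas=[np.array([8.0, 2.0, 2.0]),
                                 np.array([2.0, 2.0, 8.0])])
print(r)
```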
5.
  • Parthasarathy, Srinivas, et al. (author)
  • Denoising of volumetric depth confidence for view rendering
  • 2012
  • In: 3DTV-Conference: The True Vision - Capture, Transmission and Display of 3D Video (3DTV-CON), 2012. - : IEEE. - 9781467349055 ; , pp. 1-4
  • Conference paper (peer-reviewed), abstract:
    • In this paper, we define volumetric depth confidence and propose a method to denoise this data by performing adaptive wavelet thresholding using three dimensional (3D) wavelet transforms. The depth information is relevant for emerging interactive multimedia applications such as 3D TV and free-viewpoint television (FTV). These emerging applications require high quality virtual view rendering to enable viewers to move freely in a dynamic real world scene. Depth information of a real world scene from different viewpoints is used to render an arbitrary number of novel views. Usually, depth estimates of 3D object points from different viewpoints are inconsistent. This inconsistency of depth estimates affects the quality of view rendering negatively. Based on the superposition principle, we define a volumetric depth confidence description of the underlying geometry of natural 3D scenes by using these inconsistent depth estimates from different viewpoints. Our method denoises this noisy volumetric description, and with this, we enhance the quality of view rendering by up to 0.45 dB when compared to rendering with conventional MPEG depth maps. (See the code sketch after this record.)
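The abstract above denoises a volumetric depth confidence description by adaptive thresholding of a three-dimensional wavelet transform. The sketch below shows 3-D decomposition, soft thresholding of the detail coefficients, and reconstruction with PyWavelets; the library choice, the wavelet, the decomposition level, and the fixed threshold are all assumptions standing in for the paper's adaptive scheme.

```python
import numpy as np
import pywt

def denoise_volume(volume, wavelet="db2", level=2, threshold=0.1):
    """Soft-threshold the detail coefficients of a 3-D wavelet decomposition.

    `volume` is a 3-D array, e.g. a volumetric depth confidence description
    stacked over viewpoints. A fixed threshold stands in for the adaptive
    thresholding described in the paper.
    """
    coeffs = pywt.wavedecn(volume, wavelet=wavelet, level=level)
    # coeffs[0] is the approximation; the remaining entries are dicts of
    # detail coefficients per decomposition level, which carry most noise.
    denoised = [coeffs[0]]
    for detail_level in coeffs[1:]:
        denoised.append({k: pywt.threshold(v, threshold, mode="soft")
                         for k, v in detail_level.items()})
    return pywt.waverecn(denoised, wavelet=wavelet)

# Hypothetical noisy confidence volume.
rng = np.random.default_rng(0)
vol = np.clip(rng.normal(0.8, 0.05, size=(16, 64, 64)), 0.0, 1.0)
print(denoise_volume(vol).shape)
```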
6.
  • Rana, Pravin Kumar, 1982-, et al. (author)
  • A Variational Bayesian Inference Framework for Multiview Depth Image Enhancement
  • 2012
  • In: Proceedings - 2012 IEEE International Symposium on Multimedia, ISM 2012. - : IEEE. - 9780769548753 ; , pp. 183-190
  • Conference paper (peer-reviewed), abstract:
    • In this paper, a general model-based framework for multiview depth image enhancement is proposed. Depth imagery plays a pivotal role in emerging free-viewpoint television. This technology requires high quality virtual view synthesis to enable viewers to move freely in a dynamic real world scene. Depth imagery of different viewpoints is used to synthesize an arbitrary number of novel views. Usually, the depth imagery is estimated individually by stereo-matching algorithms and, hence, shows a lack of inter-view consistency. This inconsistency affects the quality of view synthesis negatively. This paper enhances the inter-view consistency of multiview depth imagery by using a variational Bayesian inference framework. First, our approach classifies the color information in the multiview color imagery. Second, using the resulting color clusters, we classify the corresponding depth values in the multiview depth imagery. Each clustered depth image is subject to further sub-clustering. Finally, the resulting means of the sub-clusters are used to enhance the depth imagery at multiple viewpoints. Experiments show that our approach improves the quality of virtual views by up to 0.25 dB. (See the code sketch after this record.)
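The abstract above first classifies the multiview color data, then groups the corresponding depth values by color cluster, sub-clusters them, and replaces each depth value by its sub-cluster mean. The sketch below follows that data flow but substitutes plain k-means (scikit-learn) for the paper's variational Bayesian classification, so it illustrates the structure of the method rather than the method itself.

```python
import numpy as np
from sklearn.cluster import KMeans

def enhance_depth(colors, depths, n_color_clusters=8, n_depth_subclusters=3):
    """Replace each depth value by the mean of its (color, depth) sub-cluster.

    colors: (N, 3) per-pixel colors pooled over all viewpoints.
    depths: (N,)  corresponding depth estimates.
    k-means is a stand-in for the Bayesian classification used in the paper.
    """
    enhanced = depths.astype(float).copy()
    color_labels = KMeans(n_clusters=n_color_clusters, n_init=10,
                          random_state=0).fit_predict(colors)
    for c in range(n_color_clusters):
        idx = np.flatnonzero(color_labels == c)
        if idx.size == 0:
            continue
        k = min(n_depth_subclusters, idx.size)
        # Sub-cluster the depth values that share a color cluster, then
        # snap each depth to its sub-cluster mean to enforce consistency.
        sub_labels = KMeans(n_clusters=k, n_init=10,
                            random_state=0).fit_predict(depths[idx].reshape(-1, 1))
        for s in range(k):
            members = idx[sub_labels == s]
            enhanced[members] = depths[members].mean()
    return enhanced

# Hypothetical pooled data: 500 pixels with colors in [0, 1] and noisy depths.
rng = np.random.default_rng(1)
print(enhance_depth(rng.random((500, 3)), rng.normal(2.0, 0.1, size=500))[:5])
```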
7.
  • Rana, Pravin Kumar, 1982-, et al. (author)
  • Depth consistency testing for improved view interpolation
  • 2010
  • In: . - 9781424481118 ; , pp. 384-389
  • Conference paper (peer-reviewed), abstract:
    • Multiview video will play a pivotal role in next-generation visual communication media services like three-dimensional (3D) television and free-viewpoint television. These advanced media services provide natural 3D impressions and enable viewers to move freely in a dynamic real world scene by changing the viewpoint. High quality virtual view interpolation is required to support free-viewpoint viewing. Usually, depth maps of different viewpoints are used to reconstruct a novel view. As these depth maps are usually estimated individually by stereo-matching algorithms, they have very weak spatial consistency. This inconsistency of the depth maps affects the quality of view interpolation. In this paper, we propose a method for depth consistency testing to improve view interpolation. The method addresses the problem by warping more than two depth maps from multiple reference viewpoints to the virtual viewpoint. We test the consistency among the warped depth values and improve the depth value information of the virtual view. With that, we enhance the quality of the interpolated virtual view. (See the code sketch after this record.)
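The abstract above warps depth maps from several reference viewpoints to the virtual viewpoint and tests their consistency before using them. Below is a minimal sketch of such a per-pixel consistency test, assuming the depth maps are already warped and agreement is judged against a fixed tolerance; both the fusion rule and the tolerance are illustrative assumptions, not the paper's exact test.

```python
import numpy as np

def consistent_depth(warped_depths, tolerance=0.02):
    """Fuse depth maps warped from multiple reference views to one viewpoint.

    warped_depths: (V, H, W) stack of depth maps warped to the virtual view;
                   np.nan marks pixels a view could not see (disocclusions).
    Returns the mean of the values lying within `tolerance` of the per-pixel
    median, i.e. the values judged mutually consistent.
    """
    median = np.nanmedian(warped_depths, axis=0)           # robust reference
    consistent = np.abs(warped_depths - median) <= tolerance
    consistent &= ~np.isnan(warped_depths)
    counts = consistent.sum(axis=0)
    fused = np.where(
        counts > 0,
        np.where(consistent, warped_depths, 0.0).sum(axis=0)
        / np.maximum(counts, 1),
        np.nan,                                             # no agreement at all
    )
    return fused

# Hypothetical: 3 views warped to the virtual viewpoint, one disoccluded pixel.
stack = np.array([[[1.00, np.nan]], [[1.01, 2.00]], [[1.50, 2.01]]])
print(consistent_depth(stack))
```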
8.
  • Rana, Pravin Kumar, 1982-, et al. (author)
  • Depth Pixel Clustering for Consistency Testing of Multiview Depth
  • 2012
  • In: European Signal Processing Conference. - 9781467310680 ; , pp. 1119-1123
  • Conference paper (peer-reviewed), abstract:
    • This paper proposes a depth pixel clustering algorithm for consistency testing of multiview depth imagery. The testing addresses the inconsistencies among estimated depth maps of real world scenes by validating depth pixel connection evidence, conventionally based on a hard connection threshold. With the proposed algorithm, we test the consistency among depth values generated from multiple depth observations using cluster-adaptive connection thresholds. The connection threshold is based on statistical properties of the depth pixels in a cluster or sub-cluster. This approach can improve the depth information of real world scenes at a given viewpoint, which allows us to enhance the quality of synthesized virtual views when compared to depth maps obtained by using fixed thresholding. Depth-image-based virtual view synthesis is widely used for upcoming multimedia services like three-dimensional television and free-viewpoint television. (See the code sketch after this record.)
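The abstract above replaces fixed (hard) connection thresholds with thresholds derived from the statistics of each depth cluster or sub-cluster. The sketch below applies that idea to one pixel's depth observations, with the clustering reduced to a simple 1-D gap split and the threshold taken as a multiple of the cluster's standard deviation; both simplifications are assumptions for illustration.

```python
import numpy as np

def cluster_adaptive_fuse(depth_samples, gap=0.05, k_sigma=2.0, floor=1e-3):
    """Fuse one pixel's depth observations using a cluster-adaptive threshold.

    depth_samples: 1-D array of depth values observed for the same pixel from
    different viewpoints. Samples are split into clusters wherever consecutive
    sorted values differ by more than `gap`; within the largest cluster,
    samples farther than k_sigma * std from the cluster mean are rejected
    before averaging.
    """
    d = np.sort(np.asarray(depth_samples, dtype=float))
    cuts = np.flatnonzero(np.diff(d) > gap) + 1   # simple 1-D gap clustering
    clusters = np.split(d, cuts)
    best = max(clusters, key=len)                 # most supported cluster
    thr = max(k_sigma * best.std(), floor)        # cluster-adaptive threshold
    keep = np.abs(best - best.mean()) <= thr
    return best[keep].mean()

print(cluster_adaptive_fuse([1.01, 1.02, 1.00, 1.35, 0.99]))
```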
9.
  • Rana, Pravin Kumar, 1982-, et al. (author)
  • Multiview Depth Map Enhancement by Variational Bayes Inference Estimation of Dirichlet Mixture Models
  • 2013
  • In: 2013 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). - : IEEE. - 9781479903566 ; , pp. 1528-1532
  • Conference paper (peer-reviewed), abstract:
    • High quality view synthesis is a prerequisite for future free-viewpoint television. It will enable viewers to move freely in a dynamic real world scene. Depth image based rendering algorithms will play a pivotal role when synthesizing an arbitrary number of novel views by using a subset of captured views and corresponding depth maps only. Usually, each depth map is estimated individually by stereo-matching algorithms and, hence, shows a lack of inter-view consistency. This inconsistency affects the quality of view synthesis negatively. This paper enhances the inter-view consistency of multiview depth imagery. First, our approach classifies the color information in the multiview color imagery by modeling color with a mixture of Dirichlet distributions, where the model parameters are estimated in a Bayesian framework with variational inference. Second, using the resulting color clusters, we classify the corresponding depth values in the multiview depth imagery. Each clustered depth image is subject to further sub-clustering. Finally, the resulting mean of each sub-cluster is used to enhance the depth imagery at multiple viewpoints. Experiments show that our approach improves the average quality of virtual views by up to 0.8 dB when compared to views synthesized by using conventionally estimated depth maps. (See the code sketch after this record.)
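The abstract above models color with a mixture of Dirichlet distributions, which are defined on the probability simplex, so RGB pixels must first be brought to compositional form before classification. A minimal sketch of that preprocessing step follows; the normalization used here is an assumption for illustration, and the paper's exact feature construction and variational estimation are not reproduced.

```python
import numpy as np

def rgb_to_simplex(rgb, eps=1e-6):
    """Map RGB pixels to points on the probability simplex.

    rgb: (..., 3) array of color values. Each pixel is shifted by `eps`
    to avoid zeros (the Dirichlet density needs strictly positive
    components) and divided by its channel sum so the result sums to 1.
    """
    x = np.asarray(rgb, dtype=float) + eps
    return x / x.sum(axis=-1, keepdims=True)

# Hypothetical 2x2 image; each output pixel sums to 1 and can then be
# evaluated under a Dirichlet mixture component (a pure black pixel maps
# to the uniform composition).
img = np.array([[[255, 0, 0], [10, 200, 30]],
                [[0, 0, 0], [128, 128, 128]]], dtype=float)
print(rgb_to_simplex(img))
```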
10.
  • Rana, Pravin Kumar, et al. (author)
  • Statistical methods for inter-view depth enhancement
  • 2014
  • In: 2014 3DTV-Conference. - : IEEE. - 9781479947584 ; , pp. 6874755-
  • Conference paper (peer-reviewed), abstract:
    • This paper briefly presents and evaluates recent advances in statistical methods for reducing inter-view inconsistency in multiview depth imagery. View synthesis is vital in free-viewpoint television in order to allow viewers to move freely in a dynamic scene. Here, depth image-based rendering plays a pivotal role by synthesizing an arbitrary number of novel views by using a subset of captured views and corresponding depth maps only. Usually, each depth map is estimated individually at different viewpoints by stereo matching and, hence, shows a lack of inter-view consistency. This lack of consistency affects the quality of view synthesis negatively. This paper discusses two different approaches to enhance inter-view depth consistency. The first one uses generative models based on multiview color and depth classification to assign a probabilistic weight to each depth pixel. The weighted depth pixels are utilized to enhance the depth maps. The second one performs inter-view consistency testing in depth difference space to enhance the depth maps at multiple viewpoints. We comparatively evaluate the two methods and discuss their pros and cons for future work. (See the code sketch after this record.)
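The first approach in the abstract above assigns each depth pixel a probabilistic weight from a generative color/depth model and enhances the depth maps with the weighted pixels. The sketch below shows only the final weighted fusion of one pixel's candidate depths, with the weights taken as given; how the generative model produces them is not reproduced here.

```python
import numpy as np

def weighted_depth(candidates, weights):
    """Fuse one pixel's candidate depth values using probabilistic weights.

    candidates: (V,) depth estimates for the same pixel from V viewpoints.
    weights:    (V,) non-negative weights, e.g. posterior probabilities from
                a generative color/depth model; they are renormalized here.
    """
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    return float(np.dot(w, np.asarray(candidates, dtype=float)))

# Hypothetical candidates where the third view is judged unreliable.
print(weighted_depth([1.00, 1.02, 1.60], [0.48, 0.47, 0.05]))
```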
11.
  • Rana, Pravin Kumar, 1982-, et al. (author)
  • View Interpolation with structured depth from multiview video
  • 2011
  • Conference paper (peer-reviewed), abstract:
    • In this paper, we propose a method for interpolating multiview imagery which uses structured depth maps and multiview video plus inter-view connection information to represent a three-dimensional (3D) scene. The structured depth map consists of an inter-view consistent principal depth map and auxiliary depth information. The structured depth maps address the inconsistencies among estimated depth maps which may degrade the quality of rendered virtual views. Generated from multiple depth observations, the structuring of the depth maps is based on tested and adaptively chosen inter-view connections. Further, the use of connection information on the multiview video minimizes distortion due to varying illumination in the interpolated virtual views. Our approach first obtains the structured depth maps and the corresponding connection information, and then exploits the inter-view connection information when interpolating virtual views. It improves the quality of rendered virtual views by up to 4 dB when compared to the reference MPEG view synthesis software for emerging multimedia services like 3D television and free-viewpoint television. (See the code sketch after this record.)
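The abstract above interpolates virtual views from reference views, using inter-view connection information to limit distortion from illumination differences. The sketch below shows generic confidence-weighted blending of two reference views already warped to the virtual viewpoint, which is only the final step of such a pipeline; the structured depth construction, connection testing, and warping are not reproduced, and the weighting scheme is an assumption.

```python
import numpy as np

def blend_views(warped_left, warped_right, conf_left, conf_right, alpha=0.5):
    """Blend two reference views warped to the same virtual viewpoint.

    warped_*: (H, W, 3) color images already warped to the virtual view.
    conf_*:   (H, W) per-pixel confidences, e.g. derived from inter-view
              connection tests; zero marks disoccluded pixels.
    alpha:    position of the virtual view between the references (0..1).
    """
    w_left = (1.0 - alpha) * conf_left
    w_right = alpha * conf_right
    total = w_left + w_right
    total = np.where(total > 0, total, 1.0)        # avoid division by zero
    blended = (w_left[..., None] * warped_left +
               w_right[..., None] * warped_right) / total[..., None]
    return blended
```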