SwePub

Search results for "WFRF:(Li Yun) ;mspu:(conferencepaper)"

Search: WFRF:(Li Yun) > Conference papers

  • Results 1-10 of 14
1.
  • Kristan, Matej, et al. (author)
  • The Sixth Visual Object Tracking VOT2018 Challenge Results
  • 2019
  • In: Computer Vision – ECCV 2018 Workshops. - Cham : Springer Publishing Company. - 9783030110086 - 9783030110093 ; pp. 3-53
  • Conference paper (peer-reviewed), abstract:
    • The Visual Object Tracking challenge VOT2018 is the sixth annual tracker benchmarking activity organized by the VOT initiative. Results of over eighty trackers are presented; many are state-of-the-art trackers published at major computer vision conferences or in journals in recent years. The evaluation included the standard VOT and other popular methodologies for short-term tracking analysis and a “real-time” experiment simulating a situation where a tracker processes images as if provided by a continuously running sensor. A long-term tracking subchallenge has been introduced to the set of standard VOT sub-challenges. The new subchallenge focuses on long-term tracking properties, namely coping with target disappearance and reappearance. A new dataset has been compiled and a performance evaluation methodology that focuses on long-term tracking capabilities has been adopted. The VOT toolkit has been updated to support both the standard short-term and the new long-term tracking subchallenges. Performance of the tested trackers typically far exceeds standard baselines. The source code for most of the trackers is publicly available from the VOT page. The dataset, the evaluation kit and the results are publicly available at the challenge website (http://votchallenge.net).
2.
  • Kristan, Matej, et al. (author)
  • The Visual Object Tracking VOT2015 challenge results
  • 2015
  • In: Proceedings 2015 IEEE International Conference on Computer Vision Workshops ICCVW 2015. - IEEE. - 9780769557205 ; pp. 564-586
  • Conference paper (peer-reviewed), abstract:
    • The Visual Object Tracking challenge 2015, VOT2015, aims at comparing short-term single-object visual trackers that do not apply pre-learned models of object appearance. Results of 62 trackers are presented. The number of tested trackers makes VOT2015 the largest benchmark on short-term tracking to date. For each participating tracker, a short description is provided in the appendix. Features of the VOT2015 challenge that go beyond its VOT2014 predecessor are: (i) a new VOT2015 dataset twice as large as that of VOT2014, with full annotation of targets by rotated bounding boxes and per-frame attributes, and (ii) an extension of the VOT2014 evaluation methodology by the introduction of a new performance measure. The dataset, the evaluation kit as well as the results are publicly available at the challenge website.
3.
  • Fu, Keren, 1988, et al. (author)
  • Adaptive Multi-Level Region Merging for Salient Object Detection
  • 2014
  • In: British Machine Vision Conference (BMVC) 2014 ; pp. 11-
  • Conference paper (peer-reviewed), abstract:
    • Most existing salient object detection algorithms face the problem of either under- or over-segmenting an image. More recent methods address the problem via multi-level segmentation. However, the number of segmentation levels is manually predetermined and only works well for a specific class of images. In this paper, a new salient object detection scheme is presented based on adaptive multi-level region merging. A graph-based merging scheme is developed to reassemble regions based on their shared contour strength. This merging process is adaptive to complete contours of salient objects, which can then be used for global perceptual analysis, e.g., figure/ground separation. Such contour completion is enhanced by graph-based spectral decomposition. We show that even though simple region saliency measurements are adopted for each region, encouraging performance can be obtained after across-level integration. Experiments comparing with 13 existing methods on three benchmark datasets, including MSRA-1000, SOD and SED, show that the proposed method yields uniform object enhancement and achieves state-of-the-art performance.
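The abstract above describes merging regions along weak shared contours so that strong object contours survive. As a rough illustration of that idea only (the region ids, adjacency list, and threshold below are invented for the example, not taken from the paper), a graph-based merge can be sketched with a union-find structure:

```python
# Hypothetical sketch of graph-based region merging driven by shared
# contour strength: weak boundaries are merged first, so strong object
# contours remain as segment borders. All inputs here are illustrative.

class UnionFind:
    def __init__(self, n):
        self.parent = list(range(n))

    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra != rb:
            self.parent[rb] = ra

def merge_regions(n_regions, edges, threshold):
    """Merge adjacent regions whose shared contour is weak.

    edges: list of (region_a, region_b, contour_strength) tuples.
    """
    uf = UnionFind(n_regions)
    for a, b, strength in sorted(edges, key=lambda e: e[2]):
        if strength < threshold:
            uf.union(a, b)
    # relabel each region by the root of its merged component
    return [uf.find(i) for i in range(n_regions)]

labels = merge_regions(4, [(0, 1, 0.1), (1, 2, 0.9), (2, 3, 0.2)], 0.5)
# regions 0-1 and 2-3 merge; the strong contour between 1 and 2 is kept
```

The paper's actual scheme is adaptive across multiple levels and uses spectral decomposition; this sketch only shows the single-level, threshold-based core of contour-driven merging.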
4.
  • Gomariz, Alvaro, et al. (author)
  • Unsupervised Domain Adaptation with Contrastive Learning for OCT Segmentation
  • 2022
  • In: Medical Image Computing and Computer Assisted Intervention, MICCAI 2022, Part VIII. - Cham : Springer Nature. - 9783031164521 - 9783031164514 ; pp. 351-361
  • Conference paper (peer-reviewed), abstract:
    • Accurate segmentation of retinal fluids in 3D Optical Coherence Tomography images is key for diagnosis and personalized treatment of eye diseases. While deep learning has been successful at this task, trained supervised models often fail for images that do not resemble labeled examples, e.g. for images acquired using different devices. We hereby propose a novel semi-supervised learning framework for segmentation of volumetric images from new unlabeled domains. We jointly use supervised and contrastive learning, also introducing a contrastive pairing scheme that leverages similarity between nearby slices in 3D. In addition, we propose channel-wise aggregation as an alternative to conventional spatial-pooling aggregation for contrastive feature map projection. We evaluate our methods for domain adaptation from a (labeled) source domain to an (unlabeled) target domain, each containing images acquired with different acquisition devices. In the target domain, our method achieves a Dice coefficient 13.8% higher than SimCLR (a state-of-the-art contrastive framework), and leads to results comparable to an upper bound with supervised training in that domain. In the source domain, our model also improves the results by 5.4% Dice, by successfully leveraging information from many unlabeled images.
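The contrastive pairing scheme mentioned above exploits the similarity of nearby slices in a 3D volume. A minimal sketch of that pairing idea, assuming a slice-index window whose size and the sampling routine are illustrative choices, not the paper's actual procedure:

```python
# Illustrative sketch of "nearby slices as positives": in a 3D OCT
# volume, slices within a small index distance are treated as positive
# pairs for contrastive learning. Window size and sampling are assumed.
import random

def sample_contrastive_pairs(n_slices, window=2, n_pairs=4, seed=0):
    """Return (anchor, positive) slice-index pairs.

    A positive is any other slice within `window` of the anchor,
    leveraging the anatomical similarity of adjacent slices.
    """
    rng = random.Random(seed)
    pairs = []
    for _ in range(n_pairs):
        anchor = rng.randrange(n_slices)
        lo = max(0, anchor - window)
        hi = min(n_slices - 1, anchor + window)
        candidates = [i for i in range(lo, hi + 1) if i != anchor]
        pairs.append((anchor, rng.choice(candidates)))
    return pairs

for a, p in sample_contrastive_pairs(64):
    assert a != p and abs(a - p) <= 2
```

In a full pipeline these index pairs would select feature maps whose projections are pulled together by a contrastive loss; that part is omitted here.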
5.
  • Li, Shanghua, et al. (author)
  • Fabrication of transparent polymer-inorganic hybrid material
  • 2005
  • In: Materials Research Society Symposium Proceedings. - Springer Science and Business Media LLC. - 9781558998308 ; pp. 190-194
  • Conference paper (peer-reviewed), abstract:
    • Polymer-inorganic hybrid materials composed of polymethyl methacrylate (PMMA) and zinc compounds were prepared by sol-gel in-situ transition polymerization of a zinc complex in a PMMA matrix. Zinc acetate dihydrate dissolved in ethanol was used as the inorganic precursor. Monoethanolamine (MEA) acted as a complexing agent to control the hydrolysis of zinc acetate and produce a zinc compound network; PMMA, formed in-situ through radical polymerization, was then chemically bonded to the forming zinc compound network to realize a hybrid material. Transparent, homogeneous hybrid materials with slight colours from pink to yellow were fabricated by varying the composition. TEM and FT-IR were employed to investigate structural and physical properties. The UV-shielding effect was evaluated by UV-VIS spectroscopy. The low zinc content (around 0.02 wt%) and the fine particle size rendered the material visibly transparent and capable of greatly attenuating UV radiation across the full UV range.
6.
  • Li, Yun, et al. (author)
  • A Scalable Coding Approach for High Quality Depth Image Compression
  • 2012
  • In: 3DTV-Conference. - IEEE conference proceedings. - 9781467349031 ; Art. no. 6365469
  • Conference paper (peer-reviewed), abstract:
    • The distortion introduced by traditional video encoders (e.g. H.264) at depth discontinuities can cause disturbing effects in the synthesized view. The proposed scheme aims at preserving the most significant depth transitions for a better view synthesis. Furthermore, it has a scalable structure. The scheme extracts edge contours from a depth image and represents them by chain code. The chain code and the sampled depth values on each side of the edge contour are encoded by differential and arithmetic coding. The depth image is reconstructed by diffusion of edge samples and uniform sub-samples from the low-quality depth image. At low bit rates, the proposed scheme outperforms HEVC intra at the edges in the synthesized views, which correspond to the significant discontinuities in the depth image. The overall quality is also better with the proposed scheme at low bit rates for contents with distinct depth transitions. © 2012 IEEE.
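The chain-code representation mentioned in the abstract stores an edge contour as a start point plus one direction symbol per step, a compact form that suits differential and arithmetic coding. A minimal 8-connected Freeman chain-code sketch (the direction table and the toy contour below are standard conventions chosen for illustration, not taken from the paper):

```python
# Sketch of an 8-connected Freeman chain code: a contour becomes a
# start point plus one 3-bit direction symbol per step.
# 0=E, 1=NE, 2=N, 3=NW, 4=W, 5=SW, 6=S, 7=SE (y axis pointing up)
DIRS = [(1, 0), (1, 1), (0, 1), (-1, 1), (-1, 0), (-1, -1), (0, -1), (1, -1)]

def chain_code(points):
    """Encode a connected pixel contour as (start, direction symbols)."""
    codes = []
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        codes.append(DIRS.index((x1 - x0, y1 - y0)))
    return points[0], codes

def decode(start, codes):
    """Walk the direction symbols to recover the contour."""
    x, y = start
    pts = [(x, y)]
    for c in codes:
        dx, dy = DIRS[c]
        x, y = x + dx, y + dy
        pts.append((x, y))
    return pts

contour = [(0, 0), (1, 0), (2, 1), (2, 2), (1, 2)]
start, codes = chain_code(contour)
assert decode(start, codes) == contour  # lossless round trip
```

In the paper's scheme the resulting symbol stream would be further compressed by differential and arithmetic coding, which this sketch omits.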
7.
  • Li, Yun, et al. (author)
  • Coding of plenoptic images by using a sparse set and disparities
  • 2015
  • In: Proceedings - IEEE International Conference on Multimedia and Expo. - IEEE conference proceedings. - 9781479970827 ; Art. no. 7177510
  • Conference paper (peer-reviewed), abstract:
    • A focused plenoptic camera captures not only the spatial information of a scene but also the angular information. The capture results in a high-resolution plenoptic image consisting of multiple microlens images, each similar to its neighbors. An efficient compression method that utilizes this pattern of similarity can therefore reduce the coding bit rate and further facilitate the usage of the images. In this paper, we propose an approach for coding focused plenoptic images by using a representation that consists of a sparse plenoptic image set and disparities. Based on this representation, a reconstruction method using interpolation and inpainting is devised to reconstruct the original plenoptic image. As a consequence, instead of coding the original image directly, we encode the sparse image set plus the disparity maps and use the reconstructed image as a prediction reference to encode the original image. The results show that the proposed scheme outperforms HEVC intra by more than 5 dB PSNR, or over 60 percent bit-rate reduction.
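The core idea above is that a discarded microlens image can be predicted by shifting a retained neighbour by its disparity. A deliberately simplified 1-D sketch of that prediction step (the row of samples, the integer disparity, and the border clamping below are all assumptions standing in for the paper's interpolation/inpainting):

```python
# Illustrative sketch of disparity-based prediction: a microlens image
# row is predicted by shifting a neighbouring row by its disparity.
# Border samples are clamped as a crude stand-in for inpainting.

def predict_from_neighbor(neighbor, disparity):
    """Predict a row of samples by shifting `neighbor` by `disparity`
    pixels; uncovered positions replicate the nearest edge sample."""
    n = len(neighbor)
    out = []
    for i in range(n):
        j = i - disparity
        j = min(max(j, 0), n - 1)  # clamp into the valid range
        out.append(neighbor[j])
    return out

row = [10, 20, 30, 40, 50]
pred = predict_from_neighbor(row, 2)
# → [10, 10, 10, 20, 30]
```

In the actual scheme this prediction is 2-D, uses per-region disparity maps, and the reconstructed image serves only as a reference for encoding the original.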
8.
  • Li, Yun, et al. (author)
  • Compression of Unfocused Plenoptic Images using a Displacement Intra prediction
  • 2016
  • In: 2016 IEEE International Conference on Multimedia and Expo Workshop, ICMEW 2016. - IEEE Signal Processing Society. - 9781509015528
  • Conference paper (peer-reviewed), abstract:
    • Plenoptic images are one type of light-field content produced by combining a conventional camera with an additional optical component in the form of a microlens array positioned in front of the image sensor surface. This camera setup captures a sub-sampling of the light field with high spatial fidelity over a small range, and with a more coarsely sampled angular range. The earliest applications that leverage plenoptic image content are image refocusing, non-linear distribution of out-of-focus areas, SNR vs. resolution trade-offs, and 3D-image creation. All functionalities are provided by post-processing methods. In this work, we evaluate a compression method that we previously proposed for a different type of plenoptic image (focused, or plenoptic camera 2.0, content) than the unfocused (plenoptic camera 1.0) content used in this Grand Challenge. The method is an extension of the state-of-the-art video compression standard HEVC, in which we have brought the capability of bi-directional inter-frame prediction into the spatial prediction. The method is evaluated according to the scheme set out by the Grand Challenge, and the results show a high compression efficiency compared with JPEG, i.e., up to 6 dB improvement for the tested images.
9.
  • Li, Yun, et al. (author)
  • Depth Image Post-processing Method by Diffusion
  • 2013
  • In: Proceedings of SPIE - The International Society for Optical Engineering. - SPIE. - 9780819494238 ; Art. no. 865003
  • Conference paper (peer-reviewed), abstract:
    • Multi-view three-dimensional television relies on view synthesis to reduce the number of views being transmitted. Arbitrary views can be synthesized by utilizing the corresponding depth images together with textures. The depth images obtained from stereo pairs or range cameras may contain erroneous values, which entail artifacts in a rendered view. Post-processing of the data may then be utilized to enhance the depth image, with the purpose of reaching a better quality of synthesized views. We propose a Partial Differential Equation (PDE)-based interpolation method for reconstructing the smooth areas in depth images while preserving significant edges. We model the depth image by adjusting thresholds for edge detection and a uniform sparse-sampling factor, followed by second-order PDE interpolation. The objective results show that a depth image processed by the proposed method can achieve a better quality of synthesized views than the original depth image. Visual inspection confirmed the results.
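Second-order PDE interpolation of the kind described above amounts to solving the Laplace equation: unknown depth pixels are iteratively replaced by the average of their four neighbours while known samples stay fixed. A toy sketch under stated assumptions (the grid, which pixels are "known", and the iteration count are all invented for the example; the paper's method additionally selects samples via edge detection and sparse sampling):

```python
# Hedged sketch of Laplace (second-order PDE) interpolation: unknown
# pixels are relaxed toward the average of their 4-neighbours while
# known samples remain fixed (Gauss-Seidel iteration).

def laplace_fill(depth, known, iters=200):
    """depth: 2D list of floats; known: 2D bool mask of fixed samples."""
    h, w = len(depth), len(depth[0])
    for _ in range(iters):
        for y in range(1, h - 1):
            for x in range(1, w - 1):
                if not known[y][x]:
                    depth[y][x] = 0.25 * (depth[y - 1][x] + depth[y + 1][x]
                                          + depth[y][x - 1] + depth[y][x + 1])
    return depth

# Toy grid: the border is known and ramps 0..3 from left to right; the
# two interior unknowns start at zero and relax to the harmonic
# solution, which here is the same ramp (1.0 and 2.0).
h, w = 3, 4
depth = [[float(x) for x in range(w)] for _ in range(h)]
for x in range(1, w - 1):
    depth[1][x] = 0.0  # interior unknowns start at zero
known = [[y in (0, h - 1) or x in (0, w - 1) for x in range(w)]
         for y in range(h)]
laplace_fill(depth, known)
```

Constraining such a diffusion with detected edges (so averaging never crosses a depth discontinuity) is what preserves the significant edges the abstract emphasizes.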
10.
  • Li, Yun, et al. (author)
  • Depth Map Compression with Diffusion Modes in 3D-HEVC
  • 2013
  • In: MMEDIA 2013 - 5th International Conferences on Advances in Multimedia. - International Academy, Research and Industry Association (IARIA). - 9781612082653 ; pp. 125-129
  • Conference paper (peer-reviewed), abstract:
    • For three-dimensional television, multiple views can be generated by using the Multi-view Video plus Depth (MVD) format. The depth maps of this format can be compressed efficiently by the 3D extension of High Efficiency Video Coding (3D-HEVC), which exploits the correlations between its two components: the texture and the associated depth map. In this paper, we introduce two diffusion-based modes for depth map coding into HEVC. The framework for inter-component prediction of Depth Modeling Modes (DMM) is utilized for the proposed modes. They detect edges from the textures and then diffuse an entire block from known adjacent blocks by solving the Laplace equation constrained by the detected edges. The experimental results show that depth maps can be compressed more efficiently with the proposed diffusion modes, with bit-rate savings of up to 1.25 percent of the total depth bit rate at a constant quality of synthesized views.
Create references, email, monitor and link
