SwePub

Result list for search "WFRF:(Liu Yun) ;mspu:(conferencepaper)"

Search: WFRF:(Liu Yun) > Conference papers

  • Results 1-4 of 4
1.
  • Fujiwara, Takanori, et al. (author)
  • Feature Learning for Nonlinear Dimensionality Reduction toward Maximal Extraction of Hidden Patterns
  • 2023
  • In: 2023 IEEE 16th Pacific Visualization Symposium (PacificVis). - : IEEE Computer Society. - 9798350321241 - 9798350321258 ; pp. 122-131
  • Conference paper (peer-reviewed), abstract:
    • Dimensionality reduction (DR) plays a vital role in the visual analysis of high-dimensional data. One main aim of DR is to reveal hidden patterns that lie on intrinsic low-dimensional manifolds. However, DR often overlooks important patterns when the manifolds are distorted or masked by certain influential data attributes. This paper presents a feature learning framework, FEALM, designed to generate a set of optimized data projections for nonlinear DR in order to capture important patterns in the hidden manifolds. These projections produce maximally different nearest-neighbor graphs, so that the resultant DR outcomes are significantly different. To achieve this capability, we design an optimization algorithm and introduce a new graph dissimilarity measure, named neighbor-shape dissimilarity. Additionally, we develop interactive visualizations to assist in comparing the obtained DR results and interpreting each of them. We demonstrate FEALM's effectiveness through experiments and case studies using synthetic and real-world datasets.
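The idea in the abstract above, scoring candidate projections by how different their nearest-neighbor graphs are, can be illustrated with a simple stand-in measure. This sketch does not reproduce the paper's neighbor-shape dissimilarity; it uses a plain Jaccard dissimilarity over k-NN edge sets, and the function names and projection matrices are hypothetical:

```python
import numpy as np

def knn_edges(X, k=5):
    # Each point's k nearest neighbors (Euclidean) define directed edges.
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    nbrs = np.argsort(d, axis=1)[:, :k]
    return {(i, j) for i in range(len(X)) for j in nbrs[i]}

def graph_dissimilarity(X, P1, P2, k=5):
    # Jaccard dissimilarity between the k-NN edge sets of two projections:
    # 0 means identical neighbor graphs, 1 means no shared edges.
    e1, e2 = knn_edges(X @ P1, k), knn_edges(X @ P2, k)
    return 1.0 - len(e1 & e2) / len(e1 | e2)

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 4))
P_a = np.eye(4)[:, :2]   # project onto the first two attributes
P_b = np.eye(4)[:, 2:]   # project onto the last two attributes
print(graph_dissimilarity(X, P_a, P_b, k=5))
```

An optimizer in the spirit of FEALM would then search for projections that maximize such a dissimilarity against the projections found so far.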
2.
  • Hu, Xiaolong, et al. (author)
  • Timing properties of superconducting nanowire single-photon detectors
  • 2019
  • In: Quantum Optics and Photon Counting 2019. - : SPIE - International Society for Optical Engineering. - 9781510627215
  • Conference paper (peer-reviewed), abstract:
    • In this paper, we review theoretical and experimental research progress on the timing properties of superconducting nanowire single-photon detectors, including six possible mechanisms that induce timing jitter and experiments aimed at achieving ultra-low timing jitter.
3.
  • Kristan, Matej, et al. (author)
  • The Sixth Visual Object Tracking VOT2018 Challenge Results
  • 2019
  • In: Computer Vision – ECCV 2018 Workshops. - Cham : Springer Publishing Company. - 9783030110086 - 9783030110093 ; pp. 3-53
  • Conference paper (peer-reviewed), abstract:
    • The Visual Object Tracking challenge VOT2018 is the sixth annual tracker benchmarking activity organized by the VOT initiative. Results of over eighty trackers are presented; many are state-of-the-art trackers published at major computer vision conferences or in journals in recent years. The evaluation included the standard VOT and other popular methodologies for short-term tracking analysis, as well as a “real-time” experiment simulating a situation where a tracker processes images as if provided by a continuously running sensor. A long-term tracking subchallenge has been introduced to the set of standard VOT sub-challenges. The new subchallenge focuses on long-term tracking properties, namely coping with target disappearance and reappearance. A new dataset has been compiled, and a performance evaluation methodology that focuses on long-term tracking capabilities has been adopted. The VOT toolkit has been updated to support both the standard short-term and the new long-term tracking subchallenges. Performance of the tested trackers typically far exceeds standard baselines. The source code for most of the trackers is publicly available from the VOT page. The dataset, the evaluation kit and the results are publicly available at the challenge website (http://votchallenge.net).
4.
  • Liu, Yun, et al. (author)
  • DEL : Deep embedding learning for efficient image segmentation
  • 2018
  • In: Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence. - California : International Joint Conferences on Artificial Intelligence. ; pp. 864-870
  • Conference paper (peer-reviewed), abstract:
    • Image segmentation has been explored for many years and remains a crucial vision problem. Several efficient or accurate segmentation algorithms are widely used in many vision applications, but it is difficult to design an image segmenter that is both efficient and accurate. In this paper, we propose a novel method called DEL (deep embedding learning) which can efficiently transform superpixels into an image segmentation. Starting with SLIC superpixels, we train a fully convolutional network to learn the feature embedding space for each superpixel. The learned feature embedding corresponds to a similarity measure between two adjacent superpixels. With these deep similarities, we can directly merge the superpixels into large segments. Evaluation results on BSDS500 and PASCAL Context demonstrate that our approach achieves a good tradeoff between efficiency and effectiveness. Specifically, DEL produces segments of quality comparable to MCG while being much faster (11.4 fps vs. 0.07 fps).
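The merging step the abstract above describes, collapsing adjacent superpixels whose learned embeddings are similar, can be sketched with union-find. This is an illustrative stand-in, not the authors' implementation: cosine similarity replaces the learned similarity measure, and `merge_superpixels` and its inputs are hypothetical:

```python
import numpy as np

def merge_superpixels(embeddings, adjacency, threshold=0.9):
    """Greedily merge adjacent superpixels whose embedding cosine
    similarity exceeds `threshold`. `adjacency` lists (i, j) neighbor
    pairs; returns one segment label per superpixel."""
    n = len(embeddings)
    parent = list(range(n))

    def find(x):
        # Union-find root lookup with path halving.
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    # Cosine similarity as a stand-in for the learned similarity.
    norm = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    for i, j in adjacency:
        if norm[i] @ norm[j] > threshold:
            parent[find(i)] = find(j)

    roots = [find(i) for i in range(n)]
    # Relabel roots to consecutive segment ids.
    remap = {r: k for k, r in enumerate(dict.fromkeys(roots))}
    return [remap[r] for r in roots]

# Four superpixels in a chain; pairs (0,1) and (2,3) have similar embeddings.
emb = np.array([[1.0, 0.0], [0.99, 0.1], [0.0, 1.0], [0.05, 1.0]])
adj = [(0, 1), (1, 2), (2, 3)]
print(merge_superpixels(emb, adj, threshold=0.9))  # → [0, 0, 1, 1]
```

In the actual method, the embeddings would come from the trained fully convolutional network rather than being hand-crafted as here.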